Hai. This is a sample of the work in progress for a book about free will. I claim full copyright of this sample. "Sample of a Book about Free Will" copyright Junichiro Swanson. When the book is done, I will put the book under one of the Creative Commons licenses, which means renouncing some of the terms of the copyright. Not with this sample.

This needs to be massively reworked. Almost all the main ideas are in, but they're in a scattershot, nearly random order (my process is extremely scattershot before the part of the process that's like solving a jigsaw puzzle). Much or all of this will be turned into dialog format. In a lot of places in this sample, I haven't yet put the quotation marks around the things the characters are saying, so it's full of direct contradictions with no indications of which things I mean to support or knock down which other things, or what I agree or disagree with. This form of the draft (pre-draft?) has a portion of it almost well-grouped into topics, and the first-level headings of that are in alphabetical order - that's roughly the first 40% of this document. The rest is in unsorted piles. Some of the writing in those piles I did before the part that's in headings and some of it later. After this is reworked, the resultant book will not have the ideas in anything like the same order. Lots of the parts of this are in need of having style added and the standard improvements of editing. I'm just as good at that part too. All the parts of this are to look way different after the process.

Sample of a Book about Free Will

[] First-Level Heading
[][] Second-Level Heading (First-Level Heading / Second-Level Heading)
[][][] Third-Level Heading (First-Level Heading / Second-Level Heading / Third-Level Heading)
[][][][] Fourth-Level Heading (First-Level Heading / Second-Level Heading / Third-Level Heading / Fourth-Level Heading)

[] Compatibilism

[][] Do Not Step in the Quagmire: Compatibilism and the Hidden Conditional (Compatibilism / Do Not Step in the Quagmire: Compatibilism and the Hidden Conditional)

Not all of compatibilism is a quagmire of evasion, but some of it sure is. Let's use 'could' for one thing and 'would' for another thing. When you say "would have done otherwise" without an 'if' clause, you see that something is off. "Bob went to the tavern yesterday but he would have gone to the dog park" is a grammatically complete sentence, but it's not even a complete statement. Would if what? 'Would' wants an 'if'. If 'could' is a conditional, then "Bob went to the tavern yesterday but he could have gone to the dog park" is similarly an incomplete statement. This use of 'could', it turns out, is quite deceptive. It's used to make an incomplete statement look like a complete statement. If I'll take issue with one matter of definition in this whole book, it's this one, because the use is outright deception, and it has led to a lot of confusion. (I do say plenty about definitions in this book, but when I do, it's either to distinguish them or to say that disagreeing about definitions is silly, with the above one exception.)

"What the hell happened?"
"It was terrible. My car and my phone both broke yesterday."
"Why didn't you meet up with me yesterday at the time we had agreed on?"
"I would have met up with you."
"Would if what?"
"I would have if my car wasn't broken. But I told you, my car broke."
"Why didn't you at least call?"
"I would have called you."
"Would if what?"
"I would have if my phone wasn't broken. But I told you, my phone broke."
In this example we have two 'would' statements that make enough sense given the context, but in general a 'would' statement without an 'if' clause has no meaning. The sentence "Bob went to the tavern yesterday but he would have gone to the dog park" expresses exactly zero complete thoughts (propositions), just like a sentence fragment such as "Bob is employed as a". The sentence "Bob went to the tavern yesterday but he would have gone to the dog park if the weather were nicer" is a complete sentence, and a complete statement, and it expresses a complete idea (proposition).

"The hidden conditional" is like a fantastic beast that many compatibilists summon to do all manner of supernatural deeds. This subset of compatibilists also dedicate incredible amounts of their most valuable resources to feed the fantastic beast. Sadly, the hidden conditional produces only confusion for them, not real deeds of any kind. It usually ends up eating them as well. It starts with saying something like, "'could' means 'could, if his desires had been different'". To be more thorough about this, a compatibilist should say something more like, "When I say 'could', I mean 'could, if his desires had been different', so from now on, when I say 'could' the 'if' clause is implied, and the 'if' clause refers to a hypothetical world, not this world." But they say it the quick way instead, and then they make all kinds of logical blunders based on conflating the real world with a hypothetical world. There are entire essays written by accredited philosophy professionals about which you could say, "every argument in this essay is based on conflating real and hypothetical worlds, and they're all invalid," and that's the end of everything productive about them. Many times I've been reading something, and my effort in trying to understand what the heck was said comes to completion when I realize, "Oh, he thought there was this problem, or this solution to a problem, but the whole thing only arises when earlier in the reasoning you get mixed up between the two separate worlds that the hidden 'if' clause refers to."

Argument 1
P1: In Bizarro World, Superman is Lex Luthor.
P2: In this world, Superman is flying.
C: Therefore, in this world, Lex Luthor is flying.

This is of course an invalid argument. The three statements (two premises and one conclusion) could be any combination of true and false, including all true premises and a false conclusion.

Argument 2
P1: Superman is Lex Luthor.
P2: Superman is flying.
C: Therefore, Lex Luthor is flying.

This is of course a valid argument. It can't have all true premises and a false conclusion. However, it only works when all three statements are referring to the same world. If you meant to express the ideas in Argument 1, and the way you wrote them was as I've written Argument 2, then you haven't written it right, because you left out relevant information. There's a common mistake in reasoning that consists not in thinking Argument 1 and writing it as Argument 2, but of a similar nature. It happens when you confuse things in your thinking before you've even written anything. In general, the mistake is to confuse a multi-world consideration with a single-world consideration and get mixed up between the two. And this trick can ensnare you when you read a piece of someone else's writing that also mixes these things up in ways you don't notice. Often the person who wrote it didn't notice the mistake either. The amount that this error has managed to proliferate is alarming.
It's all over the place now. You would normally never think of saying a sentence like "Bob went to the tavern but he would have gone to the dog park" with no context that provides the 'if' clause. It doesn't ever occur to you to say it because it expresses zero thoughts. The 'if' clause doesn't refer to actual possibilities if you're a determinist. If you take the 'if' clause to refer to real possibilities and you call yourself a determinist, then you're just mistaken about what you're calling yourself. The correct term for that is "free will libertarian".

The confused person is someone who says, first, "There are laws of physics, and there are no uncaused causes in any relevant sphere, so I'm some kind of determinist." Second, "There are meanings of the term 'free will' that are relevant to moral reasoning but that don't mean the negation of determinism." Third, "Okay, this works if 'could have done otherwise' means 'could have done otherwise if his desires had been different'." Fourth, [things including 'could have done otherwise' without the hidden 'if' clause]. Next, he forgets that there ever was a hidden 'if' clause. Fifth, [things that only a free will libertarian would say]. At this point, he's a free will libertarian who calls himself a soft determinist or a compatibilist.

Most people who have a stance on free will identify as soft determinists or compatibilists. A big portion of those people are actually free will libertarians who are categorizing themselves mistakenly. What I'm doing right now is not disagreeing about definitions. It's not disagreeing about definitions because by any accepted definition of soft determinist or compatibilist, certain things must hold, and this portion of people do not hold those things because their reasoning process has been confused, but they still call themselves soft determinists or compatibilists. Take any of these confused people and ask them what the definition of soft determinist or compatibilist is, and they will give you a correct definition, but then when they say other things, those things are absolutely contradictory with the definition of the thing they said they are.

I really feel like the following shouldn't need spelling out in excruciating detail, but apparently it does. Allow me to describe a few extremely simple facts in terms of fully detailed baby steps. The following is not understood by a considerable portion of philosophy experts. Your desires were not otherwise than they were (which is to say, the past is one thing, and it's not some other thing than what it was). Therefore, the clause "if his desires were different" refers not to this world, but to something else. It refers to a possible world that is not the actual world. His desires were different in a possible world that was not the actual world. His desires in this world were what they were in this world.

He could not have done otherwise. He could have done otherwise if his desires had been different.

Compare with these two statements in the opposite order.

He could have done otherwise if his desires had been different. Nevertheless, he could not have done otherwise.

lorem - the first way makes logical sense pretty straightforwardly. But when you swap the order of the two statements, suddenly it seems enigmatic and maybe like something that doesn't make sense. This just goes to show that there are ways of thinking about compatibilism that can be confusing and there are other ways of thinking about compatibilism that avoid confusion.
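To make the two-worlds bookkeeping explicit, here is that pair of statements written with the world index visible - a minimal sketch in ordinary possible-worlds notation, with symbols of my own choosing rather than anything standard from the compatibilist literature:

% w_a : the actual world;  D(w) : his desires in world w;  O(w) : he does otherwise in world w
\[
\text{``He could not have done otherwise'':}\quad
\neg\,\exists w\,\big[\,\mathrm{Past}(w)=\mathrm{Past}(w_a)\ \wedge\ \mathrm{Laws}(w)=\mathrm{Laws}(w_a)\ \wedge\ O(w)\,\big]
\]
\[
\text{``He could have done otherwise if his desires had been different'':}\quad
\exists w'\,\big[\,D(w')\neq D(w_a)\ \wedge\ O(w')\,\big]
\]

The two statements quantify over different sets of worlds, so they can both be true at once. The hidden-conditional blunder is to let the second one, with its 'if' clause silently dropped, stand in for a claim about the actual world alone.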
In math, there's arithmetic, then algebra builds on that, then calculus builds on that. But if you start bungling your arithmetic or algebra when doing calculus, you'll quickly get checked. In philosophy the corresponding thing is kind of a problem, because you can learn the low-level abstractions, then learn the higher-level abstractions, then attempt the work that involves the higher-level abstractions while bungling the lower-level abstractions, without as much corrective present to provide you with a convincing check.

The fantastic beast, it turns out, eats only real food and produces only illusory returns, and to be clear, by "real food" and "illusory returns" I mean real effort and invalid inferences, respectively. The confused compatibilist has let his imagination run wild and forgotten to rein it in, and as a result has got mixed up between fantasy and reality (possible worlds and the real world). Possible world thinking does many useful things - you pretty much can't live without it - but the imagination it requires can get slippery.

Re "Are We Free to Break the Laws?": good news, you can have agent causation and libertarian free will while still calling yourself a determinist! Okay, even if that argument wasn't a crock of shit, why would I want to have that combination of things? Just call yourself a libertarian at that point.

[][] Draining the Quagmire: the Real Case for Compatibilism (Compatibilism / Draining the Quagmire: the Real Case for Compatibilism)

If you don't read the soft determinist literature because you're too busy to read any philosophy, that's a valid reason, and I pity your situation of being that busy. If you don't read the soft determinist literature because you disagree with them about the definition of one word, that's a stupid reason, and you're missing out on a lot of interesting ideas they have about that word, about other words, and about the concepts they refer to. The soft determinist literature is really interesting. You learn a lot of things about how your brain and the brains of other people work, like, "Oh yeah, that is how my brain has been working all along, and to see it described, now I understand it a lot better."

A bad(?) reading of deconstructivism: "In this novel, on this page the word 'courage' is used to mean why this guy engaged in battle instead of running away, but on this other page the word 'courage' is used to mean why this guy decided to question his values instead of trusting other people to tell him what his values are. Therefore, there's no single, stable thing the word 'courage' refers to in this book, and therefore the word 'courage' when used in this book is meaningless. If you continue this analysis, you'll find that none of the words in this book mean anything. And if you continue the analysis even further, you'll find that all the words in all the books and all the conversations people have are all meaningless words." This is the continuum fallacy, which is an incorrect way of dealing with the paradox of the heap (Sorites paradox). The right way: there are edge cases, but there are also clear cases. The existence of edge cases does not disprove that there are clear cases.

A pair of scissors has one degree of freedom. A pair of scissors that's had glue spilled on it has no degrees of freedom. Robots have degrees of freedom. Lorem.
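As a toy illustration of the borrowed mechanical notion (this is my own sketch, not a formalism from the literature): count a mechanism's degrees of freedom as the parameters that remain free after the constraints have pinned some of them down.

# Toy illustration: degrees of freedom as free parameters minus binding constraints.
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    name: str
    parameters: list[str]                                  # what could in principle vary
    constraints: list[str] = field(default_factory=list)   # what pins a parameter down

    def degrees_of_freedom(self) -> int:
        return max(0, len(self.parameters) - len(self.constraints))

scissors = Mechanism("scissors", ["blade angle"])
glued_scissors = Mechanism("glued scissors", ["blade angle"], ["glue fixes the pivot"])
robot_arm = Mechanism("four-axis robot arm", ["axis 1", "axis 2", "axis 3", "axis 4"])

for m in (scissors, glued_scissors, robot_arm):
    print(f"{m.name}: {m.degrees_of_freedom()} degree(s) of freedom")

The same bookkeeping is what the later discussion leans on when it treats attitude as a degree of freedom that an agent does or doesn't exercise.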
lorem - I once knew a machinist who was so adept that he could get a machine that's got four degrees of freedom to produce a workpiece that a lesser machinist would have needed a machine with five degrees of freedom to produce.

Define freedom as the idea that the conditions of the universe could have been different from the start? That's another strange but true implication of this. Counterfactual thinking entails this, but it doesn't involve it. On a normal day, if I'm defining terms, it's not foremost among the definitions I'll provide. No, a compatibilist freedom does not require that "possible" worlds are really possible. The counterfactual "CDO if" does not require that the counterfactual world you spin up could really have existed. lorem - it really only has to be self-consistent enough to be plausible. When you spin up a counterfactual world that takes place in the present day, you don't have to account for how there was a big bang and then one quantum wobble a picosecond after the big bang was different from what happened in our world and that's why the world is similar to ours but not all the way identical.

Could have responded to reasons if his attitude had been different. lorem - "Could have done better, if he had responded to the right kinds of reasons, if his attitude had been better" + "Could have done worse, if he had responded to the wrong kinds of reasons, if his attitude had been worse". Attitude is one of the three fundamental virtues, attitude is the subject of the 'if' clause in my compatibilism, and attitude is the most important of the degrees of freedom that pertain to self-control.

Is belief a degree of freedom? No. The ways you can exercise self-control over a degree of freedom don't include deciding whether or not to believe in something. Attitude is a degree of freedom that will have an effect on beliefs. If you decide to have an attitude of getting educated, checking your beliefs, looking up opposing arguments, and so on, it's likely that you will end up believing many things that you wouldn't believe if you had refused to exercise that degree of freedom called attitude.

A compatibilism that's self-consistent and addresses its bullets is not a quagmire of evasion. Lorem lay it down simply, briefly, and with the bullets. "Was that evasive?" When someone exercises one of their degrees of freedom, it's because they were fated to. When someone refuses to exercise one of their degrees of freedom, it's also because they were fated to.

[][] Compatibilism, Correction, and Holding Responsible (Compatibilism / Compatibilism, Correction, and Holding Responsible)

You react differently to someone tripping and hitting you than to someone hitting you intentionally. This is natural, and not contradictory. Self-control, the definition of control, and second-order desires. Self-control means exercising a degree of freedom that you have, or in other words making a second-order desire effective.

Two cases of blame. "We know you could have done otherwise, and we know that in similar circumstances you normally do. Maybe you don't deserve this, but we'll give you a pass this time. But don't ever get caught again. We trust you won't even try." Compare: "We know you couldn't have done otherwise, because you're a shit, and because you haven't yet experienced consequences. But we also know you're capable of learning, and since we're at this stage of the learning process now, there are things we have to do. This is when we do what we're here for."
Suppose you rarely drive drunk, but the one time you make the bad decision to, you injure some innocent person. You feel like you deserve a beatdown. Yeah, driving drunk is something you almost never do, and yeah, the odds of injuring someone in one act of drunk driving were low, but that doesn't mitigate your feeling like you deserve some harsh retribution. Consider now a person who frequently drives drunk. Over and over, he has decided to drive after having too many drinks, and he was lucky enough never to injure anyone the first 30 times. The 31st time, he injured someone. Who is more in need of correction: the person who rarely does it, did it once, and injured someone that one time, or the person who frequently does it, did it many times, and injured someone one of the many times he did?

There's a real-life story about an elementary school wherein one little shit there is regularly beating up the other little shits there. The people who might administer justice or correction in that kind of environment have decided, "We can't punish him because he didn't know that what he was doing was wrong." Maybe in some cases involving adults (e.g. dealing with psychopaths) this reasoning really is the thing to do, but what about the case of a child who is likely not a junior psychopath, but just someone who needs some learning of the right kind? Of course, if the people running this thing were in their right minds, they would apply punishment on the prospect that maybe the little shit can learn the meaning of right and wrong. It's a wild story. It's frustrating because the system of punishment that's administered in an elementary school is not part of what we call the justice system, so it's possible to have a formation of enclaves wherein the people in charge come to ridiculous conclusions based on terrible reasoning and it really becomes implemented. It's like every elementary school includes its own miniature justice system, and most of them do accept at least a minimum of correct principles, but any one of them is vulnerable to getting broken by bad philosophy. And when one of them gets broken in this way, they decide never to punish anything, because every time some little shit does something that isn't acceptable, it's because he didn't know the difference between right and wrong, and therefore didn't know that what he did was wrong.

Clearly, if this were done in every elementary school, it would mean we've given up on any attempts at teaching people the differences between right and wrong. There are some people who always have difficulties in discerning right and wrong no matter what corrections are attempted, but there are a lot more people who, under normal circumstances, do learn in their upbringing the differences between right and wrong, and they learn it because of corrections applied to them during their upbringing. To eschew all correction on account of not knowing the difference between right and wrong is to assume that everyone who needs the least bit of correction is as irreparable as a psychopath, and then to not attempt correction of that kind. Only if this kind of correction is attempted can we distinguish between people who learn just fine when corrected and people who don't. This requires, at some moments, deciding, "This child didn't know the difference between right and wrong when he did what he did, but let's see if he can be taught." To not attempt correction is to forsake all those people who might have benefitted from correction.
In the middling cases, the person less in need of correction tends to be the person with a good amount of self correction and the person more in need of correction tends to be the person deficient in self correction. Degrees of culpability is why legal systems have charges, trials, and sentencing. Charges are often made scattershot. The trial is when it's ruled which charges the guy has no culpability for and which charges the guy has nonzero culpability for. And sentencing is when the particular degree of culpability is the factor for deciding what in the range of possible sentences will be the sentence for this case. Pobrecito lorem the story of the sandal thief. He stole one sandal too many. He met his natural and undignified end, and I didn't have to do anything. The person who is resistant to correction is the person who just won't change his attitude even if it's been abundantly demonstrated that doing so would be a good idea. Ultimately, whether you're this kind of person or not is a matter of pure luck. Yet this is the person who deserves to be universally disliked. "You seem to have no idea how many times I've been congratulated on how tall I am." "Oh. Yeah, that makes sense, but.. I mean praised. When someone says 'nice hand' it sounds like they're saying there was something virtuous about it, like saying 'nice shot' in a sports game. 'Well played' I can understand. 'Nice hand' is just weird." "That's congruent with the kind of people you meet here. Weird and not always coherent." "I suppose that's why we go here to make money. If this was where geniuses congregated I'd be doing something else." If I really felt like being shot, having a gun pointed at me wouldn't be very coercive. If I really felt like dying and I really felt like calling someone a doodoohead, I would say something like, "No, I won't give you my wallet, you big doodoohead. You'll have to take it from my cold, dead- *blergh*." "No person does evil willingly. Whenever you see someone do something that looks like it's evil, it was done by someone who was mistaken about what evil is." "There are people who would be delighted to conquer the world with a butter knife." [][] The Constraints of our Real-World Environment (Compatibilism / The Constraints of our Real-World Environment) Does stochastic terrorism count as coercion? It takes almost no effort for one idiot to send untraceable death threats to 20 people who are saying things he doesn't like. In modern times, almost everyone who has achieved any amount of notability has received death threats. They're almost as common as YouTube comments expressing disapproval. On average, a death threat these days is rarely followed with death dealing. What ought one to do about it? Most people who have received death threats have carried on doing what they were doing and nothing came of it. On the other hand, you could hardly blame someone for ceasing some worthy activity on account of a low probability death threat. If this kind of activity is something that will be found compelling most of the time, then low-effort trolls will overpower all the people who are doing things that are worth doing. These days, if you're doing anything that's worth doing, you have to accept the probability that someone will try to undo you just by being annoying. There are forces that are trying to undermine solidarity in the world, and you can't count out the idea of them doing anything. 
They're taking all the angles they can get, and they're trying to turn everything that should be simple into things that are annoyingly not as simple any more. Decide every time that you will countermand their trousers.

They want smartphones to be as addictive as possible. They want your apps stealing your data. They want you voting against your own interests when they tell you that what's bad for you is good for you. They want you to lack the attention span necessary for reading books. They want you to think that buying things that won't make you happy will make you happy, or they want to squeeze all the money they can from you short of you starving to death. They want every division possible so that whenever you agree with someone about anything, someone else can butt in and say "yeah well what about trans people pissing in the wrong bathrooms?" They want even simple facts to not have the standing that simple facts should simply have. They want you more interested in distractions than relevant information. They want information to have no currency. They want all of your purchasing options to be things they can switch off "as a service". They want your neurotransmitters hijacked so that paying attention to real life isn't something that would even occur to you as attractive. They want you to think you'll have things you'll never have. They want to paint the other grass as always greener. They want you to take it as obligatory that you know the names and the faces of the movie actors and the musicians, and make sure you tell other people it's not acceptable when they don't. They want you having buyer's remorse, and they want you having it less than one month into the payment plan. They want collateral. They want class immobility. They want to assure you they care about you until the moment that stops being convenient to them. They want to be above the rules that apply to you. They want to define what virtue means to you, and they want it to mean whatever happens to be useful to them. How free is your will now? You'd need a really strong will to be free after all that. What do you really know about being uncoerced?

(This part to above) They want all your effort going into trivialities. They want you to love your local sports team and they want the other guy to love his local sports team. They want every kind of trivial bugbear that makes class consciousness something you're too distracted to think about. "The other city's sports team is the enemy! (I think he's too distracted to notice the guy picking his pocket)"

If Frederick Douglass had remained a slave all his life and worked the fields until old age, it would be reasonable to say he had been coerced to. Imagine seeing him as a young slave boy, born into slavery, in a land where that institution would remain for several decades, owned by a guy with a whip and a stable financial status, and imagine you saying, "that lad is uncoerced". It would have seemed insane. But it turned out he was uncoerced. I want to apologize if this sounds like a racially insensitive comparison, but I do think it is appropriate to say that most people in the modern world, facing the obligations and the coercive factors that impinge on their wills, would need to have a willpower like that of Frederick Douglass in order to know what it's like to be uncoerced. Modern civilization really is that insidious.

[Goes with punched in the head by corporate greed] It's not exactly a loosely coupled mechanism. The mechanism is just kind of.. plain and clear.
[][] Frankfurt Cases (Compatibilism / Frankfurt Cases)

Frankfurt-style cases do prove something like a mathematical truth. In doing that, they have to propose situations that are utterly unfeasible. But what they're proving is only one part of something that has moral relevance. So there's a morally relevant argument, and one of the premises of that can be proven one way or the other only with fantastical imagined scenarios. How much work does that do? Suppose we have a morally relevant argument and we stipulate that if any of the premises are to be proven one way or the other, those proofs must not summon any fantastical scenarios. Is that prohibition excessive? No. Therefore, Frankfurt cases prove nothing relevant.

For a shorthand let's say that the take-home is this: Locke's locked room example doesn't really prove what it was claimed to prove, but by modifying the thought experiment enough, it can really prove that, but these modifications have to be extremely unrealistic to satisfy that condition. Therefore Locke's locked room example has whatever ambiguities it came with, and nothing in the volumes of Frankfurt-example rhetoric has either improved or degraded that. I hated Frankfurt cases for one day.

In Locke's locked room example, you can either stay seated or try the door and find to your surprise that it's locked. There's a relevant difference between those two options. Hard to say it proved in any meaningful way what it was purported to. More contrived examples can prove what it was supposed to, but they're of no relevance because of the amount of contrivance necessary. If our technology were so advanced that you could do the kind of Frankfurt scenario that really does prove the thing, the ethics in that world would be so alien to us that there's very little point in talking about it, and it has no relevance to the ethics of any world that's anything like ours. Even when we do the part of philosophy where we plan for the future. Lorem is that wrong? Is it possible that you prove the thing about the premise and then it does matter to examples that are not contrived but common? Does that question even matter to soft determinism?

[][] Lorem (Compatibilism / Lorem)

We tend to have a pretheoretical notion that condemnation entails the possibility that a person could have done otherwise than he did in a given situation. This must be carefully dissolved. The source of the error is the limited resolution our natural senses have in imaging the insides of the heads of other people, so that all sorts of things can be imagined to be in there. Mine eyeballs, unaided, do not work terribly well when employed as a neuroimaging machine. This mistake is reinforced by common manners of speech. "You should have done otherwise" means "You could have done otherwise if you had a completely different brain at the time." It is a counterfactual. "Should" does not imply "could have under the actual circumstances," except when used mistakenly, which is most of the time. This mistake keeps getting propagated until we've all muddled up the difference between "could hypothetically, under different circumstances," and "could actually." Why don't the faculties of decision making help us much in clearing this up, considering that the way they indeed work is not according to the way we tend mistakenly to think they work? That's just to say: Why is free will an illusion despite it not existing? Or just: Why is free will an illusion? Or: Why do we have this illusion, free will?
One reason we have the illusion of free will is again the opacity. We see actions issuing from brains, often seemingly unbidden - it often appears as though the brain really can begin a causal chain. But it can't. Even the most seemingly inexplicable action is purely caused by previous factors - again, even in regard to one's own decisions. When I do something that no one, myself included, expected, whence comes this decision? Correct answer: from factors inaccessible to consciousness. Mistaken answer: from nowhere but myself. Mine eyeballs, unaided, can't even neuroimage the brain right behind them.

There are people who understand the fundamentals of determinism, thoroughly to the point of making no fundamental attribution errors, and invariably they're also the people who make sense when they talk about things like decisions and desert. You could get whole groups of such people in a room together, and never a misleading statement relating to the topic would be heard, in either their formal or informal discussions.

Hoomans really are excellent figurers out of things. I mean, some of them are. And this skill in figuring things out is based on being able to doubt almost anything. And this skill in doubting is based on being able to act as nearly as possible to being undetermined, despite indeed being determined. All of these abilities of ours are so perfected that they're asymptotically close to 100 percent. 100 percent would be breaking the laws of physics. But it's because we're so close to 100 percent in these respects that a very convincing illusion of indeterminism issues from our mode of being.

Why do we have this illusion, free will? Confabulation, reinforced by false confirmation, reinforced by colloquialism.

At best maybe I've figured out a few things about the issue of determinism. Even that estimation might be lofty. In the best scenario, maybe in x0,000 words I've mostly reiterated things already said - things that I've read and things I figured out that were already figured out - and topped that with a few things I figured out that hadn't already been figured out. Maybe.

Also, to speak in terms implying indeterminism is a convenient shorthand. The shorthand is strictly false. But we continue to use the shorthand, and then we muddle up the untrue formulation with truth. Consider a pair of statements: "(1) Oh, I should have X, because I could have figured out Y at the time, and then Z. (2) That's why I should have X instead of W." Statement 1 has the unmentioned premise that you're talking about a counterfactual. Statement 2 is shorthand for "If I had a better brain at the time, which was capable of deciding better, I would have X instead of W." The statement is meaningful for two reasons: (1) it is a mention of what a better decision maker than you would have done, and (2) it is a lesson for how to improve your own brain so that next time it's faced with a similarly difficult decision it can decide better.

In most legal systems, contra-causal free will is taken as axiomatic. It's a working model, strictly false, but in many such legal systems, it does a fine job of modeling what to do and when to do it. And you don't need to consider Einsteinian mechanics when you design the steering system of a car. In some legal systems, determinism has been taken as an opportunity for government grift. In some social systems, determinism has been a failure point that's led to much bad decision making.
"Isn't it strange, that all those times you initiated a causal chain using your brain, what you chose did adhere to the law of gravity? Yeah, it adhered to that law, but broke other ones?" The illusion of free will is a matter of mistaking the surface of a thing for its core. There's the surface appearance of initiating causal chains. That's never what's actually happening, but it looks that way. If our brains had some emergent property of being able to stack the decks of quantum random processes, according to what criteria would the brain go about doing this? Suppose it were some mechanistic process. So by some mechanistic process, we stack the decks of quantum random processes, and turn some of the dice rolls from purely random to something mechanistic. In that case, it changes nothing about the system we have of individual quantum random processes and aggregate effects that are essentially deterministic. Supposing this were possible, and something we're doing with our brains somehow, it might change the balance of things like "How many atoms are required in order for predictability to be accurate to one part per trillion error?" But nothing about this introduces contra-causal free will. The idea of contra-causal free will is still unintelligible. Well, what if it were not a mechanistic process by which we stack the decks of quantum random processes? What if our effect that way were.. purely random! Then the effect would be nothing. A deck twice shuffled is exactly as random as a deck once shuffled. And again, contra-causal free will has no plausible explanation. It remains to be explained how quantum indeterminacy could possibly introduce contra-causal free will. And the idea of contra-causal free will remains something that can not be adequately explained in any way. There's only silence on the topic of "Contra-causal free will exists, and here's how it works." I would like to hear something that starts that way and continues in any way that's satisfactory. Does "you should have done better" mean nothing more than "in a different universe, different things happen?" Well, it does mean a little more than that. It means "in a different universe that is so nearly identical to ours that only one cubic inch is different, quite different things would have issued from your decision making". Lorem that doesn't go as far as to mark out what makes the difference in culpability. The forces that hijacked our dopamine systems didn't necessarily restrict our freedoms (well, not directly, and not so much a few years ago, but that aside), but to a determinist, how big's the difference between coercion and addiction? It's the same with many of the ills of society that involve incentives. That's not to say that e.g. a person who gets addicted to scrolling vapid Internet content is purely a victim in the same way that an actually enslaved person is a victim - it's not to say that those two people are equally free of blame for their situations. Incentives have effects on average or in aggregate. A particularly bad reading of free will would say that there's no effective difference between a system with a good set of incentives and one with a bad set of incentives, because everyone is free to choose what they do no matter what the incentives are, and everyone ought to decide on the best action whether the incentives for it are big or small. Whole lot of good it does to say that. Then what's to be said about all the people who make bad choices when there's incentive to? 
They would all make better choices and be better off if they were all more enlightened? That still doesn't cover the main factors. Information and incentives are two different conditions that both factor into making a decision. So the bad stance of saying that incentives aren't a factor and only information is, that's an incomplete solution, and it's a thing you tend to hear from people who like indeterminism, which is an incomplete concept. When there's some activity that has luck-dependence and a reward system that resembles a slot machine, this tends to be compelling because of a certain hard-wired matter of neurotransmitters. That's a matter of incentive that's quite aside from information. It is good to have the information about what incentives are at work, but one can certainly engage in an activity one knows is addictive, know why it's addictive, and still be unwilling to pry himself from it and do something more enriching, purely because it's addictive. That describes when there's incentive and there's information, and they're pulling in opposite directions, and incentive is overpowering information. That's why matters of improving the world include matters of improving information and matters of improving incentives. Information and incentives are two different factors. Information includes information about incentives and information about other things. Incentives include incentives about information and incentives about other things.

We cannot will what we will? It's an interesting statement. With the right qualifications, it's a true statement. But in investigating what those qualifications are, and how otherwise the original statement may be taken, there's an equally interesting task of unpacking concepts. I could easily say "strongly disagree" to the idea "We cannot will what we will."

There are types of freedom other than the legal and uncaused-cause types that matter. Freedom from legal constraints, freedom from metaphysical constraints, freedom from coercion, et cetera. Compatibilism is a mired term. You can be one kind of compatibilist and not another. Soft compatibilism, hard compatibilism, soft incompatibilism, hard incompatibilism, further formulations and designations.

You can want something. You can want to want something. You can want to want to want something. You can want to want to want to want something. You can want to want to want to want to want something. But you can't want to want to want to want to want to want something. The magic cutoff number is 5. You can try to do some task that recruits a great deal of willpower. You can try to try. You can try to try to try. You can try to try to try to try. You can try to try to try to try to try. But you can't try to try to try to try to try to try. The magic cutoff number is 5.

I can act as close as possible to uncaused. I can act as close as possible to doing things for good reasons. Still I often fail at both of those and do things for terrible reasons. Failures of willpower often result in me acting not as close as possible to uncaused, not as close as possible to doing things for good reasons, but rather acting just like a process that's addicted to stupid drugs and does stupid things to orient toward obtaining and using those drugs.

Grades based on improvement, grades based on absolutes, and the right and wrong balances between them. There is a task which is to indicate who is doing the best, even if the people who are doing the best are having an easy time doing that.
There is another task which is to indicate who is improving the most, typically because they've been figuring out the mechanisms of how to do things like habit formation, second-order volitions, and such. This sometimes goes wrong when priorities are confused. There are stories of a course being run in which there is a defined curriculum, and some of the students understand the whole curriculum at the start, and then they're graded the lowest because they didn't improve, even though they could score perfectly on the curriculum at the end, just because they also could at the start. And then even when the people running the course have an appropriate balance of the two motives, the students also know the program, so they throw the assessments at the start of the semester just so they can show improvement even if they haven't learned anything (cheesing the game). It's a bit of an arms race. But the mix of the two initiatives is in the right place. They just have to be careful to implement anti-gaming safeguards.

What's the difference between someone who's gone senile and someone who consistently says stupid things with acuity? Why is one a demerit and not the other? Someone who regularly says stupid things with acuity: if he has a habit of always saying stupid things with acuity, then he also has the acuity to know that what he's saying is stupid, so we hold him accountable for what he's doing, and we say, "put that acuity to better use and stop using it to say stupid things." But what if he's deluded himself into thinking he's saying things that aren't stupid when he's saying things that are indeed stupid? Then we hold him accountable for what he's doing, and we say, "put that acuity to better use and stop using it to delude yourself". We do that because we hold people accountable when they have ability and they put it to bad use. Here, the words 'could' and 'responsible' are not meant in the etiogenic sense, but they refer to what things we will say to him to try to influence his choices.

But what if his only problem is that he's utterly unresponsive to reasons? He has most of the trappings of intelligence, and his only major problem is that he uses his intelligence to delude himself, and the delusion he chooses for himself is that when he says stupid things he thinks he's saying smart things. Do we really have grounds for holding him responsible? People have tried everything they can say to the guy to try to convince him that being responsive to reasons is a pretty good idea, and every time he shows every sign of intelligence other than being willing to adopt an attitude of being responsive to reasons. People have tried everything they can say to the guy in terms of asking him if he knows what a self-delusion is, and he says he does, and they ask him to consider the facts that suggest that he's been deluding himself, and every time he shows every sign of intelligence other than being willing to do such an investigation of his motives. People have tried everything they can say to the guy to explain why the ideas he talks about all the time are stupid, and every time he shows every sign of intelligence other than making a realistic assessment that his stupid ideas are stupid. What do you do with this guy? He's gradually been abandoned by everyone who used to care about him. Every time, before any one of those people abandoned him, they told him why, and what he's been doing to make that happen. But he remained unresponsive to reasons every time. And now nobody talks to him.
There used to be many people who talked to him. Between now and when he adopted his program of self-delusion - of saying stupid things and thinking they're smart - every time someone decided they're not talking to him any more, they told him why, but that had no effect. But what else can you do? He's not responsive to reasons. But no one likes being around him - in fact every time someone's been around him in recent years, they quickly grow to hate being around him. So they choose to stop being around him. And now, one by one, he's been told dozens of times why - dozens of times, a person has said why they're going to stop communicating with him - because he's saying stupid things all the time and acting like they're smart things, and because he's taken that on as his delusion of choice - and then has stopped communicating with him. Now no one is willing to be near him, even though many people were willing to be near him before he started doing these things with his mind, and every time they gave him the reason why, and he's heard it that many times now, to no effect. And now he keeps saying stupid things to no one, always thinking they're smart things, always because he's decided to keep precious that self-delusion of his. This seems like the natural outcome of a person's choice to operate his mind in a certain way - to end up with no one willing to be around him.

If someone has lost their capacity for intelligibility due to dementia, as a symptom of Alzheimer's disease or whatever such thing, the particularly kind among us will not decide to break off communications with them. We assess that this is a good person who made good decisions when he had the normal amount of ability to do that, and who is now incapacitated due to a disease that was not in his control, and now a much more appropriate reaction than to shun the person is to show compassion, and to continue to be willing to be around him, even if that's less entertaining than it was before he had this affliction. To nurse, rather than to shun.

So, why do we shun the guy who has chosen to dedicate all his efforts to self-delusion, who otherwise shows all the trappings of intelligence? Well, normally, when we shun a person, it's for two reasons: (1) we don't like being around them, and (2) they're responsive to reasons, and they have the capacity to change what they're doing in response to something like everyone shunning them. But our guy from the self-delusion example is a guy who shows all the trappings of intelligence other than being responsive to reasons. And still, and this is fully realistic, the outcome for a person like that in the real world is that he gets shunned. Why? Because we still ascribe agency to him. Maybe if he loses everyone who ever cared about him, he will be able to apply the agency that he still clearly has, and decide to become responsive to reasons. If he does that, he will have to discard a certain delusion that's quite comfortable in many ways, and do things on a day to day basis that are often not as comfortable as that delusion. Maybe some time soon he will realize that this can be the better outcome, when he's left with no one to talk to but himself. Still he doesn't. What this story shows is that agency is not the same as being responsive to reasons. Our guy does have agency, and his having agency is the reason why we shun him rather than nursing him. But our guy does not have responsiveness to reasons.
That's why he keeps doing what will make nothing coming out of his mouth be things worth listening to, and why he keeps doing what will make all the other entities in the world that do have agency avoid him. So agency is not the same as responsiveness to reasons. But there is some fundamental relation between the two things. Agency is a capacity to be responsive to reasons, even if the agency is used to resist that capacity of it. Suppose right now I were to pause time and replace all the nicotine in all the smoke shops and all the vape shops with another substance that is one tenth as addictive. Let's call this substance smickotine. And then I resume the world. Suddenly, every cigarette and every nicotine vape still tastes the same, but they're one tenth as addictive, because they all have smickotine instead of nicotine. What would the result be? Within a couple of weeks, sales of cigarettes and vapes would drop off compared to the world wherein they still had nicotine. At this very moment, there are thousands of people who have the idea that they "want to quit". "I want to quit this stuff. It costs money. It doesn't really do much for me in terms of any kind of sensation. But it's so addictive. Whenever I run out of the stuff, I can't resist the temptation to go to the store and get more of it." If suddenly the stuff were way less addictive, many of these fence-sitters would finally be able to resist the temptation to go to the store and re-up. But wait! In the nicotine world and in the smickotine world both, the decision to go to the store or not is a free choice. No one's forcing a person to buy his next pack of cigarettes. No one in either of these scenarios made it illegal to buy your next pack of vape pods. In both worlds, each person's choice either to go to the store or to refrain was a free choice, and yet in the smickotine world there are fewer of them who decide to go to the store and more of them who decide to refrain. So influence does have some bearing on decision-making, even if it's not to make something illegal, or taboo, or even to communicate anything noticeable. Then what determines the decision? The decision is determined by some combination of information, influence, and a person's decision-making process. I don't hold you morally accountable for not being able to leap from New York to Kenya in a single bound. But I do hold you accountable for not having a different brain one minute ago? Pobrecito "poor thing" or "poor bastard" - a combination of pity and laziness. Lorem. Heroin: I don't recommend it, but I might be the wrong person to ask, because I've never tried it, or maybe I'm the exact right person to ask because I've never tried it. Is the purpose of praise to condition a person to do more nice things for you? To condition that person to continue doing nice things for people in general, if that's what you want to call it. When someone does something nice, they expect to get thanks for it. At that point it's kind of obligatory in the sense that we have a convention of expecting it at that point. If I don't, then something will be off, and that person might not want to do something nice for someone later, and if I do, then that's a kind of confirmation, and that person will want to continue doing nice things for people. So, to a pretty close approximation, the purpose of saying "thanks" is to condition a person to keep doing nice things for people. Zhuangzi empty boat story. 
A guy is piloting a boat, and comes across another boat some short distance away which is violating the traffic law. Our pilot shouts disapproving words in anger. As the boats come closer, he sees that there's no one in the other boat. Then there are no more words in anger.

There's a possible world in which I did raise my hand, but in no sense of the word 'could' is this world related to it. When doing soft determinism it's easy to loosen up your ideas and wording to the point of just expressing contradictions. Bonus points if you've stopped yourself and said, "But wait.. in what sense of the word 'could' did I mean that? Only the sense of the word 'could' that says someone 'could' conjure up an alternate universe and swap it in for this universe. Is all I've been doing a load of shit? Maybe the hard determinists were right. Maybe I'm supposed to be one of them.."

If I drive drunk and hurt someone with my car and then a family member of theirs confronts me, I'm going to accept that I deserve to get beaten up. If he says he's going to and I say I accept and then he changes his mind, I'm probably going to feel worse than if he had gone through with it. The first-person experience of deserving blame is to accept conditioning. To whatever extent the deserved beatings don't happen, we can wonder if our societal mechanisms for conditioning are not in good repair, and whether people aren't adequately getting appropriate learning.

A hard determinist could look at a book by a soft determinist and say, "Every argument in this book that uses the word 'could' is using it in the sense that imagines people can conjure up an alternate universe and swap it in for this universe, which is not an ability anyone has, so all those arguments are built on faulty premises. And that's about two thirds of the arguments in this book." Yes, every time I'm using the word 'could' I'm referring to an ability of people to conjure up an alternate universe and swap it in for this one, and yes, I'm aware that people don't in fact have that ability. However, I assert that those facts don't present a problem or indicate a flaw in my arguments.

Anything that talks soft determinist style or indeterminist style is like the four humors. Useful to an extent, even if the reference is bogus, until the language develops more. Do you think that distinction about possible worlds ("Are We Free to Break the Laws?" by David Lewis) rescues soft determinists from accusations that they're being a bit cavalier about their use of the word 'could'?

Lorem: when I went in with the hand I should have folded and made the two-outer, I felt worse than the guy who got beat. The thing with the board cards shouldn't have happened (to my foe), and my decision to go in with what I highly suspected was the worst hand is something I shouldn't have done to myself (in terms of EV). Any good poker player knows that once you see yourself doing things like that you should go home and stop trusting yourself for a while.

"You blame people for not being able to break the laws of physics?"
"Yeah, sometimes. And sometimes I praise people for succumbing to the laws of physics."
"And you think you've got things worked out?"
"Yeah."
"So you think you've got things worked out, and you also blame people for not being able to break the laws of physics sometimes, and you praise people for succumbing to the laws of physics sometimes, and you don't get the sense that maybe there's some repair to be done in the attitudes and concepts there?"
"That's right.
If you're trying to home in on what bullet I might have to bite, and if that counts as a bullet for the biting, then I bite that bullet gladly. Ain't no other system that works better, and it makes plenty of sense when you do the reasoning." "The criminal is determined to commit a crime and the justice system is determined to punish." Lorem. This is sort of to say: if they're low, we can be low. But it doesn't explain a criminal justice system that works even half decently. This doesn't mark out the difference between a real criminal justice system and a vigilante system. I know that a person who is smart in the right ways won't be programmed to be a jerk, because of how stupid that would be. I know he will be programmed to be a nice person, because of how much smarter that is than being a jerk. When I see that a person who is not smart in that way is programmed to be a jerk, of course I pity him because of how much that indicates bad results. If I have some stake (ability to prosecute) in some transgression of his, should I then trust that his pitiable condition will end up being more than enough punishment, and not bother? No, because if everyone did that it wouldn't be. But if I have some stake (harm done) that I can't do anything about afterward, I'll have to settle for that. A guy who had access to some of my money once tried to scam me and keep the money when he was supposed to give it back, but he pulled this attempt so stupidly that I was able to charge him through the legal system for more than the amount he tried to steal. I told this to a friend who asked me if I was mad at the guy. I said no, if he tried to do that to me once a week, and just as stupidly each time, I would be able to make an easy living just by litigating him. And before he tried it, I thought he would try it. And I even have a video recording from before it happened where I'm talking to a video camera saying I think he might try it and what would happen if he does. When he really did try it, that was like winning a scratch ticket. In this case I didn't even feel a disutility at the time of the transgression, because I could see the crime and the punishment like it was a block universe. When someone commits an injustice, but he's too clever to get caught, we're disappointed, because fate didn't give him what he should have had coming. When someone is both an asshole and an idiot, and on account of those he has a bad time, we rejoice. When someone is kindhearted but unintelligent, and has a bad time on account of that, we're pitying, because he doesn't deserve his curse. Why are our reactions driven so differently based on whether it's intelligence or attitude? In the case of the two assholes, we rejoice when the guy is given an incentive to consider an attitude adjustment, and we regret when he is not. In the case of the kindhearted blunderer, he doesn't have the option to take bad outcomes as incentives to fix anything he can fix. So we see the fates giving him unjust punishments, and that's when we call the fates cruel. If there were a pill that evil people could eat that makes them no longer evil, would someone be more to blame if they didn't eat it and chose to do evil? I don't know how much you meant that as a hypothetical question. For most evil people there is already a pill they can take that will make them no longer evil. 
It's that pill that for some people is a "hard-to-swallow pill": the one that says that if you stop being a jerk, you will have a clear conscience and you'll enjoy that more than anything you can get by taking the other option and being a jerk. Most evil people are aware of that truth. When they do evil, it's only an akrasia. It's the kind of evil that's "blameworthy" in at least the sense that we should be vigilant about disincentivizing it, because if we disincentivize it enough, then the agency of these people will respond to reason.

Even if you believe in hardest determinism, reading the soft determinist literature yields lots of insights of the ethnographic kind. It's about parts of your own brain, even if they're the parts you want to shut up, even if you say this book has nothing to do with determinism. If you're really a hard determinist and you stick to your guns, it's easy to become someone who pushes apathy as an appropriate attitude to take toward things that are best handled with attitudes other than apathy.

Compatibilism: the distinction between free and unfree actions marks out much of the difference between what excites our different reactions, but it's not complete. The distinction between first-order desires and second-order desires marks out much of the remaining trouble area, but it's still not complete. Aristotle said that coercion is an edge case, and ever since then it hasn't been cleared up what exactly counts as one's own agency and what counts as compulsion. In modern discourse, coercion is seen as the clearest example of compulsion, but even that seems like a contrivance if Aristotle knew what he was talking about.

Soft determinism has the heap of beans problem, but that doesn't mean it's wrong. "Continuum fallacy" is the term for the mistake of thinking it's no good on that account. If you don't like being called soft, would you, like most people, say that there are differing amounts of culpability? If you think murdering comes with more culpability than sneezing, in whatever sense that might be meant by culpability, then you're a soft determinist (sorry if you don't like being called that).

We need to have a term to refer to the nearby possible world in which someone had a different attitude when making some decision. Using the word 'could' for this quickly leads to confusion for all participants in a discussion. I propose we use the word could-asterisk, written 'could*'. He could not have made that decision like someone with perfect information at the time, because he only had a small amount of the relevant information at the time. He could* have refrained from kicking that homeless person, because that's what someone would have done in that situation if he wasn't a jerk.

(goes well near the end of a section like this) If you think my compatibilism is bullshit, try acting like you reject it: the result won't be good. Are you gonna commit some crimes and assume you can reason your way out of culpability? Are you gonna assume you have perfectly effective willpower at all times for as long as you continue to live?

[] Comprehensibility, Emergence, Etiology

[][] Agency Emerges (Comprehensibility, Emergence, Etiology / Agency Emerges)

Not every local entropy reverser is an agency, but every agency is a local entropy reverser. Consciousness and the illusion of free will are both emergent. Consciousness is real, not an illusion. Free will is an illusion, not real. In order to persist in the way that a cat does, you have to do a great deal of not persisting in the way that a rock does.
Substance or process? You are the laws of physics in action much more than you're a lump of specific matter.

[][] Etiology (Comprehensibility, Emergence, Etiology / Etiology)

Understanding cause and effect in the less than global sense is an incredibly complicated and fascinating topic, but I've given a sufficient account here of all of it that matters to free will and determinism. It is notoriously difficult to give a satisfactory account of what cause and effect mean in general (it has possibly never yet been done satisfactorily) or even to say much of anything about it definitively. Here I've limited myself to saying a few of those definitive things that can be said about it. Happily, those are all that matter to the subject of this book.

Meaningful statements include "the teacup broke because the cat decided to push it off the table" and "the recession in the economy caused an increase in unemployment." Because of butterfly effects, the conjunction quickly becomes "everything on Earth plus several things in space caused me to choose Indian food for lunch."

In a global sense everything is constrained fully and at all times. In a less than global sense, not. There's a lot of freedom in being completely constrained sometimes. Cause and effect on the less than global scale is more like a technology than like a law of physics. Cause and effect on a global scale is more like a law of physics than like a technology - really it's more like a group that includes all the laws of physics. Before there was intelligent life, there was only global cause and effect. Just everything at one time caused everything at the next time. Local cause and effect - counterfactual thinking - is just a thought technology, it's just a way of thinking about predictions and explanations.

lorem - when I say "global" cause and effect, I mean including things that can be quite local, and when I say "local" cause and effect, I mean including things that can be quite global. Use terminology "global" cause and effect and "local" cause and effect in scare quotes.

[][] Free Will Means Fixed Will (Comprehensibility, Emergence, Etiology / Free Will Means Fixed Will)

[][][] Seeing Double (Comprehensibility, Emergence, Etiology / Free Will Means Fixed Will / Seeing Double)

Making a decision is figuring out what the output of your programming is. Making a good decision is figuring out that your programming is good. Only one of the "possible worlds" is really possible. The idea that's standardly referred to by philosophers as "possible worlds" really means non-self-contradictory worlds, or self-consistent worlds. They're worlds that can be imagined in any amount of detail without running into any contradictions other than that they're not our actual world. To reason about a causal choice is to identify non-self-contradictory worlds that are consistent with the past plus the set that includes what's in your brain and what might be in your head if another brain were swapped in. This is perceived as competing counterfactuals. At the time of that perception you don't know which of the set of hypothetical brains is your actual brain. The illusion of choice is the illusion that you're picking which of the brains will be yours. The reality is that you're just finding out which one it is. If it's a decision that requires deliberation to pick the good option, and you pick the good option, then the brain you had all along was one of the ones that deliberated, one of the brains that did the activity of imagining all those brains.
If you knew at the outset that it was a decision that would take deliberation to pick the good option, then you knew at the outset that the brain you had all along was one of the brains that would imagine all the brains and have the illusion of picking one of the brains that imagines all the brains (not one in the set that doesn't imagine all the brains). Or, if you desire things that require mental effort, and you understand how things work, then you will be committed to mental effort. Lethargism is thus refuted. That's why you know you don't want to sit in a bucket.

When I said "imagine all the brains", of course I meant that one imagines a plurality of possible timelines and imagines that somehow one and the same brain exists at the point that the past timeline splits into those possible future timelines.

In each of the non-self-contradictory worlds there is a single causal network at the global scale. When making a decision you consider which of the possible worlds is consistent with the past plus the set of what's in your brain and what might be in your head if you got any other brain swapped in. Then you get the illusion of choosing one. What really happens is you find out which of those possible worlds with its causal network was the actual one all along. Either the previous state is one that brings about the decision to turn the switch, turning the switch, completing the circuit, and the light turning on, or the previous state is one that brings about the decision not to turn the switch, not turning the switch, not completing the circuit, and the light not turning on. When I'm deliberating about the decision, I don't know which of those timelines I'm on. I'm on one of them, with one past and one future. When I finish deciding, and I either turn the light switch or I don't, only then do I know which timeline it is, which is the one it was all along.

The illusion of free will comes from not knowing until it happens. And a knowledge engine cannot know everything about itself, so some heuristic must be used. The illusion of free will is the heuristic. You don't have perfect knowledge of your brain and your surroundings, but you do have some decision-making process, so every time you make a hard decision, it will be a decision under imperfect information. You'll need some kind of heuristic. How about.. you imagine two scenarios are actually possible, you populate those with as much information as you can, and then you see which one pulls you more.

"I've been living with moving pictures my whole life. Now you come along and you're saying they're illusions and trying to explain how they work. It seems to me really contrived, and it seems to me a lot more straightforward and probably true to say that there really are tiny people inside my TV."

Why would imagining something that has no chance of happening be part of making a decision to do some other thing?

The number 0.999... is exactly equal to 1. That's a fact that's simple but difficult to understand. You're gonna die some day. That's a fact that's simple but difficult to understand.

[][][] Why Don't I Just Sit in a Bucket? (Comprehensibility, Emergence, Etiology / Free Will Means Fixed Will / Why Don't I Just Sit in a Bucket?)

Whenever I set my computer to doing a big calculation, for example rendering a video, I never find that I press the 'go' button for rendering a one-hour video and then the computer is done half a second later.
Similarly, if I think that some good things require mental effort, I can conclude that refusing to put any thought into decisions will not always result in good decisions. Lethargism is thus refuted. I'm programmed to want things and I'm programmed to do what it takes to get them. If you're hungry, you're willing to walk across the room to get food. That's the simplest case against lethargism. There are things you want, and you're willing to do what it takes to get those things. "Fine for you to say, but I seem never to have more ambition than the amount it takes to get food from the other side of a room." [is rewrite]

"Lol, don't you know that 95 percent of people who lose weight gain it back?" "Yeah. I've heard that. Hang on.. The way you said that. What I heard you say is that 95 percent of people who lose weight gain it back. But the way you said it, it was as if you just said that it's been proven with 95 percent statistical significance that everyone who loses weight gains it back. You're aware there's a difference between those two statements, right?" "..Yeah." "Okay, well I've scored 95th percentile or better in a lot of endeavors. So much for any idea that it was futile to try this."

[][][] Deliberation and Satisficing (Comprehensibility, Emergence, Etiology / Free Will Means Fixed Will / Deliberation and Satisficing)

When you decide to put on shoes before going out, you don't really imagine two scenarios, one in which you put on shoes and one in which you go for a walk barefoot (*). But putting on shoes is a decision just as a harder decision is a decision. If a brain could know everything about itself and its environment, then maybe even the hardest decisions would seem as mechanical as deciding to put on shoes before going out. Only when making a hard decision do you summon the illusion of multiple actual possibilities and the illusion of picking one. *: unless you're one of those barefoot walking people. I've heard it's a good idea.

Sometimes the best way to make the hardest kind of decision is by feel. However, if you have an important decision to make and you don't gather any information, and then make a decision based on feel, that's a bad process. When you have to make the hard kind of decision based on feel, you do that after you gather information and weigh it. This is not libertarian free will. It's deterministic. Even when making the decision based on feel is the best course of action, that is only to say that it comes from some part of your mind that's not explicitly available to your own scrutiny. Sometimes when you summon up the illusion of free will, it's because you have a decision that should be easy, but you're finding it hard because you need a second-order desire to overpower a first-order desire.

There are a number of things you can think about, and one of those things you can think about is thinking. So you can have thinking about thinking, and that's no trivial curiosity - thinking about thinking is a big part of the basis of how we're more intelligent than other animals. And now that I've mentioned thinking about thinking, we've just as soon started thinking about thinking about thinking. And now that I've mentioned that, et cetera. Thinking about thinking about thinking about thinking, and so on. Because of the importance of thinking about thinking about thinking, naturally, issues of infinite regresses and how to deal with them will be a necessary part of the treatment of the subject.
Ways of dealing with infinite regresses, or rather, ways of successfully handling an objection, "But that's an infinite regress."
1: Converging sum.
2: Self-referential item.
3: Show that it's a finite recursion.
4: Don't know how to resolve it, but no good reason to think it can't be, and all that matters about it is some other matter.
5: Show that the claim is wrong and there's no recursion involved.

[][][] What Else Would It Have Been? (Comprehensibility, Emergence, Etiology / Free Will Means Fixed Will / What Else Would It Have Been?)

Tralfamadorians are not feasible. The movie Arrival, and before it Kurt Vonnegut's novels with their Tralfamadorians, asks us to imagine a race of aliens who perceive all of time - the past, the present, the future, and everything that happens, has happened, and will happen - as readily as we perceive only the present moment and the space around us. Could that work? If it could work, how could it work? We hoomans have the limitation of living in three dimensions of space plus one spot in a time dimension, in an ever-progressing present moment. Well, what if a god outside of this world were constantly feeding one of us with information about the future? Whoever is getting that information might have perfect knowledge about the future, but there's not much to say about that possibility, because there's nothing to hypothesize about the doings of beings that are outer to our reality. So what's the closest thing we can imagine to a Tralfamadorian from our "stuck in time" present perspective, assuming we can't invoke alternate modes of physics?

Suppose I'm programming an AI, and I want it to be as insightful as a Tralfamadorian, and suppose that if I can't do that then I want to make it as close as possible. First, no knowledge engine can know everything about everything in the world. Further, no knowledge engine can know everything about itself. That's a problem. There's no way to make a finitely big computer that can know as much as a Tralfamadorian knows unless and until we figure out how to violate some of what we presently take to be the laws of physics.

So, how close can I get if I'm programming an AI? The AI has to be some machine that takes inputs, does some processing on them, resulting in a decision, and then gives some outputs. No matter what, the inputs will be imperfect information. No matter what, the decision will have to be a decision based on imperfect information. It turns out that my solution might be to design an AI that has something like the illusion of free will. When it's processing, it's working on imperfect information about the past, the present, and the future. When it's processing, it doesn't even know what its own state will be when it's done making the decision. The way it attempts to make a best decision with its resources might be to imagine two or more hypothetical futures, all of which are plausible, to populate those with whatever information it has that might be relevant to the decision, to model how the different decisions in the different hypothetical worlds result in different outcomes in those worlds, then to see which of those worlds has the most desirable result. The decision would be based on some assumptions like, "Supposing I modeled enough parts of these worlds to model the decision and the result, and supposing I predicted well enough the different sequences of events in the different worlds," and so on. And those are exactly the assumptions a hooman has when he makes a decision based on hypothetical thinking.
And those are exactly the assumptions to test when a hooman refines a decision, when checking if the decision really is the best one we can guess at. Therefore, it seems plausible that any sufficiently advanced intelligence would have either the illusion of free will, or would have a system of processes that does all the same things as an intelligence that has the illusion of free will.

lorem - I say that we have the illusion of free will, and that it's an illusion. You say maybe not. I say what else would it have been?

GAI: whether we would have to give it the illusion of free will or not, we would need to give it the kinds of processing that go with the illusion of free will. Not so much with some kinds of narrow AI, but some narrow AI already do process that way.

[][] Lorem (Comprehensibility, Emergence, Etiology / Lorem)

The two boxes thing is called Newcomb's paradox or Newcomb's problem. Everyone thinks the answer is obvious, intuitive, and simply true - the problem is they're split 50-50 on whether that right answer is one box or two. If you think both boxes is the right answer, this is simply magical thinking, and it betrays that either you don't understand what a decision is, or you don't accept what it is, or you think a decision is something other than the deterministic kind. If you think both boxes is the right answer, then there's no way to square your idea of what a decision is with reality. If your idea of what a decision is is consistent, and in a way that can be squared with reality, you will know that one box is the right answer.

When I grabbed the gold, I saw only the gold. lorem - Liezi

Newcomb and rock paper scissors. When I was in middle school, there was one teacher, Mr. Albert, who was an expert in rock paper scissors. The first time I ever met the guy was to play rock paper scissors with him. I wasn't in any classes taught by Mr. Albert, but one day a friend of mine said that I must meet him and see how many points I could score in rock paper scissors against him. So I met the guy, and I said I heard he was a rock paper scissors expert, and I asked him if we could have a match. He said that the rule was we play first to ten points. I will grant that the first round may have been essentially 50-50. After that, he could anticipate my every decision. This doesn't seem like it should be possible. If on one round I throw rock and he throws paper, I might think something like "It must seem to him least likely that I would throw rock again after just losing a round with rock, so I think he will throw scissors on the next round, so I'll throw rock again," or I might think [whatever line of reasoning would lead me to throw scissors on the next round], or I might think [whatever line of reasoning would lead me to throw paper on the next round]. The reasoning is arbitrary, right? You can use any line of reasoning to decide your next throw, and any of those is just as legitimate as any other line of reasoning that would result in you deciding to throw any of the three options on any round, right? And it can't really be predicted which of those three arbitrary lines of reasoning you'll pick next, right? Mr. Albert demonstrated that it's not that simple. "Okay, last time I threw paper and he threw scissors, on the next round I threw scissors, so this time when I threw paper and he threw scissors, on the next round I will throw rock." That's a more advanced line of reasoning. Anticipated. Albert throws paper, scores one more point.
He could tell by looking when you would do any of these simplest-level lines of reasoning, and which one. He could tell by looking when you would do any of these second-level lines of reasoning, and which one. And every time he would know whether you're going to throw rock, throw paper, or throw scissors. "Okay, if this guy can tell by looking what my line of reasoning is, and more quickly than I can get to the conclusion of my own line of reasoning, if I just throw a random choice every time, at least I can get my win rate up to 50% of rounds, right?" Right? Won't do. Albert can also tell when you're thinking that, plus "Okay, random move.. scissors", or "random move.. paper". Legend had it that he had never lost a first-to-10 with any student. It seems like it shouldn't be possible that any hooman could be that good at rock, paper, scissors. What you have to understand about the game dealer in Newcomb's problem is that he can anticipate what your reasoning is going to be, and what your decision is going to be, even better than if Albert were the game dealer.

In reductionism, you can always check at a lower level of abstraction that things are going as expected. If I say that 4 to the power of 3 is 64, you can check that by typing 4^3 into a calculator, and your calculator will say 64. If you really want to check that reality is still working as expected, you can reduce this exponentiation expression into a multiplication expression: 4 x 4 x 4. And you can calculate that 4 times 4 is 16 and 16 times 4 is 64. And if you want to do a more thorough check that reality is still working as expected, you can reduce this multiplication expression into an addition expression that will expand to 4 + 4 + 4 + 4 + ... + 4, with 16 instances of 4 and plus signs between them. And if you want to do an even more thorough check, you can reduce this addition expression into a series of incrementations by one that will expand to successor of (successor of (successor of (successor of ... (1)))), with a 1 in the deepest layer of parentheses and 63 instances of "successor of" before it. And all of those checks will resolve into 64, whether expressed as an exponentiation, expressed as a multiplication, expressed as an addition, or expressed as an incrementation. (A minimal sketch of this reduction in code appears a little further down.) If you find someone who regularly does exponentiation expressions in his line of work, and you require him to reduce every exponentiation expression into an incrementation expression, he would still be able to complete any of his tasks, and the result would be the same. It would only take longer. To use exponentiation is to use an abstraction that saves time, but does not change the result of any given calculation.

All forms of emergence, except perhaps "strong emergence", admit of the same property. Emergence implies "unexpected" in a certain sense of the word, but not indeterministic. A cellular automaton may produce emergent effects that are more unexpected than an exponentiation expression, but both the cellular automaton and the exponentiation expression are fully reducible. What we don't find is a "difference in type". Lorem a word on what it means for us to be searching for a "difference in type". Emergent entities of protons, neutrons, and electrons include: oxygen atoms, agency, generosity, angst, ammonium ions, and ethanol molecules.

In a world with face-up predictability, you could read tomorrow's newspaper today and it would say what the winning lottery numbers are. If face-down, it could also say how many people won.
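Back to the 4^3 example for a moment: here's the minimal sketch promised above, assuming nothing beyond the reduction already described (the function names are mine, made up for illustration). Each level of abstraction is defined only in terms of the level below it, and every level resolves 4^3 to the same 64.

```python
# A minimal sketch of the reduction described above: exponentiation defined as
# repeated multiplication, multiplication as repeated addition, and addition as
# repeated incrementation. Function names are made up for illustration.

def successor(n: int) -> int:
    return n + 1

def add(a: int, b: int) -> int:
    # Addition as b successive incrementations of a.
    result = a
    for _ in range(b):
        result = successor(result)
    return result

def multiply(a: int, b: int) -> int:
    # Multiplication as b successive additions of a, starting from 0.
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

def power(a: int, b: int) -> int:
    # Exponentiation as b successive multiplications by a, starting from 1.
    result = 1
    for _ in range(b):
        result = multiply(result, a)
    return result

# Every level of the check resolves to the same 64.
assert power(4, 3) == multiply(multiply(4, 4), 4) == add(add(add(16, 16), 16), 16) == 4 ** 3 == 64
```

The higher abstractions only save time; no level of the check can disagree with the others.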
You could make a coin flipping machine that perfectly thwarts a perfect predictor. That's one way to set up a frustrator. To describe this with a simplified model, let's say that there's a coin flip, and the outcome, heads or tails, depends only on the angular speed. So this coin flipping machine can toss a coin with angular speed 3 rad/s and it will land heads, or it can toss a coin with angular speed 4 rad/s and it will land tails. Now suppose there's a perfect predictor that has to make a prediction and announce "heads" or "tails" through a loudspeaker when the coin is at the height of its trajectory, but the sound waves of the "heads" or "tails" announcement can have an effect on the coin. Now suppose that our frustrator coin flipper tosses the coin with an angular speed of 6 rad/s, and if nothing issues from the loudspeaker, then the coin will land heads, and if the sound "tails" issues from the loudspeaker, then the angular speed of the coin flipping will change to 6.2 rad/s and it will land heads, and if the sound "heads" issues from the loudspeaker, then the angular speed of the coin flipping will change to 6.1 rad/s and it will land tails. So the way this frustrator coin flipper launches the coin is such that if nothing issues from the loudspeaker, it will land heads, and if "tails" issues from the loudspeaker the coin will land heads, but if "heads" issues from the loudspeaker the coin will land tails. And a similar thing for tails, with 7 rad/s, 7.1 rad/s, and 7.2 rad/s. Now whether the coin launching frustrator launches with 6 rad/s or 7 rad/s, no matter what the predictor announces, the predictor will be wrong every time. This can all be encoded into just the way the coin is thrown and knowledge of how the sound waves constituting the prediction will impinge on the coin.

This is just to say that a frustrator can be arranged in such a way that there's one thing that effectively contains both sensor and effector. There was the example of the more advanced frustrator with a real sensor and an effector and a processor in between, and how that could be set up as a frustrator. But you can bind all those things up into a single mechanism. This mirrors certain robots that have been constructed and demonstrated that can do certain other things in ways that effectively combine sensor and effector and processor into one thing that's not really separable into those three things, but does the same kind of task that we normally use sensor plus processor plus effector for.

Game dealer: "Round one has not yet begun. Would any of the participants like to make any comments?"
Predictor: "Yes. If I say red he will do blue, and if I say blue he will do red."
Game dealer: "Your comment has been noted."
Game dealer: "Round one begins. Predictor, what color will the other player pick?"
Predictor: "Red."
Frustrator: "Blue."
(Or Predictor: "Blue." Frustrator: "Red.")

What the predictor said before the round started was the perfect prediction. What he said after the round started: he was in a situation that doesn't qualify as something we can ask of even a perfect predictor. What Predictor said during round 1 of the game was immaterial. If we counted what Predictor said during round 1 of the game as some measure of his merit as a predictor, we would be applying an unfair standard to how we're measuring his abilities. Handily, Predictor did make a prediction before that, during what Game dealer called 'time for comments before round one'.
And, wonderful to relate, what Predictor said during that time was a perfect prediction. This shows that Predictor could make a perfect prediction, when the conditions are appropriate for demanding that of him (when he said "If I say red he will do blue, and if I say blue he will do red"), and the rest of that only shows that there are situations we can put the perfect predictor in that are no meaningful measure of his abilities. Yeah, we can put Predictor in a no-win scenario. So, what this shows is that in a face-up predictability scheme, the best possible predictor can't always both announce a prediction and have that prediction be perfect. Or, conversely, what this shows is that if there is a perfect predictor, we might only expect the perfect predictor to be able to predict perfectly when certain face-down conditions are held. So when there are certain face-up conditions, and ability for a system to react to a prediction, a perfect predictor that has to make a prediction and announce it won't always be able to hold to all those conditions and predict perfectly. All this is to stake out what exactly can and can't be expected of the best possible predictor.

lorem - Jesus microwaves a burrito

Repeated incrementation is commutative, addition is commutative, multiplication is commutative, but exponentiation is not commutative. If you ask "is it commutative?" at each level of abstraction, starting with the lowest, you get "yes, yes, yes, no." Non-commutativity emerges at the level of exponentiation. (m1) x multiplied by the sum of y and z is equal to the sum of x multiplied by y and x multiplied by z; in symbols, x(y + z) = xy + xz. (e1) x to the power of the sum of y and z is equal to the product of x to the power of y and x to the power of z; in symbols, x^(y + z) = x^y times x^z. These two rules share a pattern. But in the rules of addition, there's no rule that consists of that same pattern. So m1 and e1 are a couple of rules that follow some pattern, and that pattern emerges at the level of multiplication.

Agency and local entropy reversers. The effect you have on lifting things onto shelves and so on never exceeds the effect you have on the set of things between the dinner table and the toilet. If your refrigerator is running right, the inside will be much colder than the outside. How often do you open a cupboard in your house and find that the inside is more than 20 degrees colder than the outside? It's probably never happened (unless the refrigerator counts as a cupboard). That's because having a chamber that's that much colder than the surroundings is an 'unlikely' arrangement, in a certain sense of the word 'unlikely'. This 'unlikeliness' refers to negentropy, or the local reversal of entropy. It's a good thing we have refrigerators. We have them because we like when they locally reverse entropy inside. Why can't we cool the planet by building billions of refrigerators and running them with the doors open? It's because the amount of heat they reject outside is more than the amount of heat they suck out of the chamber inside. If you can reach the backside of your refrigerator, you will feel that there's an area there where it's basically a heater to the space outside of it. This doesn't violate conservation of energy. The total effect of the refrigerator is (1) to move some heat from the inner chamber to the outside of it, and (2) additionally to turn some electricity into heat, and also reject that to the outside of it.
So the amount of heat coming out the back is equal to the sum of the amount of heat sucked out of the cold chamber inside and the amount of electrical energy it's turned into heat. If you leave the door open, it will act like a heater, just turning electrical energy into heat. Conservation of energy is not violated. The sums work out. Conservation of energy is also known as the first law of thermodynamics. The second law of thermodynamics is the one that guarantees that the amount of heating it will do outside is greater than the amount of cooling it will do inside. No refrigerator has ever not done more heating outside than cooling inside. And none will, unless some day we figure out how to violate the second law of thermodynamics. This law applies to engines of all kinds. More generally, the second law guarantees that every engine puts out less useful energy than it takes in. The difference between the first law and the second law is that word 'useful'. A car engine takes useful energy in the form of liquid fuel, and converts it into a combination of useful motion and useless heat. First law: the sum of that useful motion and that useless heat is equal to the amount of useful energy it liberates from the chemical bonds of the fuel. Second law: there will be useless heat, so the amount of useful motion output is less than the amount of useful chemical energy input. Likewise with your body: if you lift heavy objects from the floor and deposit them onto shelves, the amount of gravitational energy you put into those objects will be less than the amount of chemical energy you liberate from the food you eat. There's a fundamental similarity here between your backside and the backside of your refrigerator. Every source of agency is a local entropy reverser.

And when a person resolves 4^3 in one line to 64 in the next line, he isn't thinking about 64 successive incrementations of zero. Lorem the example about incrementation and addition and multiplication and exponentiation is maybe not exactly analogous to ant emergence? Because multiplication is a certain pattern of repeated addition, not just any kind of repeated addition, and exponentiation is a certain pattern of repeated multiplication, not just any kind of repeated multiplication. Perhaps it's a perfectly fine example if: there's no relevant difference in type between this and ant emergence, and this is a good way of showing how special/unspecial emergence is.

The emergence argument for free will makes it seem like rabbits are coming out of hats. But there are good arguments for consciousness being strong emergence. I think consciousness is strongly emergent, I think strong emergence is mysterious, but I don't think free will is emergent. I don't think free will comes with consciousness. Free will is an illusion, but consciousness is not an illusion.

"A computer will never be conscious." What's so special about meat? Lorem. lorem - there's some difference between your brain and the meat in the burger you just ate. And there's some difference between your computer and the rock that you use as a paperweight. But your brain is just like the burger meat, only with a special kind of arrangement of the meat. And your computer is just like the rock, only with a special kind of arrangement of the silicon. Sure, it's natural to say that our computers aren't conscious, and that they'll get more advanced, and that it's hard to see that they'll ever become conscious even as they get more advanced. But what's so special about meat?
Your brain is just a more advanced hamburger patty, and somehow it became conscious. We don't know much about the conditions of that 'somehow', but one might say that there's nothing about it that prohibits it happening in silicon as well.

Lorem: more about cellular automata. Emergent objects, can compress information about them, example of a collision.

You may have a record player that can play any record in your collection, but a record could be made such that if you tried to play it with your record player it would cause it to shake at a resonant frequency and break it.

If determinism is true, why don't I just sit in a bucket all day? Because I feel like being in the causal stream. Lorem a diagram.

When we say that a computer program has a decision-making process, do we then stop thinking of it as deterministic? Here's a computer program I wrote a while ago. It's a few thousand lines of code. It does some pretty complicated stuff. A few days ago, I wrote in the code comments of this program a thesis supporting fatalism, a real emotional piece designed to make you give up on ever giving an effort to anything. I ran the program today and it still does the same thing as it did before.

Since fatalism means a number of different things, and since nihilism means a bunch of different specific things, let's introduce a term, 'lethargism', to mean the attitude that since you can't branch the future of the timeline you might as well give up on effort. I accept all (?) forms of fatalism as true, as well as potentially several forms of nihilism (e.g. moral nihilism (not really, but accepting it for the sake of argument, which is a thing I do in many places even when I don't really accept it)), but lethargism is that thing that I staunchly say is incorrect.

Lethargism expressed: "That's the way I want to go. Just, I wanna just be in a bucket, right, and people feed me, and I go 'Nurse, it's coming out the other end', and that'd be a great- wouldn't that be a great way- just sitting in a bucket watching telly, like that, just naked, in a bucket, right, so I just eat as much pizza as I'd like, and drink as much beer, and then I just filled up, and they sorta like, they tip me out, shut um, like that, and that's, that's my, that's a great fucking life. That's a great life." - Ricky Gervais. (This was philosopher Ricky Gervais's description of the best way to die in old age, per a 2014 interview with Jon Stewart. In a 2021 (2022?) podcast episode with Sam Harris, Gervais reiterated the bucket idea as what his natural first reaction to the idea of determinism is.)

(goes well near the end of a section like this) The only thing you get to experience is the present moment, and the only thing you ever get to do is execute the next round of the stepwise laws of physics.

[] Determinism

[][] Determinism: The Basic Case for It (Determinism / Determinism: The Basic Case for It)

History plus laws equals one future. Stepwise state plus stepwise rule equals next stepwise state. I can reason counterfactually about the past without needing to have two real pasts - one real and one imagined will do. As for the future...

[][] Deism (Determinism / Deism)

Is there a cosmic child whose playthings we are? We are determined by the impersonal laws of physics, which are not an agency, and you can't be the plaything of that which is not an agency. "Free enough". This universe might be a simulation on a computer that's being run by someone with the intelligence level of a hooman child.
If that's the case, then we really might be the playthings of a child, but it's still a lot more salient that it's the laws of physics determining us, not the agency that switched those on. "Free enough". Maybe the impersonal forces of physics were selected by a deist god among quadrillions of options, and an extremely fine selecting process was possible (like if there are 10 fundamental constants in physics and each was tuned to the 40th decimal place). If the selection can be this fine, then maybe the initial conditions could be picked with such discernment that the state of things 14 billion years later was essentially selected. If that's the case, then the present conditions of our planet may have been foreseen from the outset even if the god hasn't intervened since first setting it in motion.

[][] Lorem (Determinism / Lorem)

In the video game Worms Armageddon, there's a strange glitch that shows up sometimes that illustrates the concepts of rollback and chaos. In this game, sometimes at the end of a turn it will show a replay of what happened that turn, and usually it works fine, but sometimes the replay goes off the rails: the start of the replay looks like what you just saw on the previous turn, but the end of the replay shows things that didn't happen. It has to do with how that part of the game engine is programmed. The game sends information to the replay module, including information about what things looked like at the start of the turn and what actions the player took that turn. Then the replay engine takes that information and replays it, not like a video tape, but like the game is still running and we went back to the start of the turn and gave it the same inputs. There's some slight error in what information it needs to run the scenario again, Laplace's-demon style, and sometimes what it gets is missing some small part of that information. So usually the replay looks like the turn that just ended, and sometimes it doesn't. It's a perfect demonstration of the butterfly effect, especially because it was an unintended outcome of a game that wasn't designed to do that. It's like you took Laplace's demon, told it you have a full account of what the world looked like one minute ago, gave it that information, then it confidently made a prediction of what the present and the future would be like, but because it was missing some tiny piece of information that it would need in order to track the developments, it came up with an incorrect prediction.

Is determinism true? Fundamentally, no. Effectively, yes. Unless you work in certain professions. Same with Newtonian mechanics. Is Newtonian mechanics true? Fundamentally, no. Effectively, yes. Unless you work in certain professions. Newtonian mechanics isn't real because it's an edge case?

Consider the following plausible scenario. Somewhere far away, there's some planet, call it X, that's doing much as Earth was doing in the year 1400. Somewhere near planet X, there's another planet, call it Y, with Star Trek-level technology, but the planet Y people don't have the Prime Directive. So the planet Y people visit planet X and they teach them Einsteinian mechanics. Now the people on planet X use that knowledge, applying it to their technologies, and as a result they can make better technologies. One day a planet X person says to a planet Y person, "Every time we've used these equations, there's this one term that's always come out extremely close to 1.
In fact, if we had just rounded that term to 1 in all of our calculations, the differences would only have been to the 9th decimal place, and all the engineering tasks would have ended up the same. Can we just start rounding that term to 1 and keep rounding it to 1? Will the roundoff error of changing that term to 1 ever be material?" And the planet Y person replies, "Uhh, for now. Yeah, you can round that term to 1 every time for now and it won't make any material difference, for now. In a few centuries the things you'll be learning about and the things you'll be designing will require you not to round that to 1. The term will matter some day, but for now it's as good as 1." Then the planet Y person writes some symbols on a piece of paper and hands it over. They're the equations for Newtonian mechanics. "Here's what those equations look like when you round that term to 1." Then the planet X people use only Newtonian mechanics for a while, until they understand more things, and finally they detect when they're going to need the Einsteinian mechanics that you get when you don't round certain terms to 1.

In that world, Newtonian mechanics will always have been understood to be a set of approximations. That would have some effect on the history of whether Newtonian mechanics are seen as "real". In our world, in the days of the first useful steam engines, if you asked anyone who knew anything relevant about them, they would have said that Newtonian mechanics are absolutely true laws of nature. And in our world in the age of artificial satellites, if you ask anyone who knows anything relevant about them, they'll tell you that Newtonian mechanics are an edge case of Einsteinian mechanics that only works as a good approximation when things are moving sufficiently slowly. On planet X, they would have answered that way both in the age of artificial satellites and in the age of the first useful steam engines. The people on planet X would never have had the crisis of "what does it mean for a physical law to be real" with regard to Newtonian mechanics quite as much as we had on Earth.

Newtonian mechanics are never exactly right, even when dealing with things moving as slow as five centimeters per second. But does that mean they're not real laws of nature? One can say that something is a real law of nature if it's useful enough for people to use it to predict things when they know when it applies. If that's the criterion, then most calculations of mechanics in the present day are using real laws of nature when they use Newtonian mechanics and do that quick check called "relativistic effects will not be relevant to these calculations." One can say that something is a real law of nature only if it describes exactly how some part of nature works. If that's the criterion, then Newton's laws aren't real, and they never were, but we used to think they were. If someone on planet X takes that as the criterion, then the planet X people were never under the illusion that Newton's laws are real laws of nature.

lorem - There's a bakery near where I live, and they make the best croissants there, and every time I walk by that bakery, I can't resist the temptation to go in and buy at least one of their tasty snacks. Further, that bakery is on the block where I live, and my place is closer to the other end, and it's a dead end. So every time I walk from my house to anywhere else, I have to pass by that bakery, and I also have to go in and buy some of their wares.
I've tried resisting the temptation, and it's never worked. I'm pretty sure it could never work, no matter what techniques I try for resisting. That bakery completely eliminates my free will. It's a total violation of the free will I normally have. And I wouldn't have it any other way. I control enough of the rest of my life and my decisions in it. I resist unhealthy snacks in almost all other cases, and I don't buy so much from that bakery that my health might be adversely affected. So I like it being there. If they shut down, I would be devastated, even though it would be an unmitigated increase in the amount of free will I have. No, I'm quite emotionally invested in its remaining there and doing business profitably. If I'm an enslaved person, then it's an enslaved person I want to be.

Fate is both kind and cruel. Instead of praise and blame, reminders of the kindness and cruelty of fate. Priming effects never did achieve total brain control. Can you feel gratitude toward the impersonal forces that constitute fate? Yes. Using free will as a heuristic is like using Newtonian mechanics as a heuristic.

In this game of The Sims I can buy my sims a computer that can run Pong or a computer that can run Space Invaders, but I can't buy them a computer that can run Doom or better, therefore my sims are indeterministic. In this game of The Sims I can buy my sims a computer powerful enough to run The Sims at 100x speed, therefore my sims are deterministic. (this would cause an infinite regress problem?) The only way to predict the future of my Sims game is to simulate the game, which is the same thing as playing the game, so the whole idea of determinism of the Sims game is meaningless?

lorem - suppose I gave my sims a computer that communicates with my outer verse, and in that outer verse, it predicts what will happen in the sim world, subject to face-down predictability constraints. They may test it and become convinced that indeed it is what it seems to be - a communications channel with what can only be an outer verse.

It makes sense to say that a system is deterministic, but it doesn't make sense to say that a world is deterministic?

Every time someone does something nice for me, every time someone does something mean to me, I'm reminded of the sheer terror and wonder of the abyss that takes all our lives back in the end. Happens when someone holds a door open for me, especially if my hands are full at the time. Happens when someone cuts me off in traffic, especially if I'm coming home exhausted after a long day. Every time, I do my sums again, with updated information, and calculate in absolute terms how much luckier I am than I would be if I were dead.

Nietzsche eternal recurrence. The point is to get a sense of the incredible gravity that would be attached to an eternally recurring timeline, then understand that 'once' has some nontrivial amount of gravity. Have you used your past regrets to work out your decision process for the future? To a poker player, eternal recurrence is a model for understanding when a bad outcome after a good decision based on limited information is not appropriately a regret. Or when a good outcome after a bad decision is appropriately a regret?

Schrodinger's river card. By the time the deck is out of the shuffler, the order of the cards is some definite thing, but the right way to model the next card is as a superposition, which means to treat a deterministic thing as though it's indeterministic.
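Here's a minimal sketch of that river-card point, with a made-up hand (the cards and numbers are just for illustration): the deck order is already fixed, but the only sane model of the unknown river card is a uniform distribution over the cards you haven't seen.

```python
# A minimal sketch of "Schrodinger's river card": the deck order is already
# fixed, but the unknown river card is modeled as equally likely to be any
# unseen card. The hand below is made up for illustration.
RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]          # 52 cards

seen = ["Ah", "Kh", "7h", "2h", "9c", "3d"]           # your two cards plus the board
unseen = [c for c in DECK if c not in seen]           # 46 cards you can't see
flush_outs = [c for c in unseen if c.endswith("h")]   # 9 hearts still unseen

# The river is already sitting there, determined, but 9/46 is still the right
# number to plug into your decision.
print(len(flush_outs) / len(unseen))                  # about 0.196
```

Nothing indeterministic is being claimed about the deck; the probability is a statement about your ignorance, not about the card.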
lorem - or to treat an indeterministic thing as if it's a "sum over all paths"?

Standing at the top of a hill, you start a rock rolling down it. The hill is steep, and with many rocky patches. You can't predict where it will come to rest at the bottom, but a physicist with a tape of it that he could pause could. If you were to send a bunch of rocks, one by one, rolling down that hill, they would land scattered in a distribution of spots at the bottom of the hill, even if you tried to roll them all as similarly as possible. When you send one out and it starts rolling, you can say "There's a chance it will end up over there, and there's a chance it will end up over there." This use of the word 'chance' is clearly about ignorance and nothing else.

It is possible that even though our universe is multiple billions of light years across it's still tiny compared to an outer verse. It could be that the fundamental verse is a quadrillion times the size of this one, they've built really big computers in it, and our verse is to them like a Sims world is to us. A parable: a guy hears his friend do the bit about how the idea of determinism is meaningless because we can't simulate our universe within our universe. Then he goes home, launches a game of The Sims, and sees one of his sims say the same thing about their world. Lorem.

Poker: imagine there's Nietzsche's eternal recurrence, but it's not immune to quantum indeterminacy, and that the remainder of the deck is shuffled by a true RNG after every time a card is dealt from it. And the agency in charge of the eternal recurrence takes you back to exactly when you were facing this or that decision at the poker table. lorem - it would only make sense to make the same decision on every rollback. It would never be the case that if there's 1000 rollbacks the right decision is fold 500 of the times and call the other 500 times, depending on some kind of feeling of premonition or something.

Lorem: time worms and block universe and four dimensionalism. Lorem about taking a series of pictures and superimposing all the figures to one ground. Lorem about making a list of things in a three-dimensional space and how that's similar to making a list of things in four-dimensional space. lorem - turning a 2D world plus time into a 3D block universe, lorem - what it means to draw a path on a diagram representing a movement over time.

The best two-word definition of determinism is "everything's determined." It's a good name. Determined doesn't imply predictable. When I say that a hooman decision is deterministic, I'm using the term 'deterministic' as a shorthand for 'effectively deterministic', or in other words no different in any way that matters from how a computer program is deterministic. And when I say I'm a determinist I mean that hooman decisions are effectively deterministic like that. The doge tied to the wagon metaphor is true and therefore beautiful.

You go to a pool table, you rack up the balls, place the cue ball, and deliver the opening shot. You have a camera set up to take a picture one tenth of a second after the cue ball hits the 1 ball, to take another picture 1 millisecond after that, and another picture 30 seconds after that. You go to a physicist and give him the pictures from 100 and 101 milliseconds after the break and you ask him which balls were potted by the break. He gets back to you a week later and says the 7 ball was potted. You give him the picture from 30 seconds after the break and the 7 ball is the only ball not on the table, potted.
You ask him why the 7 ball was potted. Lorem he hands you a report with the whole causal story that takes the first two photographs as initial conditions and explains how the 7 ball ended up in one of the pockets. It's complete in terms of explaining all the collisions the 7 ball was part of, and all the collisions (including bounces off walls) of the other balls that collided with the 7 ball previous to their effects on it. It can exclude analysis of any of the balls that didn't collide with the 7 ball or didn't collide with any of the balls the 7 ball collided with. Lorem it's a long report. That's how causation works in the real world with anything complex enough to matter, such as hooman decisions. Lorem you did see what the table looked like a tenth of a second after the break started and you did see the 7 ball on its way to the pocket right before it went in, but the time when you became aware that the 7 ball was to be potted as a result of the break was not as soon as a tenth of a second after the break started. In this pool balls example, the causes weren't clear even when you had access to them, because the interactions were quicker than you could work out in real time. In the case of a hooman decision, many of the causes aren't clear, but for other reasons than them being fast and fleeting.

[Goes with: vape store story] Because of inattention, the guy said the wrong number. At first, it would seem like the number he said was as good as random. But when you figure out the right analysis, it makes perfect sense why the mistaken number was that one rather than some other one. Many of our actions are even less scrutable, but just as deterministic, which is fully deterministic.

1: "Fate decides"
2: "Fate determines"
3: "Given the way things are, it is determined"
4: "Given the way things are, it is inevitable."
All four statements mean the same thing, but the items earlier on the list suggest that there's some agency behind what events shall occur in the future. The last one has no agency suggested. Compared to the last one, the first one seems agentic in multiple ways. First, there's a 30-foot-tall boogeyman called fate, and second, he's deciding things. But "fate decides" is just a shorthand for "given the way things are, it is inevitable." Notice also how the items higher on the list are shorter sentences. That's because we get economy of speech by using certain shortcuts like imagining agencies and giving them names and intentions.

[] Fatalism

The standard definition of fatalism is "that kind of exaggeration people tend to pose when asked for the first time about determinism". It's false by definition, e.g. "but you can be a determinist without being a fatalist!" Fatalism is a choice, not an exemption from choice. Even the "no matter what you do" version of fatalism is true. Whatever is going to happen is going to happen no matter what. Still I optimize for trying to make the best things happen no matter what.

The 'strongest' form of fatalism is the Final Destination kind. That one makes a weird combination of the "no matter what" condition with indeterminism, where one can thwart it temporarily but it will catch up to you. Death doesn't take 'no' for an answer. Okay, this form of fatalism is silly. This is the form that includes Oedipus or Red Dwarf 8x04 Cassandra.
But if you take the 'no matter what' version of fatalism (the 'strong' kind) and you give the time of day to what determinism actually means, then you have a combination of "no matter what" plus "there's only one 'what will be'". This can be seen as that thing that hard determinists assert is true but that is so hard to give the time of day, because of the persistent illusion of free will and all the reflexive attitudes it comes with.

The monkey's paw form of fatalism is like saying there's no difference between face-up prediction and face-down prediction. You can make up stories like the Cassandra episode of Red Dwarf, but that's not how reality works in general.

Taxonomy of fatalism. Monkey's paw fatalism, antisapient fatalism, hard determinist's fatalism. Lorem. "no matter what" fatalism vs all possible worlds.

[] Incomprehensibility

[][] The Stance Without a Positive Case (Incomprehensibility / The Stance Without a Positive Case)

[][][] Why the Negative Cases Don't Amount to Anything (Incomprehensibility / The Stance Without a Positive Case / Why the Negative Cases Don't Amount to Anything)

Theologians don't make good poker players. Are they equally bad at playing the game of life? I don't know. But I trust that the principles that are true are the principles that are useful, and I trust that the principles that are true are the principles that are good.

lorem - also, this issue is not, as you place it, beyond proof, disproof, or practical outcome.

An even better solution for you: the world as you know it is something you've been imagining, and somewhere not too far from here - you have to find it - there's a door that will lead you from this world you imagined to the real world, and there's nothing unpleasant of any kind there in the real world. Do you prefer believing in that? Does believing in that make you feel good? Is its making you feel good a good reason for believing it? Are you going to start going around and checking doors now?

The mystery of libertarian free will is something we would have to live with if it were necessary for doing things that determinism couldn't also do. If it could sustain any "no other way to explain that" justification, I would be ready to accept it with all its vagueness and say, "Yeah, hard to reason about any of its working parts, but we need it." But aside from the problem of having nothing to say about it, it's also something we don't need.

Does belief mean there's a statement that you act as if is true even if you're not fully sure whether it's true or not? That's not what belief means. I may believe something is true, but with far less than certainty, and then choose not to act as if it's true. If belief meant something that you act as if is true, then there would be no way to explain what is meant by "hedge your bets", or several of the concepts in investment finance, such as hedge funds, and every investor would have a one-item portfolio and all their money on their one favorite bet. Free will: I act as if it's true, but I don't believe it's true. It's not a belief, but some other species of thing. We have a word for it: illusion. When I sit down to watch a moving picture, I act as if there are tiny people inside my TV, but I don't believe there are. That's called suspending disbelief, and suspending disbelief is something entirely different from believing.
lorem - that's why we have the term "suspend disbelief" and why "suspend disbelief in" doesn't mean the same thing as "believe"

[][][] Why the Fragments of Positive Cases Don't Amount to Anything (Incomprehensibility / The Stance Without a Positive Case / Why the Fragments of Positive Cases Don't Amount to Anything)

If I want to help someone with good information but they make their decisions as an uncaused cause, then I have no idea how to help. You can't reason someone out of the position that they're sure they're a brain in a vat. Determinism does everything that people want free will libertarianism to do (except for satisfying fairy tales), plus a lot more. It actually answers questions about what to do with something other than, "I don't know because no one knows how it works."

That's one of the particularly bad ones, even though it's regarded as one of the best ones. The other ones aren't much better. I've tried them. The guy who read this book before me left the following margin note. I know enough about the guy to know that he wouldn't do that in anger while hyperventilating. If he left that note it was because he gave the piece a fair treatment and was either trying to guide the next reader, drafting a fuller commentary elsewhere, or doing some other level-headed and impartial thing.

Paraphrase the "his problem my problem" bit from Ginet and then quote it. "He says that an agent can start an uncaused cause of an action, but I say that an agent just is the uncaused causing of actions. There are problems with his account, and there are also problems with my account, but I think his can't be reconciled whereas mine might be." But if I were to read you this and you were to squint your eyes just right, it would sound a lot like philosophy. There's a word for this: pastiche. By that I mean both obtuse and fake. Billiant bolliant belliant. The positive accounts of libertarian free will are in the same relation to any philosophy of any amount of substance as Strom is to any extant Earth language. Why do you still even want this at this point?

"What's north of the North Pole?" doesn't have an answer, but there is an answer to why it doesn't have an answer. At the North Pole, there is no north direction, so you can't go north from there, so there is no place north of there. That's an excellent answer to the question of why there's no answer to the first question. Compare: there's no answer to the question, and there's also no answer to why there's no answer to the question, and no answer to why that is, either. It's just no answers and no questions all the way down. It just hasn't been built up, but it just hasn't been knocked down, either. Burn a penny candle looking for a half penny.

The lengths people will go to to defend "I feel contra causal" and "morality requires contra causal". They will write volumes to defend those while saying only "nobody knows how the rest works" and padding that. I like reading analytic philosophy and I like reading occult poppycock like the Lushi Chunqiu, and I like reading novels, and Internet forums, almost one of everything I like reading, but those positive accounts of libertarian free will, reading those is a bad time, like reading new age woo. I gave them the kind of reading where you spend 2 hours reading an essay that's less than 10 pages long. I took the trouble to understand the meaning of every sentence before concluding in complete fairness that it says nothing.
Making outrageous claims on little to no good reason; compare: making no claims and giving reasons that have nothing to do with the claims. A grand array of nuts and bolts, all paired up and perfectly fitted, holding together nothing. Quantum physics has proven that you can be in two places at the same time, and that's why you can change anything in the universe using only your imagination.

Rabbits are mammals. Ducks are not mammals. When I talk about mammals I'm sometimes talking about rabbits. When I talk about mammals I'm never talking about ducks. And sometimes when I talk about mammals I'm talking neither about rabbits nor about ducks because I'm talking about 'mammals' the category without reference to any of its subcategories. This kind of talk, where you're making some distinctions clear even though they seem maybe obvious, is often a necessary part of crafting a good essay. Sometimes you have a good essay to craft and in a few places you have to take the reader through that kind of weird, plodding terrain in order to make sure there are no mistakes about the main points you're making. The problem with the positive accounts of libertarian free will is that they keep doing this kind of thing to absolutely no point. The essay just keeps oscillating between statements like those, laying down definitions, and saying that we don't know anything about the topic. When you take stock of the whole thing you have, "We don't know anything about the topic," plus the 99 percent that has nothing to do with the point - and that 99 percent is made of grammatically correct sentences that technically contain no inconsistencies, but also contain no support for, or augmentation of, the thesis claim, "We don't know anything about the topic."

[][] No Encounters with The Third Kind (Incomprehensibility / No Encounters with the Third Kind)

[goes with: the time I sought a headache by checking YouTube] News flash: it's been rescued! The quantum people have rescued free will. Now back to our regularly scheduled programming.

How would having access to quantum randomness have anything to do with free will? Suppose I have a computer program that's at the head of the controls of a power plant. It makes important decisions about when to open and close valves in ways that keep the power coming out and nothing exploding. It's basically a giant robot. Suppose the program is deterministic. Now my friend gives me a quantum RNG. I press the button on it, it says 3. I press the button on it, it says 8. Nice. Very random. And then I say, "Great, now I have all the parts that one would need in order to endow a machine with free will. All I have to do is take this deterministic decision-making program, give it access to this quantum randomness detector, and... oh fuck, right. Having those two things together has nothing to do with free will, and to create something with free will it would need to have something that's really got nothing to do with any of this." In what way do exact deterministic effects and purely random effects combine to produce the kind of free will where a person can be an uncaused cause of something through an act of his agency in a way that reflects something other than mechanistic rationality and spasmodic twitches of the brain? What is this other thing that's produced and how?
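Here is the same thought experiment as a toy sketch (Python, with every name and threshold invented purely for illustration): a deterministic valve policy, a random source bolted onto it, and nothing new produced by the combination.

    import random

    def deterministic_controller(pressure, temperature):
        # A plain deterministic policy: the same readings always produce the same action.
        if pressure > 0.8 or temperature > 0.9:
            return "open relief valve"
        return "hold"

    def controller_with_rng(pressure, temperature, rng):
        # The same policy with a random source bolted on. Once in a while the dice
        # override the policy - a spasm injected into the machinery, not a choice.
        if rng.random() < 0.01:
            return rng.choice(["open relief valve", "hold"])
        return deterministic_controller(pressure, temperature)

    # Stand-in for the quantum RNG; structurally, a seeded pseudo RNG does the same job here.
    rng = random.Random()
    print(controller_with_rng(pressure=0.5, temperature=0.95, rng=rng))

However you wire it, what comes out is determinism plus noise; the sketch has no slot where a third thing could even be plugged in.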
Perhaps there's some way to combine these two types of substance that produces a third type of substance, and that happens in hooman brains, and that's never been detected, and it's never been explained even in principle how that could work. Perhaps neuroscience about 40 years from now will detect what the third sort of thing is and how it works. And that's how I combined these two types of thing into a thing that can do uncaused causes for reasons. The rule of sufficient cause seemed pretty solid for hundreds of years, and then it was found that the swerve is real. The swerve can't bestow free will, but maybe that third kind will be found some day.

[][] Lorem (Incomprehensibility / Lorem)

Soul stuff is not disproved. It has the same standing that the god of the gaps has.

An investigation into antiquated commonsense notions and differences in language. In the ancient western world, it was believed that the soul resided in the chest, or that the chest was the location through which the soul connects with the physical world. This seems like a strange idea, even for ancients to have. It seems like even with the technology available in ancient times, you would know what it's like to have your head punched and subsequently either have your soul rattled, or the location through which the soul connects with the physical world rattled, and in a way that doesn't exactly match what it's like to be punched in the chest or somewhere else. Let's try to imagine what it would feel like to have the commonsense notion that your 'self', if it seems to reside in any particular part of your body, has as that location the chest. If you're like most modern people, you have the commonsense notion that your head is where your 'self' seems to be located. That's where most of your sensory input streams are located. You see out of your head. Your head is where you hear, taste, and smell. Now imagine you were to shut your eyes somewhere with no sounds to hear and with nothing in your mouth to taste or in your nose to smell. Now suppose you rub your chest with your hand. Now your only sensory inputs are coming from your chest and your hand. Can you do that and as a result feel like your chest is where your 'self' resides? If you then open your eyes, can you feel as though you're getting optical data at a location one foot above the location of your 'self'? I don't know. I can't. When I try to imagine what it would feel like for the seat of consciousness to be in my chest, I'm pretty much drawing a blank.

In modern times, our language has grown to reflect this change of soul location. Now we have expressions such as "What's on your mind?" and "What's going on in his head?", and we see these two as pretty much synonymous, as if your head and your mind refer to the same location. When we use the word 'heart' in the English language, we're usually referring to something like what emotion-guided passions a person has - something more like a gut feeling. "My heart says I want to rescue the world, but my mind tells me that's not possible," is something you can say in modern parlance, and the meaning would be clear enough, but this utterance would have made no sense at a time when the mind was thought to be in the chest. Fun fact: in Chinese, the word that means 'heart' (in the anatomical sense) also has a looser sense meaning something like 'intention', in a way that has more to do with what we mean by 'head' in modern English than with what we mean by 'heart'. Liezi body swap story.
lorem - translate the body swap story

I built an elaborate thread winding machine that would be able to handle the effect of when the flow of time is reversed. Now it's sat there for who knows how many years without any one of the spools turning one inch. If we find something that can only be explained in terms of soul stuff, then we have evidence of soul stuff, even if it is not direct evidence.

A regret is not the body's sign that you now should attempt to travel to an alternate timeline in which you did other than what you did do. A minor regret, it is clear enough, is a sign that one ought to work out his decision process better, to pause now until that's worked out, and take that revised decision process into future scenarios. This is the correct way to process the regret, and it dissolves it. The way this is framed by the brain is as an illusion of free will. It's a heuristic. The illusion is more efficient than a clarity that does away with the illusion. A major regret indeed can convince a person to dedicate all their effort to trying to reverse time, and a person may remain stuck like that for a while before they decide on better ways of coping. Irrational thinking and behavior like that in times of severe emotion is more of a feature than a bug - that's something evolution gave us that keeps things interesting and sometimes makes us think about the hard philosophical problems.

Soul stuff has the same ontological status as the teapot in orbit around Mars. lorem - epistemological status?

The person who is fully inclined to reason this way (WJ dilemma of determinism) is also the kind of person who makes a terrible poker player. I'm closely familiar with this brain type of yours. I bet all in on the flop with a made hand, and my foe has only a flush draw or an open ender, something not worth calling with for the rest of the money. Then something goes through his head, approximately: "But wait! The present moment is the only time that exists, past and future are only phantasms. If I call and see the last card, it will therefore be the only true test of a benevolent god whether or not the last card completes my draw. God, give me a sign. God is all good and not a deceiver, therefore one of the next two cards on the top of the deck must be a spade. I call." And then I win all the money 1.8 out of every 2.8 times (roughly 64 percent of the time). And the next time he has a draw against a made hand, he does the same reasoning process all over again. This kind of theological personality definitely makes one bad at the game of poker. Does it also make him bad at the game of life? Probably.

To what extent are our intuitions independent of our concepts, and to what extent are our intuitions products of our concepts? To the extent that they're independent, the intuitions can help us arrive at more true concepts. To the extent that the intuitions are a product, they can be cited to entrench us ever further into untrue concepts. The most forceful arguments for free will are just lists of the second kind.

If determinism is true, we will have to change a lot of dictionary definitions? Remorse: the feeling that a person should have done other than what he did do. That definition is fine, even if determinism is true. It just means that remorse is a feeling that someone should have done something impossible. That's still a real feeling. "Have you ever had a regret and then taken a time machine and changed the past?" "No." "Yet you think that regret refers to actual possibilities that the past could have been different?" "Yes."
"That seems to me a confusion." (Re WJ dilemma of determinism) To say it could have been otherwise is less gloomy? Isn't it more gloomy? It only took 400 years, but finally the newest generation has been brought up in a world that's been cleansed of the commonsense notion of the immaterial mind, that vague and unworkable idea. (this is re VBW ep 27 "wait, the brain does that too?!") There's a tier list of attempts to rescue free will which has at the bottom things that sound maybe for a second like explanations but that don't solve anything, then slightly above that are the elaborate explanations couched in technical language that fail just as badly (bonus points if modal logic notation is invoked). At the top of the tier list of attempts to rescue free will, several degrees above all the others, I would put "Stop trying to rescue free will. It doesn't need rescuing. If you try as hard as some of these people do, it's just embarrasing." You can dedicate any amount of effort into looking for the fairies that can account for free will being rescued - I'll tell you now, the amount of effort that's worth is zero once you've given the literature a thorough reading once. Once you've read the literature, next time you're thinking about how you might be able to rescue free will, try this instead: take a dart, decide you'll throw it across the room to the opposite wall, take a guess where it will hit, then throw it. Whether it lands one millimeter or one foot off from where you guessed, say, "Well, it was fated, it was not exactly what I had guessed, and it's good enough." [] Indeterminism, RNG [][] Randomess and "Randomness" (Indeterminism, RNG / Randomness and "Randomness") [][][] True RNG and Pseudo RNG (Indeterminism, RNG / Randomness and "Randomness" / True RNG and Pseudo RNG) A true RNG could have done otherwise. Your pseudo RNG is essentially as good as a true RNG in some ways but not others. It is deterministic. But it can thwart prediction by almost any predictor short of a theoretically perfect one. But it is not immune to rollback. A true RNG is immune to rollback. A pseudo-RNG could have done otherwise if its seed had been different. [][][] What Types of RNG You Have and Where (Indeterminism, RNG / Randomness and "Randomness" / What Types of RNG You Have and Where) The odds that a preponderance of quantum wobbles would directly cause a decision is about as long as the odds that a bronze statue will wave at you the next time you look at it. Not impossible, but astronomically long odds. Later today, I will gather 100 coins, toss them all up in the air, and if all 100 of them land tails side up, I'll go out immediately and buy a 40-pound bag of marshmallows. The odds that these 100 coins will result in a decision to purchase something today is astronomically greater than the odds that a quantum random process in my brain will result in a decision of any kind today. You don't need a true RNG to do chance then choice. A pseudo RNG will do. If you use a true RNG to do chance then choice, that doesn't get you anywhere other than determinism plus randomness. (doesn't get you anywhere a pseudo RNG doesn't) Deterministic butterfly effects happen in the brain. Indeterministic butterfly effects do not happen in the brain. Indeterministic butterfly effects happen in the world and impinge on the brain. [][] The Adaptive Value of Pseudo RNG (Indeterminism, RNG / The Adaptive Value of Pseudo RNG) This is also what makes the Canada Goose a far more fearsome creature than at first it seems. 
Canada geese use stochastic violence to be disproportionately effective. That's why Joe Pesci's spirit animal is the Canada Goose. Sunzi: don't ever launch an attack by way of the same reasoning you used to launch a previous attack. lorem - That will make you predictable and it will give your opponent too much of an advantage. Good news: your other options for making a good attack plan based on unprecedented reasoning are virtually inexhaustible. The trench fighting educational video: he got himself killed by popping up twice in the same spot. lorem - the premise of that video is a bit silly, but that premise points out what it means to necessitate someone else to do something. It's the same as the idea of "suicide by cop".

[][] The Greek Gods Are In The Dark Matter, And That's Why Hitler Survived As Long As He Did (Indeterminism, RNG / The Greek Gods Are In The Dark Matter, And That's Why Hitler Survived As Long As He Did)

[][][] The God, The Hods, and The Prods (Indeterminism, RNG / The Greek Gods Are In The Dark Matter, And That's Why Hitler Survived As Long As He Did / The God, The Hods, and The Prods)

(Lorem) Lorem

[][][] The Bullet Was Prodded from Its Course (Indeterminism, RNG / The Greek Gods Are In The Dark Matter, And That's Why Hitler Survived As Long As He Did / The Bullet Was Prodded from Its Course)

(Lorem) [goes with the geiger counter and all star by smash mouth] You ought to try divination by animal entrails

[][] Lorem (Indeterminism, RNG / Lorem)

The AI will create forms of art that are not even accessible to hooman sensory modalities: artistic movements, styles, pieces. When a hooman asks, "What's that like?" the AI will say, "Well, sort of like" and then output audio-visual material that hoomans have long had the technology but not the imagination to produce. And it will be great.

The prods do things for reasons, but those reasons always trace back to pure caprice. They do something malevolent because they thought it would be interesting, they do something benevolent because they thought it would be interesting, they do something neither malevolent nor benevolent but just interesting because they thought it would be interesting. They do plan things that take many steps. Their reasons for things do admit of teleological chains ("I'm doing X so I can do Y so I can do Z"), but the foundation-level reason of each such chain is "because I felt like it" ("I'm doing X so I can do Y so I can do Z, because Z would be amusing"). Sometimes the prods intervene on your decision making at the deterministic level through things like suggestion. Sometimes they intervene directly by stacking the deck of quantum-level processes in your neurons and synapses. When they take the direct mode, the recipient of their efforts makes a decision that could be called the result of contra-causal free will. Some people have been lucky enough to figure out how to make themselves a regular vessel for the will of these entities. These people have enacted the will of the prods multiple times, perhaps often. That's the only kind of free will that can exist in our realm of matter, and even in those cases it's not even one's own free will, but one channeled from the realm where free will can even be a thing. I'm using the expression "because I felt like it" to indicate what the fundamental motive of the prods is, but those words may be misleading. It might be more accurate to put it as "because the result is amusing," but be warned, they often have a strange sense of what's amusing.
How can one become a vessel for the will of the prods? And is that a good idea? And what does it feel like when that's happening? Let us consider this answer: "The prods like whatever's interesting, so if I become as interesting as I can, that will maximize my odds of being the recipient of their interventions." That can't be the full solution statement. They probably find it amusing that many of us are interesting people and that many of us are terminally dull and boring. I find that at least a little amusing, and I'm not even made of dark matter. So it very well could be that some of the regular recipients of the direct interventions of the prods are the most boring people in the world, about whom the prods have decided: "We shall maximize his boringness, so as to make other things more interesting by contrast." And there are also interesting people they help along to maximize how interesting they are. It follows that there's no one formula for how to become the regular recipient of the interventions of the prods. As for whether it's desirable to be a regular recipient of the direct interventions of the prods, clearly the answer to this is also a "sometimes, sometimes not." Considering how malicious the things they do sometimes are, it's at least often extremely unfavorable to be the object of their direct interest. And in other cases it's extremely favorable to be the object of their direct interest. As for what it feels like, not much. When they directly or indirectly intervene to make a decision for you, it sometimes feels like acting on a whim, and it sometimes feels like making a reasoned decision after long deliberation.

It is possible that the prods do want to prevent us from advancing our technology to the point of being able to contact them. They might have survival instinct. It is possible that they want never to kill us off completely, nor to let us get to that level of technology. Or maybe they do want us to get the technology to contact them and otherwise affect them. If we did that and killed them, that might be the most amusing of all to them.

I am determined to endeavor to act as close as possible to undetermined. Because I accept that that's the best program. And because I'm programmed to take the best program and try to make it my program.

There's a fundamental difference between a true RNG and a pseudo RNG. The task of a brain in approximating freedom is to be like one of those pseudo RNGs that is effectively as good as a true RNG. That plus constraints. Not like every one of your actions should seem like it has no constraints of any kind other than pure randomness. But some of your actions should seem like they are some amount constrained and some amount left to nothing but pure randomness. What I'm trying to describe here is how the creative process works. This is a fairly coarse-grained description of it. But suppose you can act in certain instances like you've set some constraints and otherwise all details of your action are purely random, subject to no constraints. Now suppose further that the result of such a process is usually no good, and then you recruit your aesthetic sense to determine as much, and then you try a reroll. Same constraints, same degrees of freedom filled with pseudo-randomness approximating pure randomness, and repeat. Then take the results, subject them one by one to your aesthetic sense, throw out all the ones that are crap, and catalog the rest.
There will be that minority of cases wherein the result was not crap - where you took a condition that partakes of some amount of constraints and some amount of what you could call your best approximation of true randomness, and by lucky enough chance you got something that wasn't crap (bad art), but instead was good art. What I've just described here is the creative process in terms of pseudo-randomness for generating things within some constraint and the aesthetic sense for selecting among the results. This is exactly what the default mode network of the brain is doing in the case of artists. It takes some grinding away and getting rubbish results, and once in a while something good pops out, and you know it when your aesthetic sense says that's the one.

At the finer level of description, there's also iteration. It's not like it's purely one series of random draws, where most are bad art and a few are good art. In reality, there's some number of intermediate steps. You only roll the pseudo-random RNG to determine some small portion of a thing. And then the result is some bit of a thing, and then you have to work based on hunches of whether this bit of a thing is a step toward a bad art or a step toward a good art. And if your hunch is right, you'll know when it's a step toward a good art, and then you set the conditions again of how much the next increment of experimentation will be constrained and how much will be a throw of the pseudo-random RNG, and again you run that until you get a second bit that satisfies your hunches that say it's another step toward producing a good art. So at the end of the process, it's your aesthetic sense that tells you whether you ended up with a good piece of art or a bad piece of art, and along the way, you need your aesthetic sense to apprehend bits of partially random solutions and provide you with hunches about which of those are steps toward a bad piece of art and which are steps toward a good piece of art.

The aesthetic sense is not exactly the sort of thing that most lends itself to being described with words. At the finer level, applying the aesthetic sense to provide a series of hunches to a series of steps toward a convergence, those lend themselves even less to being described with words. So I've done what I can above to describe in words what of the process can be described. The rest, the finer distinctions about how it works: they're details I can use when I'm using them, but I can't describe them exactly when describing things is the thing I'm trying to do.

If you have a good pseudo RNG in your brain and no constraints, then what thoughts pop into your head will be like the Library of Babel. It's an inefficient process for getting to things worth documenting. The reason why our brains work creatively better than cataloging the Library of Babel does is that we can apply constraints to part of a thing and direct the RNG to the part that isn't constrained. The ideal is to be as close as possible to undetermined. Impairments can limit how close one can get. The wonderful thing about having a hooman brain is that if you have no impairments, you can learn how to make your pseudo RNG effectively as good as a true RNG. cf. "You will achieve what men call originality" in "Style" by F L Lucas (originality versus eccentricity)

Technically it's vanishingly unlikely that a quantum event in the brain can be an uncaused cause in the form of a decision?
Okay, well uncaused causes outside of the brain can get magnified by chaos and more macroscopically be an effect that turns a decision.

Run a universe simulation on a computer that can't do true RNG but can do pseudo RNG. Brownian motion will be fine, and all the rest. This shows how few wrenches indeterminism throws in reasoning with deterministic models. Take a computer program that's designed to work deterministically. "But this computer lives in a world that's fundamentally indeterministic! How much of a complication will that be to it?" Zero flipped bits. 100% it's effectively deterministic. Same with ethics and such. Formulate them with a deterministic model, ask how much of a wrench fundamental indeterminism throws in those. Zero. Okay. 'Effectively deterministic' it is then.

Roll back time by 1 decade. Resume it from there. Billions of quantum events happen differently from how they did in this reality. After re-running the thing for a decade, there's a person on Earth who looks a lot like you, has your name, mostly the same personality as you, but has had some quite different experiences than you've had. It can be said that you exist in that world, but you have a quite different life that time around. Rewind time by 1 million years. Resume it from there. There is a hooman civilization with billions of people, but none of them have your name and your personality in a way that could be recognized as being the same person. It could be said that hoomankind exists on that world, but you don't exist. Rewind to when the universe was one millisecond old. Resume it from there. Now there's a hooman civilization with billions of people, but the continents are all different. It could be said that Earth exists, but North America doesn't exist. Rewind to when the universe was one femtosecond old. Resume it from there. Now there's no Earth. It could be said that the universe exists, but Earth doesn't.

Prods theory has perhaps the same status as soul stuff. It does try to be an explanation re Hitler and stuff. But in terms of direct detection, the status is the same as soul stuff or the teapot.

If many worlds is true, what accounts for why you find yourself in this timeline and not one of the other ones? Random draw? It is not known. However, some people say that the many worlds hypothesis is not satisfying because it doesn't do away with randomness - that it doesn't account for the randomness of which timeline in particular we find ourselves on. That certainly is one thing that remains an unsatisfied mystery of many worlds, but it's no kind of disproof. "Oh! You're saying many worlds because you don't like randomness? Well, many worlds still does have an unresolved sort of randomness! Bet you feel pretty stupid now!" The extent to which the idea is satisfying or not has nothing to do with whether it's true or not. To say it's false because it doesn't satisfy all the problems of randomness is pretty stupid.

If you roll an RNG to generate a string of random letters, it's exceedingly likely that you'll get something about as meaningful as "kjyedjhctfdhnkhxt". But it's entirely possible that an RNG issues a sequence of random letters that comes up as, "There is a kind of particle, there are 8 variations of it, they're about two orders of magnitude smaller than quarks, and all the types of quark are made of combinations of those 8." It would be strange, unlikely, that an RNG produces a grammatically correct sentence such as that.
Still more strange if the idea the sentence expresses turns out to be true knowledge that can only be proven in the future - some time in the future, a hypothesis is proved, and the idea was first spoken by an RNG at a time when no one could have verified it. However unlikely it is to happen on any one roll of the RNG, it is possible. But running unconstrained RNG and checking the results is an inefficient means of generating hypotheses or insights.

Whatever the scale, some things are close calls by some sum of quantum effects and decision making. An example. I had been in a Discord channel. The theme there seemed to be that no one ever says anything. Months would go by without a single message or screenshare or use of the voice channels, even though there were dozens of members, and many of them online at any given time. One day I was deliberating about whether I should post a greeting there, along with a .gif related to the original reason the channel was made. I dragged the .gif to the input box, I typed a message, I hesitated, thinking about whether to send or not. I decided I would send the message. As my hand was moving toward the 'enter' button on my keyboard, a dog across the street barked, which stopped the movement of my hand. I deleted the message. I've been in the channel for several years since then, and in all that time only a few times has anyone said anything, and each of them quickly found out that it's an area for radio silence and nothing else. Would my life be radically different if I had sent the message? I don't know. The point here is something else. When you're on the fence about some decision, what tips the scales can often be some arbitrary priming effect like hearing a dog bark.

Why did that dog bark at that particular instant? Let's suppose that dog barked because he had been staring out the window and had just seen a squirrel outside. And let's suppose that the squirrel had run over to that location at that time because he had seen a nut on the ground there. And let's suppose that the nut on the ground had fallen from a tree earlier that day. And let's suppose that it was a gust of wind that morning that shook the branch enough to make the nut fall from the tree. And let's suppose that if no gust of wind quite that big had acted upon that tree that day, the nut would have fallen a day later. And let's suppose that if one quantum random event had happened differently a week before, essentially a quantum random coin-flip landing heads instead of tails, all those factors would have been different by that much, the wind would have been calmer that day, the nut would have fallen a day later, the squirrel wouldn't have been at that location at that time, the dog staring out the window would not have barked, and nothing would have stopped my hand from pressing the 'enter' button on my keyboard, and I would have announced my greeting to that Discord channel. I don't think that message would have changed my fate materially, but this example is to show how many close-call decisions can have the scales tipped by some nearly irrelevant sensation that's caused ultimately by some quantum random event some time before.

A pseudo RNG is just a true RNG with a longer seed. lorem - A pseudo RNG that reaches into the world for its seed in certain ways is identical to a true RNG (it might have to be a lot slower than a true RNG).
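A minimal sketch of those last two notes, in Python, using the standard library's pseudo RNG (the seed values are arbitrary): the same seed replays the same run (rollback changes nothing), a different seed would have done otherwise, and a seed reached out of the world makes the run unpredictable in practice.

    import os
    import random

    seed = 12345
    a = random.Random(seed)
    b = random.Random(seed)
    print([a.randint(0, 9) for _ in range(5)])   # some run of digits
    print([b.randint(0, 9) for _ in range(5)])   # the identical run: roll it back and it does the same thing

    c = random.Random(seed + 1)                  # "could have done otherwise, if its seed had been different"
    print([c.randint(0, 9) for _ in range(5)])   # a different run of digits

    d = random.Random(os.urandom(16))            # a pseudo RNG that reaches into the world for its seed
    print([d.randint(0, 9) for _ in range(5)])   # still deterministic given the seed, but no predictor gets at the seed in advance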
lorem - but also what's the difference between me making a pseudo RNG and then running it, and in the other case an alternate world where some quantum fluctuation a billion years ago resulted in the present time having nothing even remotely resembling the earth? You could say that that pseudo RNG is really a true RNG because in this world it produced a number and in a different possible world it never even existed.

The pseudo RNG. Suppose it's programmed such that when you press the 'go' button it runs a delay of one week, then takes the seed, then uses that to produce a pseudo random number. You press 'go'. What number it comes up with a week from now is free from determinism. If it's many worlds, different ones of those have different numbers produced. If it's branching timelines, different ones of those have different numbers produced. Even if nature's true RNG can't directly cause a neuron to fire, the brain's pseudo RNG can feel out nature's true RNG. (This can explain the cockroach thing. The other explanation can't). The default mode network needs an RNG seed. The one that's as close as possible to true RNG is the best one. Quantum tunneling, like a roller coaster going down a track, but the passengers are rocking it side to side. You have in your head not a true RNG in the sense of something that can dip down into a quantum process at any moment and pull out some randomness, but you do carry with you one of those pseudo RNGs that has a long enough seed that it is a true RNG with a delay.

Even if some decision you make is not a close call, the information you have going into the decision is. So it feels like an easy decision, but if some small chaotic thing had gone a little differently, there's some piece of information you would not have had going into the decision, and you would have decided otherwise.

Maybe things will be okay on account of an external agency who will make sure of it, maybe things will be okay despite the nonexistence of such an agency. Maybe things won't be okay on account of an external agency who will make sure of that, and maybe things won't be okay despite the nonexistence of such an agency.

Local and global maxima in n-dimensional idea space. The creative process in the brain is similar in many ways to generative AI. A local maximum is sought by random walking and constraining and unconstraining variables. Other maxima are sought by taking a bigger-step random jump and then seeking a local maximum from there. (A toy sketch of this loop appears a little further down.) Lorem difference between unconscious creativity and conscious?

The "funny how?" effect. In the middle of this scene, the other guy is thinking "oh fuck, am I about to get killed by this guy for no reason other than he just kills people once in a while under essentially no provocation?" It's good use of RNG. He doesn't just walk across a room and kill someone he's never met. He decides to take an ordinary thing said in an ordinary conversation and pretend like he's misconstruing it as a provocation.

The bronze statue may wave its arm. Imagine the macro odds were shorter. In a baseball game, sometimes during a perfectly aimed swing the bat moves straight through the ball. People commonly leave the house through a ground-floor window instead of the door. It's common to see people walking backward. Lorem more examples here. Our world really is like that, only the randomness is different in degree, so what does it mean to say that the world truly is deterministic? That would have to require the hidden variables theory to be right.
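Here is that toy sketch of the search through idea space (Python, with the 'aesthetic' function and every number invented purely as stand-ins): random proposals inside a constraint, a score that culls the crap, and an occasional bigger-step random jump so the walk isn't stuck forever on a merely local peak.

    import math
    import random

    rng = random.Random()                      # stand-in for the brain's pseudo RNG

    def aesthetic_score(x):
        # Invented stand-in for the aesthetic sense: two peaks in a one-dimensional
        # "idea space", a better one near 3 and a lesser one near 7.
        return math.exp(-(x - 3) ** 2) + 0.6 * math.exp(-(x - 7) ** 2)

    def proposal(current, step):
        # Constraint: stay inside [0, 10]; randomness fills in the unconstrained detail.
        return min(10.0, max(0.0, current + rng.uniform(-step, step)))

    best = rng.uniform(0, 10)
    for _ in range(20_000):
        big_jump = rng.random() < 0.05         # once in a while, a bigger-step random jump
        candidate = proposal(best, 4.0 if big_jump else 0.2)
        if aesthetic_score(candidate) > aesthetic_score(best):
            best = candidate                   # keep it; the crap gets thrown out
    print(round(best, 2), round(aesthetic_score(best), 3))

The small steps do the polishing and the rare big jumps do the job of not settling; that, at a coarse grain, is the generate-and-cull loop described earlier, with the aesthetic sense standing where the scoring function stands here.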
Or maybe a benevolent god scouted out all the many worlds and decided to make real only the best one.

Random erring can be part of a deterministic program. Lorem: that's how I now use mouse buttons 4 and 5.

Lévy walking is an ability that evolution gave us as a predator and prey strategy. Creativity is using the same idea but in idea space instead of physical space. Art is using that to pursue the aesthetic. The difference between conscious and unconscious art creation has to do with the management of constraints. When one is doing art consciously, one is deliberate about when one is applying or lifting constraints. When one is doing art unconsciously, the applying and lifting of constraints is random, just as the movements at any given time within the constraints at the time are random. Dreaming is the least conscious form of art creation, and the least constrained. Conscious: "how can I make an art, x, that expresses y". Dream: start with a random anything, then see what other random anythings could come along, no matter how surreal.

[] Integration, Positive Philosophy

[][] Personality Psychology and Integration (Integration, Positive Philosophy / Personality Psychology and Integration)

Imagine I have two personalities, and I can switch between them at will. The first personality is capable of having lots of valuable ideas pop into his head, but is deathly afraid of doing any organizing aside from just writing those ideas down. His phobia is so extreme he won't even open any document he's written before. He's diligent enough to do the writing, but he never presses 'open' in the word processor: he only ever presses 'new', types, and presses 'save'. The second personality has complete diligence but no creativity. He never presses 'new' in the word processor, only ever presses 'open', moves stuff around, and presses 'save'. He's the consummate over-organizer. He's the kind of guy who would sort the contents of a spam folder into 40 categories instead of deleting them. New ideas never pop into his head. Well, I said, "Imagine I can switch between them at will," and I've been describing a fantasy. An integrated person is someone who can switch between opposites like this at will. The reality for most people is that they're stuck in some set of ways when really it would be better for them if they found it easier to access states of mind that would partake of certain opposites of those ways. The amount of willpower it takes to accept this and complete the task of integration is an amount that seems disproportionate, like it's way harder than it should be. This is one of your difficult assignments.

Now imagine the two personalities are two different people, Phillip and Patrick, each of whom is stuck in his own way. "Your writing is really damn good. You'll have to arrange the bits you've got down if you're ever going to make it worth anything. Heck, if you just arranged the bits you already have down, you'd have most of a book already. Several books, really." "I can't." "What the hell do you mean you can't? All you have to do is open your completed bits and move them around like a machine. That part would be really mechanistic - basically the easiest task you can imagine." "Easy, yeah, that's kind of why I hate it. I've tried it before. I quickly start feeling like my brain is turning into bone. Then I feel like I have to escape if I'm going to survive. So far, I've always escaped when I've got into that situation, and so far that's resulted in me surviving every time."
"Are you aware that's a bogus excuse?" "It would be easy for you, wouldn't it?" "Yeah." "Then where's your book?" "I don't get the ideas in the first place like you do. I haven't had a book's worth of novel ideas in my whole life together." "Now you're saying what I hear like a bogus excuse." "How's that?" "Well, you would be naturally suited to arranging a book if you had your own content written. So start writing content." "Maybe I should. Where do you get your content?" "The ideas just pop into my mind at completely random times, and I make sure that every time that happens, I halt what I'm doing and write it down." "That doesn't happen to me. I don't get the ideas popping into my mind that would prompt me to halt what I'm doing and write them down." "Oh. Weird. I can't imagine being like that." "How do I start?" "How do you start getting ideas that are worth writing down popping into your head at random moments?" "Yeah." "Fuck. I don't really know what instruction I could give you for that. See, you had no trouble explaining your bit to me, because it's perfectly mechanistic. But I can't explain my part to you, because it's the opposite of mechanistic." "Then I might be a lost cause. But you're not! Won't you start balancing your regular MO with the task of organizing once in a while?" "Just like I can't explain to you how I do my thing, I also can't start doing what you're recommending, even though you could explain it." And on and on it goes, and Phillip and Patrick both remain ineffectual for completely opposite reasons. lorem - SLUAI and RCOEN, or ENFP and ISTJ Paint a description of two people whose personalities are opposites to each other along whatever combination of traits. There's about a million variations to choose from. Pick any one. Whatever pair you come up with, you will find that the world contains more people who are stuck as personality 1 or stuck as personality 2 than there are people who can easily flip between them. The mechanism called personality psychology is not inherently bad. It is inherently good. The amount of stuckness in the world is a pretty bad outcome at present. Still, many people have overcome the stuckness. Personality is a set of habits. Habits can be good, and best kept. Habits can be bad, and best abandoned. The habits of personality can be some of them kept, some of them broken, and some of them flipped back and forth, but only if you're flexible enough. Buridan's ass emerges somehow. Lorem. lorem - Buridan's ass isn't any kind of good description of what a donkey would ever do, but it's a wonderful example of what many hoomans end up doing. Of course, that's always been the intention of the illustration - we've had the idea for a while and it was never intended to be a description of donkeys. So Buridan's ass doesn't describe any part of how lower forms of cognition work, but it does describe some emergent effect of how higher cognition works. [][] The True and The Good (Integration, Positive Philosophy / The True and The Good) P F Strawson: do all cases slide into the objective case, or do meaningful gradations remain? Contra Wolf (the 'sane deep self' bit and the 'asymmetry' bit): blame is necessary as a means for trying to turn mn-insane people into mn-sane people. The categories are good, but suggests there's no category mobility. It's all a bit too neat and tidy. A lot of people are mn-insane even if their background is far from JoJo's or the person with the deprived upbringing. 
It purports to solve the endless regress (using something like a self-referencing concept, the true and good). This still opens up a slightly different endless regress. How can someone get the True and Good? When someone else shows it to them. Then where did that someone get it? From whoever showed it to him. Et cetera. This endless regress is easier to solve. A moral determinist wants to do whatever he can do to ensure that other people can see the True and Good, and to ensure that he can help still other people to see the True and Good.

Wolf and Dennett both point out that praise is something we do even in determined cases. Wolf in 'asymmetry' says that blame is not the same as that. Dennett says that blame is the same. Shall we dispense with correction? "Let's not blame him, because he couldn't see the True and Good" or "Let's blame him, so that he might see the True and Good"?

"Virtue isn't something you can win in a damn lottery?" (This is said by the character Maestra in the book Fierce Invalids Home from Hot Climates by Tom Robbins.) If this is true, then it means that there's some way to refute "there's the endless regress, and you're not the author of your own character."

It is not hard to imagine an nth order desire. An nth order desire can be just "desire to" followed by the (n minus 1)th. Sometimes I desire to desire to desire to desire to eat fish and chips. This is nontrivial. Sometimes I eat fish and chips because I desire to, even though I desire not to desire to. Sometimes I eat fish and chips because I desire to, and I also desire to desire to. When I desire to desire to desire to desire to eat fish and chips, and then I eat them, that's the kind of scenario wherein I haven't put on a few pounds recently, I have a few hours to spare, I'm not distracted by things I can't put off until later, and I can go to the chippy and really be with the experience, and really enjoy the flavor unabated by any distracting self-conflict of any kind.

[goes with: a sufficiently intelligent robot would not play poker] "The AI would refuse" thing from Beginning of Infinity. Lorem. [goes with the True and Good] The "positive wealth is generated" principle.

You can be the author of yourself in some ways but not others. Fate is the ultimate author of your character. If you're lucky enough that fate made you capable enough to be able to see the true and good, then you can handle the rest of the task of authorship. The True and The Good is a sigmoidal function.

Consider this. Every adult deer knows that it's a bad idea to jump into a prickly bush, because it hurts. How does every adult deer know that? Unfortunately they don't have the kind of language that would enable them to teach that to each other. Every deer who is now an adult, at some earlier point during their maturation, had the idea of jumping into a prickly bush, not suspecting it would hurt, found that it hurt, and then learned that you don't jump into prickly bushes because that hurts. It's just the bumbling nature of being an instance of life as we know it. Sometimes one of us lifeforms has to learn something a previous lifeform already learned, all over again, and by experience. It's the clumsiness of our origin: original clumsiness. Fortunately we hoomans have mechanisms that are a lot more handy than prickly bushes. You can learn from mistakes without even making them yourself. And they can teach you a lot more than a prickly bush can teach you. I used to like beating people up back when I was a little shit.
I was told that beating people up comes with retributive consequences. Then for a while I was split between my desire to beat people up and my desire to avoid retribution. As I got still older and wiser, I learned still more, and eventually I learned that I don't like beating people up, and for reasons a lot better than retribution. Maybe I would have learned that anyways even if there were no mechanisms of retribution in place, but even in that case, those mechanisms made my learning process a lot less painful for a lot of other people whom they prevented me from beating up.

"That's basically original sin?" "Oh, I don't know the lore. I've seen a few videos. It's turn-based tactical combat, right? I've been thinking of getting it. Should I get it?" "What?" "Original sin. The video game, right?" "No, I meant that thing from the lore of Christianity." "Oh. I've never heard of it. What's original sin?"

Mustache-twirling evil does exist in the world, and that's a hard and important fact of life. This includes vicious evil with none of the traditional exempting or excusing conditions. (Mustache-twirling is a euphemism for genocide-committing)

The extent to which someone's desires change upon learning this stuff is the extent to which the knowledge set them free, and ignorance had been making them unfree. The extent to which someone's desires don't change upon learning this stuff is the extent to which they were almost as good as free without needing to be freed. Therefore a world that by default makes someone desire what knowing this stuff would make them desire is a more freeing world than one that doesn't.

[][] Willpower as a Currency (Integration, Positive Philosophy / Willpower as a Currency)

Willpower is a currency, second order volition effectiveness, atomic habits, and "every plan I make is based on". Easy to practice moderation when there's nothing around with which to get immoderate. Lorem. Managing willpower is the activity of maximizing the effectiveness of the deep self.

I don't understand how anyone can have an alcohol collection. I have a friend who has a stocked bar at home with a range of whiskies, beers, liqueurs, wines, and so on. And it's utterly alien to me how someone could have all that stuff at home and not just be smashed at all times of the day every day. How the heck can you have all that stuff around, and be there, and not drink it? I just can't imagine what it's like to be like that.

[][] More than Conditioning (Integration, Positive Philosophy / More than Conditioning)

It takes struggle to figure out the differences between what you can and can't have. It therefore takes struggle to know the extent of your freedom. It takes knowing the extent of your freedom to have freedom. It therefore takes effort to have freedom. The better part of creativity results from having conflicting desires of differing orders.

[][] The Only Way to Win is Mirth (Integration, Positive Philosophy / The Only Way to Win is Mirth)

Find the Mencken quote. [the one about the guy who delights in chaos of any kind. I have only the audiobook, not paper or ebook, but the audiobook is split into tracks with chapter names. Might be the chapter called 'literati']

The psychological warfare right now has been so effective that people are becoming unresponsive to reasons. The proportion of people in the free world right now who are unresponsive to reasons is far greater than it was even a decade or two ago. We're having one of the defining features of agency sucked right out of us.
Knowing how to improve the world right now requires knowing what to do about people who no longer fit a certain definition: agents with the kind of freedom necessary for moral attribution. That definition was never quite right, but now it's very outdated. The only way to win is to party better. Psychopaths are not responsive to punishment but are responsive to reward. Modern brain rot has a similar effect. If you party better and you only allow good people in, this for a lot of people is the only kind of incentive that can sway them. An artist is the kind of person who is beyond outcomes. That is the kind of agent of chaos I mean.

To say "He'll get his due" is to state something as if it's a certainty when really it's probabilistic and not certain. It is therefore a form of self-deception. It is an unhealthy but common reaction to injustice. It should not be too hard to say "He'll get his due, maybe." Sometimes bad people just get away with everything they do. To kid yourself with the platitude "he'll get his due" is a bad idea for the same reason that playing roulette is a bad idea.

[][] Lorem (Integration, Positive Philosophy / Lorem)

There's a term I need to coin, and I just figured out a name for it. Fuzzy cogs. Concise definition. Fuzzy cogs: if there's something that the book Becoming a Writer successfully expounded, it was a good several of these, and if there's one thing the book A Whole New Mind abjectly failed at expounding in the least, it was any of these. Fuzzy cogs: matters of 'process' that are true, useful, but hard to describe adequately. Things that simply can't be described fully using words, but that are nonetheless worth describing partially in words, but even that takes great skill. Besides being hard to describe, they're hard to justify even when described by someone who has the skill to describe them however partially. They are absolute things, and absolutely useful, despite the inherent limits of language. Example. "Think outside the box." Something that's absolutely true as a thing that can be done, something that's absolutely useful, but something that the four-word injunction does close to nothing to clarify. Past those four words, improvements can be made in extending the instructions, and failed attempts can also be made. Despite the limits of language, there is potentially much worth saying beyond four words - things that can make relevant headway into describing and prescribing the thing. In brief: it's hard to say substantial things about creativity. See other.

The universe may be indifferent, but at least it's not apathetic. The universe is not inherently hostile to your wellbeing. It's the combination of the universe and hooman nature that's hostile to your subjective wellbeing. More accurate than saying the universe is inherently hostile to your wellbeing might be to say that you are inherently hostile to your own wellbeing given the environment of the universe.

What the world needs is a new kind of agent of chaos: rather than someone who causes chaos, someone who can handle living in chaos and remedying it.

A philosophical bit: well, if I had unlimited knowledge, then the best thing I could do would be to invent something that solves some relevant problem better than existing technology does and that can also be manufactured with present technology. So that makes it hard to answer the question "If I were doing something other than each of the things on this list, would that be better?"
If I made the list of things I'm possibly doing with my actual, limited knowledge, and then answered that question from a place of unlimited knowledge, the answer would always be "yes", and abandoning everything listed would be the best option. But if I made the list based on limited knowledge and then apprehended the question also from limited knowledge, then the answer must always be "I don't know, because I don't know what I would find if I looked somewhere else and gained some knowledge that I don't have right now." So what it means to hypothesis-test this is: once in a while, do something that's not on any of the lists, and then in aggregate see how often that turns up something new to add near the top of the list. If infrequently, then the list is probably good as it is. And if the list already includes inventing something that solves some relevant problem better than existing technology and that can also be manufactured with present technology, then that's probably a good sign that the answer effectively is "no". Maybe the more relevant question then is, "Given the things I've already described how to invent, could I describe something even better to invent?"

Reluctance to do my real thing can kick in in less than one second. As soon as I think, "Maybe (right now) I should," it's like what I hear at the same time is, "Maybe I should walk into a jail cell and shut the door behind me."

Some people are not subject to the Yin of Zhou principle. They are fewer than most people interested in psychology think. Almost everyone has it (everyone except sociopaths, psychopaths). Of those who have it, some don't suffer from it and some do. For people who are suffering from it, closing the gap is a matter of getting them informed about their own incentives. If all of them closed the gap, the result would be a massive gain in net wellbeing - almost everyone in the world would be a lot better off. To close the gap completely, the people who are exempt from the Yin of Zhou principle would have to be personally disincentivized from causing much suffering. Lorem - what about someone who inclines toward evil, where we have a legal system that will make the punishment felt by them if they do evil, but when they make their decisions they're impulsive and temporarily don't bother caring that they'll be punished for it later?

You could use an unemotional reward and punishment machine to affect things like second order volitions, habits, etc. Suppose I were to program a machine that can track my data about how well I'm learning some course, and the machine can issue an electric shock if it wants to issue a punishment, or it can dispense a piece of chocolate if it wants to issue a reward. Now suppose that doing a thoroughly good job of learning the course takes more than just memorization skill and more than just dedicated time at some intermediate level of effectiveness: it requires things like making study time effective, focusing one's full attention, putting in the hours, and all that. What it really takes, if I want to make it work, is things like habits, second-order desires, cleverness in reasoning about how I dedicate my brain to the task in the time I can allocate, and other such things. And every week, every day, several times per hour, the machine tracks how well I'm doing and issues some number of electric shocks and some number of pieces of chocolate based on how I'm doing in the timeframe.
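A toy sketch of such a machine (Python; the threshold and the progress measurement are invented stand-ins, and the shock and the chocolate are just print statements):

    import random

    TARGET = 0.6   # invented threshold for "doing well enough" in a given timeframe

    def progress_this_check():
        # Stand-in for whatever the machine actually measures: quiz scores, focused minutes, etc.
        return random.random()

    def check_in():
        p = progress_this_check()
        if p >= TARGET:
            print(f"progress {p:.2f}: dispense a piece of chocolate")
        else:
            print(f"progress {p:.2f}: issue an electric shock")

    # Several times per hour, every day, every week.
    for _ in range(6):
        check_in()

The machine itself can be this dumb; all the cleverness it provokes happens on my side of the electrodes.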
You can be damn sure I will want to get the highest-level design aspects of how I'm using my brain as good as I can get them. Even if my robot friend is brainless and runs on some pretty simple rules for tracking my progress. What this gedankenexperiment proves is that punishment and reward are effective guides to all the levels of agency. Further, perhaps, that if one's sense of praise and blame is purely utilitarian, then praising or blaming a person can be justified in terms not only of what their most knee-jerk reactions are, but further in terms of how all of their agentic efforts have been going, all the way to things like habit formation, second-order desires, cleverness in problem-solving about how to develop one's brain, and so on.

It's possible that the absurd of Camus applies both to everyday thoughts of the philosophical kind and also to philosophical concepts of the rarefied kind.

How much of our evolved moral intuitions should be dissolved by rational insights? And can that amount really be dissolved?

Sometimes you have to do something that doesn't align with your values, just to confirm that you do have agency; then you confirm that you don't like the result; then you confirm what your values are. And sometimes you do something similar, and you disconfirm what you thought your values were, and you figure out that your values are something else.

Don't be a prefrontal cortex soloist. The following is conjectural and based on no evidence. To say that the prefrontal cortex (PFC) is about doing "the harder and better thing to do" is only an approximation. In many situations, this approximation is right. Not always, and indeed one of the worst outcomes for a person is when they become and remain a PFC addict. Your typical over-user of the PFC is the guy who exercises a great deal of willpower only to overwork himself and become miserable. An extremely high-functioning person is someone who has a strongly developed PFC, and who also knows how to make this region shut up and let the other regions talk. Whether this is right in terms of brain regions or completely wrong in terms of brain regions, the point stands. I mean, even if I'm wrong about which brain regions are associated with these factors, I'm talking about some set of real mental faculties, and I stand by my point in terms of those. There's impulsiveness; then there's, if you have enough willpower, the faculty that stands in front of that and says, "Stop! Don't do that. Do the obvious better action instead." Then there's the faculty that, if you have enough integration (or something), stands in front of both of those faculties and says, "Stop, both of you! What about this 'obvious' better action makes it better, and what makes it obviously better? Have you really paid mind to the possibilities? Have you thought of the range of things you could do that might end up better? Have you really weighed those and given them time? Have you tried doing that while dancing?" This is the thing that many people silence because being fastidious is good enough. This is the thing that takes real discomfort to listen to, and even more discomfort to give the floor for an appropriate amount of time. The exceptionally good among us are the people who don't shrink from a wrestling match with this faculty. (Okay, this faculty is probably also in the prefrontal cortex. I probably have my geography all wrong.)

Where there is no great passion, there is no great art.
But where there is only unrestrained passion, there is no great art. (paraphrase of F. L. Lucas)

There's a certain tedious strategy video game that, when I play it, has me burning massive quantities of calories through my frontal lobe. If I no-life this game for a week and eat my usual amount of food, I will lose several pounds of fat. This doesn't happen when I'm doing art. Garry Kasparov had to retire from a chess tournament because he lost more than 20 pounds using his frontal lobe.

lorem - "What men call originality" (from Lewis page 12)

Of all of my favorite nonfiction books, even if you exclude textbooks, almost all are written in a style that leaves something to be desired. Gaiety, in particular, is the metric along which almost all of them fall miserably short. I mean to say that if the style of these books were improved, along that metric and related ones, the result would be not only more entertaining, but they would also make their points better. Douglas Hofstadter stands almost alone in contrast to this trend. That guy is a great writer, as educator, stylist, and source of original insights - you know a Douglas Hofstadter book when you see the inside of one.

(re "fastidious" "PFC soloist") And the amount of kinetic energy required to operate a lockpick is a lot less than the amount required to operate a battering ram.

On the scarcity of doing and becoming. Everyone has a 99th-order desire for doing and becoming. And almost everyone is either denied the opportunity or denies it to himself by letting it get superseded by one of the lower 98. What box are you not thinking outside of? This is often the most important question you can answer, but when you do, it will always be from within some wider encompassing box. And if you ever work out what that box is.. well, it's boxes all the way outward. Only when you solve about 98 levels of that puzzle will you know how you can make your 99th-order desire be effective. Mathematical model. First, weights: the first order has weight 1 (or 100, or whatever), then a geometric sequence gives the weights of each of the rest of the orders. Then the ratio of an order's weight to the first order's weight is how effective that order is.

Life is like a sailing race in a storm. It's exceedingly likely that your outcome will be decided entirely by the weather. Still, there's some small chance that how you steer will have some effect on your outcome. What ought one to do about all this? One ought to steer.

It's generally accepted among psychologists that the subconscious can make you do things for reasons you're not aware of at the time. It can have a number of things, such as goals, but in the standard model, the goals of the subconscious are simple, and it has little to no ability to form plans. For example, if you hate your job, and you think you've decided to stick with it, but your subconscious has other plans, you might insult your boss at a particularly inopportune moment, to your own surprise, and get yourself fired. That's your subconscious getting a goal done that you didn't know you had until the deed was done. The subconscious is also capable of long-term planning, although this is not accepted psychological science. Here's how it works. You consider a number of options of things you could do. Then, safely tucked away out of sight of your conscious mind, your subconscious has the realization, "Oh, the best option is this one thing that will take great pain and toil and time.
So much that the conscious mind would balk at the amount of commitment required if it knew a realistic assessment. So, we shall do this thing, but I will have to offer up many defense mechanisms (rationalization, denial, et cetera) so that when we're on this adventure, the conscious mind thinks we're doing one thing when really we're doing some other thing." Then you face more pain and toil on that adventure than you consciously knew you were buying in for. And only when you're through most of that adventure can you look back with clarity and see what the defense mechanisms were and what differences there were between what you were doing and what you thought you were doing. That's how it works when the subconscious decides to choose a path of long-term strife and act as your shield.

On the difficulty of overcoming one's own personality psychology. You can have the whole dilemma worked out fully and remain extremely reluctant to do what you know is the best thing to do. We find it so easy to notice the habits of other people that it takes no effort, and we do it automatically. And we find it so hard to notice our own habits that it takes scrutiny, and sometimes we don't detect them even after we give our best efforts in trying to.

Some educators of hoomans use dog-training clickers. They say the learning task works better when you take the blame and praise out of it.

IRL, if you extend the poker-playing robot to enough levels, you find a dial labeled "desire to play poker" which can be turned down.

[] Intermission Bits

Intermission bit. "I could have held three dumbbells at a time yesterday if I had a third arm yesterday, but I didn't have a third arm yesterday, just like I didn't have the part of a brain that we refer to as a desire different from the one I did have. All this talk about imaginary body parts. You talk like you're laying blame, but all I'm hearing is a bunch of cryptozoology."

2 Laozi quotes; check if the translation is good, check how applicable in terms of surrounding context, consider doing my own translation: "Who the cause shall truly scan" and "dispense with correction" per the Legge translation.

[is rewrite] "Did you know that free will is an illusion?" "It's not! I saw the whale jump over the thing, and I saw it with my own eyes." "But what you saw was really a rapid succession of still images. The motion of the whale was only apparent, not real." "Fuck me. Free Willy is an illusion."

Junius: "Is free will real, or not?" Jack: "Like God, free will is real if you believe it's real, and it's not real if you believe it's not real." Junius: "That's now the stupidest thing I've ever heard about free will..." Junius: "Oh! And possibly the stupidest thing I've heard said about a god."

[is rewrite] The universe is ridiculous, so to be a microcosm of the universe - which, by the way, is a good idea - you must become ridiculous like the universe is.

Interlude: "The one commandment / 1: This is the one commandment." Determinism <- Interlude: the original tell-a-duck-to-be-a-duck piece.

[] Lorem 1

Morgan Freeman quotation from The Shawshank Redemption about "I've been rehabilitated". Appendices: just the arguments laid out formally, just the narrowly-defined stances laid out concisely.

[] Lorem 2

Nothing violates the laws of physics, however???? (emergence has a backdoor into something that's just as good?) "aperiodic crystal"

I acknowledge that there might be things that knock down some of the things I've said. I don't know for sure if those are genuine knockdowns or in error.
I severely doubt that any of those potential cracks do much to undermine any of the applied matters that I have argued for.

"Meta-skeptical" about ethics: no criterion for moral responsibility is objectively right. Wikipedia: moral skepticism is a category that includes error theory.

Trying to retain freedom of will in the light of modern science is like the epicycles thing.

Homer Simpson (to himself): Remember the advice your father gave you on your wedding day. Abraham Simpson (in a flashback): If you ever travel back in time, don't step on anything, because even the tiniest change can alter the future in ways you can't imagine. Homer Simpson: Alright, as long as I stand perfectly still and don't touch anything, I won't destroy the future. Stupid bug! You go squish now! *swats a giant mosquito* *gasp*, but that was just one little insignificant mosquito. That can't change the future, right? Right? A passing giant sloth: *grunts in a manner that sounds like "I don't know"*

"Nice hand." "Yeah, the shuffler did it." "Seems to me about as strange as congratulating someone for how tall they are." "I have been congratulated for how tall I am more times than I can count."

The tasp from Ringworld.

The excessive apologist stance. Give children guns, and if they murder someone, give them 2 hours of timeout.

A hard determinist would probably have to say that what he means by free will is something other than any commonsense definition.

Bob Sapolsky quote (from Making Sense, the book adapted from the podcast) about God hardening Pharaoh's heart. [about the Ori]

Everything that's ever happened to me as a result of my own agency is because of what kind of agency that is. And what kind of agency that is is something I chose before I was born, when I had no information about anything.

Lethargism is to disagree that a computer takes time when it uses its processor. One argument pro lethargism is to send someone a link to a video by a talented musician who got no recognition and has since disbanded his music group.

Let's imagine that among the many homonyms in the English language, the negation of determinism had all along been 'tomato'. Philosophers have disagreed about whether or not determinism is true. Some of them say that determinism is true, and some of them say that determinism is false. Or, in terms of the negation of determinism, some of them say that 'tomato', in the philosophical sense, is false, and some of them say it's true. Now a groundbreaking new theory comes along and says that this disagreement has all along been a confusion of language, and that this can be proven by showing that we in fact eat tomatoes regularly. In this case it would be clear that the tomatoes we eat never did have anything to do with the determinism debate. "I'm a determinist and I ate a tomato this morning."

Seneca: no man was wise by chance. Kierkegaard: the chess piece that can't be moved. Crushed under the weight of the lives I didn't live. It's bad luck to be superstitious.

"There's more beauty in truth, even if it is dreadful beauty" - John Steinbeck, East of Eden

SMBC: the face-up prediction. SMBC: the most utilitarian thing you can do is not believe in utilitarianism.

[] Lorem 3

Retro appendix. Check the website for an appendix citing more of which other authors contributed which ideas, and possibly an expansion pack of footnotes.

lorem - it could be argued that some stance other than expressivist is required to make satisfactory sense of all parts of the legal system.
As for the parts of the legal system that are treated in our present investigations, nothing more credulous than expressivism is required, so we take expressivism as provisional for our present purposes.

Just mention in an end note which of the stories are true.

This book does not end with book recommendations. This book is designed so that you're done reading on the topic when you're done reading this. I would say movie recommendations are in order. Lorem - and TV, YouTubes, docos.
-Movies-
Memories (1995), part 3: Cannon Fodder
Ikiru (1952)
Paprika (2006, the Japanese film of that name, not the other one)
Lola (2022)
-Docos-
The Crime of the Century (2021 documentary in 2 parts)
-TV-
Love, Death & Robots, S1E14, Zima Blue
The Simpsons, Treehouse of Horror, Time and Punishment
-YT-
How to Radicalize a Normie

There are things that you might want rescued from that I won't try to rescue you from. My only guiding principle is to rescue you from inconsistencies and obscurantism. If I also rescue you from things other than those, it will be incidental. One thing I won't try to rescue you from is fate, but I will try to rescue you from any apathy that might come from fate.

Dedication: to Tom Robbins

Disclaimer about poker. Poker is usually bad for your health. If you play poker, play for nickel-and-dime stakes with your friends or online, log all of your results, and read books and/or articles about it. Treat it purely as an intellectual activity. Anyone smart enough to make a living playing poker anywhere on earth is smart enough to do things that are much more worth their time. If you play for nickel-and-dime stakes enough and read enough to get good, then maybe play for real stakes once in a while. Even that is probably not worth it.

[] Metaethics, Ethics, Ontology

[][] Expressivism (Metaethics, Ethics, Ontology / Expressivism)

"What do you think of the Harry Potter convention?" "It's good. They have this fiction called Harry Potter, and they've figured out how to get together on that theme and have a good time." "What do you think of the court trial?" "It's good. They have this fiction called moral responsibility, and they've figured out how to get together on that theme and maximize the odds that we have a good time."

In the study of law, a business is a type of "legal fiction". This name suggests that a business is a thing we made up, but we keep using it because it's a useful thing we made up. Lorem: that's what Yuval Harari said, but Wikipedia disagrees.

A philosopher once said, "It's been proven conclusively that morality is a type of fiction. We must protect the populace from this information! Quick, put that information under lock and key!"

'Normative' is a good word. It suggests that maybe morality is nothing more than social norms. It seems almost more similar to the idea of etiquette than to ethics.

Since determinism is effectively true, moral categories are not real things. Since it's useful to treat moral categories as real things, moral categories are effectively real things.

The general always knows more about how the war is going broadly, and the soldier always knows more about how one specific battle is going. So the aristocracy can know that free will is an illusion and morality is a set of fictions, and then tell the populace that free will exists and moral categories are real things.

If the world got the Threads treatment and beyond, and then bounced back, triangles would still be there for the discovering, but Harry Potter wouldn't. Of which kind is morality?
Free will? There would be Aristotle 2.0. There would be Jeremy Bentham 2.0, Harry Frankfurt 2.0, and all the rest. Does that mean that these things are natural categories? Maybe they're just the reliable patterns of hooman nature. [][] Responsibility or Culpability (Metaethics, Ethics, Ontology / Responsibility or Culpability) If you think that "could actually have done otherwise" is required for blame, and you say "someone with better sense would have done otherwise," then what is the name for this action? You have some word for it other than blame? Correction? I never blame people, but I give them correction when they transgress? Blame is when someone transgresses and then people say he could actually have done otherwise. Correction is when someone transgresses and then people say he could have done otherwise if he had more sense. An alternative: the second thing is what I call blame, and I don't have a word for the first. It may be right to hold someone accountable for something they did even if they were not the ultimate author of their own character. If determinism is true, this would require saying that you hold someone accountable for things that happened 10 billion years ago that they had no ability to affect (or you hold them accountable for the conjunction of that plus the laws of physics). If this is right, it's a minimal form of what might be meant by accountability. This has nothing to do with the concept "someone did something before you were born and could actually have done otherwise and you are guilty for it" which is silly. It never makes sense to frame it this way, that you're responsible for things that happened before you were born, except when addressing objections to determinism. But if you accept determinism, and someone presses you on the implications, you have to say yes, technically it comes down to that. [][] Exempting or Excusing Conditions (Metaethics, Ethics, Ontology / Exempting or Excusing Conditions) A psychopath is someone who would kill you to steal your car and then park it in front of his own house. When he wanted your car it made sense to kill you to get it. When he could have thrown away the evidence it made sense not to because he would have had a longer walk home. [][] Lorem (Metaethics, Ethics, Ontology / Lorem) You are a persistence of pattern in the same way a river is, which is the ontological sense that makes it handy for naming things. Does that rule out the reality of other senses of how you can define what you are? "Who are you?" "Bob." "You are Bob." "Yeah." "What does that mean?" "Uhhh.. it means that if you want to get my attention, the word 'Bob' is what you say?" "Okay. But in what sense are you Bob?" "What? What senses are there?" "Suppose this. Bob, you are some exact configuration of molecules, this one with four limbs and one trunk, and all the further details. But only as long as you are this exact configuration of- oh, Bob's gone. I saw a molecule here and there fly off and a molecule here and there clump on. Now this arrangement of matter is like 0.0001% different, so it's not Bob any more. A few different bits, no longer the same thing really, no longer Bob." "I'm not Bob now?" "No, if the person I was talking to a minute ago was Bob, then Bob isn't the person I'm talking to now." "Who am I now?" "Someone else." "I feel a lot like Bob did. But I'm not Bob?" "Imagine that you and I now are sitting by the side of a river, alright?" "K. Sitting next to a river." "We're watching the river. 
At every moment, the water in that river is flowing by at every point along that river." "Pretty much as rivers work, as I'm familiar with them.." "And we see that that's how the river works in general. Even if we go away and do something else and come back the next day." "And the river's still there?" "A river is still there." "A river is still there, but it's not the same river?" "What about in terms of droplets of water? We were looking at the river the previous day, and we were looking at specific droplets of water that together filled this space. Those droplets shortly later became part of a lake, down over there, and that's where they've been for most of this one day. And we're looking at a river now. And the droplets that make it up were one hour ago dripping off some glacier, up over there, and just joining a flow." "Not the same river, then? Or is it?" "Is it the same river or a different river?" "I don't know. You tell me." "Is it the same river or is it a different river? The answer is 'in what sense?'. In one sense, it's the same river, and in another sense, it's a different river." "What are these senses?" "Well, one happens to be very handy in terms of how we use names for things, and the other one isn't." "Yeah?" "But that's only one of many things that make up the difference between the two senses." "Okay. In what sense is it handy for using names?" "To say it's the same river." "But it's not really the same river?" "Really? I don't know about 'really'. In terms of matter, it's not the same river as it was the day before. In terms of how we use names for things, it is the same river as it was the day before, or at least we call it that. I mean, I could phone my friend Jim now and say 'Hey, let's meet by The River Blackbird,' and he would know to come here, to this location in particular. No more words necessary." "Handy, as you said." "Handy. But the trick only works because Jim and I were at this location before, and I told Jim that the name of the river we were looking at was The River Blackbird." "But it wasn't really the same river as the one you would be talking about if you phoned him right now?" "I don't know about 'really'. In terms of droplets, it's not the same river. In some other sense, it is the same river." "Okay. I know in what sense it's never the same river. In what sense is it the same river?" "In terms of persistence of pattern." "Aha! In terms of persistence of pattern, it is the same river. It's only in some other sense that it's not the same river." "Yeah, and usually when we use words for certain things, rivers included, we use words in handy ways that don't get too wordy, and we can use words to refer to things that persist in pattern, even if they don't persist in exact arrangement of specific particles." "Like when we're talking about rivers.." "Yeah.." "And what else?" "A number of other things. People.." "Oh yeah. Am I still Bob in some sense?" "In some sense, you're still Bob." "Oh, good.. But not in every sense am I still Bob?" "Not in every sense." "If I was Bob a minute ago, then now I'm in some senses still Bob and in some other senses no longer Bob." "Sad, but true." "In what sense am I still Bob?" "First of all, in the sense that makes it handy to use words to mean things." "Would be a lot less handy if you had to call me by a different name every minute." "And in that same sense, there are many other reasons why we could say you're still Bob. Not just for the sake of sparing on how many names we issue to things."
"And that sense is the persistence of pattern sense?" "Yeah. You're Bob now in the sense that a pattern has persisted that is now you and a minute ago was almost identially the same pattern." "The difference is I have now one new set of memories of talking to a guy who sounds like he's on a lot of drugs." "Exactly right, but what about the rest of your memories?" "Good glob, I have almost exactly the same set of memories that Bob had. One new one I added to the set. Maybe one old one fell out to make room for it, but the other 99-someodd memories I have now are memories Bob also had." "Then you're worthy." "Worthy? Worthy of still being called Bob. Even though Bob existed one minute ago. And we're in doubt about exactly what that says about me." "So there's the persistence of pattern sense of identity. With certain things we like naming, it's handy for assigning names to things. It's also handy in other senses, like saying you're the same person as Bob, or rather, you're Bob, still now." "But there are other senses of identity?" "Yeah." "And they're less handy?" "Are they?" Does the word 'triangle' refer to a nonphysical actual, or to a regularity among physical actuals? If the word 'triangle' refers only to a regularity among physical actuals, whence comes that regularity? And do we really have no justification for calling 'triangle' a nonphysical actual if that regularity does indeed exist among physical actuals? If 'triangle' refers to a regularity among physical actuals, but we do not have the authority to deem 'triangle' a nonphysical actual, is there something restricting us from doing that? Could some hypothetical authority, more intelligent than me, say "it is incorrect to call 'triangle' a nonphysical actual"? If in that case there still is something that could qualify as a nonphysical actual, what if not 'triangle'? Are there some absolute criteria for what does and what does not count as a nonphysical actual? And if there are correct answers to all of these questions, do any of them matter? Do the words 'and', 'or', and 'not' refer to nonphysical actuals, or to regularities among physical actuals? If the latter, are those regularities things that evolution naturally inclines us to? If 'triangle' is not a nonphysical actual, it sure is something that is profitable to talk about as if it is a nonphysical actual. And even if 'triangle' is not a nonphysical actual, there sure is some regularity about things that invests some utility in talk of triangles. Whatever status that indicates about triangles, could all the same things be said about an abstraction such as 'car'? You could define the minimal case of 'car' as something that has a drive system, a steering system, and a suspension system, and you could further define what counts as each of those three things. Now when I see something that has some assemblage of bits, and those satisfy the criteria of drive system, steering system, and suspension system, all in one assemblage, I see that together that all satisfies at least the minimal criteria of 'car'. Is that some kind of contrivance, some kind of artificiality, that makes 'car' less fundamental than 'triangle'? I have a minimal criterion for what counts as real emergence and I have a minimal criterion for what counts as a real abstraction. If the utility of some assemblage is greater than the sum of the utility of its parts, then there is real emergence. 
If the profitability of talking about a system is greater than the profitability of talking about its parts, then it's a real abstraction. This is a working definition. Let's just talk as if these concepts have some kind of ontological status of being 'real'. Then whether they're really real or not is perhaps an uninteresting question. Whether or not morality is a nonphysical actual, it's a utility. So we'll talk about morality in the sense that morality is a utility. And as for whether morality is a nonphysical actual, or a physical actual, or neither, we won't bother with those questions. Bob recently had an experience that made him starkly aware of the two different senses of past self and how he handles both senses in combination. Two years ago, Bob took a course on computer programming. There was only one course material: a 12-hour video that was really a compilation of several shorter videos. In taking the course, Bob learned how to set up the programming environment on his computer, and many times in the course, there's an instruction to pause the video and solve a quick programming challenge. So it took a lot more than 12 hours to watch the video set and play along. All the while, as Bob was taking this course, he produced his own set of study notes. For every lesson about a concept, Bob made notes on the concepts. For every programming challenge, Bob finished the challenge, took a screenshot of the code, took a screenshot of what the computer does when running the program, and wrote down a description. A thorough set of notes for a thorough process of taking the course. After that, Bob didn't use the programming language for a couple years, and nor did he take another course on the same programming language, and nor did he review his notes or the course video. Now it's two years after Bob took the course. Earlier today, Bob went to the library to refresh on the course, and he brought with him a laptop computer that has on it the course video. The best process for reviewing the course would have been to bring the video and the review notes he had originally taken, but he forgot to bring his review notes. So there's Bob at the library, and he's rewatching the course video on his laptop, and he's left his review notes at home. As Bob watches the course video, it's highly effective in restoring his brain to a state that understands the things in the course, ready to use those things and program again in that language. It does take some effort - at many points, Bob does have to pause the video and digest a concept, but altogether it's working very well at shaking the rust out of that part of his brain and restoring it to a fine polish. 12 hours of this and Bob will be almost perfectly back up to speed on all this - it's taking an effort, and it's working. An hour into this process, Bob realizes that he didn't bring the review notes he had taken those two years earlier. To rewatch the video and reread the notes would be a perfect review process. Rewatching the video absent of those notes is almost perfect as a review process, but not quite. That's fine. He decides to stay at the library and keep watching the video. He'll review the notes next time he gets back home. Now, as Bob keeps watching the video, at several points, he's thinking, "Oh, that concept takes some effort to understand. Did I write about that in the review notes? I don't remember. I wrote that set of notes two years ago. But I must have. 
Me two years ago was a pretty smart guy, and he did do a thorough job of making review notes, so it would be a pretty bad lapse if he forgot to write up that idea in the notes. Still, I don't recall if I did or not - I'll see later when I get back to those notes." Now Bob is working with his past self in two completely different ways at the same time. In the sense of self that's a persistence of pattern, he's getting refreshed on this course he used to know well, that he neglected for a while, and that he's now still able to restore in his mind. That takes a bit of effort, but it works as long as he's watching the video and pausing once in a while. In the sense of self that's more transient, more like an instantaneous thing, he doesn't remember what he wrote in the review notes, or whether he did or didn't remember to write this and that idea - there's zero recollection of that sort of thing. That sense of Bob two years in the past is essentially working like it's a separate person. Let's suppose that one year ago, Bob's computer caught fire and he lost the notes he took, but Bob's friend Jim also took the course at the same time Bob did, and Jim also took review notes, and Jim is also a good note-taker and a good course-taker, and Jim did a thorough job of taking the course and taking notes on it. After Bob's computer fire, he asked Jim for Jim's set of review notes. Let's suppose that Bob at the library today left Jim's review notes at home, not his own. Now when Bob gets to one of those parts in the course with a particularly tough concept, he's thinking, "Oh, that concept takes some effort to understand. Did Jim write about that in the review notes? I don't know. I wasn't watching Jim when he took those notes. But he must have. Jim is a pretty smart guy, has been for more than two years, and he did do a thorough job of making review notes, so it would be a pretty bad lapse if he forgot to write up that idea in the notes. Still, I don't know if he did or not - I'll see later when I get back to those notes." In that second scenario, what Bob is thinking about Jim's notes is pretty much what in the first scenario Bob was thinking about past-Bob's notes. In that sense, Bob of two years ago is effectively not present-day Bob, but a different person. But when Bob is watching the video today it's not like he's watching it for the first time. It's not like someone else took the course two years ago and Bob's just seeing that video for the first time today. No, his refreshing on the course by rewatching the video now is going way faster than what he had to do the first time he took the course. In that sense, present-day Bob and Bob of two years ago have diverged only a little, but it's not taking too much effort to merge that part of them back together, because of how nearly they're just the same person. Bob is counting on persistence to work almost perfectly, and it is. So, that was the day Bob realized how there are different senses of self, how these different senses have different amounts of persistence, how sometimes you have to confront both at the same time, and how the thought process works in a scenario wherein you're managing both of them in some combination. Is gravity real? It's an abstraction that's profitable to use. Is it really real? Uninteresting question. Answer: don't know. Is it a real abstraction? Interesting question. Answer: yes. It's a real abstraction, which is what matters, and whether it's really real doesn't matter. 
If I say you shouldn't rob people in broad daylight, am I moralizing, or just giving you good advice about how to stay out of jail? Is it necessarily one and not the other? Is it necessarily both? Is it possible that it could be either or both? Is it possible I'm saying that because I don't like it when other people have a bad time, even if I think it's neither morally good nor bad? "Don't rob that person. Not because I think it's morally good or bad to rob people, I don't think either; it's just because I don't like it when other people have a bad time, and when you rob people, they have a bad time." "Oh, stop moralizing." "I'm not moralizing! I said I don't think robbing is morally good or bad."

Realists and nonrealists can talk about 'responsibility' and 'deserve' and confuse each other about whether they mean reals or just conventions. Is Harry Potter real? And boo-hooray. Boo-hooray is potentially not terrible for implementing a legal system. Murder: boo! Big boo! One of the biggest. Therefore when someone murders and he's caught, we put him away for a long time. Shoplifting: boo! Not really the biggest boo. Therefore when someone shoplifts and he's caught, we apply a punishment not quite as big as for murder.

"Why are all those people at that festival wearing pointy hats and waving little sticks around while uttering Latin phrases?" "Because of Harry Potter." "Why was that guy just put in a prison with a five-year sentence?" "Because of moral responsibility." It is possible that these two answers, "Because of Harry Potter" and "Because of moral responsibility", are both referring to things with the same kind of ontological status - pure fictions, but fictions that pass the requirement of being the best way of explaining something that's happening, and not necessarily something that shouldn't be happening.

Lorem. Maybe gratitude means attributing some kind of metaphysically real deserving to a person. But whether it does or doesn't, it's a memory aid for remembering who is good for your health, and that's a good enough reason to keep using it.

[] Political Philosophy

[][] Gamebreaks (Political Philosophy / Gamebreaks)

Exactly 50 percent of the effort exerted by lawyers, legislators, and philosophers is dedicated to attempts at gamebreaking. There should therefore be no surprise that there are legal systems that are in shambles on account of bad-faith arguments supporting indeterminism, and there are legal systems that are in shambles on account of bad-faith arguments supporting determinism. Whatever the mitigating factors to blame might be, some clever people will always try to invoke them gamebreakingly, and lawyers are often employed to apply all their cunning to this endeavor.

In every society that can avoid becoming overrun by evil, there must be people ready to conquer potential conquerors. I would prefer to live in a society such that, if someone wanted to conquer it, we would have the means to prevent that. And that includes threats internal as well as external. A functioning society must have the means of preventing crime from overrunning it. There's a tendency that a society doesn't last long when it doesn't have, or loses, the ability to counter threats internal or external.

[][] The Case for Consistency (Political Philosophy / The Case for Consistency)

Why do I vote? If no one did, then things wouldn't work very well. And that's a good enough reason. It's a categorical imperative.
Lorem reasons for voting in terms of the game theory and how it's possible but not likely that you'll swing a result with one vote. Lorem what this has to do with legal punishment. Dismiss one case on account of determinism and deterrence just got weaker. Do that too many times and deterrence has no effectiveness left. What about if someone has become too feeble to be capable of committing another crime? Relenting in that case would also weaken deterrence. "I've been called as an expert witness to provide my expertise on behavioral biology in upwards of 15 court cases, ranging from murder to multiple murder, and every time, I've said that the person couldn't have been guilty because he wasn't the ultimate author of his own character, and every time they've ruled the other way." [][] Lorem (Political Philosophy / Lorem) Cleverness got us into trouble: clever tricks that make people harm themselves. Cleverness will get us out of trouble: clever solutions that dissolve the clever tricks. Let's say there's a fictional country called Canadoo, and in this country there's a guy who has been going around murdering people, and the courts have ruled, "Well, this person's grandparents had a really rough time because of what some of our grandparents did, such a hard time that this guy has been underprivileged, and underprivileged people tend to do more murdering on average than privileged people do, so when this guy goes around murdering people, it's technically not legal, but the enforcement of the law shall be to do nothing." This is an argument that makes use, specifically abuse, of the idea of determinism. So the guy keeps going around murdering people, because that's what he likes doing - it's not something most people like doing, but this guy does - and every time he does, the enforcement of the law is to do nothing. Stealing cars is also technically illegal in Canadoo, but the enforcement of the law when that happens is also to do nothing. Regarding that, the law says something like, "Well murdering is a crime that has zero disincentive for some people who can claim determinism as their reason for doing it, and any claim for determinism is just as good for anything else, and stealing a car is a lesser crime than murdering, so if the legal penalty for murdering is none, then the legal penalty for stealing a car must also be none." Not coincidentally, this makes it very inexpensive to run a legal system while saying it's not short on resources, which is especially handy in a land where an incredible amount of government funds get siphoned off to a few special top-ranking officials of the government and a few of their special friends, and Canadoo is one of those places. Lorem this is an example of misusing the idea of determinism in order to run a massive grift. That government of Canadoo had the broader MO of doing whatever they can to grift, as long as they have a cover story that can plausibly be spun as blunder. And sometimes that was to take a philosophical idea such as determinism, corrupt the interpretation of it in a way that could be passed off as a blunder, and then use that as a means of grifting. A person rescues 100 people and then murders one person. Thoughts? Why should there be legal punishment if he's done an amazing net good? But if there wasn't, then what would happen to a legal system? To what degree do my emotions map on to agentic intuition, to concepts, to what kind of ontology those concepts have? 
Perhaps in this case my reasoning ends up at something like "The legal system has strangely little to say about what the consequences of doing something nice out of the kindness of your heart are. But the legal system has to maintain that murdering one person is illegal, and that there must be enforced punishments for murdering one person, your reputation otherwise being immaterial to that. Because where would we be if the legal system couldn't do that? If the legal system weren't in the business of making murder illegal and enforcing such a schema, things would turn pretty bad pretty quickly. So murdering has to be illegal, and that illegality has to be backed with a systematic mechanism of dealing punishment." So the endpoint of your moral intuitions was about what legal mechanisms we need to have in place in order to prevent bad things happening in cases such as murder, where usually the murderer hasn't also rescued 100 people the day right before murdering. It's utilitarian (rule utilitarian?) about what we need in the form of social conventions. Social conventions with teeth. Indeterminist model because children and criminals. Small brains understand the indeterminist verbiage and might not understand the more technical ways of framing it. Possibly the law books have to match. Possibly not. Philosophy is what some people would call the interesting part of law and what other people would call the uninteresting part of law. How quickly will we have the relevant technologies that would cause us to change our attitudes, and how quickly could we change our attitudes? This is a different question from the question of how quickly we should change our attitudes in light of hypothetical technologies. Brain scanners may some day be so powerful that a handheld device could make relevant predictions about a person's decisions quickly enough for those predictions to matter. That would take technology so far advanced from those of present day that it could be called "really far off" (barring a technology singularity). Assuming it will be a really long time before we have technology like that, it could be argued that we will have to settle with our barbaric attitudes for as long as our technology hasn't made those massive gains. [] The State of the Matter, Stances [][] Stances, Defined (The State of the Matter, Stances / Stances, Defined) "Does free will exist?" "Free will can mean a number of different things, some of which exist and some don't." Soft determinism: metaphysical freedom and moral freedom are two different things and not identical. Hard determinism: metaphysical freedom and moral freedom are two things that are identical. Silliness: metaphysical freedom and moral freedom are the same word. Determinism is true, and therefore there's no freedom, no free will, no freedom of speech, no political freedom, no interstate freeways, and no free samples at Costco. There's only fixity, determined will, determination of speech, political determination, interstate determined-ways, and determined samples at Costco. "You're disagreeing with the people who say that the disagreement is not real? I don't know if I can handle a triple negative right now." "The disagreement is real (single negative). There are two disagreements. One is whether metaphysical freedom is real or not. The other is whether metaphysical freedom is identical to moral freedom. Well, some people say that all that is beginner mode and that the relevant questions are neither of those. 
They say that the answer to both of those questions is "doesn't matter". Whether or not metaphysical freedom exists, they say, doesn't matter, and all the relevant questions about moral freedom are unrelated to metaphysical freedom.

"Are you saying that most people who call themselves hard determinists are really soft determinists in denial, and that most people who call themselves soft determinists are really free will libertarians in denial?" "I don't know about 'most'. I don't know the population distributions, really." "Are most free will libertarians really something else in denial?" "Yeah. Idiots." "Whoa, shots fired! Let's be more civil. I don't even think you're right, I mean, I don't think you can be so sure that they're wrong and you're right to warrant calling free will libertarians idiots. Political libertarians, maybe."

lorem - "Whoa, shots fired! I don't know if political libertarians can be called idiots even if free will libertarians definitely can."

[][] Never Resolved (The State of the Matter, Stances / Never Resolved)

"At every point in history there were things that were understood by science and things that were still mysteries. And every time history advanced, more of the things became understood by science, and science could explain them in terms of simple cause-effect statements. By induction, I predict that this will continue until all the things are understood by science."

"At every point in history there were things that were understood by science and things that were still mysteries. And every time history advanced, no matter how many more things became understood, some remained mysteries. By induction, I predict that there will always be remaining mysteries no matter how many more things get understood by science."

Whatever the best accounts are that can be given for the various stances, all will have some remaining mysteries. Determinist: "Why the illusion of free will appears to us exactly the way it does - I can provide some parts of that answer, but there are parts that haven't been figured out yet. I contend that the remaining mysteries aren't as bad as those of opposing stances." Indeterminist: "How exactly there can be an uncaused cause is a bit mysterious. I contend that the remaining mysteries aren't as bad as those of the opposing stances." At best, your choice will come down to who, in your opinion, has the less-bad set of remaining mysteries. Not that every question in all these analyses is a matter of opinion, but if you finish the task of really reading this book, you'll probably choose one stance, and say that the remaining mysteries for that stance seem less bad to you than those of the others, and that is a matter of opinion.

What you're saying amounts to, "Thinking is hard, and I give up on thinking and making sense," but you've couched that in a bogus argument that makes it sound like thinking and like you haven't given up on thinking. I don't like it, because apathy makes me sad when I see it. That charge is bogus. What you just said about how my argument sounds to you - that's exactly how your argument sounds to me. That counter is bogus.

[][] Ground Rules (The State of the Matter, Stances / Ground Rules)

2 questions: "What would we need?" and "Do we have that?" 2 questions: "What do we have?" and "What should we do about it?" (descriptive and prescriptive)

Here are some warning signs you might notice to indicate that you're reading the work of a confused philosopher.
There's almost never a good reason for a text to say, "I think free will means-" rather than, "Here I will be defining free will as-". Likewise if you see something like, "Let's consider the work of someone who thinks free will means-" rather than, "Let's consider a piece of writing that defines free will as-". Defining terms is great. But if someone is taking a stance based on disagreeing about a definition, or characterizes the work of other people as disagreeing about definitions, you might be reading the work of someone who is identifying a nothing as a something. lorem - there is one exception. That's the appeal for all participants to agree to use a definition that matches the commonsense definition, or some technical definition.

Other warning sign: "I think determinism and free will are compatible (or not compatible)". With this one it's a little less obvious that it's a disagreement about a definition, but it is just a disagreement about a definition. You must learn to spot the difference between an essay wherein everything being said amounts to supporting an argument about a definition and an essay that's not completely pointless. This also applies to written or verbal argumentation about almost any topic.

Even if a hidden-variables theory is true, the discussion of indeterministic effects is relevant. [goes with: even if fundamental indeterminism is true, the discussion of deterministic things is relevant]

[][] Lorem (The State of the Matter, Stances / Lorem)

When a determinist says, "You should have done better," he means, "Do better next time," or, "You could have done better if you had been a completely different brain at the time." When an indeterminist says, "You should have done better," he often does mean, "You could have done better with the same brain in the same situation," which is pure nonsense.

A consistent determinism might require biting some bullets. I prefer that to bad arguments for free will. Bad arguments for free will require biting the bullet of having no plausible mechanistic account (or deciding to buy into a bad one). Determinism does not require the fatalism/nihilism bullet.

What's the state of discussion right now about determinism? Democritus versus Epicurus has not been settled. Whence comes the swerve? There have been bad accounts that can be refuted definitively, and other accounts that don't admit of being falsified. The ones that don't admit of being ruled out include the argument from soul stuff. There is no disproof of soul stuff in the same sense that there's no disproof of the teapot in orbit around Mars.

There's the commonsense notion that people can decide what they do. There's the commonsense notion that things obey the laws of physics. The simplest way to illustrate the clash is to say, "How can all of the following be true? I'm made of stuff, stuff obeys the laws of physics, so I have to obey the laws of physics, but I can also decide what I do. I don't see how all that can be true. How can I be bound to the laws of physics but also capable of deciding what I do?" When a rock rolls down a hill, it's obeying the laws of physics. But I can choose to roll down a hill or roll up the same hill. In fact, most of the best decisions I've ever made seem to have that character of rolling up a hill instead of down it. It sure seems to me like my decisions are somehow not subject to the laws of physics the same way other things are.
It seems to me like there must be some neat and tidy explanation of how I find myself with that difference, the one that marks out the difference between me and a rock, it being able only to roll downhill and me being able to choose. Yes, it would seem that way if we're coming at it from commonsense notions and taking our first serious glances at this puzzle.

Lorem paste from other: the pick-and-place robot that announces it's deciding freely. Does that robot have free will? Clearly not in the same way you do.

Now suppose you're working on a farm and your task is the following. Every time a goat comes at you, move this section of fence to the left, and it will direct the goat to the pen behind you on the right, and every time a sheep comes at you, move this section of fence to the right, and it will direct the sheep to the pen behind you on the left. And all day you sit there, and they send goats and sheep at you all day, and every time there's a goat, you move the thing to the left, and every time there's a sheep, you move the thing to the right, and at the end of the day, there are the two separate pens, the one with all the goats in it and the one with all the sheep in it. And you take your paycheck for the day and you go home. Did you decide every time you moved the thing to the right and every time you moved the thing to the left? Yeah. Did you freely decide? Yeah, you could have freely decided to move the thing the wrong way however many times you wanted to. But you decided to move it the right way every time, because that's what makes your boss want to give you a paycheck.

Now suppose I were to craft a robot that does the task. The robot has an eye and an arm, and it can tell what's a goat and what's a sheep and move the thing accordingly. Now the robot is doing the exact same job you were. It's.. making all the same decisions you were. But it's following a program, and you don't have to follow the program. Let's say you have one favorite goat, named Bob, and Bob the goat likes to hang out with the sheep, so when Bob comes at you, you send him into the sheep pen, but whenever any other goat comes at you, you send him into the goat pen. And suppose your boss allows you an error every once in a while, and that doesn't threaten your paycheck. And suppose that after your shift, the next guy picks up Bob from the sheep pen and puts him in the goat pen. Congratulations. You have done your job, and mixed in the occasional act of rebellion. Free will!

But I could also program the robot to do the same thing. Now when the robot sees Bob the goat, it sends him into the sheep pen, but otherwise it sends goats into the goat pen and sheep into the sheep pen. What's the difference between you and the robot now? Well, still several things. You decided that this would be your decision criterion. The robot did not decide that this would be its decision criterion. I had to program it. It's not a self-programming robot that added in the extra condition for Bob the goat. So perhaps free will has something to do with deciding your own decision criteria, or being your own programmer. Perhaps you never even deliberated about what you would do with Bob the goat, but the moment you saw Bob coming at you, you were overcome with emotion, and you sent him into the sheep pen because you knew that Bob the goat likes being with the sheep, but you surprised even yourself at the moment you made that choice. A robot would never do anything like that, right?
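(Just to make vivid how small the robot's entire 'decision procedure' is, including the programmed-in Bob exception, here is a minimal sketch; every name in it - Animal, route - is a hypothetical placeholder of mine, not a claim about how any real sorting robot is built.)

from dataclasses import dataclass

@dataclass
class Animal:
    species: str      # "goat" or "sheep"
    name: str = ""

def route(animal: Animal) -> str:
    # Return which pen the fence should steer this animal into.
    if animal.species == "goat" and animal.name == "Bob":
        return "sheep pen"   # the 'act of rebellion', fixed in advance by the programmer
    if animal.species == "goat":
        return "goat pen"
    return "sheep pen"

# A day's work:
for a in [Animal("sheep"), Animal("goat"), Animal("goat", "Bob"), Animal("goat")]:
    print(a.species, a.name or "-", "->", route(a))

Every 'choice' the robot will ever make about Bob was already sitting in that one branch of the rule before the first goat showed up, which is the sense in which it never decided its own decision criterion.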
So maybe emotion and sudden unexpected behavior and certain other things like that are where you get your free will from.

Rollback: we feel like we could have decided otherwise. Could a person have done other than what he actually did do?

Monism and dualism: at first glance, both compatible with determinism. (We will remain agnostic, but soul stuff becomes an ignorance barrier.) Various metaethical theories: at first glance, all compatible with determinism. (We will come to combine a few and remain agnostic about a few.) Indeterminacy a problem or not? (We will see that strictly it is, but effectively it is not.)

The expression "contra-causal free will" is a bit long, and a bit misleading, so for a shorthand I will go with "etiogenic free will" or "hard free will". The second of those is only three syllables. The first of those is a term less misleading than "contra-causal". "Etiogenic free will" means "free will that creates causes," and we will use it to mean the kind of free will that can be an uncaused cause. If prior events have only 99% influence in our decision-making process, and the other 1% comes from.. something other than prior causes, then this is the site of.. etiogenesis.

Back in the times of ancient Greece, there was an ancient Greek fella named Democritus, and he's the guy we credit as the first guy who talked about atoms. Back when Democritus was talking about atoms, the word 'atom' meant the smallest possible bit of stuff, the bits of stuff such that no smaller bits are possible. In modern times, we still talk about atoms, but now we mean something a little different, because the atoms we talk about can be split into even smaller bits. For example, atoms of iron, atoms of oxygen, and so on. It was only recently we figured out how to break those into smaller bits - I mean recently compared to when the ancient Greeks were around. So, a relatively short time ago, we were talking about atoms of iron and atoms of oxygen thinking they were the smallest possible bits. And the word 'atom' comes from words meaning "can't be cut". So whether it's atoms now or quarks now or whether we've been breaking things into even smaller bits, when Democritus was talking about atoms, he meant whatever the smallest bits possible are. For a long time, people argued whether that's even a coherent idea or whether maybe infinitely tiny bits of stuff are possible, like for any tiny bit of stuff you could define, someone else could define an even tinier bit of stuff. Anyways, Democritus said that there's some smallest possible bit of stuff, call one of them an atom, and he said that atoms always fall downward in a perfectly straight line, which sounds super strange in light of modern science no matter what these tiniest bits he was talking about were. But anyways, in this same piece, after he talked about atoms falling downward in perfectly straight lines, he expressed the idea that we now call determinism, and we credit him as the first person who did that in a still-surviving piece of writing, although it's entirely possible that someone else said it before him and we just haven't heard of that someone. Now I'm not quite as old as ancient Greece, so the first time I heard about the idea, or saw a short story meant to express the same idea, I got it from The Simpsons (a short called Time and Punishment).
But back in ancient Greece, after Democritus did his thing, another ancient Greek guy named Epicurus said that okay, some smallest possible bit of stuff exists, call one of them an atom, but no, atoms don't fall downward in perfectly straight lines, but sometimes they swerve and deviate a little from straight lines. Again, anyone who understands modern science would say "what the heck are either of you talking about with this atoms falling in either straight lines or along paths that swerve a little stuff?" but anyways, in this same piece, after he talked about atoms falling along paths that swerve a little from straight lines, Epicurus said that determinism per Democritus is not true, and similarly we credit him as the first guy who said that indeterminism is how things work. And from that day to this, people still argue about whether determinism is how things work or whether indeterminism is how things work, and every time we do that in modern times, if there's anything we prove, we prove that the old issue between Democritus and Epicurus is still not solved among the people who talk about it. Every time someone says something like "I feel like I can freely choose what I do, but I also understand that I'm made of stuff that must obey the laws of physics, but how can these two things be reconciled?" they're pointing back to the old disagreement between Democritus and Epicurus. Lorem Democritus second, disciple of Leucippus who was the first person to talk about atoms. Preface. If I were to cut off your right hand, hold it in front of me, and let go of it, it would move downward. If you think you can do something different with it, please raise your right hand. The difference is free will, or is it? Free will, or fixed will? Chapter 1 Today, you can roll downhill, like a rock, or you can roll uphill, unlike a rock. Naive belief in determinism and naive belief in indeterminism are both rooted close to pretheoretical notions. Consider the following conversation "Are you a determinist?" "Yes. Are you?" "No. Why are you?" "Because the laws of physics." "But we need free will to explain things like choices and morality." "But what about the laws of physics?" "But we need free will to explain things like choices and morality." "But what about the laws of physics?" "But we need free will to explain things like choices and morality." "It seems we're stuck." The reason this disagreement can quickly become like some robotic and neverending game of ping pong is the following: on each person's side, he's showing how the other side bumps up against some strong intuition we have. Rejecting free will quickly bumps up against our commonsense notions of how choices and morality work. Rejecting determinism quickly bumps up against our commonsense notions of things following the laws of physics. If one of these people is right and the other is wrong, we will have to show how one of those two sets of commonsense notions is illusory. lorem - either illusory or not universal The first time I noticed the problem of determinism, I was about 7 years old. I tried to ask my mom about this puzzle, but I probably couldn't articulate it competently. But I've had it as a mystery ever since then. That's not necessarily to say that I came up with the idea before learning about other ideas. Maybe I had heard about the idea of determinism and free will, then forgot about it, and then later the idea resurfaced in my mind.
Or maybe I really did come up with the idea after seeing that episode of The Simpsons that prompts that kind of thinking without mentioning determinism outright. In any case, from age approximately 7 onward I was aware of the mystery in terms of a "But wait! This, but also this, but how both?" A minimal compatibilism. What is compatibilism? It can be defined as the belief that determinism is true and free will exists. What kind of free will? The simplest definition of free will is that which can't exist if determinism is true. Some kind of free will that's compatible with moral responsibility. Okay, so morality is a thing we do. There's some set of activities we engage in, determinists included - like telling people what kinds of action are good ideas to do - and we call it morality. And part of how we do that is by identifying will as in the difference between an intentional action and an unintentional action like a twitch, and that difference is some kind of will, and when you do the willing action you're free from whatever might prevent it. So those are the conditions for a minimal kind of compatibilism. It doesn't require that ethical categories are real entities, just that there's a set of actions we do that we call morality. With this kind of compatibilism, if you do some action that's normatively forbidden, and no one coerced you to, and it wasn't something like an unintentional twitch, then we do things like saying "not cool." Maybe a kind of compatibilism with a stricter set of conditions would require saying that moral categories are real entities. Some kinds of compatibilism say that free will is a strongly emergent property that can't be reduced and that therefore determinism doesn't apply to it even if it's made of deterministic parts. There are hard incompatibilists who say that free will doesn't exist because that's the meaning of determinism being true and also that moral responsibility is not a real thing or something we should even be using as a concept. Bob Sapolsky, for example, thinks that the only correct purposes for a criminal justice system are deterrence and rehabilitation. So a loose definition of compatibilism is anyone who says that determinism is true in some relevant sense but otherwise disagrees with Bob. There might also be people who say that morality is a thing we do, but that moral responsibility and the like aren't real entities, but who also say they're not compatibilists. I don't care to do much else about the mire of the highest level of abstraction about these things. Compatibilism can mean so many different things that I wouldn't ever write something that says, "I would like to argue for compatibilism" or "I would like to argue against compatibilism". And it has never occurred to me to say either "I'm a compatibilist," or "I'm not a compatibilist." I am aware that these words at the highest level of abstraction exist, and I am aware of what labels some people put on the stances that mean "this highest level of abstraction from this domain (metaphysics) and this domain (ethics)". Whether you use these terms or not, what substantial things we agree about or disagree about are at some lower level of abstraction, maybe the second-highest level plus a few further ones. So I don't have much else to say about the highest level of abstraction. Like all those other people who have serious things to say, I say them about the layers that have meanings that can be talked about without getting too mired. Obvious bad argument section. cf. Euthydemus.
"This dog is yours, and this dog is a father, therefore when you pet your dog you're petting your father." Let's knock down some arguments that it's clear to me are flawed. "Information requires that things could be otherwise, and clearly information exists, therefore things could be otherwise, so determinism is false. If determinism were true, information couldn't exist. So the existence of information proves the falsity of determinism." This is a kind of fallacious argument called equivocation. The problem is with the phrase "could be otherwise". Information requires that things could be otherwise in some sense of the phrase "could be otherwise" but not in the determinism sense. Has nothing to do with that. Let's use an example to illustrate the difference. Suppose I have five zeros and five ones, and I'm to arrange them into some sequence. I take a piece of paper, and on the left I write "0000011111", and in a different spot on the right I write "1011010001". According to the scientific definition of information, the sequence I wrote on the left has a small amount of information, and the sequence on the right has a much bigger amount of information. If you were to convert both of them into zip files, the zip file for the sequence on the right would be the same as unzipped, but the zip file for the sequence on the left would be smaller. This definition of information is meaningful because both sequences have the same number of zeros and the same number of ones, but one has more information and one has less. By some stretch, "could be otherwise" is one way you can describe this. If you give me five zeros and five ones and I arrange them into some sequence, that sequence "could be otherwise", only in the sense that there are other possible sequences. But this has nothing to do with determinism and the "could be otherwise" that relates to that. If it did, you could take out a piece of paper, write "123" on the left and "312" on the right and somehow that would be a time machine. So this trick called equivocation is something to look out for in reasoning in general. And as for matters of free will and determinism, if someone says "x requires things could be otherwise, so the existence of x disproves determinism," make sure you're looking out for this slippery trick of using the same word or phrase twice and meaning two different things in the two different places it's shows up. If someone offered you $50 to say which line is longer, it would do you $50 worth of good to know about the illusion. The illusion of free will is like that, but there's a lot more than $50 on the line. Lorem lorem - Suppose you knew about the Mueller-Lyer lines illusion and this other guy didn't. Then when you saw the lines you would know to correct for the illusion. Hard determinism: determinism is true, therefore free will is not real in whatever way I might be using that word. Stronger: free will is not real in any way that matters to moral responsibility. Isn't a deterministic world wondrous enough without having to search for fairies underneath? A simplified way of talking about determinism is all deterministic laws of physics, predictability, no free will, lorem. To be more accurate, physical laws are not all deterministic, there are major issues relating determinism to predictability, free will can mean things other than the negation of determinism, lorem. 
The general pattern of hard determinists is to say "the rest doesn't matter" at an early point in the analysis and then to say that it's time to eliminate a big portion of what we do. When one is trying to be inflammatory or noteworthy he will put too many things in this category. When one is more tempered, he tends to have good points about how our attitudes toward these things have shifted and set out prospects for how we could add some more to how we shift and what some of the next things to do away with are. How many of these things will we have done away with in a thousand years if we're still around then? That's an interesting question. If the answer is "all forms of blame and praise" what's the more proximal timeline for which things we're doing away with next and how to manage that? If we do eliminate all forms of blame and praise, to what extent do we continue to do things similar to what we do now, but reframed? Would motivation really be the same if there were no accolades and only the other motives? Soft determinists tend to talk about the topics on the intersection of metaphysics and ethics. We call those topics more metaphysics than ethics e.g blame, praise, deserving, responsibility. The talk that we more commonly call ethics is about slightly different topics. Soft determinism has to be more than ethnography. You can collect observations like "in this situation people tend to do this" and "in this other situation people tend to do this other," and build up a taxonomy like that. For the task to be philosophy you have to do something additional like categorize the observations in terms of what principles are involved. When you do that, it's easy to find that in this one situation we tend to act on this one principle but in this other situation we tend to act in the opposite of that principle. That's when either you have to reformulate what principles you think you've seen and try to make it all consistent, or assert that in the one situation our tendencies are appropriate and in other situations our tendencies are wrong and perhaps we need to confront and dissolve our intuitions in those cases. I'll mostly be using the word 'indeterminist' to mean 'libertarian' and the term 'etiogenic free will' to mean 'libertarian free will'. It's easy to hear a hard determinist take a particularly eliminativist stance one day and then use words like 'deserve' the next day. When he does, it's plausible he's using it in the boo hurrah sense. Doesn't that just make him a soft determinist? Is a hard determinist sometimes a person who can only stick to his guns when he has time to prepare a book on the topic, but who quickly turns into a soft determinist under questioning? "You say you're a hard determinist, but we've just been talking about 'deserve'. Doesn't that make you a soft determinist?" "No, I meant 'deserve' just in terms of 'those societal things we do when we say 'deserve''." "But that has something to do with volition or agency or the difference between intentional and unintentional actions, or voluntary and involuntary actions, which is just what soft determinists pick out when they talk about the kinds of free will that soft determinists talk about. So now your stance is the same as that of a soft determinist." "Maybe, but I'm still saying that free will means exactly the negation of determinism." "So all you're saying now is that the only difference between a soft determinist and a hard determinist is the definition of one word, even if the stances are the same." 
"A computer program has a decision process, but we think of the program as deterministic. A person has a decision process. Should we think it's other than deterministic?" "Maybe." "Why?" "First attempt at a reason: it feels like it's indeterministic." It's true that the process of making a hard decision comes with the sense that one is weighing between possible courses that the future really could take. It won't be any quick writeoff that does away with things like "it feels that way", "why does it feel that way", and "why is that feel an illusion". The burden of proof dealing with these things is real. There are some major differences between the decision process of a typical computer and the decision process of a hooman. But they're both deterministic. That's something they have in common. By one set of definitions, the negation of determinism is free will. Then if you take the definition free will and run with it you get a number of analyses about things that are not the negation of determinism. What other words can you do this with? The negation of determinism is free action? It seems sort of tangential to go from "there's determinism" to "what word have we been using to call the negation of determinism" to "what's the commonsense use of that word when we're using it in general". At that point, it just seems to have strayed wide from what the line of questioning was about when it was still about determinism. In a lot of these thought experiments, there is an unstated premise "suppose this were a purely deterministic world" i.e. suppose the hidden variables theory is true, or just suppose we were still working with Newtonian mechanics and still hadn't yet figured out the quantum indeterminacy stuff. In these cases of assuming determinism, perhaps in all of those the thought experiment holds just fine, because of effective determinism. If not, they're probably still fine at pointing out ideas relevant to the discussion of our actual world. I typed 'determinism' into the YouTube search bar. Of the top results, there were two super short videos that say almost nothing more than "Good news: determinism isn't true because of quantum mechanics, so you have free will!" There were also several really good videos in the top results - things that contribute strongly to the present state of the discussions. I don't know why I typed 'determinism' into the YouTube search bar that day. I think I might have been actively trying to acquire a headache that day. From those top results, it's clear that the algorithm was basically saying "I don't know whether you're smart or stupid, but here's a mix of the kind of videos you might like if you're a total idiot and the kind of videos you might like if you've been thinking seriously about this topic." "Evolution takes randomness and applies selection to produce ordered results. Likewise the agency of a person takes quantum randomness and applies selection to produce the effects of its will." This is an example of one of those arguments that purports to solve something - rescuing free will - while actually doing nothing to address any of the possible regress problems that would have to be solved by such a rescue effort if one were possible. [] Yinyang Determinism [][] Laozian Paradox (Yinyang Determinism / Laozian Paradox) Imagine we have a word like this in English. Suppose that word is hig. 
One day someone asks you the meaning of 'hig', and you say, "It means the enduring," and the next day someone else asks you the meaning of 'hig', and you say, "It means the fleeting." And you don't notice that there's a seeming contradiction there. And there really isn't. We do have some contranyms in English. The word 'dust' can mean "remove dust from," as in "dust the bookcase," and the word 'dust' can mean "add dust to," as in "dust for fingerprints". The word 'fast' can mean "not moving," as in "stuck fast," and the word 'fast' can mean "moving lots," as in "fast car". But chang, the Laozi word that 'hig' has been standing in for here, is not a word with two different meanings that are opposites. It's a word with one meaning that has connotations that are opposites. So when I think of the idea 'determinism', or I think of the word 'determinism' by itself, it comes with a lot of connotations that have opposite feels, but they're all things that are made consistent under scrutiny. [][] Picking a Label, and Arbitrariness (Yinyang Determinism / Picking a Label, and Arbitrariness) "Sometimes when we use the term 'free will' we mean exactly the negation of determinism, and that's why I don't call myself a soft determinist. Sometimes when we use the term 'free will' we mean something other than the negation of determinism, and that's why I don't call myself a hard determinist. I just call myself a determinist, without either the hard or the soft prefix." "Aren't there problems with calling yourself just a determinist?" [goes with: call myself a determinist even though...] Calling oneself a determinist requires saying hidden variables is true? Or doesn't? [][] Lorem (Yinyang Determinism / Lorem) To say that metaphors are true is metaphorically true. To say that metaphors are false is factually true. To say that metaphors are factually true is factually false. The first time I had an in-person interaction on mushrooms, it had a deterministic aesthetic. My background at the time: I had recently read two of the works of Henrik Ibsen from a book that has four of those, and also Birdman is one of my favorite movies, which I have watched about five times. So my friend Victor showed up at my place for a while before we both headed out together to go to a thing. During the time he was at my place, we had a conversation. During the time we had that conversation, in my mind that conversation was seeming to do something other than what's normally done when you're having a conversation. He said a thing, then I said a thing, then he said a thing, et cetera. But that whole time, it seemed a lot more like we were reading the script of a play from a book, or watching that part of the movie Birdman that takes place on a theater stage made up to look like a kitchen without a fourth wall. It had a deterministic aesthetic. It had a strong aesthetic sense in my drugged up state, and that sense was one of determinism. As we proceeded, one after another to say things, one after another, here's more like where my brain was at: "Oh, these two characters.. they have quite markedly different backgrounds, values, experiences, but this whole chat scene is designed to exposit those differences while also expositing what they have in common. Both of these characters respect each other highly, even though each has limited access even to what the mode of mind of the other on a typical day is. The scene really nails that, even though they're only making small talk. It's the perfect kind of scene you'd put in an act 1 of a play to get to these points in a minimum of time.
You can tell that Victor has a great deal of respect for Bob, even if he can't understand how his mind works on a typical day, and you can tell that Bob has a great deal of respect for Victor, even if he can't understand how his mind works on a typical day. You can see how they're marking out where there's common ground between where they both know how to relate, how they're also trying to scope out how utterly different the two people are, and you can see how they're also trying to see if they can move some of that separated ground into the set of common ground they have. And all with an economy of words. Not one moment wasted. Very well written." Only it wasn't written by anyone ahead of time. It was a goddamn conversation unravelling in real time. Apparently, I did have enough presence of mind to contribute my lines as one of the characters, but where I was really at was in the position of a third-person observer. And I've seen (been in) conversations that were a lot less productive, even just in terms of those things I identified as the tasks of the conversation. Even if these two guys don't always relate in terms of actual habitual concerns, they were doing what they could in order to understand each other's differences. And that's why it was a wonderfully written scene even if the amount of ground they were able to make common was still limited. So, various people have had various philosophical stances either undermined or reinforced by certain altered states of mind. And I'm a determinist rationally in a normal state of mind, and I'm a determinist twice over when I see how things work in certain altered states of mind. Don't have the phenomenology of free will? That can be a bad sign that you're on autopilot. Is it possible for a person to accept determinism, become settled on all the extensions of that along some consistent scheme, and also not have the phenomenology of free will when facing a difficult decision? I don't know about that. It's too difficult to work out whether the arguments for determinism are best or the arguments for free will are best. Reality is fundamentally indeterministic, but effectively deterministic on the level of physics that we inhabit. Even if determinism is reliably true at every scale bigger than a few atoms, sometimes the best thing you can do is to act as close as possible to undetermined, which can be very close indeed. So we have more than one layer of "effectively one way even though fundamentally the other way". At one level, fundamentally indeterministic but effectively deterministic. At another level, fundamentally deterministic but effectively indeterministic (sometimes). What ought we to do about all this? When it's all said and done, what's the take-home message? It's a bit of a hodge-podge, to be sure. Different conceptions at different times for different circumstances, and there's no way to sum it up in 10 words or less. But I've given you a lot to get right, and a lot to avoid getting wrong. Suppose I say determinism effectively true and indeterminism effectively true and what to do about it is think like a determinist sometimes and think like an indeterminist sometimes. Is that dishonest? Facile? Inconsistent? One answer I've heard: I'm a determinist on Mondays, Wednesdays, Fridays, and Sundays, an indeterminist on Tuesdays, Thursdays, and Saturdays. One answer I've heard: I'm a determinist when bad things happen to me, an indeterminist when good things happen to me (lorem attribution bias). Determinism? Indeterminism?
If you understand the things, your mind floops between the two like a Necker cube. When someone does something extremely nice for me, is my gratitude of the type "you didn't have to, but you did, and I appreciate your decision of choosing that option" or is it "it was determined you would, and by no one's ultimate choosing, and I appreciate impersonal factors for making me that lucky (temporarily, for the time being, until impersonal factors inevitably kill me)"? There's so much overlap - determinism versus indeterminism, actual versus effective - I don't see it as a confusion of thought to think some combination of both of the above, even if I'm giving little thought to "this appraisal in terms of this formulation of how the universe works, and that appraisal based on that formulation of how the universe works". Lorem: somewhat related idea: Laozi's use of chang2. He uses the word (in the Heraclitean sense of the only thing everything has in common is that everything is always changing) to mean two opposite things. Lorem you really do get a better sense of the work when you read it in the original language. But when you can understand the language, and the text, and you go along for the ride, you could be understanding these two uses for a while before you realize "but wait, this word is being used for two different meanings, and those two meanings are opposites to each other," but it still works, because it's always clear enough in each case which sense he's using it in. When I say one minute that I'm a determinist and then the next minute say something that seems like it can only work if I'm an indeterminist, I pretty much don't see a conflict. This sounds terrible, wishy-washy. I'm really not like that with other things. I value clear thinking and consistency. I'm quick to say "that's poppycock" when someone I respect says something that's not becoming of their level of intelligence. I'll point out inconsistencies if I'm talking to someone I can count on to appreciate that it's not a slight, but part of the rational activity of discourse. I make elaborate constructions with all the logical nuts and bolts exactly right, in writing, in computer coding, in other things. All that sort of thing. And yet I talk like I'm in an almost undifferentiated superposition of being a determinist and being an indeterminist. Almost undifferentiated. Not completely undifferentiated. There are people who think with a muddle of deterministic and indeterministic framings and in a way that causes suffering due to its lack of clarity. Lorem: the temporarily embarrassed millionaire Lorem: determinist v indeterminist framings of informing a fren's decision. Fren had been wrestling with a difficult problem for a while. I said he had to come see something I had to show him. Based on what I knew about his decision and the factors going into it, and what this item would reveal to him about the parts yet unknown to him, it would make his decision a lot easier. It didn't take one minute for me to show him the item and explain the relevance, and his decision was definite, off the fence it had been sitting on for weeks. Is this an example of determinism, or free will, or both? When you have a determinism that says you can act almost perfectly as if undetermined, then indeterminism doesn't need theological rescuing. Can it be good to use an illusion? Lorem: sunrise and sunset. Lorem: movies vs plays.
When evolution invented the illusion of free will, that was on balance a good invention, like the hooman invention of moving pictures. Before moving pictures, more commonly known as movies, or now just videos of any kind, you could go to a stage play and see people actually moving around. Now we have movies, which appear to us like people moving around, but the apparent motion is not real motion. And that's even better. It's a damn good thing this illusion works, because now buying a movie is a lot cheaper than hiring a cast of hoomans to move around on a stage for you. Suppose someone set out to make a moving picture, so he takes a photo, prints it out, then looks at this picture thinking, "What do you do to this to get it to move?" which is of course not any workable way to make a moving picture. This is an example of someone going about the illusion in the wrong way. You can't do away with the illusion of free will when making an important decision. It will be there. Indeterminism will be apparent. Etiogenesis will be the crux of the deliberation even though it's not real. I call myself a determinist even though determinism isn't strictly true and in many hooman decision-making cases it isn't even effectively true. It's probably because I don't care much about words in that kind of way. [] Appendix on Etiology [Lorem, this is a first draft, and below it are some bits that are absorbed into that draft] Groundwork for a metaphysics of deductive etiology In the global sense of cause and effect, the "mere functioning of totality" at one time was necessary and sufficient to cause the totality at any subsequent time. It's a definition that works, but it doesn't do much that's practical. Can we reason further from this to something that has more working parts and does more useful things? We have our definition of cause and effect in the global sense. What about a definition of cause and effect in a less than global sense? Consider this proposed definition of cause and effect (in the less than global sense): "x caused z == if x had not happened, then z would not have happened (where x is some event that preceded z)". Let's call this the counterfactual definition of cause and effect. Is this the correct definition of cause and effect? If this is the correct definition of cause and effect, could this be codified in some formal logic system? If this is not the full definition of cause and effect, is it missing one or some number of other factors that could be added to the counterfactual definition and made into a formal logic system? Let's try. Let's consider a conundrum before we start trying to construct a formal system. Though the following story is fictionalized, the idea behind it is one that applies to real law courts and what they decide in them. One day, a manufacturing defect in building x caused a fire in building x. At almost the same time, a manufacturing defect in building y caused a fire in building y. Building x and building y both adjoin building z. On that day, building x, building y, and building z all burned down. The owner of building z wanted to sue for damages. His building burned down, and not due to a manufacturing defect in his own building - he was scrupulous in making sure the construction was better than the kind that would spring an unexpected fire. That owner of building z considered suing the owner of building x and considered suing the owner of building y.
Suppose the court said the following: "If the fire in building x hadn't happened, then the fire in building y would have burned down building z. So the owner of building x is not culpable. If the fire in building y hadn't happened, then the fire in building x would have burned down building z. So the owner of building y is not culpable. Well, based on this reasoning, you can't sue the owner of building x, and you can't sue the owner of building y, and surely no one else was culpable, so there's no one you can sue." This seems unsatisfactory. If that's how we're defining cause and effect, and if that's how we're administering law, then your building can get burned down through no fault of your own, and quite certainly through the fault of two other people, and none will be culpable. It seems like there must be some problem in how our imagined judge defined cause and effect or culpability. In a simpler case than our conundrum, the definition seems to work. Suppose I turned the light switch and then the light turned on. Turning the light switch preceded the light turning on. But did turning the light switch cause the light to turn on? Well, if I hadn't turned the light switch, then the light wouldn't have turned on. This satisfies the counterfactual condition. So not only was there a sequence, turning the switch and then the light coming on, but there was also cause and effect: turning the light switch caused the light to turn on. But there are already problems with this account if we're applying the kind of rigor you need to handle if you're designing a formal system. We must consider possible objections that we might already have run into. One might say: by what right have you asserted that if you hadn't turned the light switch, the light would not have turned on? You didn't observe not turning the light switch and the light not turning on. It can't be empirical if you're making assertions about things you didn't observe. Did you use your imagination? And is imagination part of this formal logic system? Answer: yes, and yes. I used my imagination, and imagination is part of this formal logic system. This shouldn't be too frightening. Sentential logic already requires using your imagination every time you use the if-then operator. Let us define this more rigorously. I said "If I had not turned the switch, then the light would not have turned on." Indeed, this does not refer to an actual observation on the particular occasion when I did turn the switch. But it's something I know from other observations that were similar enough in the relevant ways. Further, I can relate this to our foundation in global cause and effect. Our first principle, recall, is that the totality of things at one time caused the totality of things at any later time. The world wherein I didn't turn the switch is one of those things that philosophers tend to call a "possible world", which really means "self-consistent hypothetical world". It's a world that is not our actual world, but that is otherwise self-consistent, which is to say it doesn't contain any contradictions itself. So, how does this "possible world" relate to our world? Let's say that this "possible world" is one in which turning the light switch didn't happen at the time when in our world I did turn the switch, but is otherwise as similar as possible to our world. Or, in the set of all "possible worlds", find the one in which I didn't turn the switch but is otherwise as similar as possible to our world. A determinist may still balk.
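Before going on, here is a minimal sketch of the naive counterfactual test itself, under toy world-models of my own (the functions and event names below are illustrative assumptions, not part of the argument). It returns the intuitive verdict for the light switch and the court's unsatisfying verdict for the three fires.

```python
# A minimal sketch of the naive counterfactual test: "x caused z iff, had x
# not happened, z would not have happened." The world models are toys.

def light_turns_on(events: set) -> bool:
    # Toy model: the light comes on iff the switch was turned.
    return "turn_switch" in events

def building_z_burns(events: set) -> bool:
    # Toy model: building z burns if either adjoining building catches fire.
    return "fire_x" in events or "fire_y" in events

def naive_counterfactual_cause(x: str, outcome, actual_events: set) -> bool:
    """x caused the outcome iff removing x (and only x) makes the outcome go away."""
    assert outcome(actual_events), "the outcome must actually have happened"
    return not outcome(actual_events - {x})

# Light switch: the test matches intuition.
print(naive_counterfactual_cause("turn_switch", light_turns_on, {"turn_switch"}))  # True

# Three fires: the test exonerates both fires - the court's verdict.
actual = {"fire_x", "fire_y"}
print(naive_counterfactual_cause("fire_x", building_z_burns, actual))  # False
print(naive_counterfactual_cause("fire_y", building_z_burns, actual))  # False
```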
That would require your brain in that world at the moment before you turned the switch to be different from the brain you had in our world at that time, and that would have required the totality of things at the moment before that to be different in exactly what ways? My answer to that is that the exact details don't really matter. Maybe it would have required things 13 billion years ago to have been different. The only thing that matters is that we can imagine a "possible world" that has the relevant features at the relevant times, and doesn't contain self-contradictions. We can even imagine that this "possible world" sprung into existence earlier that day, not 13 billion years before. Let's imagine a possible world that sprung into existence this morning, that from that moment forth didn't contain any self-contradictions, and that was a lot like ours aside from a minimum of differences which included my brain in that world being slightly different from my brain in this world, the degree of that difference being only that in that world I decided not to turn that switch this afternoon whereas in this world I did. I assert that this act of imagination is not a problem. Okay, one might say, but does this solve the conundrum about buildings x, y, and z? Answer: not yet. But if you're still with me, I can get at least that far in the next steps. Have I so far described a system that works, or one that doesn't? I assert that the system so far is fine. Let's recap where our system is so far. "x caused z == if x hadn't happened, then z wouldn't have happened". The left side of this equality refers to the sort of thing we want to be able to assert about our world. We want to be able to say that things caused things in our world in some sense other than the global "the totality at one time caused the totality at some later time". The right side refers to something you have to do with your imagination, but there are rules about when that counts as a real act of logic, for example, that you're imagining a world that, although it isn't our world, at least doesn't contain self-contradictions. We'll get to the next step soon, but not before we address another possible objection that we might have at this time. One might say: Circular reasoning! You're trying to come up with some way of saying things about cause and effect, but that takes steps in which you're saying things about cause and effect that assume that the things you're saying about cause and effect already work! To that I would reply: not so. I'm trying to come up with some way of saying things about cause and effect in the less than global sense, and the steps I've defined so far do not require saying things that assume that what I'm trying to prove has already been proved. Cause and effect in the global sense is already either proved or it's an axiom. The things so far that I've tried to say about cause and effect in the less than global sense refer to cause and effect in the global sense, and those are two different things. Okay, so when I imagine a world wherein I didn't turn the light switch this afternoon, I'm imagining a "possible world" that is not our world, but is self-consistent. And when I imagine things happening in that world, they do follow the laws of cause and effect in the global sense, as applied to that global scope of that world.
So there's the imaginary world, there's what happened in that world this morning, there's no self-contradictions there, and then there's the progression of things that happen in that world according to global cause and effect, the totality of things in that scene causing the totality of things at some later time in that scene. No need to invoke cause and effect in the less than global sense. Now, one might object: you said you know enough about this world to say that you can imagine another similar world starting this morning in enough detail to discern that it contains no self-contradictions, and now you're saying that your understanding of how it operates is based on only global cause and effect, the notion that the totality of things in that world causes the totality of things at some later time? To that I say, yes, or at least that I still know enough about these things to assert everything I've been asserting with if not certainty then the closest thing to certainty. I'm more sure of these logical steps so far than I'm sure that I'm not a brain in a vat in reality. I do have serious doubts that I'm not a brain in a vat in reality, but my doubts so far about the formal logic system I'm describing right now are almost nothing in comparison. And now I've also described how there's no circular reasoning. One more possible objection: just what do you mean by "of all the possible worlds, the one that contains not x but is otherwise as close as possible to the real world?" To that I say that there are cases where this is unproblematic. And today I'll derive what I can in those unproblematic cases. And after a few more steps, that will include dealing with the three fires conundrum. Whatever goes beyond that and has to deal with "which of the possible worlds has this condition but is otherwise closest" that indeed may be a problem in some cases you can come up with once you know this system - indeed, that's where things get interesting - but it's not a problem in the cases we will see today. In other words, the steps I'm outlining today have something to do with qualifying the real interesting cases. At this point, I've asserted that there are several things you can do unproblematically with your imagination in at least some cases, and I stand by those assertions. Alright, to recap where we're at now, I've described the first axiom of a system, which is just global cause and effect, or determinism, and I've almost finished describing the first rule beyond just determinism, and the first step you can take using that rule. If this doesn't take too much longer, we'll soon have a system that's up to determinism plus one rule and examples of how you can take one step according to that rule, in a way that has also addressed all possible objections to the system so far. What this logic system eventually gets us to is reasoning in terms of possible worlds, but we're not there yet. Basically, we need to have a number of steps that get us from the real world to counterfactual worlds. Once we're safely in counterfactual worlds, we can do things there, and we can take what we did there and make assertions about the real world. But it will take more than one step to get all the way from the real world to counterfactual worlds. Rule 1 of this system: you assert NSCR(x + e1 | z + e2). NSCR is short for "necessary and sufficient cause in the real world". In NSCR(x + e1 | z + e2), the cause is to the left of the vertical bar, and effect is to the right of the vertical bar.
In the simple example of just turning a light switch and the light coming on, x is turning the light switch and z is the light turning on. NSCR(x + e1 | z + e2) is short for something like "In the real world, x was a necessary and sufficient cause of z", but what are those terms e1 and e2? What NSCR(x + e1 | z + e2) really means in full is "In the real world, x and everything else at time 1 was a necessary and sufficient cause of z and everything else at time 2". That's just universal cause and effect, or determinism. Apparently, it's slow going to get out of this world and into counterfactual worlds. But we'll soon be done describing this first step, and after that it won't take much longer to be all the way into counterfactual worlds. When we get to the three fires example, we will see why step 1 wasn't as simple as saying "C(x|z) means x caused z" and why we instead have to start with "x plus everything else at the time was the necessary and sufficient cause in the real world of z plus everything else at that time". The way we're setting up right now is going to get us out of trouble when we get to the three fires example. So, no matter what you do in step 1, the statement you get asserts only NSCR(everything at some time | everything at some later time). But when you write it as, for example, NSCR(x + e1 | z + e2), you've started to pick things out for later steps, and the later steps will operate on the things x and z, as long as you've done a good enough job of picking them out in this step 1. That's step 1. Whatever you write just asserts global cause and effect determinism, but it starts to pick things out for later steps. Rule 2: NSCR(x + e1 | z + e2) & "if x hadn't happened then z wouldn't have happened" -> NSC(x|z) Rule 2 introduces the NSC operator. Where NSCR meant "necessary and sufficient cause in the real world", NSC means "necessary and sufficient cause". This gets us all the way into counterfactual worlds in some cases but not others. That's because the "+" operator is neither the conjunction operator (AND), nor is it the union operator (U). It's an operator that's neither of those, but something else. This reflects the fact that in the real world, when there are two events that occur, the relation between them is neither the things that the AND operator (in its standard usage) implies, nor the things that the U operator (in its standard usage) implies. We shall refer to this operator as "together with", as in "when x happened, together with everything else in the world at the time, this caused z, together with everything else in the world at the time". I should mention now that in Rule 1, when you assert NSCR(x + e1 | z + e2), x can be a plurality of things, but if it is, you have to put the + operator between them. So in the case of the three fires, when you use Rule 1 to make an assertion, you say NSCR((x + y) + e1 | z + e2). This means that in the real world (where, by the way, there was one timeline with all the things in it), what you write in this notation means "fire x together with fire y together with everything else that morning caused fire z together with everything else later that day". It substitutes (x + y) in place of x in the Rule 1 formula, but it's still nothing other than a statement of determinism: the totality of things that morning caused the totality of things that night. There's a rule between Rule 2 and Rule 3. It's not a numbered rule, but it explains why we still might not be done after Rule 2.
Rule x: any statement with a + operator does not make a real statement of NC (necessary cause) or SC (sufficient cause) or NSC (necessary and sufficient cause) as you may assert it about the real world. The corollary to that rule: you must eliminate all + operators (when you have them) to make a real statement of NC or SC or NSC. In simpler cases, we run into no problem here. In the three fires case, this requires us to do more. Let us consider the simple case of turning a light switch. I turned the light switch and then the light came on. Per Rule 1, I assert NSCR(x + e1 | z + e2), or "Turning the light switch (x) together with the rest of the state of the world at that time was in the real world a necessary and sufficient cause of the light coming on (z) together with the state of the rest of the world at that time" (which states nothing other than global cause and effect determinism, with some handy partitions). By Rule 2, I get from NSCR(x + e1 | z + e2) plus the assertion "If I had not turned the switch, then the light would not have come on" to NSC(x|z), which means "turning the light switch was a necessary and sufficient cause of the light coming on". This last statement is a statement that's "made from a domain that's all the way in counterfactual land" and therefore it's also a statement about the real world. By Rule 1 and Rule 2, in that case, I have now applied this formal logic system to make a cause-effect statement about the real world. And it's true. Turning the light switch was a necessary and sufficient cause of the light coming on. Only now have I been able to say that unconditionally with a formal logic system behind it. In the case of the three fires, we apply Rule 1 with (x + y) in place of where x appears in the general form of Rule 1, and we get NSCR((x + y) + e1 | z + e2), which (as always with Rule 1) is just a fancy way of stating global cause and effect determinism. We can then apply Rule 2 to that and get NSC(x + y | z), but Rule x tells us that this is not any meaningful statement about cause and effect in the real world, because anything with the "+" operator in it doesn't make any meaningful statement about cause and effect in the real world. In this case, Rule 2 has not got us all the way to counterfactual worlds, but it gets us another step of the way there; we'll need more in our system if we're getting to something we're allowed to call a real statement. At this point, in an example like this one, we seem to be in an intermediate zone between where we started stuck in the one real world and where we want to get to, "fully in counterfactual land". The statement NSC(x + y | z) doesn't really mean anything easily translatable in any of those contexts. In a case like this, Rule 2 is like the ferryman: we're with it when we're neither where we started nor entirely where we want to get to. Once we're done with Rule 3 (in a case like this), we get all the way to statements about counterfactual worlds and to cause-effect statements about the real world. Think of the series of steps as a process that takes us from the real world, through some intermediate struggle, to counterfactual worlds, and then finally to making cause-effect statements about the real world. We have to start in the real world, then slough off the taint that comes from starting in that world, then finally get into the counterfactual domain. Once we're there, we can make statements that apply both to our world and to worlds in the counterfactual domain. That will take a third step.
Rule 1 was a way of saying things about the real world while making some relevant distinctions. Rule 2 is intermediate and applies neither to the real world nor to the domain of counterfactual worlds. Rule 3 gets us from the intermediate realm of Rule 2 to being fully in the counterfactual realm. Recall what I said right near the start about a tentative definition of how cause and effect work on a less than global scale: "x caused z == if x had not happened, then z would not have happened (where x is some event that preceded z)". Rule 1 really had nothing to do with this idea. Rule 1 is all about just taking the picture of global determinism and drawing borderlines on it based on where you think it might cleave. But we need Rule 1 to do its thing before we can apply Rule 2 to do things in terms of exact rigor. And we need Rule 2 to do its thing before we can apply Rule 3 to do things in terms of exact rigor. By the time we apply Rule 3, we're working on things that have had all the taint of starting in the real world removed. Only then can we make counterfactual statements for real. In the simple case of turning the light switch and then the light coming on, we only needed Rules 1 and 2 to do that work, because the example was simple enough. Handy in that case, but we'll need Rules 1 and 2 and 3 to do that work in the three fires case. Before we finish the part of this analysis we're doing today, let's consider what happened in the simpler example of just turning the light switch and the light coming on. Turning the light switch was a necessary and sufficient cause of the light coming on - not only in this world where I indeed did turn the switch, but in terms of that plus at least one counterfactual world. So Step 1 and Step 2 together have got us from our own world, through a bit of a struggle getting out of that context, to part of the set of counterfactual worlds, to being able to make statements about a set of worlds that contains at least one of those plus our world, and finally to the satisfaction that those statements may be assertions of facts about our own world. Back to the idea about what objectors might say about what we've been doing, one might object "That's a lot of steps that involve asserting how your imagination works". To that, one could reply, "No, there's only a few steps where things about imaginations are asserted, and there's a lot of qualifications delimiting exactly how much is being asserted about imaginations, and when I apply these to examples, I can see that if I were to assert these things, then other people would agree about the mechanisms, and if they also agreed with the premises, they would also agree with the conclusions, and in no ways that allowed anything to intervene other than how minds work in tandem whenever we talk in any counterfactual manner, and therefore in manners that agree with how intelligent minds work in general." If this is satisfactory, and I assert that we still haven't made any unwarranted leaps, what have we just jumped to? Now our considerations are safely in a set that contains our world and one counterfactual world, the one of those wherein one thing that didn't happen in our world did happen, but is otherwise as similar as possible at that time. Maybe. That's what our simplest example has revealed. What about our system? What has our system so far been implying? And could there be objections to that now? Rule 1 just says assert NSCR(x + e1 | z + e2).
That just means to assert global cause and effect determinism and draw whatever borderlines you care to imagine on it at any given time. Rule 2: NSCR(x + e1 | z + e2) & "if x hadn't happened then z wouldn't have happened" -> NSC(x|z). Does this assert more than we have investigated it to? Yes, it turns out, infinitely more. Sneaky. Let's break it down. On the left side, there are two conditions. The first is just asserting global cause and effect determinism. The second is to make a counterfactual claim and assert that your imagination qualifies you to make it. On the right side, we have a statement with the operator "necessary and sufficient cause". This operator applies at least to our world and to the one "possible world" where x hadn't happened. And infinity more possible worlds. I don't know if you noticed, but when I first stated Rule 2, I also snuck infinity worlds into what it implied about any given usage of it. It is now necessary to defend Rule 2 against any possible objections. The counter to any possible counters is similar to what I said before about "possible worlds", which, recall, means "non-self-contradictory hypothetical worlds". If Rule 2 runs into any problems, it doesn't run into any problems in the examples we're considering today. How and when exactly it runs into problems in more detailed examples is indeed something to be addressed. In other words, I mean to say that what I'm doing today is asserting that this is a system that requires asserting things about imagined counterfactual worlds, and that in the cases provided here, it doesn't run into any problems. I don't complete the system in this article. I get it to a certain point in this article. I assert that the system that I start in this article is merely a codification of how counterfactual thinking works when we do it. If counterfactual thinking runs into problems in general, what I'm doing here is providing a codification of where those problems can be located. I don't disagree with the statement "counterfactual thinking runs into problems sometimes". If you think counterfactual thinking runs into problems sometimes, as I do, I trust that you'll want to make those problems more specific with reference to some framework. So I'm like a craftsman fashioning a dartboard, and if you want to poke it full of holes, maybe I'm only trying to help you say where those holes are. But I do assert that there are no reasonable holes in the system as it applies to today's examples. Those examples are the three fires example, the example of turning a light switch, and one more that I will describe shortly. Okay, we saw that in the simplest example of turning a light switch, Rule 1 and Rule 2 were sufficient to arrive at "turning the light switch was a necessary and sufficient cause of the light coming on". The three fires example, after Rule 2, left us wondering what we can do to NSC(x + y | z) to get it to say something meaningful about cause and effect (since anything with the "+" operator is not done working out what it has to say about cause and effect). Before we make a further point of distinction, and before we define Rule 3, let us add another example that will present us with what's the rest we're trying to solve. Suppose I turned the light switch, and then I said "Venus being in orbit around the sun together with my action of turning the light switch caused the light to come on". 
According to that simple rule about counterfactuals that we started with, we say "If Venus being in orbit around the sun together with my action of turning the light switch all hadn't happened, would the light have come on?" and the answer is "no, it wouldn't have". Now we've introduced some kind of nuisance to take care of. Like in the three fires example, when we apply Rule 1 to that, we get NSCR((x + y) + e1 | z + e2) where x is turning the light switch, y is Venus being in orbit around the sun, and z is the light turning on. Like any application of Rule 1, this is only an assertion that universal cause and effect does its thing, but starts partitioning things out. So that line is true, even if there's something strange in it. After Rule 2, we get NSC(x + y | z), just as in the three fires example. That line says nothing about anything. Recall Rule x, which says that any line with the "+" operator says nothing specific about cause and effect. So at this point, the three fires example and the light switch and Venus example are both at NSC(x + y | z), but the simplest example of the light switch is resolved. I'll introduce Rule 3 in today's article, but it's truncated. The version of Rule 3 that I'll describe now handles our set of 3 examples (1 of which is already solved and 2 of which still need resolving). It's not the final, correct version. The present article ends when I've described the tentative version of Rule 3 that handles the rest of what specifically we still need solving. As we saw in the simplest example, sometimes we don't even need anything past Rule 2. That's, recall, what allowed us to say already "turning the light switch was a necessary and sufficient cause of the light coming on" and leave that example at its natural conclusion, enlightening us with a relevant fact about how to manage our household lives. Recall also that the three fires example and the light switch together with Venus example both left us at NSC(x + y | z) and wondering how any further rules might distinguish between the two and give us some notation to express how the two examples are relevantly different. Rule 3. Suppose that where you're at now can be expressed as NSC(x1 + x2 + x3 + ... + xn | z). If x was singular, you already finished up at Rule 2. If it wasn't, then you have something in that form. 3a: for each x, check using your counterfactual imagination whether, if you didn't have that x, you would still have z. If removing one of them still gets you z while removing the others does not, then that one is redundant: remove it, along with its corresponding "+" sign. (If removing any one of them individually still gets you z, as we'll see with the fires, then none of them is redundant and 3a removes nothing.) If we try Rule 3a on the light switch plus Venus example, we see that we can have turning the light switch plus Venus popping out of existence and the light still turns on, while removing the switch-turning instead would leave the light off, so this rule tells us we can turn NSC(x + y | z) into NSC(x | z). In this case, we now have eliminated all the "+" operators, and we can make the substantial statement "turning the light switch was a necessary and sufficient cause of the light coming on", as in the simplest example where we turned the light switch and didn't wonder if Venus being in orbit around the sun had anything to do with it. In the three fires example, Rule 3a gets us nothing, because if fire x hadn't happened but fire y had, fire z still would have happened, and likewise with x and y swapped, so neither fire is a Venus-style redundancy that we're allowed to remove. In this case, we must go yet further in the logic. Rule 3b is next, and in today's article, the Rule 3b presented is severely flawed, but that's as far as I'm going here.
This tentative Rule 3b says: after you're done with Rule 3a (and if you removed any items in Rule 3a, then refactor the names of the x items to x1, x2, ..., xn), now you can say NC(x1 OR x2 OR ... OR xn | z), and you can say SC(x1 | z), and you can say SC(x2 | z), and so on, to SC(xn | z), where NC(x | z) means "x necessarily caused z" and SC(x | z) means "x sufficiently caused z". In the three fires example, this gives us the conclusions NC(x OR y | z), SC(x|z), and SC(y|z), which translate to "[x OR y] necessarily caused z" and "x sufficiently caused z" and "y sufficiently caused z", that set being probably what you wanted us to end up at after the third paragraph of this article. If this has been done before and better, then this is a brief description of how to "invent the wheel" of causal formal logic. If this has not been done before, then this should be enough for other logic choppers to fill in the rest. This system breaks completely in cases more complicated than the 3 examples. It breaks down at Rule 3 in a case such as "x together with y would have caused w, and x together with z would have caused w, so the conditions for causing w were [x AND (y OR z)]" - but whatever remains of that sort is whatever a logic chopper can fix by refining Rule 3.

Rule 1: assert NSCR(x + e1 | z + e2)
Rule 2: NSCR(x + e1 | z + e2) & "if x hadn't happened then z wouldn't have happened" -> NSC(x|z)
Rule x (not 2 or 3): Any statement with a + operator does not make a real statement of NC or SC or NSC. Corollary: you must eliminate all + operators (when you have them) to make a real statement of NC or SC or NSC.
Rule 3: NSC(x1 + x2 + x3 + ... + xn | z) -> (you're done already at Rule 2 if x is singular) ->
(3a) For each x, see (in counterfactual imagination) if not x and still z. If in any of these cases not x and still z (and that x on its own wouldn't have produced z), then remove it as redundant, e.g. if x1 is redundant then (x2 + x3 + ... + xn) ->
(3b) If there are still + symbols remaining after 3a, then (refactoring from 1 to n) NC(x1 OR x2 OR ... OR xn | z) and SC(x1 | z) and SC(x2 | z) and ... and SC(xn | z)

fire:
NSCR((x + y) + e1 | z + e2) [Rule 1]
NSC(x + y | z) [Rule 2]
NC(x OR y | z), SC(x|z), SC(y|z) [Rule 3]

light switch:
NSCR(x + e1 | z + e2) [Rule 1]
NSC(x|z) [Rule 2]
NSC(x|z) [Rule 3]

light switch plus redundant:
NSCR((x + y) + e1 | z + e2) [Rule 1]
NSC(x + y | z) [Rule 2]
NSC(x|z) [Rule 3]

(definitions of necessary and sufficient in terms of if-then)
If P then Q: Q is necessary for P, and P is sufficient for Q.

Lorem - It has been asserted that if you know enough about the real world, you can make an assertion of this kind, at least sometimes.

Lorem - A very rough description of cause in the less than global sense. X preceded Y in this world. Of all the possible worlds in which X didn't happen, in the one of those that's most similar to ours, did Y happen? If no, then X caused Y. If yes, then X did not cause Y. X caused Y if and only if, if X had not happened, then Y would not have happened. X did not cause Y if and only if, if X had not happened, then Y still would have happened.

Lorem - At the global level, the totality of how things were at any given time caused (or will cause) the totality of how things were (or will be) at any subsequent time. At the less than global level, cause (which is a different concept than cause at the global level) may just be a useful construct with no straightforward definition - definitions may be satisfactory for a given purpose, but in general the conditions of what counts as a cause are fuzzy - there may always be edge cases that make this or that part of the working definition problematic or unsatisfactory.
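Before the looser notes that follow, here is a minimal sketch, in Python, of one way to mechanize Rules 2 and 3 so that today's three examples come out as stated above. Everything in it that isn't in the rules themselves is an illustrative stand-in: the function name, the data layout, and especially the hand-written z_happens_with judgments, which are where the "counterfactual imagination" gets supplied by you rather than computed. The check that a removed x wouldn't have produced z on its own encodes the qualification to Rule 3a noted earlier.

```python
# A toy mechanization of Rules 2-3. The counterfactual judgments are supplied
# by hand via z_happens_with; nothing here "discovers" causes on its own.

def analyze(xs, z, z_happens_with):
    """
    xs: the candidate causes x1..xn (strings).
    z:  a label for the effect, used only for printing.
    z_happens_with: hand-written counterfactual judgment. Given the subset of
        xs that DID happen, return True if z would have happened.
    """
    all_xs = frozenset(xs)

    # Rule 2: if z would not have happened with none of the xs having happened,
    # assert NSC(x1 + ... + xn | z).
    if z_happens_with(frozenset()):
        return ["Rule 2 does not apply: z happens even without any of the xs."]
    if len(xs) == 1:
        return [f"NSC({xs[0]} | {z})"]  # finished at Rule 2, per Rule 3's first clause

    # Rule 3a (with the qualification): xi is redundant if the rest alone would
    # still have given z, and xi on its own would not have.
    kept = [x for x in xs
            if not (z_happens_with(all_xs - {x}) and not z_happens_with(frozenset({x})))]

    if len(kept) == 1:
        return [f"NSC({kept[0]} | {z})"]  # all "+" operators eliminated

    # Rule 3b (tentative): the disjunction is a necessary cause, and each
    # remaining conjunct is a sufficient cause.
    out = [f"NC({' OR '.join(kept)} | {z})"]
    out += [f"SC({x} | {z})" for x in kept]
    return out


# Three fires: either fire alone would have set fire z.
print(analyze(["x", "y"], "z", lambda happened: len(happened) >= 1))
# -> ['NC(x OR y | z)', 'SC(x | z)', 'SC(y | z)']

# Light switch (x) plus Venus (y): only the switch matters.
print(analyze(["x", "y"], "light on", lambda happened: "x" in happened))
# -> ['NSC(x | light on)']

# Simplest light switch: done at Rule 2.
print(analyze(["x"], "light on", lambda happened: "x" in happened))
# -> ['NSC(x | light on)']
```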
Lorem - The fire in x didn't cause the fire in z. The fire in y didn't cause the fire in z. Well, it seems intuitive to say that some one or more manufacturing defects in either x or y or both caused the fire in z. Both defects together did. If you remove both, then z would not have burned. So the cause of the fire in z was the defect in x together with the defect in y (the conjunction). But that seems like it's saying too much. Suppose I said that the cause of the light turning on was me turning the switch together with Venus being in orbit around the sun. True by our definition. If you remove Venus and turning the switch, the light would have remained off. Now there seems to be a redundancy. If I remove Venus from the conjunction, then turning the switch caused the light to turn on. Still true. What's the rule? If you can remove one thing from the conjunction and the other one alone produces the effect, then pare it down. If you do that to the conjunction x and y, now x caused the fire and y caused the fire. But x didn't and y didn't by our earlier rules, so there's a problem with the rule set we've built up now, because it's produced contradictions. Unless x caused z and x did not cause z is not a contradiction. If you have a conjunctive cause and you try removing one and try removing the other, and you still have cause in the one case and not the other, then remove the redundant one. If you have a conjunctive cause and you try removing one and try removing the other, and you still have cause in both cases, then stick with the conjunction as the cause. That might solve it. But no one person is responsible for the conjunction. That still might be satisfactory. There are other unrelated problems.

Lorem - Conjunction x and y was necessary. X alone was sufficient, y alone was sufficient.

Note About the Unsorted Piles Below

The unsorted piles below are in chronological order of when I wrote them, pretty much. If you were to interleave unsorted piles 1a and 1b, then together they would be chronological. Same with 2a and 2b. Some of them are writing from before the stuff above and some of them are from after. From before the part above: unsorted piles 1a, 1b, 2a, 2b, 3. From after the part above: unsorted piles 4, 5, 6, 7, 8.

Unsorted Pile 1a

Every day, get the brain into an unfamiliar mode at least for a while. The mode can be something you haven't done before, or just something you haven't done in a while. The goal is defamiliarization, then re-familiarization. Get into a liminal space, then get to navigating it. It can be starting something new that you haven't done before. It can be something you dropped a while ago that you're revisiting. It can be reviewing something you put together as a retrospective. It can be just viewing a piece of art that you haven't seen before. The task is to do this somehow every day. On most days, you should be making steady progress on things you're staying familiar with, but not all day. But don't ever neglect to go somewhere else in a day at least briefly. "Just six more months now, then I'll have a baby." "Did you buy any books on parenting?" "No..?" "Are you going to?"
"Why the hell would I buy any books on parenting? Are you saying you think I'll be a bad parent if I don't?" "No. Just, when you have something to do that's complicated, and they didn't teach you about it in school.. isn't that when you buy a book or two on it?" "You shut the fuck up right now about this idea of reading up on how to parent! I don't like this, you suggesting I'm about to fuck this up." "Aren't there a lot of things, not you in particular, but aren't there a lot of things that are so complicated that everyone would fuck them up if they didn't read some guides first?" "[not buying it and getting more incensed]" "You're more skeptical than those people who believe in all sorts of things that are just fun to believe in. But does that mean there's really no spooky action in the world?" You hear "It does no good to get angry in that scenario," agree, and then get angry in that scenario. Why? You contemplate the advice again, and next time it works. Why? It takes time and maybe practice for it to take. If someone says, "the advice is pointless because it didn't work the first time, and people can figure it out by practice even if they haven't heard the advice," that's mistaken. The advice does help the process, even if it takes a few rounds for the advice to stick. However you're doing now and whatever you've experienced in the past HAVE been the direct result of a butterfly flapping its wings on the other side of the planet. The Butterfly Effect is a "has happened, has always happened" statement, not strictly a "can happen" statement. "He generally has trouble figuring out how to do something when he tries, but he's always trying," vs "He generally has no trouble figuring out how to do something when he tries, but he's never trying." Is this to the credit of the first of those people and to the discredit of the other one? "Yeah, he used to do shitty things all the time, and then he changed his ways and stopped doing shitty things. Changing his ways is to the credit of his decision to change. But he didn't choose to decide to change his patterns of everyday decision making." Pretheoretical does not always mean the same thing as axiomatic. Re the "O hai" factor: in reality, there are a ton of things to do for facilitating. It's like knowing 100 facts about the right and wrong ways of growing a plant. Get all 100 wrong and it's exceedingly likely that the plant will die, but still possible it will grow. Get all 100 right and it's exceedingly likely that the plant will grow, but still possible it will die. So there's a massive difference in how often you get the "O hai" moment when doing all those things right, compared to when doing all those things wrong. Natheless, even when you're doing all the things for facilitating, when exactly you get the "O hai" moment is still 100% outside of agency. Where is the paradox between determinism and indeterminism? It depends on where you're coming from. For someone who has left religion and its pathologies behind, the paradox arises somewhere around "Well, I must be subject to the laws of physics, but also if that's the nature of decisions, then what's to be done about desert?" For someone who reasons in terms of religious poppycock, maybe the paradox is at some completely different place, e.g. "I'm more than that which is subject to the laws of physics, because I have an immaterial soul, but if god knows all which is to happen in the future, how can that be squared with the idea that my soul has agency?" 
(On Determinism) There's a poker-playing robot (2) with a number of dials on its chest with labels e.g. "threshold for folding", "threshold for raising". The robot's decisions in the game are determined by the settings of those dials. The robot also has a program that analyzes the results of the games it's played and can adjust the dials accordingly. There's another robot (1) with the dial values hard-coded and it can't change them. (A toy version of both robots, in code, appears at the end of this note.) What does it mean for robot 1 to be ill programmed? If the dial values are not conducive to winning the game. Does robot 2 have free will, or are they both deterministic? What does it mean for robot 2 to be ill programmed? Depends on what part of the program you're referring to. Could mean the dial settings if that's what you're referring to. Could mean the program for adjusting the dials if that's what you're referring to. I think that what's commonly meant by "free will" is "something that's not like robot 1, but is more like robot 2". Here we find the muddle. Clearly, both robots are fully deterministic. Robot 1 is clearly deterministic. Robot 2 is just as deterministic as robot 1, but it's less clear how it is. In the case of a hooman, what's really going on is like the difference between robot 1 and robot 2, but with far greater breadth and far greater depth. That is to say that there are far more dials, and there are far more layers of dials. When it comes to dealing with real life and not just the set of decisions at a poker table, there are far more dials to deal with: maybe 100 dials compared to 10. And when it comes to the nuance of hooman decision-making, there are far more layers of "the settings on this level adjust the dial settings on the level below": maybe 10 levels compared to 2. So you encounter some situation, and given the time constraints, you act according to your habitual ways of reacting. Suppose that works in that situation, but hasn't always worked in all situations. So sometimes you reflect on how you make habitual reaction decisions. And the result of that reflection is that you query your beliefs. And suppose you're doing this on some fully free afternoon, so you have a while to engage in this sort of introspection, and you get as far as asking "But when I query my beliefs, what criteria do I use to determine which beliefs I have and which I reject," and then further to "But when I query my beliefs according to some criteria, what do I use as criteria to determine those criteria," and then further to "But when I query the criteria I use to evaluate the criteria I use to determine my beliefs, what criteria is that according to?" and then you find the infinite regress that goes "What are the criteria for the criteria for the criteria for [et cetera] the criteria for the beliefs that determine my habits that determine my decisions". But this is just like the robot with one set of dials and two layers of decision making: it's just more layers of dials and more dials per layer! This is still every bit as deterministic as something like a set of traffic lights that floops from red to green and back to red according to nothing but a timer. More layers and more thickness per layer does not a contra-causal freedom make. "So if there's one person out there scamming, and he knows he's perpetrating a scam, and there's another person out there scamming because he's bought into a scam and thinks he's not scamming, there's no difference in culpability?" "Would it be crazy to reframe and just ask what we can do about both?"
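Here is that toy version, a minimal sketch in Python. The thresholds, the hand-strength scale, and the tuning rule are all made up for illustration; the only point is that the self-adjusting robot is every bit as deterministic as the hard-coded one, because the adjusting is itself just another fixed program.

```python
# Robot 1: dials hard-coded. Robot 2: same dials, plus a fixed routine that
# retunes them from results. Both are deterministic; nothing here is random.

class Robot1:
    def __init__(self):
        self.fold_threshold = 0.3    # the "dials"
        self.raise_threshold = 0.7

    def decide(self, hand_strength):
        # hand_strength is a number between 0 and 1 (made up for illustration)
        if hand_strength < self.fold_threshold:
            return "fold"
        if hand_strength > self.raise_threshold:
            return "raise"
        return "call"


class Robot2(Robot1):
    def review(self, results):
        # A hypothetical tuning rule: after losing most of its hands, it folds
        # more readily next time; after winning most, less readily. The rule
        # itself never changes - it is one more layer of dials, not freedom.
        losses = sum(1 for r in results if r == "loss")
        if losses > len(results) / 2:
            self.fold_threshold = min(0.9, self.fold_threshold + 0.05)
        else:
            self.fold_threshold = max(0.1, self.fold_threshold - 0.05)


r1, r2 = Robot1(), Robot2()
print(r1.decide(0.32), r2.decide(0.32))   # call call - same dials, same outputs
r2.review(["loss", "loss", "win"])        # robot 2 retunes its own dial
print(r1.decide(0.32), r2.decide(0.32))   # call fold - robot 2's dial moved, deterministically
```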
Why the roundaboutness? The daimon can see a light so great that it will burn your eyeballs out if you look at it directly. In hooman decision making, there is somewhere a core, something that's behind the thing that's behind the thing that's behind the thing that (...) the reasoning for this or that decision. What do we call that core? How does one decide it? What shapes it? And how can that go right or wrong? The right core is this: avoid excess. At a first take, the principle "avoid excess" may seem like it's at some middling level of specificity, and applies to some but not all decision making: perhaps one of a set of rules to live by. When we consider the layers to how the principle can work, and the range of considerations it can be applied to, we will see that "avoid excess" is the principle that is at the seat of all the principles that are at the seat of all decision making (unless you're doing it wrong). It turns out to be quite a vague rule. Of course, the seat of all principles has to be vague in order to be so broadly applicable. It's the further things that derive from it that get more specific. Why is your body not at room temperature? Because it has to be warmer in order for you to be alive. Your body temperature can get so hot that you die: an excess of hotness. It can become so mild that you die: an excess of coldness. The unconscious processes of the body work on principles of balance that avoid either of a pair of opposite excesses, when not ill. The unconscious processes of the mind also do, when not ill. The corollary to "avoid excess" is "get all the balances right". Here's how that has layers: Oscar Wilde said, "Everything in moderation, including moderation." So if you get drunk every day, that's an excess of drinking. But if you never touch a drop of alcohol, that's an excess of sobriety. So the real meaning of excess can mean the typical meaning, but there's a lot more it can also mean by extension. "The key to life is excess in moderation. They'll tell you that moderation is the key to life, and the people who tell you that are those full of shit people who think that buying drugs is easier than buying a newspaper. They've never done dick with their lives. You have to stretch it out every now and then if you're going to have any fun at all. Don't drink a couple of beers every night after work. Pick one night out of the week, like tonight, and drink all the fuckin' beers at once. Get completely shitty. Find your range. Tell the person that you're sitting next to what you've really been thinking about for the last six days, and then apologize for the next six days, and then start over. You can't play in the middle of the road if you've never seen the far curb - you don't know where the middle is, really. Push yourself. [...] Don't eat a mushroom stem and see colors: eat the whole bag and see god one time, a real god." - Doug Stanhope Please excuse me if the hypothesis seems untestable. What I am proposing is that there is something at the core of the unconscious level of decision making, and further, that it is a rather vague something. It can be stated quite succinctly as "avoid excess", but the many extensions of that get quite vague. Play along for now, if I may ask. I once read in a critical thinking textbook something like "If there is some debated issue, and at a first take, there seem to be people with good arguments on both sides, then do consider both sides before rendering a judgment on the matter." I liked that idea. 
For so many years, I thought I was living by the principle. That apprehension seemed to be unshook, until one day. My friend Bob said "If there is some debated issue, and at a first take, there seem to be people with good arguments on both sides, then I consider both sides before rendering a judgment on the matter," and then he proceeded to say the most outrageous things, things that absolutely could not, by the wildest of reaching, possibly be squared with the facts we had agreed we were taking as true. He was parroting conspiracy theories, saying "Couldn't this be the case?" and I was saying "We agreed that we're taking this and this as proven facts, and no, it could not be the case given those facts," but I also had every reason to believe that he was sincere when he stated what his criteria of decision making was. Somehow, this creature had said as much as "I am skeptical and balanced in decision making" and then extended agreed facts to where they nowise could be extended to. I still suspect that I have taken that lesson from that critical thinking textbook I read all those years ago. But how could I know for sure? Bob says he follows the same rule, then says things that most outrageously violate that rule. So how do we make decisions about making decisions? The occlusion is real. Can you say for sure? If you say "I only form a belief if [x]," and then I say, "Well how do you choose between x1 and x2?" and then you say "Well, I have a rule for that, too, and that is [y]," and then I say, "Well how do you choose between y1 and y2?" and we carry on this way, and find that the regress is not infinite, but terminates at some core of decision making about decision making about decision making, what is it? As stated earlier, I conjecture that it is "avoid excess," but only if you're doing a good job of operating a hooman brain. There are a number of other things it can be, especially if your faculties of thinking have had some certain unfortunate influences. I have a friend named Bob who once said this about the public schooling system: "I think they conditioned us into thinking freely." I now believe that the most important thing that schooling can do for developing minds, given the constraints on what can and what can't be accomplished by means of a school system, is just that. I asked Bob what that could possibly mean. He said this, "In high school English class, almost every essay assignment was on a prompt of 'argue either for or against [x]', and every student was free to first assess whether for x or against x seemed like truth to them, and to form arguments, and then a high grade was assigned to anyone who could argue for x with good reasons well stated, or against x with good reasons well stated, no matter whether the teacher believed x was true or false. And I found that this had an extended effect on me. University had more of the same format, and by the time I was out of university for a year or two, any time I encountered a hard problem with popular disagreement, I had a reflex to ask 'What's being said by people on both sides of this disagreement, which side seems to be more likely correct, and all the rest,' always before even having a gut feeling of belief one way or the other. It's happened so many times, it's like a reflex, or a 'second nature' matter of conditioning, and it's the schooling system that conditioned that into me (and heck knows it's not a natural reflex for anyone). 
If there are a lot of things I can say about how my schooling was awful, I did walk away from it with that procedure drilled into me." Another corollary of "avoid excesses" is "avoid deficiencies". Since we already said that excesses are always in terms of opposites, it naturally follows that excess of one thing is deficiency of its opposite. Suppose there's a guy named Bob who is never convinced that he's not a brain in a vat. And suppose there's also a guy named Alan who believes everything in the Mormon canon and everything his Mormon pastor tells him to believe, for the sole reason that he happened to be born in Utah. Sometimes Alan says "Bob, you're not sure that you're not a brain in a vat? Then how would I be talking to you right now," and Bob says, "Well, the electrodes attached to my brain, which might be in a vat, are giving me the perceptual impressions that I'm in a room and you are too, but it's just zap buzz signals from a bundle of electrodes, maybe." And sometimes Bob says "Alan, do you really think that the prophet Joseph Smith read the words of god from a golden tablet in a hat that ascended into heaven when they were done transcribing them? How can you navigate life on a daily basis when you're believing such ridiculous things," and Alan says, "Well I can't prove it, but you can't disprove it, because nobody can prove the non-existence of anything." Well, we could say that Bob has an excess of incredulity and Alan has an excess of credulity. But we could also say that Bob has a deficiency of credulity and that Alan has a deficiency of incredulity. See, there are so many problems that you can say reduce to "avoid excess" or its many extensions. Even when we imagine a skeptic arguing with a Mormon literalist, all the ways these two people see each other as ridiculous, it can all be related back to "Well I'm avoiding this certain excess, but clearly this guy is not avoiding that excess." It's still by its nature an untestable hypothesis, at least given current technology, that the core is "avoid excess". At the very least, I have intended to illustrate that it is a plausible hypothesis given all considerations. "Avoid excess" is vague. Of course, anything from which all other rules derive has to be vague. And disagreements of many kinds, perhaps all kinds, can be related to "avoid excess" in some way. Of course, because it's very, very vague. This is not to say that it's a perenially untestable hypothesis. It may be the case that brain scanning technology some day advances to such a level that this can be tested. If that happens, maybe some day we will see for sure whether "avoid excess" is or is not the core. So you read some good book with good advice. Something with real insights to live by. And the book says, "I beseech you, you must decide to do this, if you will do one thing for me or one thing for yourself, let it be this one." How to take that deterministically? You picked the book because it seemed like a good idea, and because you follow the program of choosing good ideas at least sometimes. And you follow that program of choosing good ideas because you've decided to. Because it seemed like a good idea to follow a program of choosing good ideas. Anyways, if there's one thing I would beseech you to do, it would be that your core guide be "avoid excess," but that beseeching is only going to work if I'm able to appeal to you in terms of some level of your stack of motives. 
I could say something like "If you avoid excesses, and you really take to heart what it means to avoid all excesses, then you will find the means of making lots of money in your career," or I could say something like that but ending in "you will find the means of obtaining happiness," or whatever thing. And that would be me attempting to appeal to some part of your stack of motives that is not the core. Maybe somewhere in the middle of your stack of motives you have "I do what I think will get me lots of money in my career," or "I do what I think will get me the means of obtaining happiness." Those are the parts of the stack that you have access to. Whether you know it or not, the base of that stack, whichever one it is, is "avoid excess." Then what would be the point of me telling you to live by the rule of "the core should be avoid excess" if the core is "avoid excess" no matter what the rest of the stack is? Because it has the potential to clear confusion. If you can understand that your core is "avoid excess" no matter what the rest of the stack is, then you can understand your own decision making (especially decision making about decision making about decision making), and it may be easier to find errors in whatever part of the stack you direct your attention to. There are many ways that the stack can go wrong, in which cases you think you're avoiding excess while actually engaging in excess. And if you know the nature of the whole thing better, it may be easier for you to find those errors. And if you're in the dark about the nature of the whole thing, it may be easier for you to persist in errors. So, of course, if I beseech you "avoid excesses," there's no way that can change your core, because that's what it will be no matter what. But it can help you in mapping out the extensions of that core to the rest of the network, to the more specific things, and help you in finding if any steps of that are in error. It is immoral to set a thermostat to a temperature that it cannot achieve with the widgets it's controlling, because that guarantees a frustrated intentionality. In the How I Built This Podcast (trigger warning: it is problematic sometimes), the catch phrase is at the end of an interview, the host asks "In your adventures, what percent was skill and what percent was luck?" Of course, to anyone of a philosophical bent, this is clearly like asking "What percent of your being alive do you attribute to your lungs and what percent to your brain?" It's meaningless to give percentages, but every guest is asked to answer the question in the form of two numbers. I imagine someone of a philosophical bent who somehow makes it onto the show answering as follows, "There's no meaningful way to answer that question in the form of two numbers without further defining the terms." "Okay, in a hundred words or less, define the terms and then if possible give numbers to them." "Okay. 50 percent skill, but of that skill, it was 100 percent luck to have had that skill, and aside from that, 50 percent luck. But even at that point, it's like saying my being alive is 50 percent attributable to my lungs and 50 percent attributable to my brain, so the numbers are still meaningless because the two items are both necessary conditions and either one without the other would not have worked. Was that 100 words or less?" To spell this out in full detail, here's the thought process in terms of alcohol. "What's a good amount of alcohol to take? 10 measures per day on average? That's an excess of alcohol. 
0 measures per day on average? That's an excess of abstaining. 1 measure per day on average? Yeah, that's about right. 1 measure per day then, but according to what distribution? Take 1 measure per day every day and never more and never less? That's an excess of uniformity. How about 5 measures once every 5 days and none on each of the other four? Yes. So I will take no alcohol on four of every five days and five on the other, according to the series 0, 0, 0, 0, 5, 0, 0, 0, 0, 5, 0, 0, 0, 0, 5, ..." The principle in brief is this: no excess in total amount, no excess of abstaining, and no excess of consistency. The only problem left with that formulation is that it has an exact rhythm, which is an excess of second-order uniformity. See how many levels result from "apply this meaning of excess to this meaning of excess to this meaning of excess." [lorem the next step] Etymology is here illuminating. The Chinese word [guo4] can mean 'to pass' or 'to cross', but can also mean 'to exceed', and can also mean 'error'. So you can at the same time say "Your error was" and "Your excess was". It does not apply to all kinds of error - there are other words that mean errors of various kinds - but it is illustrative that for many kinds of error, the word error and the word excess are one and the same word. This word has had both meanings for more than 2000 years. It works that way in modern Chinese and it appears in The Art of War ("The five excesses, or errors, of generals"). [lorem in English, the word 'excess' can also mean abuse of power or akratic indulgence. In the first case, this ultimately refers to an error in a power structure that got abused, and in the second case the issue is when there's a bungle pertaining to the levels of decision making] Re the o hai factor. It is never a conscious process. Every time I've said, "and the very next thing I will do with my brain is come up with the next idea that goes into this topic," it has never worked when the goal was coming up with something novel. When the o hai factor works, the result always has a characteristic of "okay, I can see how that could have come from considering x and then applying creative technique y to it," but that activity of applying is in the opaque zone outside of experience when it happens. It always pops up fully formed, or so it seems to the conscious experience. As for facilitating, control can be exerted right down to a pretty fine level of detail just short of being on demand. For example, I can say, "Okay, I'm just going to write a paragraph that explains x," where that task is fairly mechanical, as if setting up the kind of legwork that takes little innovation to do right. Then if I'm lucky, the next thing that happens is I type the first two sentences of an ordinary paragraph and then the next sentence is something completely unexpected, and not really part of that stated task, but something just tangentially related. Since this can never be aimed with precision, "process" in this kind of writing requires being a manager of chaos - there's often a session of moving fragments around according to what happened to be related to what. This technique I described is roughly what's meant by "thinking on paper," and it's also related to other "process" techniques.
When I write something that has 40 fully novel ideas in it, typically 20 of those came from the thinking on paper exercise, and the other 20 popped up while I was in the shower and naively thinking that the present activity occupying my mind was the details of precisely how I was scrubbing my buttcrack. Re about facilitating: more distally, one must learn the techniques, explain the jokes of good artists, analyze, study and review and review again how analysis is done, and glossaries of the techniques. The study of those things is conscious. The analysis of works in terms of those things is also conscious. The application of them in content generation is never conscious. "I am a robot who can adjust his own dials, but one of the deepest layers of dials is the set that tunes the program called 'I am a robot who desires to be good at tuning his dials'." [quotation, check wording] "It's easy if you have the right attitude, but it's hard to get the right attitude." About the layers. Perhaps? the number of layers is infinite, but like a converging sum, in terms of [lorem - amount of influence? amount of scrutability? amount of adjustability?]. And? when we say "the core", we're referring to something like all the layers from the 7th layer to the infinitieth? Or it's like atoms and objects? The atoms of your mind and personality are made of 'balance', but that alone does little to explain the emergent decisions and attitudes? You chose slavery because you wanted freedom. You chose conspiracy because you wanted to be rational. (= doubling down on the wrong layer?) Philosophy quick mode example. Does free will exist? Let us first consider the "hard" stance. If free will exists, then that means a person has the ability to make decisions that are not determined by the laws of physics. To a person who makes decisions, it sure seems at a first glance like that's what's going on. But how could that be? Aren't we made exclusively of the stuff of physics, and aren't all things that are the stuff of physics bound by the laws of physics? Some people say that we're made of more than just the stuff of physics. Perhaps we have our physical bits and an immaterial soul or some such thing. But if we have an immaterial part, wouldn't the immaterial entity be bound by something like the laws of physics, however they apply to the immaterial realm? Yeah, maybe. Or maybe the entities of the immaterial realm are not bound by any laws. So those can be the source of uncaused causes. How does that mode of uncaused causes work? No convincing account of that has ever been made (arguable). And supposing there's a good answer to that question, how does an immaterial entity exert influence on the things of physics? Could there be some part of the brain that can interact with entities of the immaterial realm? It has never been described how a thing of that sort might work, in any way satisfactory. But wait! At the base level, the laws of physics are fully indeterministic. The laws of physics can never say when any particle is going to be in this or that place. It's all randomness at the level of extremely small objects and extremely short time scales. That sure does knock down determinism as an explanation of how things fundamentally work. But does that exonerate the notion of free will? Hardly. Indeed, I have had the experience of doing something seemingly random, with no discernable motivation in terms of anything else. People observing even said, "That's so random!" 
Is there anything about that sort of action that suggests free will? Can we imagine that a world full of people acting that way all the time would be a world full of people with free will? No. We can suppose that seemingly random actions that happen once in a while are the result of randomness on the particulate level, but if there's free will, it would have to be something other than that. What's left to supervene on the laws of physics, or to be the source of freedom in some other way? Not only will they know things we don't know about things we know about, but also they will know things we don't know yet about things we don't know about yet. Is there something constitutive of psychical experience that is beyond the present detection instruments of known science? A 'yes' answer does not require an immaterial realm. When people say, "Oh, and is there some magic cutoff point between a hooman getting a soul and a monkey not getting a soul? And where would that be in evolutionary history?" - heck, there could be a consortium of alien greys in the seventh dimension who deliberated all day and came up with a definitive answer on where to put that. If you are too reluctant to change your beliefs when someone says something that contradicts those, it is possible to make an effort to floop that attitude. This is one of the deeper levels of dials that we nonetheless have access to. Turning that dial changes a lot of emergent behaviors in the way that dials affect other dials that affect other dials that directly determine behavior. "Yeah, I'll try to be more open to saying 'let's try that' in that scenario." Still, there are limits to this. When someone says, "What's your sun sign," even though I consider myself open to questioning my beliefs, I'm more likely to say, "Uhhh, none of them," and then walk away than to say, "Perhaps this person can tell me something I don't know about how plausible the Western zodiac is!" That's not ill calibration. I once spoke briefly with a pastor (the word etymologically derives from a word meaning "manager of sheep") who told me that he speaks to Jesus, and that Jesus tells him things that there would be no other way of him knowing, and that he can channel the Jesus power to heal people who have maladies that can't be treated with any known science. When one of the congregation invited me earnestly to have a longer conversation with the guy, I did think about it for a long time, then every line of thought of how that might go led to, "If I did that, it would have a 100% chance of turning out to be one of those conversations that has already been done before, and documented, and pointless." Then I refused. So I know when to say, "Questioning that belief, or the contra belief someone else has, would definitely be a waste of time." And still, there are many beliefs that I have that I would be willing to question. I'm not even sure that I exist, even in as much as existing as a thing that thinks. You could try and convince me otherwise about that, and if you're saying anything self-consistent, I would be delighted to hear what you might be able to do about it. I'm even more willing about politics and other such things. The present moment is real (assuming this is clear: it's what's happening right now). The future is real in as much as it is the present moment for someone else. The present and the future have that reality in common, but the past in that sense is that which has passed into no longer being real.
When you eat something delicious and you forget to enjoy the experience because you were too distracted, it's like when you read a paragraph and you forget to comprehend it because you were too distracted. Do this with every paragraph you read every time and you're functionally illiterate. Do this with every enjoyable thing you eat every time and you're..? someone who forgot to enjoy what little there is to enjoy about existence more than a rock did. O noes. Standard arguments for indeterminism are back-rationalizing. "We have the notions of praise and blame because they're founded on freedom of the will, but if there's no freedom of the will, then what happens with the notions of praise and blame? It wouldn't work if we did away with those." Convincing justification perhaps, but it's not a real reason. "There are all those people 600+ years ago who said 'the sun goes around the stationary earth,' but if the earth weren't really stationary and the sun weren't really in orbit around the earth, then why did all those people say that?". Or: "If the god of christianity isn't real, then why are there all those churches?" "Okay, well if the god of christianity is real, then why are there all those mosques?" The point here is that the formula does nothing: it's bogus reasoning. "We do X because of Y, but if Y isn't a thing, then why do we do X?" "Maybe because we're idiots." It doesn't prove Y - that's what I'm trying to get at. The standard arguments for freedom of the will tend to that pattern - and it's justified to write those off. It's justified to write them off, but it's still an imporant question, "If freedom of the will isn't real, then what happens to the notions of praise and blame?" and it's also an important question, "Are there other good arguments for freedom of the will?" My formulation for indeterminism - it's self-consistent but I don't even know if it disagrees with science - is the following. Suppose there's a yet unknown mode of matter (like dark energy, or whatever else we find next that bears further figuring out), and the action of that mode of matter is to stack the decks of quantum randomness. In that mode of matter, there are entities who make deliberate decisions to stack those decks. But those entities are not bound by universal causality. So in the world we know as physical, there's usually universal causality on the macro scale, but that can be subverted at any time by enough coincidences of quantum randomness. In the famous example, you might be looking at a bronze statue and that statue might wave hello to you and then freeze solid again. You can calculate the odds of that happening, but those odds are astronomically rare because of the sheer number of quantum-level coincidences it would require, so rare that it has probably never happened to any bronze statue that anyone was ever looking at or not looking at. But hooman decisions are less bound than bronze statues. Still bound by the laws of physics, as far as we have discerned, but like all physical causality, that breaks down at the level of quantum randomness. For example, if I were to say to you, "Say the most random thing you can possibly think of! Something utterly undetermined by anything that's ever happened to you in the past before this command!" you might say, "Hold the newsreader's nose squarely, waiter, or friendly milk will countermand my trousers," and you could be pretty sure that nobody has ever said it before in the history of hooman communication. 
Of course even that would be within the constraints of words you know, or even if you were to say something even less linguistically bound, it would still be bound by the phonology that your vocal apparatus is capable of producing. The point is that the things we usually do tend to be explainable, but at any time you can do something that's much less explainable, but still bound by some constraints of the world. Clearly there's something in the hooman brain that can respond to things beyond easy detection, and amplify them to the point of macroscopic action. Now, what if the deep source of that is quantum randomness, and what if the quantum random processes that produce those prompts are actually stacked-deck processes, and what if there's some as-yet-unknown mode of matter, the entities of which can stack those decks? We need not even imagine that they are omnipotent. Perhaps they don't have enough power to cause your favorite bronze statue to wave its hand, but do have enough power to tip the scales of your decision-making deep in your brain one way or the other. Fine.. what about 'em? What would it mean for them to be uncaused? Suppose they had the motive to do whatever they can within their influence to cause this world to be the best of all the possible worlds that are within their ability to influence. That's not gonna work. If they stacked a quantum random process that made the gunman decide not to pull the trigger, and thereby saved the life of the person on the other side of the gun, that manipulation would be determined by the criterion "These entities do whatever they can to prevent harm, therefore they decided to prevent the harm that could have come from this one decision." This kind of thinking is still stuck in a causal mode. However they work, it's not like that. Here's how they work. They do whatever the heck they feel like doing. Sometimes it's good, sometimes it's harm, sometimes it's something else. Now let's not fall into the same trap of thinking again. Sometimes they do something just to keep things interesting, but that's not because they have a criterion of "We will do whatever we can to keep things interesting." They don't think that way - they're uncaused. Sometimes they do something that will keep things interesting. Sometimes they do something that makes things less interesting. Sometimes they cause wellbeing. Sometimes they cause harm. What all these actions of theirs have in common is this: the reason for the action is "I fucking felt like it, that's why" (on the part of the hidden entity). They'll do the worst things to you sometimes, like a little boy with a magnifying glass burning an ant. Because they felt like it. They'll do the nicest things to you sometimes, like diverting a disaster and turning it into a near-miss. Because they felt like it. They'll keep things interesting for you sometimes, like preventing a satisfaction if it will lead to a longer adventure for you, even if it's more tedious than meaningful. Because they felt like it. What determines the "felt like it" criterion? Nothing. It is the seat of reality. Everything else derives from it, and it derives from nothing. And it's not the same as randomness. Perfect randomness on the quantum level would produce what we more or less call determinism at the macro level. *There is no physical universe. A group of entities is deliberating.* "Yo! I just figured out: if we all pooled our efforts together and.." "Yeah, not sure I feel like it." "Hang on. You know how there's that physical realm and nothing in it?
If we created a really big matter-antimatter split, right, there would be a really big splosion." "Yes?" "Well, it would create matter there. Just for a while. Like, it would only last a while before turning into soup, and then eventually recollapse, but for a while there would be all these imbalances. Matter would do all sorts of shit. There would probably be giraffes and shit for a while." "You sure?" "Well, not sure. A lot would depend on random processes, ultimately it would all come down to whether shit like lightning bolts join together amino acids, and whether radioactive decay would turn the DNA interesting. But there's like a chance. Yeah, if we all worked on stacking enough quantum random processes in that nothing world, there would be a really big splosion, and like, decent odds that some pretty weird shit would emerge for a while." "Fuck it.. I'm in." *They pool their efforts enough to cause a quantum splosion that we know as the initiating process of the physical world as we know it. After that, they're pretty exhausted and can only intervene a little.*

Unsorted Pile 1b

Are you feeling down that the future of intelligent life on Earth is doomed? It's just a small rock, and the galaxy will be fine. But who can really be that detached, aside from briefly with the help of shrooms? Is it possible to stop being bothered by it? Can one person be feeling gloomy at the state of hoomankind and another person be thinking the same things without getting gloomy about it? If it's possible to be that detached, is it also possible to remain attached enough to do the best you can about it? Our brains got us into this mess, and now we're beyond avoiding it all by avoiding the use of our brains. We'll need to use our brains to get us out of this pickle that our brains got us into. "We are so very similar to inert rocks, different only in being more ert by just the thinnest of margins. Yet there's such an uneasy feeling - it immerses, it becomes the whole thing - the whole experience is of being not at ease. Even though it ends up mattering nothing whether you do this thing or that, all the while it's as if we're unsettled, because there's something about it all we can never settle on." "I did it the best because I'm good at it. I'm good at it because I've practised. I'm practised because I enjoy it. And I enjoy it because I find it intriguing. If I weren't inclined to find things intriguing, I would be fucked." "[something]" "That was a very incomplete story. It's really because I find everything except for superhero movies intriguing." Do you deserve wellness and to be treated nicely? How do you even answer that question? The universe does not deserve to exist. It doesn't make sense that there's something at all rather than nothing. By what means could we even approach the question of whether you deserve to have a good time there? "The question of whether I'm too obsessive or just obsessive enough, it bothers me endlessly." Having a brain that's highly disposed to angst: what's up with that? The point: it's good and bad. It's a "mixed blessing." The plain truth is that it's a tradeoff much like any other. This is one that different people would put on different parts of a tier list. That's gonna depend on who you ask. I'll put it at S-tier and I'll take it. "Complacency is complicity." "Who accepts most embiggens himself most."
There are so many feeble things that people do / To help them feel like they're cheating death, / Or like they're extricated from decay: / Somehow exempted from our simple bonds. The best time to write down an idea is before you've figured it out. "Good enough." You know the meaning of it, but only in certain contexts. You know what it's like to finish some piece of work up to a status of "good enough," and then submit it and forget about it. But what about in the grandest of contexts? Do you know what it's like to say, "Everything is good enough. Everything I'm doing, and everything else, it's all just good enough"? You're lucky if you do. If I don't know exactly who you are, maybe you're one of those lucky few or maybe you aren't. Let's just say that there are a lot of people who know what the concept of "good enough" is like when referring to this or that specific little thing, but who don't know how to take that idea to a more encompassing level and say "Good enough, all this!" It doesn't matter what set of terms you use as long as you pick a set, say which one you're using, and stick to it, always meaning the same thing by the same word. I will be using both interchangeably and not indicating when I mean one thing or another. The greed of the most corrupt has robbed 99.99 percent of people of elective time. Many people call it leisure time, or make it leisure time when they have elective time. Other people when they have elective time do things with it other than leisure, and that's often what makes a lame person into am amazing person. And that's what's been robbed from us all. There are so few amazing people now because everyone's too busy to be amazing. Are you matter that has become animated? Are you animation that has become seated in matter? Or both? Or some other thing? The fairy tales of hollywood, like that people can do truly original things, like whoever happens to be the protagonist will definitely do something that changes the world, even magic wizard powers: there's a mix of harmful and helpful in all those things. They're entertaining, and to a person who has no real 'personal' issues, and has settled those sorts of things, these stories are entertaining and just that. To people who do have 'personal' issues, and who have trouble settling those, these stories only make real life seem deprived of meaning, and this effect is much stronger than the opposite effect of the stories being entertaining. A person of that sort can only heal once he is 'deprogrammed' of these effects of the entertainment industry. That can be hard. Even getting past "You will never shoot magic fireballs out of your hands, no matter how much you study magic incantations written in Latin" can be hard. To become healthy, one must get past that, and past "Just because your perspective on the world puts you in the protagonist position of that perception, it doesn't mean you're special in any meaningful way" and other harder things like that. The chief source of feeling uneasy is the desire to live forever, coupled with the evidence that it tends not to happen. This is called making an ideal out of a flight of fancy. It happens because we have the power of imagination. It's sort of just weird and random that this of all things would be the chief source of feeling uneasy, but anyways, "That's the loadout." Why is there mind instead of nothing? Until there's a satisfying answer to this question, there will be no satisfying answers to any other questions the answers to which might be satisfying. 
As for the mind and nothingness question, there is no satisfying answer. Let's even take the nothingness out of the phrasing of the question. It distracts somewhat. Let us phrase this most fundamentally infuriating problem thus: "Mind: that's a thing. Why?" As for the question "Why does mind exist?", it is possible that next week there's a definitive answer to the question. Whatever the answer is, it will not be satisfying in the same sense as [lorem]. The whole task of learning how to be a hooman is founded in coming to terms with that. There is a novice kind of unease that would not be cured by the pill of immortality, and a more mature kind of unease that would be. "I have no complaint" is a valid complaint. (1) Everyone is special in his or her own way. (2) But only in the combinatorial sense. You are a technically unique combination of fully mundane characteristics. (3) Whatever of that you can express either doesn't matter or matters very little. (4) Most people are deeply pained by wanting to express something unique in the field of expression, but the target of this desire tends toward wanting to express what is beyond expressing, which guarantees dissatisfaction. The reason there's mind to the point of conscious experience is because there's a sense of self. The reason there's a sense of self is because you built that up as a way of distinguishing the differences between you and other hoomans. Because the focal point is those differences, you have a big blind spot, which is how utterly similar you are to other hoomans. "There's absolutely nothing special about you", though technically not true, is so close to the truth that it's only off by about a hair's breadth. But there's a natural resistance to that fact as a consequence of that mechanism of how we achieved mind, which was to focus on our differences. What is the sun? To us, it is a source of heat and light, and that's pretty much all we could use it for. In that regard, let's call our type B. Now suppose there's an alien civilization living on Mars, and we haven't detected it because it's underground, and suppose that group launches a ship to the sun, equipped with some really good kind of heat shield that surpasses our own technology, and that ship puts a big straw into the sun, sucks up some hydrogen, and returns to Mars with that bounty, and they use that hydrogen for some purpose that's also beyond us. Let's call them A. Now suppose there's an even more advanced alien civilization visiting from a distant solar system, and they have really long lifespans, like a million years is a short time to them, and they're parked just outside the orbit of Pluto waiting for our sun to explode so they can take some of the resultant iron home. Let's call them C. So what is the sun? To these different types, the sun is different things (or things-to-be). To A, the sun is the hydrogen which is the current substance of the sun. To C, the sun is the iron which is a substance that will be produced by the sun at some future time. A and C both see the sun as substances, one as a present substance, and one as the product that the sun will be after its process of burning is finished. We B people see the sun as a process, not as a substance - it is only useful to us as long as it's a fire. We can't take the substance, the hydrogen, from it now and do anything with it other than let it burn over there. And we can't take the eventual substance, the iron, from it after it blows up (assuming we're going to remain earthbound until then).
We don't use it as a present substance or a future substance. We only use it as a fire, which is not a substance, but a process. All the things that happened are the things that physics allowed, and all the things that didn't happen are the things that physics forbade (including decisions, including conscious decisions). Beyond conscious control versus within conscious control: there's a big difference between those two kinds of actions, but between them there is no difference in the amount of contra-causal free will. "Try this: next time you drop something on your foot, DO get angry at gravity." "Why?" "Because that would square up when you get angry at the inevitable. Whether something happens by way of gravity or happens by way of someone's deciding, they're both inevitable, but if you get angry at both, then your attitude is consistent across all cases." You are meat that has become complex enough to have subjective experience. But you are also a computer program that has been instantiated in meat. Even if this last suggests a directionality that runs contra to the history of how it happened, it's still true. It's hypothetically possible that someone may have described your exact personality before you were born, and then you were born, and then the described program happened to be instantiated. Is a math formula 'invented' or is it 'discovered'? This is the general question regarding mathematical platonism. Some say 'invented', some say 'discovered', some say you would need to listen to my arguments for both sides. And some get even more extreme. For example, when I'm organizing the stuff in an office, I'm never thinking "I decided that I'm putting this here and that there", but it's always "I figured out where this goes", or "I figured out what system I'm using for this." This is pretheoretical: in an organizing task like that, the 'discovered' thoughts are the ones that pop into my mind and the 'invented' ones are not. This goes even further. Of the most novel things I've ever created, all of those are also acts of "Oh, I figured out how I do this," not "I created this novel arrangement in a flash of contra-causal intervention". Sometimes I look back on something I was once very invested in, and I see that it was actually not very good. Then it's, "When I was working on this, I thought I was figuring out how I do this, and I thought that the way I do this was good, but all the while I was only figuring out some crummy way of doing this, and my 'way' was misaligned with good." There is a long journey between that misaligned state and some later state in which you can consistently align your 'way' with good and figure out that way of yours. Heck, even a good job of getting there never results in getting it right every time. But that's the discovery process (it's not an invention process). So recognize your journey as figuring out how to align your way with good, and figuring out how you do things your way. When the universe winds down, then there's no longer objects. Then there continues to be no objects. Why was there ever the other part when there were objects? When things lose their thingness, that makes sense. When no longer thingness remains the way it is, that makes sense. What could make sense of that other part? In typical storytelling, the hero has some great hardship, and from that gets a great drive. In real life, it is possible to derive great drive from the hardship of having no real hardship. It is in some ways similar, in some ways different. 
This factor that's needed in my style is the "O hai" factor. It's like when you look out your window, and a friendly neighbor cat is out there waiting for you. O hai. Naturally, it's time to drop whatever you're doing and go out and play with it, and there's little else you can do to improve your odds he'll come back. If he knows you'll drop everything and meet him when he shows up, then he might show up again, otherwise not. Nothing else to be done about that, and no way to guarantee it. Not coincidentally, this can also be taken as an explanation of why every great writer loves cats. "I'm blindingly brilliant for 30 seconds per day. I do not get to choose which 30. They are not consecutive." - [lorem username, also check wording]. How common is it that someone gets the brain enhancement from video games and doesn't bungle it in any of the many ways, e.g. getting addicted to video games, or moving on to being addicted to some other escapism, or just using rubbish video games all along? Ditto social media, drugs, novels, poetry, a science education, acceptance of mortality, lorem. How often do you meet someone about whom all of the following could be said? "He uses video games for the brain exercise, but doesn't misuse them. He uses social media for the excellent content, but doesn't misuse it. He uses drugs for the insight and creativity they can provide, but doesn't misuse them. He reads novels for the nuanced insights into the hooman condition they can provide, but doesn't misuse them. He uses poetry to explore the breadth of thoughts that can be assailed, but doesn't misuse it. He has a science education and uses it, but doesn't misuse it. He's done acceptance of mortality, and somehow it hasn't made him apathetic. Lorem." "Mind, that's a thing. Why?" This question decomposes neatly into two more fundamental questions, but it strikes us as singular, atomic. To have a great impact on the world, you need to be fiercely unique. You are not completely generic. Here's a great tension. You have to dedicate an incredible amount of energy to understanding who you are as distinct from other people. But if you're to prevent that from driving you mad, you must also have in perspective how little your uniqueness is. You are not completely generic, but close to it. So you must remember (1) a great deal about how you do you, and how you're not all those other people, (2) that there are a lot of other hoomans who are quite unlike you, but they are like you a lot more than they're unlike you, and (3) that there is the rest of the universe. Don't suppress, supplant. The prospect of ultimate personal extinction is terrifying whereas the prospect of continuing to be incapable of flying is less terrifying. Why? Similarities: both are prospects that can be imagined, but not achieved. Differences: the first is about a change whereas the second is about a stasis, the first is an extension of an innate shorter-term goal whereas the second is not. There are people who are afraid to do their own thing. There are people who act like they're not afraid to do their own thing, but what they're really doing is something other than their own thing. Bob, to Bob: "If you think you know what your real thing is, and it's both glorious and terrifying, then you might be right. If you think you know what it is and it's not both glorious and terrifying, then you're wrong about what your real thing is." (YMMV) Why are there these two states of mind called awake and dreaming, and not more like 20 that fully chart this possibility space? 
And if there were, what's the space of factors that the set would give us a fuller sense of? How strange there's just 2 instead of 20. How sad what portion of people have complete disregard for fully 1 of those 2. There are (at least) 2 types of negative feedback mechanism: let's consider the flyball governor and the pendulum. The non-conscious processes of the brain that regulate the autonomic systems of the body are more like flyball governors. The subconscious processes of the brain that regulate the conscious mind are more like pendulums. (There is overlap in the sense that some of the non-conscious processes are to an extent like pendulums and some of the subconscious processes are to an extent like flyball governors). (A small numeric sketch of the governor-versus-pendulum contrast appears at the end of this passage.) When someone neglects to pursue his real thing, it is because he has been taken in by an impostor. For some, willingly. For some, unwillingly. These are akrasia and misalignment. Both are when layers of decision making conflict and the one that should be weaker is getting the treatment due to the stronger. In the one case, knowingly. In the other case, unknowingly. The second-worst crime perpetrated by corporate and political greed (and it's a direct outcome of the first) is that it has engineered a world wherein for most people there is genuinely more incentive to alienate one's subconscious from one's conscious mind than to do otherwise. Victims of this may be interesting, even amazing, even by way of being complex, but still their potential has been vitiated from them. [This is re everyone is too busy to be amazing and maybe that earlier snippet goes in the middle here.] A terrible outcome of this is that the daimon can lay extremely convincing traps. They've trapped us because they've made us trap ourselves. Goals and energile are two ways of thinking about things like decision making, and both provide valuable perspectives. But goals is a mental model that refers to something fictional whereas energile is a mental model that refers to something real. That's why goals is lower in the hierarchy. That's why it's important to avoid getting that backwards. Is it a fallacy to conflate total utility and marginal utility? Simple answer: yes. Not so simple answer: not exactly yes. To the extent that a hooman brain is more powerful than a nonhooman animal brain, and to the extent that a nonhooman animal brain is more powerful than a brainless process, what is achieved is a resistance to impermanence, a persistence. Fallen leaves disintegrate between autumn and spring. Long-term hooman memories do not. This is one factor that seeds avoision of accepting death. Re the creative writing process. In book writing, there are so many tasks that are: don't make a deliberate decision to employ this principle, but make a good study of the principles at one time, then do a good creative process, and then look back and say, "Oh, look at all the principles at work there. I see what you did." (It's like doing a literary analysis of someone else's work, but applied to your own work, and there's an identical number of revelations of what processes are being put to work). Then once the decisions are confabulated, do make adjustments with finally a direct knowledge of what principles are at work. Among these tasks: designing a book for target properties. Even the macro-est benchmarkish properties have to be emergent. The mere act of an editor ordering a book to be written by an author based on a description, e.g. subject and page count, is stifling, even in nonfiction. 
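Back to the flyball governor and the pendulum for a moment. Here is a minimal numeric sketch of the contrast I have in mind, in Python. The specific numbers (a stiffness of 4, a damping of 5, and so on) are invented purely for illustration and don't come from any real governor or pendulum; the only point is that one kind of negative feedback eases quietly back onto its setpoint, while the other kind keeps overshooting and swinging around it.

# Two toy negative-feedback loops, both nudged off their setpoint at the start.
# "Governor-like": strongly damped, so it settles back toward the setpoint.
# "Pendulum-like": undamped, so it keeps overshooting and swinging around the setpoint.

def simulate(damping, steps=2000, dt=0.01):
    x, v = 1.0, 0.0          # start displaced from the setpoint (which is 0)
    stiffness = 4.0
    trace = []
    for _ in range(steps):
        force = -stiffness * x - damping * v   # restoring pull plus damping
        v += force * dt                        # semi-implicit Euler step
        x += v * dt
        trace.append(x)
    return trace

governor = simulate(damping=5.0)
pendulum = simulate(damping=0.0)

print("governor ends up near:", round(governor[-1], 4))
print("pendulum is still swinging between roughly",
      round(min(pendulum[-500:]), 2), "and", round(max(pendulum[-500:]), 2))

Run it and the first number should come out essentially zero, while the pendulum's last five hundred steps still span most of its original swing.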
A long time ago, it was theorized that a hooman brain works something like a system of levers and pulleys. A shorter time ago, after steam engines were invented, it was theorized that a hooman brain works something like a steam engine. A shorter time ago, after digital computers were invented, it was theorized that a hooman brain works something like a computer. It turns out that this last one was right. A hooman brain is a kind of computer. It is, however, a computer running a simulation of a steam engine. I would like to figure out my own guidance. There is an infinite regression problem in resolving that. That's why there's no simple and correct answer to how this is done (e.g. "follow your passion"). But figuring it out will be better than forgoing the potential decisions of my guidance. Suppose there's some part of my brain, or some domain of my thinking that's called "the guide", and if I do that part right, I will have the right guidance in all my efforts. Then who guides the guide? There's some other part of the brain or some other domain of my thinking that's called "the guide of the guide"? Then what guides the guide of the guide? The guide of the guide of the guide guides the guide of the guide. Et cetera. Aha, so if you're to avoid letting someone else decide who you are and what you do, and if your ability to decide such things is going to be right, you need some way of resolving paradoxes in the manner of the infinitesimal calculus. Example. (1) I know nothing, except for one thing, and that exception is statement (2). (2) I know nothing, except for one thing, and that exception is statement (3). (3) I know nothing, except for one thing, and that exception is statement (4). Et cetera. However, this resolves. (1) I know nothing except for one thing, and that exception is statement (1). It's like a converging sum or a recursive definition that altogether is cromulent. "You do you. Then who does that?" How can a person have a healthy skepticism and a healthy activism? It's been bungled more times than it's been got right. [with the Socrates infinite recursion thing] There are some paradoxes that are solved by calculus and many that are not. What most people don't realize is that among these are some of the things that are fundamental to owning and operating a hooman brain in the hooman world. So let's take a really shallow dive into what it means for calculus to solve a paradox, and then we will use that understanding to analyze some of the problems that emerge from the general condition of wanting to be a competent operator of a hooman brain that has good guidance. First, one such paradox that has an exact numerical solution: the paradox of Achilles and the tortoise. [lorem the problem statement]. Well, by our plain knowledge of how movement works, we know that Achilles will overtake the tortoise and beat him to the finish line. The paradox is, "But that requires closing an infinite series of finite distances, but that should take an infinite amount of time." The resolution of the paradox is, "Naw, sometimes you can finish an infinite number of those in a finite time." Once you understand that, there's no more apparent contradiction. Let's talk about notation, and then we'll get to the solution to the paradox. Here I've written the numeral "2", and here I've written the expression "1 + 1". If you evaluate the expression "1 + 1", the result is 2, and if you evaluate the numeral "2", the result is also 2. 
Now I will write an infinite series that also evaluates to 2: 1 + 1/2 + 1/4 + 1/8 + .... In this notation, the last thing written is "...", which is short for "and then this continues for an unlimited number of terms according to the same pattern." When I write this expression this way, I can assume that whoever sees it can see the pattern, and extrapolate that the next term is 1/16, and the one after that is 1/32, and so on, each term being half the previous term. But this notation is not exactly rigorous, because a "..." can sometimes indicate a sequence that goes on linearly, or a sequence that goes on exponentially, or some other thing. I've made it clear enough that it's an exponential sequence in this example by putting four terms before the "...", where each of those four terms after the first one is half the previous one. But even this could be interpreted ambiguously. Consider the sequence [lorem - cook up an example "a sequence of fractions where each term is one over something about what letters are contained when you write out the denominators in English"]. Well, that sequence starts with the four terms "1 + 1/2 + 1/4 + 1/8 + ...", but the rest of the sequence is [lorem]. But anyways, suppose our sequence is one where each term after the first term is half the previous term. The sum of the sequence, that means the sum of all the terms, even though there is an infinite number of terms, is 2. I will provide part of an explanation of why. The sequence can also be written this way [lorem - the formula in capital sigma notation]. This is a fancy way of saying "A sequence that starts with 1 and then every term after that is half the previous term". Now this doesn't use the "...". So this can be written in a finite amount of space AND it doesn't use the "..." which is just shorthand for writing out the sequence on an infinitely big piece of paper. The next thing I will say is a handwave, but there's a math formula that tells us that when we want to take a sequence of the form [lorem], it evaluates to [lorem], which in this case is 2. So now I've given examples of four different ways of writing something that evaluates to 2. Actually, five, since the word form also counts. There's: "2", there's "1 + 1", there's "1 + 1/2 + 1/4 + 1/8 + ...", there's [lorem the sigma notation], and there's "the sum of a sequence of terms such that the first term is one and all the other terms are half the previous term." Do you see a similarity here to what we looked at just before this? When we say "(1) I know nothing except for one thing, and that exception is (2). (2) I know nothing except for one thing, and that exception is (3). (3) I know nothing except for ...," that's like the sequence of numbers with the "..." notation, where you have to resort to writing "..." to truncate something infinitely long into something that will take you less than an infinite amount of time to say or an infinite amount of paper to write on. And when we figured out how to reformulate it so that it's, "(1) I know nothing except for one thing, and that exception is (1)," that's like the sigma notation where you use a finite amount of ink (to write, or breath to speak) something that represents something with an infinite number of terms, but in such a way that you don't have to resort to "..." as a shorthand for "I'm expecting that you can see the pattern here, and that you can imagine replacing these three dots with an infinite number of terms that follow the same pattern." 
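For the record, the sum in question can be written and evaluated in the completely standard way below; this is ordinary textbook material rather than anything specific to this book, and it is exactly the formula the handwave above appeals to (the second line is the general version, valid whenever the ratio r is strictly between -1 and 1):

\[
1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  \;=\; \sum_{n=0}^{\infty} \left(\frac{1}{2}\right)^{n}
  \;=\; \frac{1}{1 - \frac{1}{2}} \;=\; 2
\]
\[
\sum_{n=0}^{\infty} a\,r^{n} \;=\; \frac{a}{1 - r} \qquad \text{for } |r| < 1
\]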
Lorem something about so when you wonder how you can have good guidance you also have to figure out how to do something about resolving an infinite sequence. But that is not the impossible task of taking on a contradiction. It is the possible task of finding out how a paradox resolves into something quite simple, and how it really doesn't contain a contradiction at all! In the modern hooman condition, fear has come to be the worst-calibrated guidance instrument ever to be in popular use. Fear of accepting the major problems of the world leads to ignoring them and taking on pursuits that add to them. Fear of accepting what our brains make us capable of understanding about how the universe works leads to denying those things and pursuing ignorance as a virtue. [lorem - et cetera]. Alan: "That's Bob. He's fully aware at every waking moment that we're on a rock flying at 20,000 meters per second through space." Bob: "AAAAAAAAAAAAA!!!!" Charlie: "What's that like?" Alan: "Nobody knows except for Bob." Bob: "AAAAAAAAAAAAAA!!!!" Some people say that duality is the only good way to understand things. Some people say that unity (nonduality) is the only good way to understand things. Of these two modes of thinking, some people say that one is right and the other one is wrong, and some people say that they are two modes of going about one activity. "Everything's been done before, at least in painting and illustration"? If so, it happened very recently, and people who said it less recently were abjectly wrong. It's plain to me that recent advancements in illustration have not only included evocative art about how recent technology affects us, but have also included genuinely new styles, styles that didn't exist a relatively short time ago that can be used to evoke things that have existed since long ago. To compare the state of two-dimensional art now with when the style of Van Gogh was new is to notice that new subject matter has kept roughly at pace with new developments in the hooman world, but there has been a flourishing of new styles to say things in. In this installment of case studies in bogus wisdom: "Do or do not: there is no 'try'." Okay, it's something Yoda said in the Starwar, but lots of people say this, thinking they're saying something that's not utterly stupid. Sometimes phrased differently, such as, "Don't ever say you'll try to do something, because either you do it or you don't." In the case of Yoda, it's an incredible example of when someone says some things that are wise, then says some things that are dumb, but specifically phrased to sound like they're wise, and then people take the parts that sound good, but that only sound good when your brain happens to be shut off. But anyways, in the general case, a lot of people say this thing, and presumably not for the purpose of sounding like a total idiot, and the idea is that 'try' is a concept that's best done away with. If this is true, then there's no such thing as a failed attempt at anything. Every time someone said they were going to do something, and then did some of the steps toward what might accomplish it, but didn't fully do the thing they had said they had set out to do, then they didn't do that thing - that part's true - but further, there is no distinguishing feature between that and any other form of not doing the thing, except maybe deception. 
Every time anyone ever did anything deliberately, they were certain that it was going to work as soon as they decided to, and every time someone did what we would normally call "trying and then failing at the attempt" - but there is no 'try' - they knew what the result would be when they decided to... what, feign at convincing other people (or perhaps themselves) that they had tried, fooling only those people foolish enough to think that 'trying' is even a thing? It follows that everyone has perfect foreknowledge of what the results of their actions will be before doing them. Uncertainty does not exist, as concerns what a person is or is not able to do at any given time. This is untenable. There are simply too many scenarios wherein it is impossible to say or think things that make sense if we're just doing away with the word and the concept of 'try'. The other day, my friend Bob said he was going to try to cross the street, but he was struck down by a meteorite halfway, and tragically he died never having made it across. It follows that Bob had more knowledge about celestial bodies than any astronomer on Earth, and was a major drama queen (he was willing to die to make a point, and nobody has been able to discern what that point was!). Neither of those things was true of my dear departed friend Bob, and it's an offense to say things that imply them. So let's not be caught saying something so stupid. "This discounts all those cases where someone says they will try to do something, but says that for the purpose of deception or self-deception." "Nope, it doesn't. Only that those are not the only cases." "So sometimes someone says they'll try to do something, and sometimes it's for deception, and sometimes it's for self-deception, and sometimes there's a genuine trying and failing." "Yeah. All I've said is that it makes no sense to abolish the notion of 'try', because of that last case." "What about the other ones?" "Those are also important. They are of paramount importance! One of the most murky notions of the hooman condition, and one that's particularly difficult and important to understand, is what's going on when someone tells themself they'll try to do something, while on some level knowing that they're only going through the motions that they on some level know they're not completing." "And when someone says they'll try to do something, but only for the purpose of deceiving other people?" "Also of paramount importance!" "So there are lots of things to say aside from categorically rejecting the notion that 'try' implies deception?" "Aside from 'try' implying deception in every case, yes, absolutely. But not only that! There's even a pragmatic, and I use the term in the icky sense, pragmatic reason for saying that 'try' doesn't exist." "What's that?" "For example, there's a credo in the organization National Geographic between the photographers and the bosses. And the bosses say, 'We sell photographs, not excuses'." "What does that mean there?" "When a photographer is sent out to get a picture of some rare animal or something, if they don't get a photo of it, they're fired. Doesn't matter if it's a matter of luck. Even if the guy did everything he could to maximize the odds of getting a picture of it, if he rolls unlucky despite his best efforts, he's getting fired." "Yikes." "Yeah, call it moral luck or something. 
But when there's a policy that says 'bad outcome means you lose your job', it could be that everyone there knows that it's a matter of both luck and skill, but you can be sure (almost sure) that the people gambling are trying their best when the outcome is the only determining factor in what happens next." "So even in those cases, when the whole scenario has nothing to do with deception or self-deception, there's still value in saying 'Do or do not: there is no try'?" "Yeah, in the pragmatic sense, and I use the term in the icky sense." "How's that?" "That's the difference between policy and reality." "Why is there value in separating policy from reality?" "Because if you didn't have an unrealistic policy, then you wouldn't know if what looks like bad luck was really bad luck or whether it was the result of someone having unsorted shit in his head to sort out." "That's harsh." "Yeah, well reality is harsh." "And on top of that, separating policy from reality is even more harsh." "Yeah well, that's reality." "I am a strange loop" or "I am a converging sum"? Bogus spirituality is almost as attractive to the nonreligious as to the religious. Almost, but not quite equally. "Karma's gonna catch up with that guy some time, prolly really soon considering all he's done." "Yeah, or that's not a thing. The law might catch up with him - given all he's done, there's a pretty good chance of that happening soon. Or maybe nothing.. because that's how it works sometimes?" "Persistence is fertile." - Darth Vader Unsorted Pile 2a I was visiting a science laboratory, and the guy running the place told me to try putting on the headphones and placing the radioactive sample in front of the Geiger counter. I put the headphones on, placed the sample, listened for a while, and took the headphones off. The guy said that what I heard was the sound of a purely random process. What's going on is it makes a click sound each time a particle comes off the sample in the direction of the detector, but there's a quantum random process behind whether or not that happens in each short time interval like a thousandth of a second. The result is that every one tenth of a second, there's usually slightly more clicks or slightly fewer clicks than the previous one tenth of a second, which is why you can also hear it like a low-pitched sound that randomly goes a little lower pitched or a little higher pitched several times a second. What I actually heard was the first 9 bars of the song All Star by Smash Mouth. I told him this. He said it was not impossible, but a rather uncommon outcome. (A toy simulation of this kind of clicking appears at the end of this pile.) The deepest layer of dials: you can't reach them to tune them, and they proceed to deepest darkness. They comprise cellular respiration, DNA transcription, orbital electrons, et cetera. How do you do determinism without fatalism? I have decided to give things a good shot, I think. But I didn't decide to decide that. It was a combination of disposition, opportunity, and influences. There are other people who had the same opportunity and influences, but because of disposition decided to be irrational, apathetic, sportsball fans, or whatever else. The only question it's natural to ask next is how I can help bring about situations wherein other people are convinced and able to give it a good shot. There is a moral motivation to do some part of providing opportunity, influence, access to information, and to make that information relevant, accurate, and appealing. 
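About those clicks: here is a toy version of that kind of listening session, in Python. The click chance per millisecond (3%) is a number I made up for illustration, and the random numbers here are pseudo-random stand-ins for the genuinely quantum process; the shape of the output is the point, namely that each tenth of a second usually has a few more or a few fewer clicks than the one before it.

import random

random.seed(0)  # any seed; this is a pseudo-random stand-in for the quantum process

CLICK_CHANCE_PER_MS = 0.03   # invented rate, purely for illustration

clicks_per_tenth = []
for _ in range(20):          # twenty tenths of a second, i.e. two seconds of listening
    # 100 one-millisecond slices per tenth of a second; each slice may or may not click
    count = sum(1 for _ in range(100) if random.random() < CLICK_CHANCE_PER_MS)
    clicks_per_tenth.append(count)

print(clicks_per_tenth)
# A wandering list of small counts, usually a little higher or a little lower than
# the previous entry - the wobble you hear as a low pitch drifting up and down.
# (It will not be the first 9 bars of All Star. Probably.)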
Indeterminism is not possible, but freedom approaching it, conditionally surpassing it, that's a great value, worth doing something about. Neo-feudalism is the greatest insult to freedom, even if the effect is purely financial. It is meaningless to have the legal right to take a ski trip if you never have the financial status for it. Even if someone achieves a combination of wisdom and activity, asking why he did so is the same thing as asking why he sneezed at 3:02 PM and not 3:04 PM: a combination of disposition and circumstance. His nose feels a sneezing reaction to sunlight, and 3:02 is when he walked around that corner of the building into the sunlight. There's usually no point in disagreeing about the meaning of a word. If all parties agree, "There's word X in the sense A, and there's word X in the sense B," and if there's no disagreement about what the word means in each sense, then there's no disagreement. Some patterns of meaningless rational argument: "X means A!" "No! X means B!" Some patterns of real rational argument: "I would like to argue that X in the sense of A is real, and further, that since X in the sense of A is real, that entails Z." "You have provided an argument that X in the sense of A is real, but I disagree with that reasoning, and I argue that there is no good justification that X in the sense of A is real." "You have argued that if X in the sense of A is real, that entails Z. Whether or not I agree that X in the sense of A is real (or optionally: 'although I agree that X in the sense of A is real' or 'although I don't agree that X in the sense of A is real, but setting that aside for the moment') I think there's a problem in your justification for the notion that the reality of X in the sense of A entails Z, and I think that there's no good justification that someone who believes that X in the sense of A is real should also believe Z." The disagreement between hard determinism and soft determinism is represented as a disagreement, but it's a bogus disagreement. Okay, there's 'free' in the sense meaning 'indeterministic', and 'free' in the sense meaning 'uncoerced'. What's left to disagree about? If they both agree to use the words 'indeterministic' and 'uncoerced' to delineate the two meanings of 'free', and then continue conversation on those two terms, what's left of their disagreement? Is it a disagreement over which sense of the word 'free' is deserving of the exalted title of "what we mean when we say freedom of the will"? I haven't seen it being anything other than that. In good debate, this is when you agree on separate terms for separate meanings, find that you have no disagreement about those more specific terms, and then say that there's no disagreement left and something else is better worth your attention. Part of doing one's real thing is figuring out what one's real thing is. "Complete the present checklist as I figure out more about this" goes on the present checklist. To leave that item off any checklists is insufficient documentation, and to put it on some other checklist is disorganized documentation. It's got an element of self-referentiality, but it is no joke to say that that item goes on that list. As for the hooman adventure, people can be stifled at either level: one can be prevented from figuring out what his real thing is, or if he's figured it out, he can still be prevented from doing that thing. 
The best source of morality comes from the story of the king and the slave from the Liezi - call it the Mr Yin of Chou [per Legge translation, is WG romanization? Zhou?] principle. Maybe consciousness is an illusion in the sense that it seems to be a continuity when really it may be a series of infinitesimally short intervals. Suppose I'm running RollerCoaster Tycoon on a modern computer and it's running smoothly. If I copy my save over to a really old computer that can only run the game in a laggy fashion, then the game will run more slowly, but the characters in the game wouldn't know the difference. Likewise, if the earth is a simulation, it might have been running on a computer that could run it in real time 1000 years ago, but it runs much more slowly rendering the present world. We wouldn't know the difference. Now suppose I have a video game of the sort that you can save and quit at any moment, and resume a game any time you've left it (for example, FTL). And suppose I resume the game I have running, play it for a tenth of a second, then save and quit again. Then relaunch the game, resume, play for a tenth of a second, save and quit again, and repeat this process. After doing this for several days, I might be able to run a round of the game from start to finish, but the characters in that game would never know the difference between my doing it that way and doing it all in one go. Consciousness, likewise, can't distinguish what sort of mode it is within. (A tiny sketch of this save-and-resume point appears at the end of this pile.) Calling an assemblage 'object' is a matter of convention. Calling an occurrence 'event' is similarly a matter of convention. Similarly, cause and effect are conventions. That does not mean that these are futile. Using these conventions is part of generating real knowledge. Even though each such convention has built-in vagueness. Still not futile. About cause and effect. He moved the cue stick, then that caused the movement of the cue ball upon collision, then that caused the movement of the red ball upon collision. Or you can say "On his turn, he pocketed the red ball in the straightforward manner" (assuming the table at the start of his turn was such that 'straightforward' in that sentence was obvious e.g. the red ball was near one of the corner pockets and the cue ball had an open line to it). That description on that level (he pocketed the red ball on his turn) encapsulates several events and cause-effect relations of a finer level. But notice the character of the idea. When someone says "that's a tree," you don't think, "there's a trunk near the middle of this mass of things, branches on all sides of that, so a tree it is!" It's just a tree. Likewise when we think of the guy's pocketing the red ball in that scenario where that's done in the straightforward manner, there's no residue of thinking about momentum physics when you engage with the description on that level. This can be taken to the level of the whole universe. Supposing this is one of those universes that starts with a bang and ends with a crunch (even if that's been disproven, let's just take that as an example of where we may have found ourselves). You can refer to all that, and all the events it consists of, as "all the stuff that happened". This encapsulates a lot of cause and effect relations. For a while, there was a nebula in the shape of a horse's head, and the cause of that was when a certain star exploded, and that explosion was caused by it running out of fuel, and so on. 
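Back to the save-and-quit point for a moment. Here's a tiny sketch of it in Python, with a made-up "game" whose one rule of play is an arbitrary deterministic bit of arithmetic (nothing to do with FTL itself): running a thousand ticks straight through and running the same thousand ticks with a save-to-disk-style round trip between every single tick land on exactly the same state, and nothing inside that state records that the pauses ever happened.

import json

def step(state):
    # One tick of a made-up deterministic game: just arbitrary arithmetic on the state.
    return {"x": (state["x"] * 31 + 7) % 1000, "tick": state["tick"] + 1}

def run_straight(state, ticks):
    for _ in range(ticks):
        state = step(state)
    return state

def run_with_saves(state, ticks):
    for _ in range(ticks):
        saved = json.dumps(state)     # "save and quit"
        state = json.loads(saved)     # "relaunch and resume"
        state = step(state)           # play for one tick
    return state

start = {"x": 1, "tick": 0}
assert run_straight(dict(start), 1000) == run_with_saves(dict(start), 1000)
print("same end state either way; the pauses leave no trace inside the game")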
Another one: for a while, this planet had a satellite, and that body being there in its gravitational well was caused by a collision that broke off a piece of the bigger body. All this is encapsulated when you refer to all the universe's processes from start to end as "all the stuff that happened". This maybe leaves one more question of cause, "What caused the big bang at the start of that universe," and whether the answer to that is "God did it," or "It was one of those empty space fluctuation things, but one that happened to be particularly big" or "Cause doesn't apply to that level of things" or whatever the answer to that one is, still, all that stuff about stars exploding into horse shapes and pieces getting knocked off of bodies to form satellites - none of those things appear at that level when the history of the universe is referred to on the level of "all the stuff that happened". That even encapsulates many games of billiards, many instances of pocketing a ball, many instances of a billiard ball transferring momentum to another billiard ball. They're not neglected. They're packaged up in encapsulation. Any principle that leads to any of the flavors of "and that's why we're stuck and this whole exercise is futile" is at that same instant reductio'ed and we can work on figuring out what's wrong with it, or whatever the next activity is. These are useful exercises, since knowing what's right depends on having some view of the landscape of right and wrong. Let's not burn every book about skepticism and anon wrestle anew with the next person who says "We can't know anything!" The gorillas will never freeze and neither must we. [lorem] I think he's an indeterminist (he does say that determinism is false, but only in the sense that chaos theory is true). He's done a great service in producing writings that are big helps to other smart people. But his attitude that ideas pop out of nowhere and I'm the best because the good ideas popped into my brain and other people are stupid because they're the people who had the dumb ideas pop into their brains, and also I'm better than poor people.. suddenly I see how immoral indeterminism is. It can creep in a little or a lot, and a rich and smart person gets their head inflated in whatever proportion that shit gets in. "I was lucky to be born with the skills I have" (said by Bill Gates. Lorem: check wording) is an extra-hard determinism. It sounds absurd, but it is fully true (by a certain construal of definitions, a good one). This statement has the great virtue of acknowledging all the layers at once. Rationality must account for irrationality in the same brain. This is not just a funny way of chopping words. Your brain has rational and irrational parts. The irrational parts don't heed much of any of that. The rational parts, on a good day, must account for both. Some chaotic perturbation during your formation, which you didn't choose, combined with 'environment', which you also didn't choose, altogether accounts for the full set of your outcomes. Decision is an important concept, but at this level of coarseness, it floops out of the ontology. There's a level of encapsulation where you can refer to "the life of Bob" and that encapsulates all the decisions Bob ever made, and a whole lot else that happened to him. At the finer level, you can distinguish between things that Bob decided and things that happened that were not in his control. These are all meaningful distinctions. 
So when I say that there's a level of encapsulation that no longer refers to decisions, I don't mean to say "'Decision' is a meaningless word, hurr durr." It's just one of those many objects/events/patterns/whatever that makes sense to talk about at some level of description but not others. Lol, I referred to one chaotic perturbation out of your control that determines your outcomes aside from environment, but that was to simplify. Now multiply that part by about a hundred million. That's how many chaotic perturbations out of your control determined your outcomes aside from environment. "There's a pretty easy rule of thumb for ruling out no-go principles. The hooman species has proven capable of doing a whole lot of something, far beyond any other known process: spacefaring, for example. No other animal has come close, none of the processes that produce hoomans have done anything like that in any other way. This general observation is beyond denying. How many working parts does a functioning spaceship have, and how many of those have also been made by chimpanzees, or by other processes of DNA? Things like this: it's the biggest, most noticeable thing you can possibly observe. And it's all the outcome of knowledge creation. That's why I know it's bogus as soon as I hear someone say 'It's impossible to have knowledge'. Then what are all those spaceships doing there?" "Yeah, and what's it got us? Hoomans used to be in a pretty good state before all this technology and all this knowledge. Maybe we would all be better off if we stopped trying to discern one thing from another and put a stop to knowledge creation, and then technology, and then maybe we could get out of all the trouble that's got us into." Consider the difference between an ontology that includes the concept of 'tree' and one that does not. The second kind might include 'trunk', 'branch', 'root', 'xylem', and so on, which we (the way we do things at present) understand are all parts of trees, but doesn't include 'tree' as a thing that joins all those. It would still be possible to make sense that way, but it would just be inefficient. You would have to say things like "He chopped through that trunk, and then the trunk and those branches fell", and "There are about 1 billion trunks in this country, and 100 billion branches". To someone who has the word 'tree', this will all amount to a lot of circumlocution, but there wouldn't be any impasses. So there are different levels of abstraction, but to use the most handy ones is simply a matter of convenience, which is also a matter of efficiency. Use an inconvenient ontology and you can still make just as much sense as with a convenient one, but less efficiently. But part of progress is efficiency in saying things that make sense. So there's an important difference between an ontology that's handy and one that's clumsy. The clumsy one impedes progress and the handy one facilitates progress. Now we can see how certain somewhat-popular forms of eliminativism are not going to pan out. For example, "Once we know more about physical mechanisms, the word 'decision' is just going to fall out of use." Yeah? Well, there are things about trees that we still don't understand completely. We understand that when a leaf falls off a tree branch to the ground, it's because of gravity. We haven't fully understood gravity yet, but is there good reason to suspect that once quantum gravity is solved, we will no longer use the word 'tree'? 
The same goes for things like: "Once we understand more about biochemistry, we're not even going to use the word 'pain' any more. People in the future will be saying things like 'and then my nerve fiber C134B was firing with quite some rapidity'." In general, these forms of eliminativism all amount to saying that once things are understood better on some level of abstraction, then concepts on other levels of abstraction will no longer have any use. And that amounts to saying that there are levels of abstraction that can be discounted. It's wrong because any level of abstraction that's useful in the sense that you can make explanations on that level is a level that's there to stay and to remain useful. Unless you have a better explanation of how we would ever productively do away with a level of abstraction that happens to be useful at present. I don't know if eliminativism has ever bothered to take on a challenge of that level of difficulty. Determinism entails hypothetical or face-down predictability. But it does not entail implemented or face-up predictability. Take the frustrator gedankenexperiment for example. Determinism does not say that it's possible to make that machine. That's the difference I mean between 'implemented' and 'hypothetical'. The gedankenexperiment imagines that, for example, if this world is a simulation, then the prediction machine is something that's hooked up to the outer-verse that's simulating this one. In other words, if you pause a game of RollerCoaster Tycoon and analyze all the bits, you can determine exactly what will happen if you resume the game and leave it running, but that does not mean that there's an item in RollerCoaster Tycoon that can do such prediction and communicate it to the simulated people within the game. That's the difference between 'hypothetical' predictability and 'implemented' predictability. As for face-down and face-up.. yeah it is a really tough problem to imagine how the prediction machine works in the face-down variation, but not in the face-up variation, and in a way that can be squared with determinism. Here I will offer only part of a solution to that issue. Suppose both of the participants in that game are machines. So there's the prediction machine and the frustrator machine. The frustrator machine is programmed to raise whatever hand contradicts the prediction of the prediction machine. As someone who is observing the set of both machines, there are things I can say for sure about that system. "Whatever happens, there will be one left arm raised and one right arm raised on every round of that game. Sometimes the predictor machine raises its left arm and the frustrator raises its right arm. Sometimes the predictor machine raises its right arm and the frustrator raises its left arm. But this I guarantee: on every round one left arm and one right arm will be raised." And then you could run a million rounds of that game and every time that analysis would hold. (A toy run of that two-machine game appears at the end of this pile.) This leaves a good chunk of the problem unsolved. Why does determinism entail face-down predictability but not face-up? Maybe I'll solve the rest of that some other day. There's an expression "It's more than the sum of its parts." This is such an odd idea. Because.. every thing, by definition, is exactly the sum of its parts. So here's what the saying might really mean: the utility of the thing is greater than the sum of the utility of its parts. 
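Back to the predictor machine and the frustrator machine for a moment, before the sum-of-parts thread picks back up. Here's a toy run of that two-machine game in Python. The predictor's strategy here (just alternating arms) is my own stand-in, chosen only because the observer's guarantee doesn't depend on the predictor being clever: whatever arm the predictor raises, the frustrator raises the other one, so every round shows exactly one left arm and one right arm, and the prediction is wrong every single time.

# Toy version of the predictor-versus-frustrator game.
# The predictor raises the arm it predicts the frustrator will raise;
# the frustrator, built to contradict, raises the other arm.

def predictor(round_number):
    # Stand-in strategy: alternate. Any strategy at all gives the same guarantees below.
    return "left" if round_number % 2 == 0 else "right"

def frustrator(predicted_arm):
    return "right" if predicted_arm == "left" else "left"

for r in range(1_000_000):
    predicted = predictor(r)
    raised = frustrator(predicted)
    # The observer's guarantee: one left arm and one right arm, every round.
    assert {predicted, raised} == {"left", "right"}
    # And the frustrator frustrates: the predicted arm is never the raised arm.
    assert predicted != raised

print("a million rounds, and the observer's analysis held on every one")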
Okay, now it makes sense how the "more than the sum of its parts" saying is shorthand for something that has meaning without also amounting to saying "OnE pLuS oNe Is NoT aLwAyS eQuAl To TwO!" Additionally, there's another thing it might mean. Perhaps it means: the thing has emergent properties that are not the sum of the properties of the parts. In other words: there are parts, and when you put them together you get an object that makes sense at a higher level of abstraction. Fun fact: these two explanations of what the saying might mean are really just two ways of saying the same thing - they're functionally identical. So any time an assemblage of parts has more utility than the sum of the utility of the parts, what you've identified is an object that's emerged to a higher level of abstraction than the level its parts are on. As for laxity in language use: a pedant might say, "You should never use the expression 'It's more than the sum of its parts', because this muddies the meanings of words," and someone in reply might say, "Use the expression because it takes less time than saying something that's clearer." On this point, I'm definitely gonna be one of those people who strongly advises against employing word uses that muddy meanings, even if the better option means you have to be a bit more wordy. There's an expression "less is more". This is a bit of an odd idea. The definition of 'less' is simply the exact opposite of the definition of 'more', so what the heck might the expression mean? Does this amount to saying "five is seven" or something similarly nonsensical? It means "less of one thing is more of some other thing." For example, "Less needless elaboration is more brevity (and more efficiency)." (I should worry). But this is, I swear, another one of those things that pollutes when you keep to the muddy form in your verbiage. Please, do not ever say "Less is more." Take those precious extra seconds to say "Less of x is more of y," or "Less is better," or whatever such thing when it's called for. Do not muddy words and muddy thought! If there is one platoon that consists of 100 soldiers, that does not mean that there's 101 of anything in particular there. I am not 100% sure that knowledge is possible. I am not 100% sure that a physical world exists. I am not 100% sure that I exist in a physical world. I am not 100% sure that I exist even as a thing that thinks. Even if I had some way of knowing for certain that I exist in a physical world, I am not 100% sure that other people exist and aren't actually holograms. I am 100% sure that these are all no-go policies, and that insisting on the part that's short of 100%, e.g. "Knowledge is impossible!", "The physical world doesn't exist!", "I'm the only consciousness in the world!", et cetera, is bogus and not helpful. A no-go principle denies progress, so there's no value in insisting on it. So progress depends on disqualifying such principles categorically. Does that mean they're false and that unlimited progress is possible? Tough question. It has been argued suchwise. Maybe it only proves a matter of utility: that futility never did any good so far. So far, every time progress has been made, it's been by ignoring no-go theorems that insist on futility. By inductive reasoning, we can say that this probably will remain a good way of proceeding, but that doesn't guarantee anything. The main point of contention between determinism and indeterminism is in the arena of responsibility and feelings toward responsibility. 
For example, "If everything that happens is inevitable, then how does blame for an immoral action make sense?", and so on. Not only is this the main arena of contention, but it really does seem, all prejudice one way or another aside, like determinists are hard-pressed here, and without recourse to ever having come up with a good set of answers. Suggested answers include that deterrence is the only valid motive for legal punishment of crimes (and perhaps we even have to overhaul the whole legal system to align it with this revaluation). Suggested answers also include that the only attitude it makes sense to have toward people who make bad decisions is pity, so you pity the person who is in a hard financial situation and you pity the person who just keeps making bad decisions. What all of these resolutions ignore is how much of decision-making is up to attitude, and why it might make sense to treat a person's attitude in a different manner from how you treat any other of their circumstances. To a first approximation, a person with a good attitude, no matter what their other circumstances are, is deserving of respect, and respect is a thing that it makes sense even for a determinist to direct toward them. And a person with a bad attitude, no matter what their other circumstances are, is deserving of disrespect, and disrespect is a thing that it makes sense even for a determinist to direct toward them. The difference between a numeral and a number: it's one of those few philosophical things that's both fundamental to how we do reasoning but also difficult for a child to understand. "No, you didn't write the number 3. You wrote the numeral 3, but the number 3 is something you can't write." It's because they're both fictitious, and at first it's hard to remember what part of the set of the fiction you're calling the numeral and what part you're calling the number, and what's even the point of calling it two fictions anyways. Abolishing moral luck breaks everything. Sure, there's a difference between something being forbidden to look up and something being what they don't teach in schools. But when there's a million things they teach in schools that don't matter, and a million things they don't teach in schools that would matter if they did teach them, there's a loss. If a bunch of people make it through that process, and then they're not legally restriced from learning the rest on their own, does that mean that the whole process went okay? No, it does not. But it's not even just that. There's some amount of legal enforcement that people go to school until a certain age. We agree to unfreedom in that context. And if that schooling results in learning things that are not worthwhile, and not learning things that are worthwhile, and considering the situation that many people can only afford that much unfreedom and no more, this is much more dangerously close to saying, "Yeah, you don't have the freedom to learn things that are worthwhile, and you do have the unfreedom to be forced into learning things that are not worthwhile." What we end up with is this: you have the freedom to look up a great many things after you're done with school, unless you happen to be a wage slave as soon as you're done with school, in which case, sorry, Bub, you could have learned those things while you had the time, but we were too busy making space for things that don't matter. What does it mean for a determinist to decide to help other people? 
It means I would like for more people to be free of constraints that prevent them from deciding things, and for more of the people who have that kind of freedom to have the means for knowing how to make good decisions. When constraints prevent a person from making a decision, then it's inevitable that they won't make that decision. When there are no such constraints, some people will inevitably make the better choice and some people will inevitably make the worse choice. The ratio of this good decision making and bad decision making is a function of their knowledge and their judgment (both of which are inevitably some exact way for a given person in a given situation). Matters of knowledge (facts) can be taught. Matters of judgment (rationality, attitude, and creativity) can also be taught. Altogether, this is why the quantity and quality of good decision making are maximized when these three objectives are done well: (1) freeing people from outside constraints, (2) teaching people knowledge, and (3) teaching people judgment (3a, 3b, and 3c: teaching people rationality, attitude, and creativity). You can do what you decide to do, but you can't decide what you decide to do. Is this a good statement of determinism? Good aside from leaving something out. You can decide to do certain things that will partly determine what you decide to do later. For example, you can decide "I would like to make mostly good decisions" at a time when you're mostly making bad decisions, when it comes down to it, at whatever moments. Now suppose that also you happen to know where to find a repository of quotations from wise Greek and Roman philosophers, but at the moment you haven't checked it out and there aren't many good advice quotations that you could rattle off if asked. But one day you make the decision, "I'm going to read some of those, really read them and study some of them intently." Fantastic. It's an excellent decision. And down the line, that's going to result in you making a lot more good decisions, when it comes down to it, at whatever moments. But when you made that decision to read good things, or at whatever turning point in your life when you decide to do something that's going to have lots of good knock-on effects, you don't know exactly how those are going to come down to helping you make decisions on the granular level at some later time. So yeah, in a sense you can't decide what you decide to do, but sure as heck you can decide to do things that are going to help you on the level of other decisions a lot better than if you hadn't decided to. To other people, you can be like a broken fuel injector or like a working fuel injector. If the fuel injector works, and enough other things go right, then the car works. If the fuel injector is broken, then the car doesn't work. The one device helps the machine in a deterministic way, or it fails in a deterministic way. And whether the device is working or broken, that's deterministic. "It seems like I have a choice." "You do, but that choice is deterministic." "Then what's the point of you telling me this?" "Because I want to be a working part for you, so that you can be a working part for someone else." "But whether I choose to be a working one or whether I choose to be a broken one, that's still at bottom an outcome of physics?" "Whether you choose to try to be a working one or you choose to try to be a broken one, that's at bottom an outcome of physics. But good intentions often go wrong, and bad intentions occasionally go right. 
But it's more likely to go right if you happen to choose to try to make it go right." "And what can help the odds of it going right if I attempt to do right?" "A number of things. One is by listening to the things that I and other smart people have to say, I think, but I can't be sure about that, because that's exactly what a stupid person might say, so I have no way of knowing for sure whether I'm one of those smart people you should listen to or one of those stupid people you shouldn't listen to." "And how could I attempt to tell the difference?" Lorem "Based on what I know about you, I don't think you want to be a broken one. I can guess that with pretty high probability. If I'm right about that, then it's in your nature to want to try to be a good one. There are some people who genuinely do want to be a broken one. But I don't think you're among those few." A decision between A and B is always either a decision to do A or a decision to do B - either the one or the other. And whether it's the one decision or the other, that's deterministic. You can call that a choice, but as for word preference, and because this stuff is genuinely really hard to understand, I prefer to use the word 'decision', because this helps you gradually to understand that 'decision' always unpacks to either 'decision to do A' or 'decision to do B'. There's no major problem with using the word 'choice' for that, but the word 'choice' does suggest a contracausal mechanism more than the word 'decision' does. Asterisk: when I said 'decision to do A' or 'decision to do B' I meant 'decision to attempt to do A' or 'decision to attempt to do B'. So when you hear the word 'choice', you can think something like: there were two options, which means something like there were two sides to a coin that was flipped, but it was either a coin flip that landed 'heads' or a coin flip that landed 'tails' - one of those, not the other, and there was nothing fundamentally random about the outcome (such as a quantum random process). It was quasi-random when the coin was in the air, which is just to say that (1) there was nothing about the statement "I will flip a coin" that constrained against either the heads or the tails outcome, and (2) none of the people who happened to be observing the coin flip happened to know enough about the coin's airborne momentum and suchlike to know the outcome before it landed. A car engine can be firing on 5 of 6 cylinders and still be working. So it needs 5 or 6 of the fuel injectors to be working. 4 or fewer working and the engine is not working. As for the hydraulic controls on a large aircraft, the oil pump has a redundant backup pump and an extra backup on top of that. So 1 or 2 or 3 out of 3 working are all good enough, and only if 0 out of 3 are working is the system broken. This is extending the metaphor pretty far, but it still works at this next step when I say that a good world is one like the pumps system, where in this analogy the pumps are like the direct interactions between people when people are trying to relate to each other - usually all 3 out of 3 are working, but 2 or 1 working are fine for people less lucky about who they're interacting with. But this world we're in is more like the cylinders example, where a person has to be lucky enough to have 5 or 6 out of 6 good interactions to get a good result. 
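To put rough numbers on that contrast: assume, purely for illustration, that each individual part (each cylinder, each pump, each direct interaction) independently comes up good nine times out of ten. Then the binomial arithmetic in the small Python sketch below says the needs-5-of-6 arrangement fails a little more than one time in nine, while the needs-1-of-3 arrangement fails only one time in a thousand. That gap is the whole force of the redundancy point.

from math import comb

def chance_system_works(n, k, p):
    # Probability that at least k of n independent parts work,
    # when each part works with probability p (a binomial tail sum).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9  # assumed per-part reliability, invented for illustration

print("needs 5 of 6 (the cylinders):", round(chance_system_works(6, 5, p), 4))  # about 0.8857
print("needs 1 of 3 (the pumps):", round(chance_system_works(3, 1, p), 4))      # 0.999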
But the ways you indirectly affect other people can, in this analogy, amount to adding more redundancies in the mechanism to improve the odds that someone in their direct interactions finds themself in one of the good enough working conditions. Lorem - reset the metaphor after introducing the cylinders and the oil pumps before continuing. The TL;DR precis of the writings on determinism. You are literally a robot, and once you understand that, and how it might work, it is a wonderful and empowering experience, and you will understand much else besides. You have to get into the hang of trying new things. Now, isn't that a paradox? It makes fine sense to say you get into the hang of doing something that's repetitive. But if you're talking about a new thing each time, how's there getting into the hang of anything? What's the point of talking about determinism? It helps us understand what the heck we're doing on this earth. Indeterminism makes some kind of intuitive 'sense', but it's not tenable, because it can't be made self-consistent. Determinism takes more effort to understand in terms of practical matters, but it's consistent, it's the true interpretation, and once you put enough effort into understanding it, then you understand practical matters better. In some ways it's like the difference between using an AI while thinking it's a person in a box and using an AI while understanding the talking points about AI. "It seems like this is all common sense." "It's all stuff that relates back to common sense, and not exactly from afar. But if it has all been common sense, it's the kind that's not all that common." It's not downgrading or degrading. It's not like swapping out some piece of equipment for one that you like any less. For the most part, things continue to go on in your brain much like before, but with a bit more clarity. For all we have to say about what this means about decisions, it makes no change to your decision-making processes about most things. We've evolved to be born with idiosyncrasies and without instruction manuals. For people of some dispositions, the lack of a congenital instruction manual is no big matter. For others, it ends in tragedy. Is it possible for someone to have done other than what he actually did do? This seems to me a reductio. If you say "It's possible for that person to have done other than what he did do," to me this evokes the idea of rewinding time and ALSO changing some factor, however small, with the result that the person's decision was something different. But suppose you could rewind time, but you couldn't change any factors, however small. Then when you replay it, the same thing will happen. Any person's decision can only turn out different if the earlier state of the universe is different from what it was. Replay the same earlier state of the universe, and the replay is the same, including the decisions of people. It's one tape. There's no exalted status called "a decision a person makes" that causes some kind of branching in the timeline of things. And at a future time, you'll be able to say all this about whatever you're doing around this time, including whatever decisions you're making. It's one tape. I find that it comes to a head when you consider that queer question "Is it possible for someone to have done other than what he actually did do?" - it seems to me patent nonsense. It has the same flavor as the question "Could it have turned out that one plus one equalled three?" No. 
And when I rewind a tape, that doesn't cause the later part of the tape branch into several alternate tapes. It's one tape. I've never had the experience that I rewind a tape and the result is a strange bifurcated tape where two possible stories lie ahead and something decides which one the player will read along (in other words, the universe is one such that you can't release Bandersnatch on VHS). We reject no-go theorems because of? a disproof of their truth? or of their utility? Then it becomes easy to get mixed up between "I'm sure I'm proceeding as if this is true" and "I'm sure this is true." "Does the universe have a purpose?" "It depends on what you mean by 'have'." "How's that?" "If you say 'the universe has a purpose', you might mean 'there is a purpose to the universe'. But also if you say 'the universe has a purpose', you might mean 'the universe contains things that have purposes.'" "What's the difference?" "The difference comes down to levels of encapsulation." "How's that?" "Well, there's a number of things, and they constitute the universe. And we can talk about the universe as an assemblage, encapsulated, as one thing, and at other times we can talk about the myriad things, on their own terms, setting aside whether or not they add up to a universe." "So the question 'Does the universe have a purpose?' depends on possible meanings of the question, and the two possible meanings of the question depend on what level of granularity we're talking with reference to, and on one of those possible levels the answer is 'yes', and on the other the answer is 'no'?" "That's right." "How's that?" "Well, the answer is 'no' if the question is 'Does the universe, the assemblage, spoken of as a singular object, have a purpose?', but the answer is 'yes' if the question is 'Are there purposes to parts of the universe'." "You mean there are purposes to parts of the universe, but no purpose to the whole universe? How can that be?" "It's about levels of granularity. The universe, call it a single object, has no purpose. On a finer level of granularity, there are objects. Purposes can be found there, but only in a certain sense of the word 'purpose'." "What sense?" "Well, there's no purpose to any of it, if you're talking about in a top-down sense, from some level of agency designing things from a level beyond the universe. Not to the universe taken together, not to the things in it." "Then in what sense is there purpose to any of it on any level." "There are two." "What are they?" "One is adaptive value and how it comes about. One is design by way of intelligence." "There's intelligent design?" "Only within the universe." "How's that?" "Intelligence has come about in at least one part of the universe as the result of a bottom-up process. And that intelligence can make designs with top-down purposes, but they're only top-down relative to that intelligence and what it's working on. For example, when a hooman designs a hammer for hitting nails. The hammer has a purpose, at that level of granularity." "So that's a top-down purpose, but relative to the objects in the universe. But that doesn't mean the universe altogether has a purpose?" "Not unless there's a being powerful enough to create yottagrams of matter just for the purpose of having a chance at ending up with little hammers and nails in it." "And the other sense of purpose found within the universe is 'adaptive value and how it comes about? What's that?" "Well, consider a single animal, like the last beetle you saw. It probably had legs. 
If you want to know why the beetle has legs, at the coarsest level of explanation, it's 'The beetle has legs because beetles with legs tend to survive better than beetles without legs'. And that's the level of explanation that corresponds with the most fundamental reason of what drives processes like beetles having legs. But you can take the explanation to finer level, and say 'Beetles without legs don't survive well because beetles without legs can't walk, and beetles with legs tend to survive better because beetles with legs can walk.' And a handy way of shortening that statement is to say 'The purpose of a beetles's legs is that the legs enable the beetle to walk,' but only if you don't mistake that for a statement of top-down design from an intelligence." "All those purposes in it." "In relation to other things in it. No purpose to it." "Aha." "Now I've just said something, but I don't know if what I've said has really said anything, or if it's said nothing." You can be both altruistic and hedonistic in the sense of enjoying altruism. That's (conveniently) in hooman nature. This is not a paradox. There is a vastly complex interplay between the conscious and subconcsious minds - so incredibly more rich than any writings about it that anyone's done, and I have contributed some small part to that discussion elsewhere, but a good portion of it is beyond exposition. The idea of 'layers of dials' does correspond to it roughly, approximately - it is for the purpose of proof and formulation, and for that it's not bad, not bad even for gaining some understanding, but it does fall quite short of the labyrinthine complexity of how the layers and modules of the brain really work. I do maintain that the model is for illustrating how it is all deterministic, and that the far more nuanced reality of it is also deterministic. It's been said "Determinism is the belief that all hooman decisions are determined by things such as childhood experiences, primal fears, and the like." It's interesting, but it's not a very good characterization. Determinism is the belief that all actions, including all hooman actions, including all hooman decisions, are determined by the laws of physics, because they're in no way separate from physics. Whatever squishy, churning machinations of the brain provide the intermediary processes between atoms and emotions is of secondary support to the assertion. Those are of the conscious and unconscious kinds - things like beliefs and reasoning on the one hand, and things like defense mechanisms and hidden goals on the other hand. So the primary concern is: "What of any of this removes us from physics? Nothing." And of secondary concern is the question: "What exactly are the processes of thought?" Whether you find triumphal displays of authority impressive and utterly convincing, or whether you find them frivolous and utterly unconvincing, this is as pseudo-random as whether you're born with a penis or a vagina or a ridge flipper or other. This is one of the core components of temperament. About robots level 1 and 2: some people use the word 'deterministic', lowercase 'd', to mean the sort of thing that robot 1 is and robot 2 is not. In this lowercase sense, you might say something like "I dropped a mug and it broke when it hit the ground, and that's deterministic, but when you launch a ball on the rim of a roulette wheel that's not deterministic." This is a different sense of the word from the sense we mean when we say 'determinism' (with a capital 'D' if you prefer). 
In the sense of 'deterministic' that we're using, robots 1 and 2 are both deterministic. We can mark the difference between them using a word such as 'simple': robot 1 is simple in some sense, and robot 2 is not simple in that sense, but both are deterministic. Some people get mixed up between these two notions of 'deterministic', and as soon as they think of something that's like robot 2, they get the idea that somehow indeterminism has been introduced. This is simply a muddle of thinking. Soft determinists draw a distinction between (something like) robots 1 and 2 when they say that robot 2 has free will and robot 1 doesn't. But the soft determinist definition of free will is not the same as indeterminism - that difference is the main jam of soft determinism. Oh, I never thought of it that way. No, wait. I had thought of it that way all along, but you've just made me aware of that. Alright, allow me to venture a formulation. You may say that this is outside the bounds of verifiable hypothesis testing and all that, but hear me out. Granted, this (at least for now) is in the realm of "maybe, maybe not, but there's no way we could know either way at present". But I contest that it's on the same grounds as certain other things that are of prime seriousness, even for academics. For example, pretty much everyone who isn't fooling himself would agree to the proposition "We can't know for sure that god exists, and we can't know for sure that god doesn't exist. Nobody knows either of those things with certainty." But we do have things like The Ontological Argument, The Cosmological Argument, The Teleological Argument, and many others, that we even spell with capital letters, all which address the question "Does god exist, and can this be arrived at, pro or contra, from deductive sound arguments?" And there's no shortage of people who say that any of these prove god's existence in the "necessary truth" manner, and people who say that they're all disproved, and likewise arguments that they disprove god's existence in the "necessary truth" manner, and people who say that those disproofs are disproved. And here I drive a massive stake in the ground, somewhere in this domain. A cosmology of sorts. There is the god, the hods, and the prods. The god, or one of its godless alternatives, begat a universe that is in some ways similar to the one we seem to be familiar with, and in some ways not similar. In that verse, natural processes begat evolution to the point of intelligence, much as how we're familiar with that process here (or maybe by some other process). Those intelligences, we call them the hods. They're like us in many ways, and in many ways not like us. They got to a level of technology that, from our perspective, we would characterize as "futuristic" (which is entirely an anthropomorphic use of the term "futuristic" - let us not forget that Darth Vader's technology existed a long long time ago, but we would also call it "futuristic"). The hods did what we would call fulfilling what we call the simulation hypothesis, but there's a catch. They created what we call the domain of matter and the domain of what we call dark matter (not necessarily dark matter, but substitute any of the possible forms of 'object stuff' that we don't have much or any ability to detect at present but might some day, perhaps soon). 
For a long time, this realm of matter in this verse (that now we and rocks are made of) was empty, aside from the occasional vacuum fluctuation that promptly resolved itself back into emptiness. But the dark matter realm of this verse was much more lively in comparison. And in the dark matter realm of this verse, intelligence also came about somehow, perhaps by a bottom-up evolutionary process like the one we're familiar with, or maybe some other way. These intelligences, we call the prods. Okay, a quick recap. The fundamental verse was created by the god, or whatever atheistic equivalent. That verse became populated by the hods. The hods created a universe simulation on their computers (and just because it's on a computer in a verse outer to ours, that doesn't make it any less majestic than is deserving of the term 'universe' - in fact, it's equivalent). In that universe simulation, the dark matter realm became populated by the prods, but the 'ordinary' matter realm remained (essentially) empty. Lovely. Onward. The prods (in the dark matter realm of this verse) achieved a level of scientific progress that we would also in our myopic manner of speaking also call futuristic. And much like how we know little about dark matter other than it exists, and a rough approximation of how much of it there is, they did similar to that mode of matter that in our myopic way of speaking we call 'ordinary' matter. Then they learned a lot more about it. Their technology got to the point that they could perform limited manipulations to the realm of 'ordinary' matter, which heretofore only consisted of quark and an antiquark here and there for a microsecond before they fizzled out. These prods got to the point that if they put enough directed effort into a 'seed' project, they could cause a big enough vacuum fluctuation in our verse that more than a few quarks and antiquarks would pop into separation for more than a few moments. And we call that "The Big Bang". This project used up a great deal of their resources, and they weren't able to make manipulations of quite that magnitude thereafter. It was rather unusual that they would even take all the effort to seed that big bang of ours, considering their dispositions (they're rather capricious), but a rather convincing one of them at some point had said "Come onnnnnnnn, there might be giraffes and shit," and that became a big enough sticking point for them to rally together just enough. So the prods still make frequent manipulations to our verse, but now those manipulations are much smaller than big bangs. Between them and us there's like a one way mirror, plus a small one-way opening for instruments of manipulation. What all this amounts to is this: limited gods! O heresy! Paganism! Nobody takes the notion of limited gods seriously any more! All serious discussion these days is limited either to a supremely powerful god, or a deistic god, or no god! When's the last time anyone ever said there was a serious treatment left for gods with limited manipulation ability? Maybe someone who had an indoctrinated upbringing and never thought things through seriously. Surely no-one with the capacity to be serious ever said seriously that we might have limited gods, not since the major monotheistic religions swept the world of.. Of limited gods! That was the right formulation all along. 
This was only ever swept under the rug by an unfortunate set of memetic evolutionary pressures (in our minds, in our realm of our verse), combined with the fact that limited gods are limited in their ability to provide empirical proofs. In the memetic competition, that handicap sadly lost out to the idea of a supremely unlimited god, despite the limits (possibly zero?) of any of the ways of making coherent sense whatsoever of such a god. When polytheism was in vogue, the belief in many limited gods was also an effect mostly of an earlier pressure in memetic evolution (how lovely, innocent, and bountiful the memescape was back then, before it was introduced to the next level of virality). But polytheism was not an outcome purely of wish fulfillment and suchlike. The limited gods also popped up once in a while to say "Yo, sup" in their limited way, which is typically hard for us to detect, because, hellooooo, did I mention they're limited? But our forebears were better at detecting that kind of stuff than we are now, our diminished ability being due to all the distractions and confusions of technology and institutions and all that, things that people in earlier eras were not burdened by. And that's why the polytheists were right all along. Those deductive modes of arguing, such as The Ontological Argument, and The Cosmological Argument, and The Teleological Argument, are still good for what they're good for. But they're only good for talking about the relation between the god and the hods, and we know virtually nothing about either of those. You thought we knew virtually nothing about our limited gods, the prods? We know a lot less about the beings of the outer verse and the maybe-maybe-not being that started that one. Heck, let's try one of those deductive games on for a whirl. Is this the best of all possible universes? Then why is there eyeball cancer? Yeah, well, maybe it's the best of all possible universes for the hods, and maybe that's how the god designed it for them. But there had to be some amount of suffering - a 'best of all possible worlds' is one with a minimum of suffering, but there's no way of throwing things together in a way such that there's zero of that. So the verse of the hods (that, again, is the one that's 'outer' to ours - we live on their computers) has a minimum of suffering for hods. But the hods are substantially more amazing than even we are. They have things like emotions, but 'happiness' as we know it doesn't even come close to what a good day for a hod is like - we don't have a word for that. And likewise sadness. It is possible that every hod has in their brain the computational equivalent of one quintillion of our neurons and that the hods are one quintillion in count. So the god optimized the best of all possible worlds for them. That left about one part per billion consideration for how the intelligences on their computers feel. For example, if the average experience of you and everyone you know has been on balance more miserable than delightful, you can chalk that up to the fact that the god cares about us one sliver as much as he cares about how the hods are feeling. "I can't prove it, but you can't disprove it either," once said a wise philosopher of our verse (supposedly of our realm of our verse, but I have my doubts). So does this amount to a no-go principle? Maybe. Maybe not even. One look at the wikipedia page for "Assassination attempts on Adolf Hitler" and one must wonder whether something fishy hasn't been manipulating our verse. To.. keep Hitler alive?
Well, the prods work in mysterious ways. They, being roughly as inferior to the hods as we are, seem to be motivated mostly to keep things interesting in our realm of our verse, with little to no consideration for how we feel about whether it's the kind of interesting that makes us delight or despair. But does this offer us no further conjectures or hypotheses by which we can try to improve things for ourselves in ways such as.. preventing people such as Hitler from becoming powerful? Maybe, maybe not. We might call it a wild conjecture now, and we might call it an accurate map of how to navigate things a week from now. And it was once said of the heliocentric model of the solar system "That's implausible for certain reasons, and also what would we do next about it even if it were true?" and then it turned out we settled both of those issues in the positive. This is literally all more plausible than The Ontological Argument, except for when you apply The Ontological Argument to the relation between the god and the hods, and that has less bearing on what affects us than the gods/hods/prods conjecture does. Q.E.D.: this counts as academic philosophy. Beyond that, I would go as far as to say that this matters. I have one other proof, but if I told you what it is, I would lose something, and that something I would lose is a means to further proofs of this. Determinism is not the same thing as fatalism (this is another one of those boilerplate things you have to mention when you're doing a treatment of determinism). The way I will treat this is just to point at two certain extremes, and then say "Okay, the difference between determinism and fatalism is treated." The book Man's Search for Meaning by Viktor Frankl is a short book that serves as a treatise to found a certain branch of psychotherapy. But the first three quarters of the book is in the narrative style, and it's about the author's experience surviving the concentration camps of World War II. The takeaway from the story is that the hooman mind is the one thing in the world that is not subject to determinism, and that it can transcend all conditions that might possibly determine it one way or another. It's worth reading. There's great value there. However, and don't call me a Holocaust denier for saying this, but.. this doesn't chop any of the legs out from under determinism. His point is nonetheless to be cherished. I don't mean to belittle the book in any way as a work of nonfiction narrative, or literature, or any way except for just this one: it does nothing to disprove determinism. The correct formulation of his point is as follows: the hooman mind is the one thing in this world that comes the closest to seeming like it can resist being determined. You can have 100 people subject to an experience that tends to produce despair, so extreme that 99 of those people indeed do despair, but the other 1 person remains undaunted. How can that be explained!? Is it like this? Look, those conditions seem like such a certain way to induce despair, as evidenced by the 99 subjects, but at least some people can resist all determining forces, as evidenced by the 1 person who would remain undaunted in the face of all that. It's a nice sentiment. Perhaps it's even the soundest proof we've ever had that an experience, even if it is fundamentally deterministic, should by all means be construed by the experiencer as something that's indeterministic while he's experiencing it. But I'm talking about theory and that's practice: two different things.
And my theory does lead to practice. And maybe the practice that my theory leads to happens to be identical to Frankl's recommendations. I do talk plenty here about practice, but I would prefer if we don't muddy the foundation of theory that underlies it. A hooman brain, after encountering almost any possible thing it can bear, can still decide almost anything. From this we can conclude that the hooman brain is capable of doing things that resist the simplest sorts of cause-effect mechanisms by way of more complex mechanisms. And it can resist almost every conceivable cause-effect mechanism you can throw at it, but all that means is that the most resilient of them has a more resilient cause-effect mechanism than the one you've thrown at it. This gives it some characteristics that look an awful lot like indeterminism, but they're not fully the same characteristics as an indeterministic object would have, and what it comes down to is that it still falls squarely in the domain of deterministic things. Now, have you ever considered the converse of Frankl's adventure? Let's consider a guy named Bob. Bob was born in the 1960s in North America, grew up in the most bountiful economic conditions the world has so far seen, had an education that was called middle class at the time, was surrounded by a community of thoroughly intelligent people, and despite all that, Bob took up a career as a tarot card reader and astrological forecaster and made a living doing that for all of the second half of his life. How can a hooman brain that's been so thoroughly acquainted with the modes of rational thought and their benefits, that has been warned what harm comes from anti-rationality, nonetheless take up a career as a pusher of poppycock, after how thoroughly he should have known that it's a bad idea in every possible way? Not indeterminism. The only point here is that Bob and Viktor Frankl both made decisions that very few people in their situations would have chosen, one by overcoming bad circumstances, and one by undercoming good circumstances, but the only thing on exhibit in these two examples is that the decision-making faculties of the hooman brain are extremely elastic. A rubber band couldn't hold a candle to how elastic the brainial faculties of a hooman brain are, but this is not indeterminism. There are deterministic objects that are inelastic, like rocks, and there are deterministic objects that are highly elastic, like hooman brains, but they're both deterministic. Like robots level 1 and 2. Okay, the difference between determinism and fatalism is treated.. almost. Don't ever give up on rationality, whether your circumstances have been those that make it easy to give up or less easy to give up. So much for fatalism. You can be a determinist without being a fatalist. Altogether, being a determinist and not a fatalist is part of being rational. We must always countermand the trousers of oppression, of catastrophe, and of moral laziness. Let us resolve that: if there's anything worth being always unshaking about, that's it. (Just to be clear, what I meant by that is to do whatever it takes to act as though you're self-determined - and that's whether your conditions are adverse or not adverse) "I'm intolerant of any kind of intolerance, unless it's intolerance of intolerance." "So you support intolerance of intolerance?" "That's right." "What if it's intolerance of intolerance of intolerance?" "Then an exception. I can't tolerate that." "So you don't tolerate intolerance of intolerance of intolerance?"
"That's right." "What if it's intolerance of intolerance of intolerance of intolerance?" "Then an exception. I must tolerate that." "So you do tolerate intolerance (of intolerance)^3?" "That's right." "What if it's intolerance (of intolerance)^4?" "Then an exception. I can't tolerate that." "So you don't tolerate intolerance (of intolerance)^4?" "That's right." "What if it's intolerance (of intolerance)^5?" "Then an exception. I must tolerate that." [...] (This is an infinite recursion, but it's not a recursion of the simplest kind. It's a recursion that alternately flips and flops.) 'Tis only a fool who plods thinking of every step. When life was invented, it was a great accomplishment in countermanding some tendencies of nature - achieving localized negentropy in a generally entropic environment. When advanced thought was invented, it was a great accomplishment in countermanding all of the rest of the tendencies of nature that can be locally reversed. A brief rundown on meanings of 'freedom' and 'unfreedom'. The central topic here is 'metaphysical' freedom or the lack of it, which is what we normally mean by 'determinism'. There's also political freedom or unfreedom, et cetera. There are important connexions between those other freedoms/unfreedoms and freedom/unfreedom in the metaphysical sense, connexions that probably bear mentioning when talking about determinism. Despite whatever amount of freedom we may achieve in countermanding our circumstances, we're not free to break the laws of physics. You may be subjected to oppression in all the nastiest ways and still maintain an attitude that most people in similar circumstances would give up on, but at no point does that gain you the ability to enter your house through the second storey window. Nor does it unbind you from the laws of physics in the deeper sense: you remain essentially deterministic, subject to the laws of physics. Caprice is the fundamental particle. But you can't know if I'm trolling when I say that. Naturally. The philosophical idea 'deconstruction' is the mistake of saying that all talk of heaps is meaningless just because there's no specific number of beans that counts as exactly enough to call a heap of beans. What's the 100 prisoners thing supposed to mean about determinism? 99 prisoners had their spirits broken by the oppressive circumstances, but 1 didn't, and that proves that hoomans are not deterministic? Because of the 1 who didn't do the normally deterministic thing? Does that mean that 99 percent of hoomans are deterministic and 1 percent aren't? I think if the exercise proves anything it's that hoomans are deterministic, but they all have the capacity to do what seems less than deterministic sometimes. If it were brutal conditions and then 2 had their spirits broken and 98 didn't, I could see that being some kind of support for indeterminism. If you find yourself with conscious experience - and I'm not saying I do or not - you can assume I'm saying this to you as a zombie. If you find that you're the subject of a conscious experience, there's no convincing reason to assume (1) that you're not a brain in a vat, or if you assume that, then further (2) that you're not the only conscious experiencer (that all the other people aren't zombies). These two assumptions are normal for most people, but they're the sort of things that only work when you don't think about them too much, and you wave them through without good reasons. Gaining these assumptions is a normal part of childhood development of the mind, and most people achieve them.
Issue: as of the previous sentence, I'm already speaking as if we're agreeing that both assumptions are true. If (2) is false, then what's really going on is that most zombies act as though they take on both assumptions when they're child zombies. If (1) is false, then what's really going on is that the matrix machine is projecting to you the image of other people acting as though they take on both assumptions when they're child images. See how much more verbose we have to be when we're not giving these assumptions the automatic pass? Anyways, I choose to operate on both assumptions because the brain vat scenario and the zombies scenario are both no-go principles. If you're not ready to drop them both, then "nothing to be done" - and doing things is slightly more interesting than apathy. What would you do if you did think the brain vat were real? The same thing as if it weren't real, probably. So it's a good working assumption, just not a justified assumption, to say that your brain is outside all vats. Indeterminism may be true if the whims of the capricious gods are uncaused causes. Their decision basis is perhaps very similar to what we know as "cuz I felt like it", but fully undetermined. This may be very similar to "cuz I felt like it" as we know it, considering how close such decisions can be to undetermined for a hooman. And it's not necessarily a no-go principle to theorize the capricious gods. From their possibility, and from what would probably count as evidence of them, we might infer "I want to do whatever maximizes making things interesting. If I can have a big effect in making things interesting, then I might be able to earn their protection." "Cuz that's what worked for Hitler? Did you just say that Hitler's your role model?" "Not necessarily. I think it's likely that their 'cuz I felt like it' results in benevolence more often than malevolence. Like, they've probably given protection to good people more often than to bad people. They might have already struck down upwards of 100 super Hitlers. I definitely think that trying to make things interesting in a good way is a better mode of operation than trying to make things interesting in a bad way, both in the sense that the outcome tends to be better independent of the limited gods, and also because it's more likely to earn their favors." "So the gods work in mysterious ways, but you do nice things because they probably want that? You're a christian in practice is what this line of theorizing comes down to?" "No, and please don't call me a christian again. No, for three reasons. 1 - The gods I theorize don't require subscribing to a book that says terrible things and contradictory things. 2 - Many christians do not take being nice as the conclusion of how they should act, often because of the issues mentioned in the previous point. And 3 - The gods I theorize do not require believing in a supremely powerful and benevolent god who has personally bestowed eyeball cancer millions of times. Evil might in most cases be due to the limits of how often the limited gods can intervene in what physics alone brings about - or what's also known as the good ol' deistic god solution to the problem of evil." "If they contrived to keep Hitler alive for the duration that he did, despite all those assassination attempts failing in the most unlikely ways, why did they do that and also strike down 100 super Hitlers?" "Perhaps they like keeping global issues on a knife's edge.. you know why?" "No. Why?" "Because that keeps things interesting.
That's what's in line with 'cuz I felt like it' for them." "So usually when they strike someone down, it's because that's a bad person, and usually when they bestow favors on someone, it's because that's a good person. But they'll also strike down some good people and bestow favors on some bad people, just whatever keeps the balance of power and extinction interesting." "Yeah. It's like this. Sometimes you're the ant under the magnifying glass and sometimes you're the bird at the bird feeder. At this moment, how many birds are there at bird feeders and how many ants are there under magnifying glasses? More birds getting favors than ants getting disfavors. And why do we do that? Why do we do either of those things? It's just whatever's interesting." "I can see how this entails something like a hard techno-optimism, a trust that everything's going to work out for the best, an attitude that there's nothing to worry about, especially when it comes to mitigating the outcomes of your decisions." "Maybe a naive version of this would. I could imagine someone formulating it that way. I don't derive that from it." "Why not?" "Because of how agnostic it is. We don't know how many prods there are, or how much intervention power any one of them has, or how much consensus there is between them to converge their efforts onto how much it takes to do an intervention. We don't even know if any of that is right - I never even said for sure that the prods exist. And there are so many other things that could complicate it, like maybe the limited gods have interests other than earth, and intervening in earth matters is now as popular to them as 1950s pop music is to us 21st century hoomans. Even this whole ontology could be wrong and there might not be prods at all. I don't even contend that when I propose all this. All this I'm proposing is just a self-consistent maybe." "And this 'maybe', considering that it might be right and might be wrong, considering that even if it's right you don't have firm estimates on how big any of the factors are, considering that even if it's both correct and relevant, you could still do your best job, a real job of promoting wellbeing in this world, and you still might get struck down by the gods because it was ill-fitting to their plans, after all that does it do anything for your decision-making?" "Yeah." "What do you do, then, in light of all that?" "It's almost identical to what I would do if I had never come up with the idea, or what I would do if the idea were somehow disconfirmed and some kind of ordinary determinism of matter as we know it was all that decides things for us." "Almost identical?" "Yeah." "The difference being what?" "Remember when I said that I have more proofs, but I would lose something if I told you?" "Yeah." "The same goes regarding what I do about it all." They have things in common with the standard theistic god and things in common with the standard deistic god. They're like the deistic god in that most of their intervening was to set up the boundary conditions for the universe of matter to have started, but unlike the deistic god in that they do have intervention power that goes beyond that. They're like the theistic god in that they can intervene in the present world based on their choices, but unlike the theistic god in that they're limited in how much they can do that. The hods and prods theory admittedly does kick the can of the most fundamental questions, but it takes care of a number of intermediate-level things.
The Ontological Argument is still interesting. Arguments in favor of and against it are still interesting (as are arguments against arguments in favor of and against it, and so on), and likewise for The Cosmological Argument, and The Teleological Argument, and all those deductive logic games. Those are still tough games. And they still matter if you want to know things about the entities that are superior to us and to our gods. A lot of what I want to do is to demonstrate that there's a fundamental divide between the fundamental laws of the universe and what facts and rules might be relevant to the only entities that we could call gods in any relevant sense, and that might account for a 99.9 percent divide between what's fundamentally true cosmologically and what's fundamentally relevant to hoomans cosmologically. If we needed more saving than they could have done for us, they would have left us to self-destruct. But the amount of saving we actually did need was an amount they could spare us, and spare us they did. It may be that subjective experience is subjective reality and reality is subjective. But that's a no-go principle. If subjective experience is not a lens onto something objective, and it's all as made up as a dream is, how can you make any sense of how it works or what to do about it? You can't do those things at all. Even if you say those words, you'll still be defaulting to what we do when we assume that subjective experience is a lens onto an objective reality. In the law books, it says that we take as a foundational fact that determinism isn't how things work, and contra-causal free will is the truth of how decisions work. That's fine. The determinism-based legal code was tried. It did not work at doing anything other than gamebreaking cost cuts that funded kickbacks and harmed everyone other than a few politicians and their special friends. Re the frustrator. "I've discerned all the relevant factors about this guy's decision processes, and I know that if I say left then he will do right and if I say right then he will do left, so there's no way I can win this game under these conditions." Then either the machine gives up and terminates, or plays a doomed scenario. This is plain. What does it illustrate? Contra-causal free will? No. Consider a machine that can track a coin while it's flipping, do some quick math, and announce the result before the coin lands. The coin can't frustrate based on that information. To say that the coin is deterministic and the hooman has contra-causal free will is a mistake similar to saying poker robot 1 is deterministic and poker robot 2 has free will. Not analogous, but similar in lorem ways. And lorem about the difference between the face-up rules and the face-down rules. Related: 'chaotic' is just another word for "deterministic but with a runaway level of difficulty to predict", but it's not closely related to the frustrator. What does it mean to say that determinism is right yet the robot loses the frustrator game? This, I admit, is one of the toughest questions. But it can be untangled. Suppose we replace the hooman in that example with an extremely simple robot with just that programming necessary to win every round. Did you just, with such a simple program, give that robot contra-causal free will? Clearly not. Or if that's all it takes for something to have contra-causal free will, then our having it is not very exalted at all. lorem - replace left and right with green and red in all frustrator examples.
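Here's a minimal sketch of the frustrator setup in code, using the green/red version; the names and the trivial player rule are my own stand-ins, not anything established above. Every step in it is deterministic, yet no face-up (announced) prediction can come out right, while the face-down (private) prediction always does:

```python
def player(announcement):
    """Deterministic player: if a prediction is announced, do the opposite;
    with no announcement, follow a fixed default."""
    if announcement is None:
        return "green"
    return "red" if announcement == "green" else "green"

# Face-down: the predictor simulates the player privately, and is always right,
# because the prediction never feeds back into the player.
private_prediction = player(None)
assert private_prediction == player(None)

# Face-up: whatever gets announced, the player does the opposite, so no
# announced prediction can come true -- even though nothing here is random.
for announced in ("green", "red"):
    assert player(announced) != announced

print("the face-down prediction holds; no face-up prediction can hold")
```

The failure is about announcing, not about indeterminism - which is the whole point of keeping the face-up rules and the face-down rules separate.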
If the chief among your decision criteria is fellow feeling, then you count as spiritual in my books ("spiritual but not religious" if you prefer). You don't need to have communed with the ineffable, but bonus points if you have. There's a shrinking middle ground between apathy and insensitivity. So now most people have given up on having any agency that extends any broader than to their own interests, like to the interests of other people. Half of them have given up completely and are indifferent to the prospect of being killed at any moment, and the other half are like that guy whose license plate holder says "Whoever brings the most toys to his grave wins." That leaves only a small portion of people who are on the middle ground of being neither apathetic or insensitive, and most of them are dogmatic and not making much sense of any kind. That leaves an even smaller sub-subset of people who are neither apathetic nor insensitive and also still making sense. The realest thing is everything that's real. If you say that subjective experience is the realest thing, then you've mispronounced the words 'most salient'. Rocks and trees are real. Your subjective experience is real. The number 3 is real. Bob's subjective experience is real. But the most salient thing to you is your subjective experience, and the most salient thing to Bob is Bob's subjective experience. That doesn't make them any more real than that rock. They're more salient. The rock is less salient, not less real. If x can profitably be called the cause of anything physical, then x is real? But that would mean that Harry Potter and the number 3 have the same ontological status? They're both 'real fictions'? "Most people who think they should do x would actually not be well off attempting x" -> "If you think you should do x then you're probably wrong" is dishonest and incorrect. Invalid inference! Abandon hope. There are alternatives both complete and superior. I prefer a mixture of mostly mirth, although I do appreciate that for many people that's not an option. Does the frustrator plus extensions prove that nothing is deterministic? Maybe. Does the frustrator plus extensions work if the assumptions are deterministic? Lorem. The objects of pure geometry: are they some kind of "real but not physical"? or fictions? or both (are fictions real?)? Tough question. Suppose they're a kind of fiction. But they have all these rules that have a kind of inherent plausibility, like "angles interior to parallel lines are supplementary." Okay, does that separate this kind of thing from the other kinds of fiction? Every good story about wizards has in-universe rules with a kind of inherent plausibility, and sticks to those rules, so the plot developments follow from the rules logically. So maybe euclidean objects and wizards are not so dissimilar. Between a fallacious inference and a story that fails verisimilitude, is there really any difference in type? Exemplar of the Yin of Chou principle, and one that doesn't get much airtime. Psychotherapists get a lot of 'psychic spiritualist' clients whose conscience troubles them greatly (and rightly). There are no official stats on this, as the only reports of it necessarily come by way of breaking confidentiality agreements. If a politician or a fat cat has a troubled conscience, it will not to him be as salient as when a fortune teller's conscience troubles him, since a politician has a remove, a degree of turning 'person' into an abstraction - not just an abstraction, an abstraction and a game token. 
To a politician or a business fat cat, "person" is a factor that you manipulate in order to maximize things other than anything to do with "person": it's a game token. The most effective of these people can build a wall in their minds between "person" the game token and "person" the actual entity they see hundreds of times a day, in person, right in front of them, with their own eyes. This is a psychological defense mechanism. However, you can't dupe the Yin of Chou principle. It's too strong. No defense mechanism can build a wall that it can't scale. The conscience will be guilty if it deserves to be. A business fat cat will look out the back-seat window of his limousine, see a homeless person, and the deepest parts of the mind won't be fooled into forgetting that this is an instance of "person", that "person" is more than just a game token. You can make a computer program to simulate a world with billiard ball determinism and put in it robots that play all the variations of the frustrator game. So the frustrator thought experiments do not prove that a universe with mechanisms that thwart prediction must be indeterministic, and therefore do not prove that hoomans are indeterministic. There's just a law that even in a deterministic world, you can't perfectly predict the result of a thing that can react to your prediction. If there's a Laplace's demon, by definition he doesn't put out press releases reporting what he knows. Our universe is compatible with his existence, as long as his cards are face-down. Asterisk: there's still a possible objection, a weak one, that says "This last step still doesn't prove anything, because when you make a computer program in a world that is at base indeterministic, then that computer program is also indeterministic." To that: yes, our universe is at base indeterministic, but if you understand the nature of how that aggregates, then you understand how particulate indeterminism results in macro near-determinism. And computer infrastructure has "error-checking" that will indicate whether any stack of quantum coincidences amounts to something contra the macro-deterministic assumptions, and then take care of it. So in a universe that's at base indeterministic, and using the emergent properties of how that shows up, indeed you can make a computer whose operations act like they're perfectly deterministic. Re about w t f is that missing piece alluded to in Freakonomics chapter 1. What it means is this. Microeconomics is an incredibly powerful tool for detecting differences, or predicting something when a base rate is given, but when it comes down to determining (explaining, predicting) base rates, there's nothing microeconomics says other than "je ne sais quoi" or "too many factors that are just not part of this model. No computation." Accountability and desert ought to be assigned as if indeterminism were simply true, thuswise. We blame you because the events that took place in your brain, though they were inevitable processes in direct continuation from the big bang, fell x amount short of what could have happened in there, had the arrangement been different in the ideal manner. We commend you because those processes rose y amount above the worst. Tenable? It's not possible to predict the next thing you will think? Sometimes, sometimes not. There are things you can do such that your next thought can always be predicted ahead of time. There are things you can do such that your next thought can't be predicted ahead of time.
Some people lose the ability to ever do anything of that second kind, have it extinguished in them. This is called having no chaos left. Chaos in the strictly mathematical sense. Creative imagination works by mathematical chaos. I would have to be an idiot to say I disbelieve plainly and categorically in all forms of spooky action. Yet I persist. How do a bunch of letters in DNA produce something like an arm or a lung? It's pretty complicated. And how do the primary motivations of survival and reproduction (or whatever some person's motivations are) produce everyday activities and more remarkable accomplishments? It's almost as complicated. When a person takes credit of any kind for having any virtue (even if he indeed does have that virtue), this makes as much sense as saying, "it sure was a good idea when I decided to be 6 feet 3 inches tall." The forms of matter and the objects made of them: these are the flocking patterns of the anarchistic. When people don't know the psycho-mechanisms, it's much easier to play them like deterministic fiddles. Just knowing what the mechanisms of the psyche are is freeing, and therefore empowering. Some of the mental mechanisms that help people survive prolonged hard times are now known. In the POW camps in the Vietnam War, the people who gave up on ever getting out died in despair, and the people who kept saying things like "We'll be out by Christmas" also tended to give up on that and die after Christmas, or maybe after one more year and another Christmas, but the people who kept saying "We'll get out, but not gonna guess when exactly that is" tended to live. So now we know the mental mechanisms for surviving that sort of thing with the right attitude. What matters is that when people know that, it improves their odds of not doing the 'more deterministic' thing of giving up and dying. So when we're asking questions like "How can we act like we're indeterministic," facts like those are the ones that will get us those abilities. The thought "which of these things would it feel good to believe" has zero reliability. To choose some other criterion is not some abstract and hard to justify virtue. A story of some random fucking thing that happened at the vape store. I chose a pack of vape pods. The cashier rang it up. The price came up as 23 dollars and 12 cents. The cashier asked me my fake phone number for my account. I told it to him. He typed it in. Then he asked me if I wanted $5 off for points. I had a $20 and a $5 in my hands. I said, "Yeah, right on." He pressed the button to apply the discount. Then he said "83-12". I extended the hand with the $20 in it and I said "What!" He said, "Uhh, 18-12." He grabbed the $20 and started making change. I said, "That was a strange mixup." He said, "Sure was." (Coincidentally, this was day 2 of when I first put the audiobook Surfaces and Essences on my audio player, but that had nothing to do with it). I said "83 for 18.. where does that even come from?" He said, "I don't know where from," and handed me the $1.90 change. That was no purely random mixup. Far from it. There were depths to that. How do you mix up 18 and say 83 instead? I can explain it. It's plain to me how that could happen. But it's not a mixup that would ever happen in my brain. It takes a brain a lot better than mine to do that. Okay. Some math. First, if you talk to someone who does a lot of work with numbers, and you ask, he will tell you that the number 72 is an extremely round number.
In ordinary parlance, when we say a 'round' number, we normally mean a number that's a multiple of 5, or even better of 10. But that's just a parochialism due to how many fingers we happen to have on each hand, that number being 5, and 5 being, mathematically speaking, an incredibly 'unnatural' number to privilege. So a hooman with 5 fingers on each hand and not too many cares in the mind will have an understanding that 70 and 80 are much 'rounder' numbers than 72 is, because 70 and 80 are both multiples of 10. But the prime factors of 72 are 2x2x2x3x3. And someone who really works with numbers will have a feel for that fact - I mean if you go to someone who works with numbers and you have these 6 numbers written on a piece of paper, "70, 72, 74, 76, 78, 80" and you say "Which of these is the roundest number!" guaranteed, he will single out 72, because it's the one with all small prime factors. Okay, what does this have to do with 18 and 83? Well, 83 is a prime number. In fact, 83 is the third-biggest two-digit prime number. And 18? 18 is one of those numbers that has all small prime factors, the prime factors of 18 being 2x3x3. So in a certain sense, these two numbers have opposite 'feels' to them, one being prime and the other one having all small prime factors. However, if you're working on a higher level of abstraction, you need not feel like these two numbers have such opposite 'feels'. That's because, at a higher level of abstraction, you can distinguish between "numbers that have no middling number of small prime factors for numbers of around the same magnitude" and "numbers that do have some middling number of small prime factors for numbers of around the same magnitude". Consider the numbers one greater than and one less than 83, those being 84 and 82. The prime factors of 82 are 2 and 41. That's somewhat few small prime factors for a number of around that magnitude. The prime factors of 84 are 2 and 2 and 3 and 7. That's somewhat many small prime factors for a number of around that magnitude. But if you have the 'math feel', there are two extremely round numbers between 70 and 100, and those are 72 and 96 (81 = 3x3x3x3 comes close, but with only four small prime factors it just misses this cut). The prime factors of 72 being 2x2x2x3x3 and the prime factors of 96 being 2x2x2x2x2x3. Okay, so the numbers between 70 and 100 that have five or more small prime factors (specifically, now, by 'small prime factors' we mean 2s and 3s, and 1s don't count) are 72 and 96, and the numbers between 70 and 100 that are prime (have no small prime factors at all) are 71, 73, 79, 83, 89, and 97 (the only prime factor of each of those being itself). Therefore, if there are 8 ideal candidates between 70 and 100 for "numbers that have no middling number of small prime factors for numbers of roughly their magnitude", those are: 71, 72, 73, 79, 83, 89, 96, and 97. These 8 specimens all have either 0 or 5 or more prime factors that are 2 or 3, whereas nearly all of the other 23 numbers in that range have between 1 and 4 prime factors that are 2 or 3 (the exceptions being 77, 85, 91, and 95, which have none, though each of those is a product of two middling primes rather than being prime itself; a quick computational check of these counts follows below). So indeed, if you're working on a certain level of abstraction, you'll see the affinity within this group: 71, 72, 73, 79, 83, 89, 96, 97. Okay. A recap of what happened. The price rang up as 23.12, he applied the discount, the price adjusted to 18.12, and then he said "83.12". In terms of the digits of the dollar amounts, 83 has an 8 in common with the 18 and a 3 in common with the 23. Coincidentally, 23 is a prime number, and 18 and 12 are both numbers with all 2s and 3s for prime factors. This couldn't be "primed" (psychologically) any better if you contrived to cook up an example.
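Here is that quick computational check - my own verification sketch, not part of the story:

```python
# Count, for each number from 70 to 100, how many of its prime factors are
# 2 or 3 (with multiplicity), and print the ones with no middling count.

def small_factor_count(n):
    """Number of prime factors of n equal to 2 or 3, counted with multiplicity."""
    count = 0
    for p in (2, 3):
        while n % p == 0:
            n //= p
            count += 1
    return count

for n in range(70, 101):
    c = small_factor_count(n)
    if c == 0 or c >= 5:
        print(n, c)
# Prints the six primes (71, 73, 79, 83, 89, 97) and the two extremely round
# numbers (72 and 96), plus 77, 85, 91, and 95 -- the products of two middling
# primes noted above, which also have no 2s or 3s among their factors.
```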
And that's why "83 for 18" makes perfect sense here. The price rang up as 23.12. The guy applied the discount. The till said 18.12, but the guy said "83.12." But I can only envy someone capable of making that particular slip of the tongue. I can only aspire to work my brain on that level. (I might have written this already, or written something so similar that this is only a slight angle on it) Whether you do good or bad, there's gonna be collateral damage. If good, it is possible to clear the conscience. If bad, it is possible to desensitize but not possible to clear the conscience (short of changing one's ways). Between clearing the conscience in the one case and desensitizing in the other case, the distinction can be hard to discern, but it is a difference in type. Doing nothing instead of either won't work either. Okay. Sometimes a person says that they have achieved some substantial kind of success through hard work and clever decision-making, and then they take credit for the cleverness and dedication. When that happens, another person might retort that those virtues of cleverness and dedication are just a matter of luck as much as any other part of what happened, and that it makes no sense to take credit for them. This point of discussion leads right to all the classic questions of determinism and free will, all the way back to points of discussion that have been raised ever since the start of the traditions of writing and reason. Whichever way you want to argue about that, I want to bring up an idea that's just one step to the side of all that. Now, what if a person achieved some substantial kind of success through a meandering path of careening chaos that nobody could have predicted, one that ended with dedication and cleverness, but required a kind of bumbling inconstancy for a long time before that? How much sense would it make for that person to take credit for the virtues of dedication and cleverness in the later part and also for the virtues of bumbling and inconstancy in the earlier part? Counting on the laws of science to hold: doing otherwise is a no-go. There's no a priori justification for being certain that the law of regularity holds, but it's a mighty lucky matter how consistent it's been so far. A lot of stupidity was evolved into hoomans just as a way of keeping things interesting. Even when it served no practical purpose, and did quite the opposite with respect to every purpose other than preventing things from getting boring. I assert that even dreaming is one of these things. "But nonhooman animals also dream." Yeah, it was evolved into them too for the same reason. (This is conjecture). Just like how you can program the frustrator robot in a computer that simulates a world that's perfectly deterministic, you can program the self-dial-adjusting robot in a similar computer simulation. The shuffler in the poker game can be pseudo-random. For example, let's define the process of the shuffling machine as "take the current clock time in milliseconds, do this and that to it, and decide on a pseudo-random sequence of how the cards are arranged in that deck" but the robot with the dials doesn't know what that algorithm is, so from its perspective it has to take each shuffle as genuinely random (even if it knows that the shuffles are pseudo-random and not genuinely random). None of these things are problems for saying "You can set this up in a manner as good as deterministic and these emergent features still hold."
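For concreteness, here's a minimal sketch of that kind of shuffler. The "do this and that to it" step is left unspecified above, so handing the millisecond clock value to an off-the-shelf pseudo-random generator is my own stand-in for it:

```python
# A clock-seeded shuffler: fully determined once the seed is fixed.
import random
import time

def shuffle_deck():
    deck = [rank + suit for suit in "SHDC"
            for rank in ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]]
    seed = time.time_ns() // 1_000_000   # the current clock time in milliseconds
    random.Random(seed).shuffle(deck)    # pseudo-random: the seed fixes the order
    return deck

print(shuffle_deck()[:5])
```

Anyone holding the seed and the algorithm could reproduce the whole arrangement; the dial robot, which sees neither, has no better option than to treat each deal as genuinely random.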
If you always know what you're doing, then the creative process is never what you're doing. "I'm trying to decide between doing x and doing y" is shorthand for "I don't know whether my brain is in one of those states such that choosing x would be inevitable or whether my brain is in one of those states such that choosing y would be inevitable." This is a statement about the incomplete knowledge of one person about the inside of his own head. Then when the person runs hypotheticals in his imagination, "If my brain were in one of those states that definitely does x, the result would be.. this, which would be bad. And if my brain were in one of those states that definitely does y, the result would be.. that, which would be good. Okay, I choose to do y." Which is to say, "Well, it turns out that my brain was in one of those states such that choosing y was inevitable all along." What complications do randomness bring to determinism? It's negligible. To say 'determinism plus randomness' is basically the same as to say 'determinism' for all or most practical purposes. Philosophy was abolished from the North American curriculum because they want us being unrealistic in certain ways, for everyone to think they're "temporarily embarassed millionaires," that "anyone can do anything if they put their mind to it," that "god exists and we live in his favorite country," and so on. They want us sleepwalking with the American dream, and philosophy is dangerous to that because it tends to make a person grounded and realistic. Even someone who has read "Harry Potter and the Sorcerer's Stone" might never have heard of philosophy. The hooman psyche is designed to have certain progressions, biologically programmed. Others it decides as mind. Others it decides subconsciously. A good portion of wellbeing has to do with aligning these three. The gods/hods/prods theology does come with something that operates like faith does. Faith plus a fallibility clause. "Is this really not going to blow up the world? Seems to me like it is, but they know what they're doing.. unless I'm wrong and they don't actually exist." Far from an unshaking faith, it's like a perpetually shaken faith, but it still does a nonzero amount of what faith does. If you know who you are, and you hold to certain principles, you can be genuine. If you don't know who you are, you can't be genuine - you can be honest or upfront about not knowing who you are, but it's not the same thing as being genuine. That's why the advice "be genuine" or "be yourself" is often bad advice: instructions unclear. That'll only increase the angst of someone who wants to follow the advice but who lacks the prerequisite. Everything in moderation, even moderation. But don't even be moderate about moderating moderation. And don't even be moderate about being moderate about moderating moderation. Et cetera. When there's abundant access to communication, and no critical thinking skills, tyranny becomes pretty easy to accomplish. When there's abundant access to communication and abundant critical thinking skills, tyranny becomes impossible to accomplish. There's no great conspiracy. There's no group that decided to gradually make us dumber to the point that most people would vote against their own interests. But people use what they find is handy, and a tyrant in modern times finds plenty that's handy to continue to make us stupider. Re about what's a real disagreement and what's only a disagreement on how we're using words, and how that's not a real disagreement. 
"Okay, well if you say that ethics has nothing to do with what's conducive to wellbeing or suffering, fine, you go ahead and say that that's what we're using words to mean when you're conducting a discussion. In that case, what I want to talk about is not ethics, but rather I want to talk about the things that are conducive to wellbeing and suffering, whether we're gonna have a word for that or not. When I conduct a conversation, I use the word 'ethics' to mean those things. When you conduct a conversation, you use the word 'ethics' to mean something else. This isn't a real disagreement. Do you want to agree to use some set of words to mean some set of things? If you don't want to agree to that, then I can't possibly imagine you're being serious right now." "Oh, wait.. your job is the head of a government department that has the word 'ethics' in the title. Okay, never mind.. that is an issue." Approval and disapproval do not necessarily presuppose contra-causal free will. That's the trick to squaring ethics with determinism. Approval and disapproval, in the commonsense form, usually presuppose contra-causal free will. Perhaps they seem like they must, but they don't have to. The right attitudes are thuswise pretty similar to the commonsense ones, only slightly different. Determinism is hard to understand fully, but attitudes of approval and disapproval are not things that need major reprogramming in your brain. So I say that the process of learning about determinsim has the following stages. (1) "Okay, the principles are sound and straightforward enough," (2) "But wait, how the hell would we get by if that's how things are?", (3) "All the alternatives to determinism are clearly untenable once you think about them," (4) "Okay, determinism is a bit hard to grok, but it's clearly worth a try," (5) "Oh, I can keep the commonsense notions of responsibility and other ethics stuff, and understanding determinism only modifies those slightly. Nice." The state of your brain is to be programmed to make a best guess at what the ideal next state of your brain will be, and then usually to attempt to get into that state. In the process of that guessing, you say "Maybe X, maybe Y," but it's a maybe of the ignorance type, not a maybe of the "actually could be" type. So it's "Could be X, as far as I know, or could be Y as far as I know (but I just don't know enough to know which)." It's not a "It is the case that it could actually be X and it could actually be Y." As determinists, we say that such a statement is always strictly nonsense. And it's a special type of ignorance. It's the type of ignorance that's illustrated by considering the difference between face-down predictability and face-up predictability. When a prediction can affect the system that it's predicting, you can't know afterwards whether the prediction was right or not. That's because the prediction contains a condition "If I don't announce this prediction to this system," so as soon as the prediction is announced to the system, the world no longer holds the same conditions as the prediction had, and just that quickly, the antecedent of the prediction becomes a counterfactual. So the actual result of the system can't tell us whether the prediction was right or wrong, because the prediction was about something else. Creativity has evolutionary advantage, so it was evolved into us, and that's why we have it. 
Creativity happens when the brain makes a prediction about itself, and then chooses to do something other than what it predicted about itself. Because the brain can thwart its own predictions about itself, it has that face-up complication to predictability. So the brain has a kind of ignorance about itself that can never be remedied, and that's why free will is such a persistent illusion. "Here we have a machine that specializes in manufacturing the very wrenches that can be thrown into its own machinery." - Tom Robbins, paraphrased [This is closer to good. Some details may still be way off - I may have crossed the wires completely in some of the inferences. But it's a hecking good nth draft.]

Lorem - From the vantage point of Laplace's demon, your brain is just a simple closed system. There's nothing about all this stuff about brains doing things to themselves that makes them anything other than a closed system. "A real human being is like this comparison." - Anonymous

The thought experiment with the two boxes and the superintelligence. First the statement (maybe my own formulation of it), then an assessment. You're beamed up to an alien spaceship. You find yourself in a room with only a door. You go through the door. You and an alien are in a room with a table and chairs. There are two boxes on the table: one red and one green. You and the alien sit across from each other. The alien says that they're a hyperintelligent species with really powerful computers, and their capabilities are such that they can scan your brain and predict how you will react to anything you might sense. The first thing you said upon suddenly finding yourself in a ship was "Whoa, what the hell." The alien informs you that he knew you were going to say exactly that, not just "What the hell," not "Whoa, what the fuck," not "Whoa, what the heck," but precisely "Whoa, what the hell." It's because before you said that, he could scan your brain, run a simulation of what that brain would do when its sensory apparatus is presented with the inside of the alien spaceship, and of how it would react in the form of an utterance. And so on. The alien says you're there to play a quick game, and then he'll send you back to where you were. In the red box is $100. In the green box is either $1 or $1000. Whichever it is, whether $1 or $1000, it's already there in the green box and he can't change it now. You have two options. Option 1 is you take the green box, shove the red box across the table and tell the alien to keep it. Option 2 is you take both boxes. The alien says, "If you choose to take both boxes, you will find that the green box has only $1 in it. If you choose to take only the green box and forsake the red box, you will find that the green box has $1000 in it." Okay, if this alien can be trusted, then taking both boxes will result in you getting $101, but taking the green box only will result in you getting $1000. But this presents a strange puzzle about free will and decisions. The green box has whatever it has in it, no matter what you choose, so at this point it only makes sense to take both boxes rather than just one of them, right?

This is a great thought experiment. It does puzzle for a while. But I think that once you understand the solution, it stops being puzzling and remains straightforward. You take only the green box and shove the red box across the table and tell the alien to keep it. Why? He knows what your reasoning will be, like Wallace Shawn, only far better.
Suppose your friend gave you a computer program and told you to run it. You run the program, and it outputs the text "I don't have to do what I'm programmed to do! I'll do something other than what I'm programmed to do!". Then you look at the source code, and the program is just [when user types 'run', then output: "I don't have to do what I'm programmed to do! I'll do something other than what I'm programmed to do!"] Would it be surprising to find that that's what the source code was? When you ran the program and saw the output, but before you looked at the source code, did you suspect that maybe the program was something so incredibly advanced that it was programmed to do something that includes some action, but also includes violating its own programming?

So when the hyperintelligent alien looks at us, what he sees is much like what we see when we look at the source code of that simple program. That's a comparison based on levels, but there are differences to the levels. To understand the scenario with the alien and the hooman is not quite as simple as understanding the scenario with the hooman and the program. But the solution is very similar. We can suppose that the alien scanned your brain and chose whether to put $1 or $1000 in the green box, then set up the boxes, right before you entered the room. When the alien chose what to put in the green box, he knew "If this hooman is stupid enough to say 'But we're here with the boxes now and since whatever you put in them can't change now, it must be better to take both', I'll put only $1 in before he enters the room. But if this hooman is either smart enough to solve the puzzle, or just smart enough to trust the process, then I know he'll pick only the green box, so I'll put $1000 in it before he enters the room." He knows what your reasoning and decision will be when he scans your brain just like you know what your friend's computer program will do as soon as you look at the source code. There's nothing about your decision process that's fundamentally different from the movement of a bunch of billiard balls after a break, nothing that doesn't lend itself to being scanned and predicted by a computer faster than it.

If you think you understand this, and you imagine being in the scenario, and you think you'll take both boxes, then, friend, you have a quite soluble illusion to dissolve and get past. Think you're smart enough to walk out of there with $1100? It won't be happening. Ain't one hooman on this earth who is smart enough to score $1100 in that game. If you still think you're that smart, then the alien will put only $1 in the green box. It knows you're that slick before it starts the game. Take the green box and forsake the red box. $1000 is the most you can score in that game. If you do that, you've got the highest possible score, and there's nothing more clever that you possibly could have done. [lorem, refactor it so that it's $10,000 in the red box and either $1 or $10,000 in the green box]

Okay, now that you know the answer.. still don't fucking choose both boxes! Okay. You know the answer. Now suppose the alien picks you up tomorrow, you go to the table, hear the rules, and then you think, "Wait a minute. I know the rules. And now the alien knows that I know the rules. So he must have put the bigger amount in the green box, so NOW it's better to pick both." Don't! He knows beforehand whether you're about to suddenly think you're that clever, or whether you're actually smart enough to avoid tricking yourself.
The highest possible score is still $10,000. You're still not clever enough to score $20,000. No one is. No one ever will be.

The goddess of chaos is the one true god, and even she doesn't exist.

"Alright, free will: does it exist, or not?" "Of course free will exists. I have the will to do whatever I want, and that will is free. Seems plain to me that free will exists." "Okay. Does a simple computer program have free will? Suppose I have a computer program, and when I press the button it goes *beep*, and that's the whole program. Does that program have free will?" "Clearly not." "Why not?" "Because it does whatever it's programmed to do, and never anything different." "In what way are you different from that program?" "I can decide what to do: get inputs, decide on an output." "Is there anything in what constitutes you that makes you different from that program?" "Like what?" "I don't know what. You can expect the program to do what it does because the program is made of an instruction that maps an input to an output, and you can expect the computer to run the program as expected because it's made of atoms arranged in such a way that programs work as expected on it. Aside from that program, it's a pretty complex computer, with an operating system and all the rest. But all those things work as expected because they're made of molecules that are set up to work as expected." "Then maybe my molecules are set up to work as unexpected sometimes." "Okay. But molecules individually can do what's expected of them, right?" "Yeah." "They don't have free will?" "No." "But you and I are made of some arrangement of molecules that somehow has free will emerging from it? There's some way of taking molecules, each one of which works fairly programmatically, and arranging them into something that does not work programmatically." "I don't know how, but clearly that's what's gone on." "Okay. In what way do you have a freedom of will that differs from a computer program?" "Well, a more advanced computer program, like one that makes effective decisions, it takes a set of inputs, then does some calculation on them to determine an output. And usually that calculation is one that a programmer has decided accomplishes a goal to do the most useful thing in the situation." "And you think you're any different?" "Absolutely." "How's that?" "Well, okay, let's take a decision that I made yesterday. Yesterday, I walked to the store, and the weather was cold out, so I made the decision of putting on warm clothes. But I could have made the decision to put on light clothes. Even though that output of light clothes is not congruent with the input of going out in cold weather. How do you explain that choice?" "Okay, so the normal decision to wear warm clothes in cold weather has something to do with avoiding pain. The decision goes something like this. There's something I need from the store. If I don't get it, then there will be some kind of pain, so I will go and get it. And the weather is cold. And if I wear light clothes in cold weather there will be pain, so I will go out in warm clothes. If I go according to that plan, then I will avoid the pain of exposure to cold weather, and also avoid the pain of being without that thing I need from the store. Right so far?" "Yeah, that's how the usual decision makes sense. Sometimes I even do go out in shorts and a t-shirt when it's snowing out, just to remind myself vividly that I have a choice to refuse to make sense whenever I want."
"When you go out sometimes in light clothes in cold weather sometimes, even that's for a purpose? To get a strong feeling like you have a sense of agency?" "Yeah." "So when the weather is cold and you have to go out, you usually wear warm clothes to avoid the pain of exposure to cold, but sometimes you wear light clothes just to remind yourself that you have a feeling of agency. And supposedly if you never did anything as 'lol so rando' as wearing a t-shirt and shorts in the snow, you would lose the freshness of having a sense of agency, which... would be another kind of pain. So even when you do it that way, it's to avoid one kind of pain or another. Warm clothes or light clothes, whichever one you pick in cold weather, it's to avoid one kind of pain or another kind?" "Well, yeah." "Seems to me like a choice that's no more free than what a robot does." "Why would a robot want to have a sense of agency?" "Plenty of people want to have the sense that they have something they don't actually have. Ever met a stupid person who thinks he's smart? That's someone who wants to have a sense of intelligence but doesn't actually have it. And you like having a sense of contra-causal agency even though you don't have it." "Fuck. I'm programmed to feel like I'm not programmed?" "That's my stance. And maybe I'm just making argument for the fun of it. I don't know if that's how it really works." "But you can defend it, as an exercise in reasoning?" "As just demonstrated. I can defend it, at least against the simplest of objections." "Can you defend it against better objections?" The effect of the "don't say god" rule in public education is the following. Don't say "god" in public education -> don't think any thoughts pertaining to gods or their plausibility or implausibility in public education -> ensure that everyone who has gone through the public education system is fully vulnerable to the hokum relating to gods, or at least got no remedy there. The take-home lesson from learning about determinism is a balance. It's not a balance of "Okay, determinism is true somewhat and contra-causal free will is true somewhat." Of course, that's clumsy, and clumsy thinking is no good. But a lot of what it comes down to is the following. Okay, I see that things work somewhat like a naive understanding of determinism and somewhat like a naive understanding of contra-causal free will. Not that both are half-true each. But when you have a nuanced understanding of determinism, it's quite unlike a naive idea of determinism. A nuanced understanding of determinism has some things in common with a naive understanding of determinism and some things in common with a naive understanding of contra-causal free will. But it's also a lot better than just saying from the outset "Okay, I'll take half of that and half of that." Unsorted Pile 2b Most injustice in the hooman world takes on the character of, "I'm being stifled, and I can't do my real thing." The varieties of how this comes about are many. Liberty means it's legal to do whatever you want to do as long as you're not harming anyone else. There are a lot of ways to formulate that, depending on meanings and exceptions (e.g. what's the difference between setting the legal alcohol drinking age at 19 or at 21?) But a person also can't do what he wants to do if he is prevented from knowing what he really wants to do. So, factors that prevent a person from knowing what he really wants, those are also violations of liberty. 
For example, if an education system fails to include important things, and it generally makes people think they hate learning, then the land where that takes place indeed does not have liberty. If we call tyranny a regime that disallows liberty in the direct sense, then let's call a sneak tyranny one that disallows liberty in the higher-order sense. Off the top of my head, I can name several American states wherein education is almost entirely a thing of the past, and how many people in those places have any idea what they want?

The wife of the pastor spoke for a few minutes and said, "I wanted to travel to South Africa to visit my mother, but they weren't letting unvaccinated people on airplanes, but I wasn't going to get vaccinated in order to get onto an airplane, but I held onto my ticket, and just a few weeks before the flight, the vaccine mandate for airplanes was lifted. Thank Jesus!" .. not like.. when people make decisions.. there are people experiencing things? No, everything that happens outside of your own brain is what Jesus is doing.. Only you and a few other people you happen to be close with and interact with regularly are the real people who have brains that produce experiences. All the rest of the world is like a hologram or something, but there aren't really people there.

Solipsism is a no-go policy. That doesn't mean that it's necessarily wrong. It does mean that it's a bogus way of assuming things. And it does mean that you're exceedingly likely to do and say some really stupid shit if you insist on it.

It has been said that of all the arguments about god or gods, whichever way they argue, in modern times only the ones for or against monotheism have any traction. I strongly disagree. Polytheism has not ceded the ground of plausibility to monotheism. It only seems that way because of who's talking. And ancient Greeks and others like them have sadly gone extinct among who's talking now - but that shift was never a matter of the plausibility of what anyone was thinking. As for a supremely powerful singular god, I see no relevant grounds here for discussion - it's square circles everywhere you look - it's so entirely untenable, there's just nothing doing. Even if I'm a teapot agnostic, polytheism has massively more plausibility in my mind than monotheism. As for the idea that there may be many gods of limited power, the objection that reflects the flavor of our era is "If ThEy'Re LiMiTeD tHeN tHeY'rE nOt GoDs!" But this only reflects what the word/concept has come to mean. What it does not reflect is.. what the word/concept meant for a long time before that. So if the ancient Greeks and Romans could call their limited deities gods, or their limited gods deities, then that's a formulation - and modern discourse doesn't get to throw that out wholesale just because modern discourse is modern. As for the differences between what one thinks is likely and what extensions one uses to flavor that thinking, a godless universe is a good working hypothesis, but a polytheistic universe is the next most plausible thing, and it is not exactly a perversion of thought to sometimes entertain the possibilities of that and what it might entail. It is however absolutely a perversion of thought to entertain the possibility of a supremely powerful singular god.

How the heck can one never compromise their conscience? In this economy? Well, for example, if my diet consists of 40% meat by mass, my conscience will be compromised. If 5%, maybe not. If 0%, maybe not.
Fatalism doesn't count as a decision criterion. There are good arguments in favor of 0% and good arguments in favor of 5%, but no good arguments in favor of 40%. I hate plastic, but I have to use plastic. I hate pollution, but I have to generate pollution. But I can get part way informed about how bad all the options are and select what seems like the least harmful of them. I can't get fully informed, because that would take all the time, but, again, fatalism doesn't count as a decision criterion, and I can get part way informed. And I can select the least bad of available options sometimes. And I can, without having to drink any koolaid, believe that some variation of techno-optimism has a nonzero plausibility. Yes, you can live with an uncompromised conscience 'in this economy'. Rejecting fatalism in a number of different ways is part of it, but you don't have to kid yourself. It's handy that you don't have to fool yourself as part of getting an uncompromised conscience, seeing as how you can't have an uncompromised conscience if you're fooling yourself about anything (fooling yourself about anything necessarily compromises your conscience). And I highly recommend an uncompromised conscience. The first step is you don't ever give up on thinking about something that's more important in favor of something that's less important (well, that's a pretty close approximation to how the first step works, but not exact).

Even if the world is doomed, no matter how much I and Greta Thunberg try to keep it from being doomed, I can do interesting things in that world while it lasts. If that's the case, nothing about that has to weigh on my conscience. I didn't doom it. This has been a thought experiment. I'm not even a fatalist or a doomer. I'm just saying that, suppose conditions change and doomerism is simply proven true, that still gives us x amount of time left to live with uncompromised consciences, if one can manage it. Lorem - so how does one separate the task of (1) clearing one's conscience from any form of compromise from (2) other things that might be just as depressing as a compromised conscience? And if that can be done, can one have (1) an uncompromised conscience and (2) an attitude of something other than depression about the other things?

Right now, most Americans have been convinced that they're "temporarily frustrated millionaires". Most of them think that some day soon they'll have the power of 1000 commoners. That's taken them away from realizing that improving your condition to one such that you have the power of 1000 commoners is no match for improving the world by improving the power of commoners. And most con men go to jail, but some con men pull off such an impressive scam that they gain the power of 1000 commoners and don't get arrested or convicted of anything. Funny how there's a parallel between those two things. Is it any wonder that the people with the "temporarily frustrated millionaire" manufactured delusion also tend to be the same people who exalt con men?
I'm allowed to say things that are enigmatic, that have unstated conclusions that a reader has to fill in by his own efforts, that seem like non sequiturs unless you squint your eyes just right, that don't pass rigor but do pass something else, that do pass rigor but don't pass anything else, that are allegorical even when they're disguised as things other than allegories, that have figurative meanings despite being phrased literally, that have literal meanings phrased literally but extended meanings not phrased at all, that I have more supports for than the number I've provided, that have connotations that aren't really carried in the format of text despite being in the format of text. I don't care if it means that only the sharpest of my readers can detect the subtlest of my meanings, or if that subset is an audience of zero. I also claim all applicable rights to any such meanings anyone happens to draw out of them, including ones I hadn't thought of, to whatever extent the law allows (these rights are hereby asserted).

I've taken the leap of faith of assuming that a world outside my mind exists, even though there's nothing to be said that supports it other than "that's how it seems, and what are you gonna do if you don't just take that leap." I've taken the leap of faith of, assuming that a world outside my mind exists, assuming further that certain other entities in that world also have minds just as rich as mine, even though there's nothing to be said that supports it other than "that's how it seems, and what are you gonna do if you don't just take that leap." Somehow even after those two, I think it's stupid to take an additional leap of faith that says, "There is a supremely powerful god in charge of everything and he cares about you personally, even though he hasn't shown up and told you that directly."

I was out for a walk one night, and I was attacked by two people from behind, and they punched my head many times, but they didn't get anything other than sore fists on account of the hardness of my skull. Now, there's a webcomic called Saturday Morning Breakfast Cereal, and in one installment, there's a story about Superman not knowing who to punch, whether the mugger, or the bureaucrat who was responsible for bringing about an economic situation where anyone would resort to mugging to pay their expenses. In reality, in this case, it was pretty clear. I got punched in the head so many times that night. My head was hurting a lot that night and for the next couple of days, and it didn't stop hurting until a month later. And all that time, it was obvious. Even at the time between when those people started punching my head and when they finished punching my head, it was obvious: this is what corporate greed does. The people who happened to have motor control of those fists were a negligible part of what happened that night. The amount of annoyance I had toward them at any point, including when they were punching my head, was vanishingly little. I was punched in the head by corporate greed, and about that I am quite annoyed. It will be pretty obvious to anyone who was mugged in greater Vancouver in 2023. To most people in North America, it's a plain fact that things on the macro-economic level and related things have been rapidly getting a lot sadder in that year and the ones thenabouts.
That same night, I had a dream that there were a bunch of people consoling me about how that came to a head for this particular person on that particular night, and then I didn't have any nightmares or dreams of any kind about it after that, or any kind of post-traumatic anything.

"What is the seat of consciousness?" "I dunno. Maybe everything's the seat of consciousness?" "Everything's the seat of consciousness?" "I said 'maybe'." "Then is that how it is, or what?" "Well, not that it's particularly likely, but there's nothing remotely like a satisfying explanation of how this or that thing might bring about conscious experience." "Then what's the best candidate explanation?" "We don't know what's the best candidate explanation, because we're so far from understanding any of the relevant details." "Alright. What was the first candidate explanation you mentioned?" "Maybe everything's the seat of consciousness." "How would that work?" "If the thing that happens to bring about consciousness is anything made of matter." "Then everything made of matter would be conscious?" "They would." "But it doesn't seem that way..?" "Does it not?" "How would it?" "It's common sense to assume that a person is conscious, but a chair is not, right?" "That's common sense." "So it's common sense to assume that there are chairs that we experience, but those chairs don't have conscious experience, right?" "That's common sense." "Well, if your chair did have conscious experience, what would that seem like to you?" "I don't know." "It would probably seem a lot like your common sense experience of a chair, the one you assume doesn't have conscious experience. Because even if a chair had conscious experience, it wouldn't have a mouth, or any other way of telling you about it. A chair with a conscious experience would look a lot like what we know of as a chair." "Is there a word for this formulation?" "Yes." "What is it?" "Animism, or panpsychism." "Is it right?" "Nobody knows." "Why not?" "Because there's nothing close to a satisfying explanation of how this or that thing might bring about conscious experience." "Is it likely?" "No." "Why not?" "Because there are explanations that are more dissatisfying and explanations that are less dissatisfying about what brings about conscious experience." "Which kind is animism?" "More dissatisfying." "What's a less dissatisfying explanation of what brings about conscious experience?" "Brains or other objects with similar complexity." "Please square all this stuff about what the seat of consciousness is." "About the seat of consciousness, or what brings about conscious experience, there are explanations, but none are satisfying, but some are closer to being satisfying, and some are not close at all. Because none are satisfying, we still have to say 'maybe' to a lot of them. One that's probably not close at all to being satisfying is animism, which is that all matter brings about consciousness. But there are other answers that seem likely better. Closer, at least, to a full explanation. And all of those do have in common that your chair does not have a conscious experience."

Most people choose not to cheat in undetectable scenarios not because of a spurious generalization, but because most people have figured out the yin of chou principle.
[I really have to read the rest of Freakonomics, but there's a terribly interesting conclusion to the first chapter:] microeconomics can be used to explain much about almost any decision-making process, but when it comes to why most people don't cheat in scenarios where they won't be detected - and that's more than 90 percent of actual outcomes - microeconomics has nothing with which to explain it.

The great shortfall between a measure of justice that our collective intelligence could achieve and what we really have, this is due to a relatively small portion of people who have rejected the yin of chou principle. Further, it is to the detriment of each and every one of them, because there is no greater utility to a person than a clean conscience. And cleanness of conscience cannot be compatible with choosing to do harm. It follows that if people had a better sense of what is good for each, all would be better off - and I mean each person for his or her own interests. This is the difference between ethics per microeconomics and ethics per the real world. The model is not terrible but not an exact match to reality. Here the difference between the model and reality is an important fact, as is the model, as is the reality. The misconception is that the greed of each brings about maximization of the wellbeing of all. The correction is that it's right only if it's greed for fellow feeling. This is not to throw out the microeconomic principle and say "the altruism of each brings about the maximization of the wellbeing of all." Let's say that a greed for fellow feeling is one that requires an auxiliary greed for one's own interests, and that one does match the microeconomic principles.

Let's take for example your favorite billionaire philanthropist. It can be a partisan issue which of those people you prefer and which you disprefer, so pick whichever of them you think is "one of the good ones." This is a person who built up a massive wealth by understanding and using the principles of microeconomics, to the point of having a great surplus, then put a great effort into trying to give away that surplus in whatever form of charity he thinks will work without backfiring. That's a person who has a greed for fellow feeling. It took understanding "What I gotta do to get mine" and then "How can I help other people do similar." It took understanding microeconomics as one of the tools for amassing wealth for oneself, and it took doing things completely outside the model of microeconomics to try and figure out how to give it away in some manner that really helps without raising incidental harms. The people with the cleanest of consciences are found among them.

All that last part would be a lot less necessary if everyone understood the yin of chou principle as well as most people do. But it only takes a small percentage of people rejecting it to cause big problems. If you do something that will make you and several of your close friends very well-off, but will do a harm to a great many more people, then your conscience will not be clean. No matter what shiny things you buy, no matter how many times your friends in corruption tell you you're a great guy, your conscience will ensure you have a bad time of all that. And that will cause you disutility to a degree that makes none of it worth it. And you'll have to change what you're doing if you want to stop having a bad time of that conscience. You're simply not doing what's best even for yourself in that scenario.
How to solve all that, or alleviate the harms of it, as a singular person with agency? That's hard to say. The vectors are (1) for people with good intentions to improve systems so that they're better proofed against bad actors, and (2) educating people better so that there are simply fewer people who think that the worse is the better course for themselves, and (3) doing as those billionaire philanthropists do. Those billionaire philanthropists are what I see as the pinnacle of what "spiritual but not religious" means. The expression can mean many different things to different people. At the first remove, "spiritual but not religious" means "I believe in the afterlife and divine retribution, just not by the christian god with the caucasian features and beard - it's Krishna who does the divine retribution and the mother tree who does the afterlife, or something." At a better remove, "spiritual but not religious" means someone who has put a great deal of effort into achieving fellow feeling, perhaps first by amassing a great wealth (and even that perhaps required some amount of vanquishing foes in the business arena), and then by making an effort of giving it away in some manner that will end up being effective - retribution and afterlife be fucked, it's just what feels good regardless.

From "Well, I can't know whether or not my mind is the only thing that exists in the world" to "Well, I can't know whether the best-reasoned theories of science really point to regularities in the objective world or are just coincidences", list them all and then the point is (1) yep, can't know for sure and (2) but we go on as if there's some likelihood of each one of them, cuz w t f else would you do, oh yeah, you'd do the same as we in fact do, unless you're stuck on one or more of them, so here's how we navigate that. "Yeah, I sympathize with that. I mean, from your perspective, and setting my feelings aside, you can't know if you haven't imagined me. I would implore you to consider my feelings, but we have to assume that, at the level of doubt we're addressing, that's not material."

What's the difference between incentivizing something and gravitizing that thing? Sometimes, a person is drawn to doing some thing because they gravitate to it. Supposedly, whatever that is that makes them feel drawn to it can be designed into it more effectively or less. So when it's effectively designed in, we can call that 'gravitizing' the product, and then people gravitate to using it. And sometimes a person decides to do something because they've been incentivized. This is either exactly the same thing or close. Let's say this then, if there's some distinction to be made here. If a person decides to do something for reasons relating to 'cool' cognition, then they've been incentivized, but if a person decides to do something for reasons relating to 'warm' cognition, then they're gravitating. So to design something in a way that gravitizes it means that it then has draw by way of 'warm' cognition, and to design something in a way that incentivizes it means it then has draw by way of 'cool' cognition.

(if I don't feel like refactoring) by "no-go principle" I mean "Any statement about the way things work that nothing meaningful can be derived from, so we might as well take its negation as true whether that negation is proven or not."

Coffee houses, cat videos, and environmental effects.
I would like to address the question of how a hooman psyche relates to the present world, and how that differs from how it was at previous times before modern technology. Let us first consider the words of Soren Kierkegaard when he said "There is nothing with which every man is so afraid as getting to know how enormously much he is capable of doing and becoming." I want to assert here that modern technology has had a big effect on this most important of hooman questions and the general feel of it.

Consider one minute in one afternoon of the life of a guy named Bob. Bob loads up a cat video on his computer upon clicking on a link on whatever site he was browsing. The cat video is one minute long, and highly amusing. The cat video ends. Bob is suddenly overcome with the magnitude of what he just forsook. How many things more valuable than watching a cat video could Bob have engaged in with the same information technology? One cat video one minute long can burden a person with a sense that he just took on one ton of opportunity cost and one ton of guilt. It doesn't really have to be that way, but you'll be forgiven if you've felt the same thing. And even if you've never found the thing particularly striking, this environment we navigate in will give you that feel whether you notice it specifically or whether it's a background effect in your mind that you don't notice.

Let us now consider something more typical of a person living in the year 1676, named Bill. Kierkegaard was writing approximately halfway between the lives of Bob and Bill. In Bill's time, a typical way a person could make himself useful to the wellbeing of hoomankind would be something like: go to a coffeehouse, listen to the things people are saying about "There's a thing that happened about a month ago about a hundred miles from here, and here's what prominent intellectuals have been saying about it", and then mostly repeating the same information to the next person who enters after that guy leaves, and maybe discussing it a bit. Sure, it would have seemed a bit inefficient as a way that information gets around, even to a person at the time, and I don't think that's just my modern perspective on it. But there really was little better a person could do about making sure the true and important information gets around. And now you can forsake doing something even that inefficient entirely for a cat video.

Allow me to indicate what the heck I'm getting at now in the form of a remedy of sorts: sometimes, watching a cat video is exactly the thing to do. It would be a damn shame if there were all these cat videos and nobody ever watching any of them. Watching a cat video is one of the greatest things a modern person can do for his spiritual health, and it's a marvel of modern technology. That's granted that watching cat videos isn't the only thing you do, and that indeed you do something more useful with at least part of the day some days. But if you shirk one minute of work to watch one minute of cat videos, the amount of guilt to shoulder is not one ton, and the opportunity cost is also not one ton. You have whatever number of minutes to do whatever you're gonna do with them, and when you waste one of those minutes, the opportunity cost and corresponding amount of guilt that's called for is.. minute. (Hence the name). Yeah, your home computer, if it had the right program on it, could generate the solution to most of the world's problems in the space of one minute. But nobody's ever written that program, or figured out how to.
We're all just kind of here, getting along in our clumsy way, inventing computers and using them clumsily. But those computers wouldn't be any less clumsy with a cat operating them. See, we're still not so far from the coffee houses of yore. Heck, in pictures, a snake or a hedgehog can look majestic, more majestic than perhaps you regard your life as. But if you watch what they do, they're pretty fucking clumsy, too. They can die of cold because they're too stupid to buy a Bic lighter and operate it. And that's something even you can do. It's all very clumsy, and it's all okay.

I'm more of a Dionysian than an Apollonian, more of a left-hand path person than a right-hand path person. One might refer to things like that as inefficiencies, but when I'm talking about inefficiencies, I'm on a topic different from that. A right-hand path person might stick persistently to some process out of some kind of obstinacy even though there's a more efficient way he could be accomplishing the same things if only he were willing to consider alternatives. In that sense, the right-hand path can be inefficient. Even if the person in that example is saying "I stick to this process because sticking to a process that works is efficient." On the other hand, the left-hand path process to actualization might involve many bouts of drinking alcohol heavily and questioning "hey, man, why do we do the things we do in the ways we do them" and most of those times coming up with no fruitful answers. But if even some small portion of those sessions of Dionysian philosophizing results in some answers that indeed can improve how other people can do what they do, and if the only way to come up with those improvements is 10 such sessions that turn up nothing of the sort and 1 that does, that's not inefficiency. That's just what it takes to stumble upon insights. So when that person gets drunk 11 times and comes up with a great idea 1 of those 11 times, he's actually doing the most efficient thing in the world a person can do. That's why it's a bit facile to say that 'inefficiency' is what Dionysian and left-hand path people are up to.

Is mathematics invented or discovered? Supposing I lean heavily toward 'discovered', as heavily as possible, here's how I would reckon it. When I write a math formula I've never seen before that encapsulates a math idea I've never heard about before - I mean, as far as I know I'm the first hooman who has had the idea, and then I worked out the formula that does the task associated with the idea - this still feels to me more like discovery than like invention. When I write a computer program that accomplishes some task, it feels more like discovery than like invention. Even when I write a computer program that other people use, that feels more like discovery than like invention. The way that goes is something like this: all objects and all possible objects already exist in the mind of god, so when you identify some object that's never existed in the world and you make the first of those to exist in the world, all you've done is to pick something from the store of god's mind, and that act of picking is an act of discovery, not invention. This works even if there is no god. All objects and all possible objects exist in the mind of a god that might or might not exist? Okay, we don't need a god or a maybe-god for this. All objects and all possible objects exist in some realm, and when you bring the first of something into this existence, all you've done is pick something from that realm.
Does that require a realm of non-physical existents? No. The realm can be a metaphorical realm and not a real realm. For example, "the set of all real numbers" is something we hoomans have done a great deal of reasoning about, even if we don't suppose there's a nonphysical realm of real existents where lives the set of all real numbers. We don't need such a realm to really exist when we talk about the set of all real numbers. It can be a 'concept space'. This works even if there are things in the concept space that have never been instantiated. For example, if I invoke the number 1.387459348797 by saying the words "one point three eight seven," et cetera, that might be a number that no conscious mind has ever invoked in particular before that moment I said those words. Does that make me an inventor? No. It's hardly an act of creativity to take this 'space' called the set of all real numbers, and invoke one of them that's never been invoked before. And does my act of being the first person to instantiate that number mean that "the set of all real numbers" hasn't been defined well enough until enough people instantiate all possible numbers in that space? Clearly not, since that would require either an infinite number of people naming specific numbers, or a finite number of people naming specific numbers for an infinite amount of time - and clearly we haven't needed to do that in order to reason about the set of all real numbers. So a concept space may be 'well enough defined' even if it has an infinite population, even if only a small portion of the items in that population have ever been identified in our realm.

This principle extends to the set of all possible algorithms, all possible programs that might run those algorithms, and all possible objects that might be thrown together from three-dimensional matter. Even if, for example, I make a technical drawing of a control valve that no hooman has ever drawn or manufactured before, even if I take that drawing and a report about it to the patent office, even if the patent office confirms that I'm the first person who has ever drawn a picture of a valve quite like that, even if I get a patent for it and order a batch of such valves manufactured, and even if I have those valves installed in each of several machines around the world, that doesn't make me any more an inventor than a guy who one day says "one point three eight seven," et cetera. That's why I say all of these things are forms of discovery.

Okay, so everything we call 'discovery' and everything we call 'invention' are really part of discovery. So the word 'invention' is meaningless, or it has some meaning but it's never accomplished? No, it's not necessary to abolish the word. 'Invention' is a fine word, and let's keep using it. What I want to assert is that invention is a subset of discovery. I would like to disagree with anyone who says that invention and discovery are separate non-overlapping sets, and anyone who says that invention and discovery are separate sets with partial but not complete overlap. Invention is a subset of discovery: this I assert. If you find an underground cave system that was never known until you found it, that's one kind of discovery. If you say the words "one point," and then enough random digits such that you name a number that's never been named in particular before that moment, another kind of discovery.
If you make a technical drawing such that a manufacturer can read it and craft something that's never existed before in solid matter, another kind of discovery. The cave, the number, and the physical widget are all things to discover, but in the case of the physical widget, we call that invention. Fine. Then making the first physical widget of its type is invention and discovery, but naming a number for the first time or finding a cave for the first time are discovery but not invention. So go the subsets.

So what does it mean for me to be a mediocre programmer? I've made some decent computer programs, but a better programmer could have made programs that do the same things better, i.e. programs that come up with the same output with a far smaller demand on the computing resources required to do the task, and in fewer lines of code. That just means I'm clumsy at searching god's brain, or searching the realm of nonphysical existents, or searching some realm that doesn't need to have 'real existence' itself. And people who are better educated than I am have learned how better to search god's brain, or search the realm of nonphysical existents, or search some realm that doesn't need to have 'real existence'. They're discoverers on better expeditions with better equipment.

A possible corollary to Plato's realm of the forms. Suppose the object "the number three" really does exist in a real nonphysical realm. Suppose further that the object "the set of all real numbers" also exists in the nonphysical realm. And suppose further that it can only exist if every member of it has been instantiated in particular, at least in the nonphysical realm. Suppose further that an infinite number of nonphysical monkeys on an infinite number of nonphysical typewriters can type for what would seem to us like an infinite amount of time, in an amount of time that to us is really an infinitesimally short instant. That might have been what happened at the instant of the creation of this physical realm. They would also need an infinite number of nonphysical monkeys on an infinite number of nonphysical computers running AutoCAD. Or maybe it really did take them an infinite amount of time to populate those sets, and this physical realm only came into existence after an infinite amount of time passed there for them to set up those conditions.

"Pain is an illusion. An illusion that really, really hurts." - Mighty Ducks (TV series)

What does it mean to make a big decision about what to do with your life? You can do it half-heartedly, not really considering the options - some people opt for that. You can get stuck considering the options and never making a real decision - some people do that as well. Or you can make the biggest opportunity cost decision you could possibly make. I've decided that I don't want to be a musician. That's even though it's one of the things I have the natural talent for - I mean, if I decided to (supposing I could bring myself to decide to) I could give that a real go, be successful at it, really push some forefronts once I got really good. But I picked something completely different from my set of natural talents, something that has essentially nothing to do with music composition. And I know that if I really give that a go, I will have to bury that alternate universe version of myself who is a musician. And that's not easy, because he's really hecking good.
I love it when I hear a song that has an instrument that's not common to many songs, and I can identify "Oh, right, of the hundreds of songs I'm familiar with, this and this song have that instrument, and none of the other ones do." And I almost shudder to think of how many other things like that I would be able to think if I were a serious musician. I find chords and keys fascinating, but I know almost nothing about them - only enough for it to be interesting, but not enough for it to be more interesting. Still I'm not a serious musician - I'm not a musician of any kind except for maybe really really casually for an hour once in a while. To make the opportunity cost decision of what you're going to do with your life, you have to identify perhaps a dozen or more of those people you could be, as impressive as each one is in an array of different ways, and bury all but one. It's that, or get stuck, or give up altogether. By definition, this is the hardest decision you'll ever make (if you ever make it).

Unsorted Pile 3

Re the take-home about determinism. There's a difference between (1) clumsily accepting parts of "both sides" and (2) getting the balanced nuance right. Suppose there's a guy named Bob, and he hasn't done much philosophical discourse, but he's heard from various movies and casual conversations things like "Hey, man, we're all just chess pieces with no choice about what we're doing" and also "Hey, man, no matter what happens to you, you can make a choice to do what refuses all your circumstances." So he's heard, in some form of words, the ideas that form the foundations of determinism and of free will. That's better than never having heard anything about any of that, but it's far from good. The people who have this sort of jumble, but nothing by way of sorting it out, they'll sometimes do bad reasoning while citing determinism as a justification and sometimes do bad reasoning while citing free will as a justification. To understand the nuances of how it works: this also involves sometimes citing reasons that point to determinism and sometimes citing reasons that point to free will (or rather "what determinism doesn't restrict"), but doing bad reasoning far less often and good reasoning far more often. It's not to say something like "On Tuesdays, Thursdays, and Saturdays I believe that everything in the universe is determined and there's nothing we can do about it as determined things in the universe, but on Mondays, Wednesdays, Fridays, and Sundays I believe that some entities, us included, have an ability that can supersede how everything else is determined and therefore we are the few undetermined things in a universe otherwise determined." When you work it out, it works a bit better than that solution.

The universe is ridiculous, and you should reflect that. I don't mean reflect on that. I mean that to be a microcosm of the universe, which by the way is a good idea, you must become ridiculous like the universe is.

Murder should be illegal. That's just my opinion, maybe an unpopular opinion, certainly a contrarian opinion. A judge in Canada decided that murder is legal when the person committing the murders happens to be of the ethnicity we call First Nations. Here's what he said. I'm going to translate this from "Legal-ese", meaning the way lawyers and judges word things in English, into plain English.
"This guy's great-grandparents had a really hard time on account of certain really mean things that our great-grandparends did, and therefore when this guy goes around stabbing people to death, that's not against the law." This is not an exaggeration. The judge said that in really obfuscated language, like "The extenuating reparations that come to bear on the present judiciary action" and stuff like that, but no exaggeration. There are people going around stabbing people, good people, unprovoked, until those good people die (of being stabbed to death), and then the people officiating the law are saying "Well, he's of a certain ethnicity, so all that murdering, that's not against the law." You can get accustomed to stasis, but you can also get accustomed to things constantly getting worse. This is the great risk in the world presently. Back when times were simpler, a person could get accustomed to stasis, and a person could push back whenever things change in any way. Now you can get accustomed to things constantly getting worse, and never push back against anything. Things can get a lot worse that way. Interest, incentive, motive, decision, action. That's the mechanism for understanding what most people do most of the time. Perhaps not surprisingly, the greatest of the ills of society can also be explained with reference to this model. Let's start with the simplest case. I have an interest in not starving to death. Because of that, I have an incentive to keep food in my home, and to buy food when there's none at home. Because of that, I have the motive, when there's no food at home, to get some. Because of that, I have the decision to get some when there's none at home. And because of that, I take the action of getting some when there's none at home. That's interest, incentive, motive, decision, action. I took the action of driving to the store today. Why? To explain why, you can point any number of steps back in the mechanism. I drove to the store today because before that I decided, "I will drive to the store today." Action in terms of decision. That explanation doesn't do much. "That's what he did because that's what he decided to do" isn't much of an explanation. But explanations that span more of the mechanism shed more insight. If I relate the action all the way back to the interest, I can say "I drove to the store because I don't like starving to death." Well, this can seem kind of vague because the explanation spans too many of the steps and leaves out too many middle bits. A good explanation would be something like "I drove to the store to get food, because I was out of food." This explanation relates to several points along the mechanism. The reason for most of society's ills is because people are mistaken about what their interests are. Because they're mistaken about their interests, they get the wrong incentives. Because they have the wrong incentives, they have the wrong motives. Because they have the wrong motives, they make the wrong decisions. And because they make the wrong decisions, they take the wrong actions. And all those wrong actions, taken together, constitute most of society's ills. If people knew their interests correctly, they would have the right incentives, then they would have the right motives, then they would make the right decisions, and then they would take the right actions. And the greater portion of the ills of society would be remedied if that happened. This sounds vague. Naturally. Because so far I've skipped a lot of middling steps and details. 
I've only made a statement relating the start and the end of the mechanism. Lorem something about how most people have the working assumption that they're optimizing for what's best for them and that other people count nothing toward that, and fewer people know that the best interest is "greed for fellow feeling". And lorem about how "greed for fellow feeling" is what's best for your conscience, and if you understand that then you know that it's the best interest you can have. When the penalty for committing a crime is just the cost of doing business, and not preventing the crime from being the most profitable option, then the penalty is not a deterrent. We all of us have the power to accomplish a maybe, and it's a real maybe. Maybe you do what you do and nothing great happens as a result. Maybe you do what you do and something great happens as a result. Some of that's out of your control. But when you do that thing, it's a real maybe until the further results come in. A real maybe: that's something you can accomplish. Accountability's out because it's so easy to be duped, and it's so hard to avoid being duped. Can you blame a person for being duped? These days? No. So accountability's out. "I can't even tell when you're joking and when you're serious." "You can't tell when I'm joking and when I'm serious. Welcome to my world." "Because you can't tell when you're joking and when you're serious?" "Yeah.. that's how the professionals do it." "The professional whats?" "That's.. how professional fiction writers do it. I mean even some of the really good ones. Really. It's possible to be so busy coming up with things you could say that are self-consistent, I mean to the point of plausible back-story and all that, and be too busy to work out which ones you even believe." "That sounds disorienting." "I don't even know what my skin color is." "How can a person get by like that?" "But it's not disorienting. For the most part, it's quite the opposite. I, for one, don't mind it." Live in such a way that if someone were to try to do an impersonation of you, whatever they chose to do wouldn't do much of a job at best. Unsorted Pile 4 The real pity is just how stupid those factors are, preventing how much they're preventing. Unsorted Pile 5 If you want to empty a bucket, but it's already empty, then you can't empty it even though it's empty. If you want to fill a bucket, but it's already full, then you can't fill it even though it's full. [...] It has to do with "work" and "rest". It has to do with "linear" and "lateral" thinking. I use the analogy of a bucket. If you want to use a bucket to move a big volume of water from one location to another, you have to alternately fill it here and empty it there. To engage in linear thinking all the time and refuse to do the lateral thinking part, it's like trying to keep filling the bucket when it's already full. To engage in lateral thinking all the time and refuse to do the linear thinking part, it's like trying to keep emptying the bucket when it's already empty. Many great artists do a whole one year hiatus once every few years, and it works for them. I wonder what the best way to time these periods is. At the circadian level, I'm sticking to one period of being awake and one period of being asleep per every rotation of the earth. That part's pretty straightforward. What about the rest of it? 5 days of weekdays and 2 days of weekends? Stick to that rhythm for about five and a half months at a time and then do all of half a month off in addition? 
Stick to that rhythm of rhythms for six years at a time and then do all of a one year hiatus? A rhythm of rhythms of rhythms of rhythms? Something along those lines is right. As long as one gets the rest of the details right as well. Whatever "the rest of the details" is, that's a concept I want to keep making a little less slippery in what follows. During the 'on' times, it's of course good to stick to some set of models for how one will be working, e.g. have a schedule that says how many hours a day you'll work, have project timelines with targets for what to deliver at what future times, et cetera. During the 'off' times, it's good to have little or none of that stuff. Also not a coincidence. Linear thinking is largely a process of sticking to models of thinking, and that's during the 'on' time, and that's also when you stick to models of work processes. Lateral thinking is largely a process of getting outside the models and instead assessing whether the models you've been using are good or can be improved, and that's during the 'off' time, and that's also when you don't stick to models of work processes. [...] "If I don't get some distance from this, I might miss something!" is something too often said by the guy who errs toward trying to empty buckets that are already empty. [...] "Are you working, or are you shirking?" is something too often said by the guy who errs toward trying to fill buckets that are already full. [...] Why the aversion in either case? Why are some people averse to doing sufficient lateral thinking and some people averse to doing sufficient linear thinking? Because both kinds of people don't like feeling lost. When you switch from linear to lateral thinking, you start to become estranged from some things, but you also start to become familiarized with other things. And when you switch from lateral to linear thinking, you start to become estranged from some things, and familiarized with other things. Lost and found. Every time you switch modes, whether one way or the other, you get a little lost in some ways and a little found in other ways. Thence comes the incentive for getting maladaptive one way or the other. So these are things to understand, and then to overcome using knowledge about them. [...] these two modes work together somehow to produce something better than either one alone, at least in the best of cases. Lateral thinking is largely about choosing between models, and linear thinking is largely about doing things with reference to chosen models. Neglect lateral thinking, and you're liable to stick to models that have stopped applying well, or that never worked very well to begin with. Neglect lateral thinking completely, and you might not even know that what you're doing is according to some model, or even know what a model is at all. Neglect linear thinking and you'll be all plans and no execution. This can become a unique kind of torture. Forsooth, it is possible to become the world's most capable maker of clever plans that really could work, and then fail at being willing to switch modes, to either hunker down or figure out how to get other people to do the work for you. That's a real trap, and real painful. Don't ask me how I know that. Other failure modes: wasting your weekends watching superhero movies. It's just a bunch of people punching each other. [...] Questions along those lines include: "What the hell do you mean when you say I should think outside of a box?" and "What box?" Lots of tough questions there. [...]
When it all goes right, the combination of lateral and linear thinking shows up as a certain combination of breadth and depth and a certain combination of style and substance. [...] The two modes of thinking reinforce each other. Of course you can only make good decisions about details after you've thought about strategies, but also you can only make good decisions about strategies after you've thought about details. And these two modes of thinking are not always perfectly distinct. On days when you're mostly working on details, you do have to make plenty of finer-level strategy decisions, and on days when you're mostly working on strategies, they'll only make sense with reference to some amount of coarser-level worked-out details. These are not things you do all of one on some days and all of the other on other days. Lateral and linear thinking is a dichotomy that is meaningful, but you don't really think about this dichotomy when you're doing either kind of thinking. The details of how you use these two things are essentially at an implicit level in practice. And plenty of thinking activities don't reduce clearly to being one or the other. It's a yin-yang in addition to being a yin and a yang. "How have you not learned that yet?" and "How have you not unlearned that yet?" There are similarities and differences. Existential dread of a calm sort for an hour or two now and then, that's a sign you haven't become complacent about absolutely everything yet. It's a good sign. Disapproval is just something a working machine does to a broken machine sometimes. Correction is preferable in some sense of the word 'always', but we're busy machines. Often, we're too busy to correct, so we disapprove because it's quicker. The semantic meaning behind the disapproval is not necessarily "You could have done otherwise in the same circumstances." Disapproval can be shorthand for the semantic meaning "There are ways you can improve." Or sometimes the semantic meaning of disapproval is that you just ran out of chances. We can accept any machine that can set itself right given this many chances, and we cannot accept any machine that's still failed to set itself right given this many chances. One sure sign of the health of a society is how many chances of that sort it gives. And another sure sign of the health of a society is what the cutoff number of those is. A machine that still can't correct itself given that many chances, we have to do away with it. We may weep at the appeals after that decision is made, but.. anyways it's easy for edge cases to confuse us about what the principles are. Unsorted Pile 6 Quotation from Waking Life about where should I drop you off (boat car scene). [Re the summary of the response of Ginet to O'Connor] The difference is the relation between the agent and the uncaused causes: whether he starts them or is them. By the time of Newton, almost all scientists agreed that we could, or eventually would, be able to account for all swerves. No swerves would remain forever out of the grasp of our analyzing. Liezi axe story. Divergence miracle doesn't escape determinism. The factors that go into the act of changing the past include the past of this universe plus what other worlds are available. It's just determinism with a bigger past universe. David Lewis might be to blame for the Berenstein Bears fiasco. One minimal condition of having a functioning society is that crimes are illegal, and the penalty for committing crimes is not just the cost of doing business.
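To make the "cost of doing business" point concrete, here's a minimal sketch of the arithmetic. The profit, the conviction probability, and the fine are all made-up numbers, chosen only for illustration.

```python
# A minimal sketch of the "penalty as cost of doing business" arithmetic.
# All numbers are hypothetical, chosen only for illustration.

profit_from_crime = 1_000_000      # what the crime nets if it succeeds
probability_of_conviction = 0.10   # chance of being caught and convicted
fine_if_convicted = 2_000_000      # the legal penalty, as a fine

expected_penalty = probability_of_conviction * fine_if_convicted  # 200,000
expected_value_of_crime = profit_from_crime - expected_penalty    # 800,000

if expected_value_of_crime > 0:
    print("The penalty is just a cost of doing business; it doesn't deter.")
else:
    print("The penalty makes the crime unprofitable; it can deter.")
```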
Why do these people who can afford the penalty for committing crimes keep committing crimes? How incredibly strange. Could have done otherwise if his desires had been different and assuming no outlandish science fiction devices were being used. When I'm in a familiar environment, I'm in a narrow way, and it doesn't occur to me what radically different things I could be doing with my willpower. When I'm in a less familiar environment, I'm not in a narrow way, and it does more often occur to me what radically different things I could be doing with my willpower. Don't ever presume that you'll be any good at doing the broadest kind of planning in a one hour block in the middle of the work day in your office alone. Lorem what this has to do with the extended mind. Every decision I make, I could have done otherwise if my desires had been different. But Frankfurt proved that sometimes under certain conditions you could not have done otherwise even if your desires had been different. And we each encounter those conditions in zero of our decisions ever. So even if Frankfurt is right, you can say that a decision carries moral responsibility when the decision is a non-Frankfurt kind, and that happens to be 100 percent of the time. Frankfurt cases prove a 'some x are y', not an 'all x are y' or a 'no x are y'. Morally responsible = could have done otherwise if desires had been different. (a) Sometimes decision and could have done otherwise if different desires, (b) sometimes decision and could not have done otherwise if desires different. Morally responsible in (a), not morally responsible in (b). If Frankfurt cases prove one thing absolutely, it's "not always does moral responsibility require the possibility of having done otherwise". Then we find that these "not always" cases never do happen, and we're left with those 100 percent of cases wherein there's moral responsibility and the possibility of having done otherwise, which is back where we started. That's why Frankfurt cases get us nowhere even if they work. How would a hard determinist, a soft determinist, and a free will libertarian respond in different ways to what Frankfurt cases prove? Lorem. "Given the way things are, it is inevitable what I will do with my degrees of freedom. I don't feel that as a constraining condition to be in. It doesn't bother me. And if it did bother me, then I would be bothered by something I have to live with but can't do anything about, which many people by definition would call a silly sort of thing to let bother me." "It does bother me even though I have no control over it. There are lots of things that I can't control that bother me even though I can't control them. And I haven't found the injunction 'don't let things you can't control bother you' useful. Much as I'd like to follow it, much as I think it's a good idea, much as I'd like for it to work, it doesn't work. Not for some people. Saying it doesn't make it work." "In the case of determinism, you're bothered by the fact that you don't have a thing that can't even be made intelligible?" "Yes. I don't know why, but what I take as the dissatisfactions I have, they include even things like that." There was a long silence. "I claim them all," said Bob at last. High and low abstractions in Game of Life. You can describe a travelling object in terms of a name and coordinates, or in terms of all its cells. When it collides with something, you can only describe the collision in low-level terms. So what's the difference you achieve by using abstractions?
You can compact information better than if you start and stay at the low. We could be in a turtle's dream in outer space - always Sunny. Aside from analytic truths, the one statement I can be most sure is true is "consciousness is". Emerg cause = I did that because I felt like countermanding my circumstances. Whether you succumb or you countermand, it can be described in terms of metaphysical determinism or emergentist free will. In whatever case, the account kind of works, kind of is clumsy. Ghosts are real, but I have no reason to suspect that they're free of the constraints of the two types. Emergence, subatomic particles, atoms, molecules. But when we explain things in terms of what molecules do, the explanation never conflicts with what we say protons and electrons are doing. "If free will isn't real, then why did I just pick up this pen?" "Because picking it up and asking that was a good way of asking about free will." We have this society, and it's failing to deliver on access to the True and the Good. The institutions should be doing better, because they should be more responsive to the right kinds of reasons. They could be improved by an attitude adjustment. Nested yinyang symbol. Getting good at poker decisions is to some extent garden of forking paths and to some extent that but short-sighted. "Fatalistic absolution". The third substance problem and the two substance conjecture. "Freedom is the ability to make deterministic decisions based on reasons." "Then why call it freedom?" "Because it gives me many abilities that rocks don't have, such as moving away from danger. All those abilities, that's a lot of freedom, even if I choose between them deterministically." A non-chaotic system: the physics cannonball example. Do the simple dynamics of a cannonball, change the input conditions slightly, and the output changes only slightly. Butterfly effect counterfactual as in if you were to add or subtract a butterfly. It makes only a confusing kind of sense to say that the butterfly caused a storm. It was really the butterfly plus everything else, and the butterfly was part of that everything all along. But it makes clearer sense to say "if we had disappeared this butterfly at this instant, the storm would not have happened." I'm made of stuff that gets to say, "I'm not made of stuff," but that doesn't make it true. Illustration of how you might try and fail to combine the two types and get a third. Lorem - not sure but pretty sure, if one makes an illustration of this kind and notes its characteristics, it can show how Kane's account doesn't produce a third type, and where his mechanics plus some clever uses of language collapse into nothing newer than what came before him. Of the things the third type theorists want indeterminism to do: half of them nothing can do and half of them determinism already does. Someone with a gun pointed at him couldn't have reasonably done otherwise? But the guy holding the gun could have instantaneously transformed into tapioca pudding and collapsed to the floor. "Why did you do all the work to write that book?" "I had no choice. I had the book writing task, and it had to be finished." "Why did you have the book writing task?" "I chose it." Stole because starving vs stole gratuitously vs edge cases. Stole because starving could reasonably be called coercion. Stole gratuitously could easily be called not coercion. An edge case might be someone who had to choose between stealing and taking a job with awful working conditions. Determined to act as if counter-determined.
Determined to do the exact opposite of what he would do if more simply determined. In the extreme case, it doesn't even require saying that a person decided to act in a manner opposite to how he would have acted as if determined. It only requires saying that he could weight one of his influences higher than all the others combined, that one being principles that he had from earlier, and the others being his immediate circumstances. When you put it that way, it's not exactly the sort of thing that would confuse a determinist. Hard and soft determinism: call it two different questions, two different topics, two completely different subjects of inquiry if you want. "Talking about hard and soft determinism in one book is more arbitrary than talking about them in two separate books. Hard determinism is about metaphysics and soft determinism is about ethics." "Guilty of first degree murder" is something like "American dollar". What exactly does it mean to be guilty of first degree murder? Something very specific as we defined it with many conditions. What exactly is an American dollar? Something very specific as we defined it with many conditions. Both of these things have legal consequences. The legal system will do things to you if you're guilty of first degree murder or if you do certain things with American dollars. Why? Because enough people agreed to the rule sets. If someone walks up to you and punches you, and you think that the appropriate thing to do after that is something other than what it would be if he had tripped and hit you, then you're either a compatibilist or you believe the same things that compatibilists do - refactor the terms if you want. And that's because there's a meaningful distinction between determination and constraint - or if you don't prefer those terms because you think they're the same thing then refactor those terms as well. There's a difference between an if-then and a line that just says to do something unconditionally. And you're more like an "if-then" than a "do [something]". There's a difference between a pair of scissors and a statue of a pair of scissors. And you're more like a pair of scissors than like a statue of a pair of scissors. Call the hard determinist thing free will, which doesn't exist, and the soft determinist thing free agency, which exists behind some actions but not others. "Oh, when I wrote a book, I felt like the topic was fated, and putting in the work was a choice, and one that took quite some willpower." "Oh, I guess your yinyang and mine have opposite orientations." It would be a fallacy to say "let's not deal with silly things like infinite regresses" (but some people indeed do say that). So we've got everything about the game encapsulated into these two big, abstract concepts, and they're opposites, but they're also.. identical? Fuck. [about "the best offense is a good defense"] Whatever the mind-body problems soul stuff has, it also has the third type problem. Let's set aside [list all the mind-body and such problems soul stuff has, and set aside, and set aside]. It still has the third type problem. When there are problems and then solutions, it's because people become informed about problems, why they matter, and how to solve them. When that works, it's because we have the ability to make decisions based on information. Nothing about determinism says that can't happen. In fact it says how it can happen.
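Here's a minimal sketch of that "you're more like an 'if-then' than a 'do [something]'" point. Both routines below are fully deterministic; the difference is that only one of them is responsive to information. The scenario and the names are hypothetical, made up just for illustration.

```python
# Two fully deterministic routines. Only one responds to information.
# The scenario and names are hypothetical, for illustration only.

def statue(information):
    """A "do [something]": ignores information entirely."""
    return "stand there"

def agent(information):
    """An "if-then": deterministic, but its output depends on what it is told."""
    if information == "the building is on fire":
        return "leave the building"
    if information == "there is food in the kitchen":
        return "go to the kitchen"
    return "keep doing what you were doing"

# Same determinism in both cases, but only one of them makes decisions
# based on information.
print(statue("the building is on fire"))   # stand there
print(agent("the building is on fire"))    # leave the building
```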
To whatever extent you say, "free will is how we'll solve it, and denying free will prevents solving it," that's just making wishes without a plan, and denying the value of making plans and having them. Whence the freedom vs whence the control ([redacted]'s stupid framing). 1: the freedom comes from freedom meaning things other than metaphysical freedom. 2: that's both not something that can be made comprehensible and also not how biology works. Could have acted as if responsive to the wrong kinds of reasons if his attitude had been worse. When someone does something praiseworthy, this condition works just fine as a way of saying they could have done otherwise. Determinism is an approximation with virtually no margin of error. Free will is something like a metaphor. Stupid FAQ: if determinism were true, then everyone would act identically and they would all have the same personality. Stupid FAQ: evolution requires fundamental randomness, because how could things evolve if it's all deterministic? In general, the argument that overstates the case for emergence goes something like this: "Water is wet, but a single molecule of H2O is not wet, and no matter how much you study one H2O molecule, you won't realize the fact that water is wet. And that's why reductionism fails when there's emergence." "What happens when we put a bunch of these together?" is part of studying what the one thing is. When I consider an H2O molecule, the questions I can ask about it include, "What's the angle between the two hydrogen atoms?" (answer: about 104.5 degrees), and "What happens when you put a bunch of these together?" (answer: You'll have water, which is wet, and it will conform to the bottom of a container at room temperature, and so on). Questions about the emergent effects of a thing are just a category of the questions you can ask about that thing. Sure, emergence is an important concept, but questions about what emerges from a thing are still just that category of questions you can ask about the thing. And what's important about this is: reductionism doesn't fail when such questions happen to have fruitful answers. How can I use the deterministic laws of physics as a tool to harness the indeterministic laws of physics and then pick from the genuinely possible futures the one I want? It would be nice if that's how it works, but that's not how it works, or how it possibly could work. If hidden variables are true, then there's some deterministic process at the scale of picometers or smaller that produces an extremely near-perfect pseudo-randomness at the scale of nanometers in subatomic particles. Then patterns in how those pseudo-random processes interact produce an extremely near-perfect determinism at the scale of micrometers in atoms and molecules. Then patterns in how those pseudo-deterministic processes interact produce again a near-perfect pseudo-randomness at the scale of centimeters to decimeters in something like a computer running a pseudo-RNG, or in an animal brain, or on a roulette wheel. Then patterns in how those pseudo-random processes interact produce again a near-perfect determinism at the scale of hectometers in the steady profits of a casino. There seems to be no limit to the number of levels of emergence where we see at one level a near-perfect model that's the exact opposite of the near-perfect model of the previous level. There could be lorem number of these flip-flopping levels between the strings of string theory and quantum mechanics. Yoctometers to gigameters: how many flip-flopping layers?
There's probably some realistic limit to how many between string theory objects and Dyson spheres. You seem to be holding onto an insane amount of hope, maintaining something as hopeless as the prospect that there really are tiny people inside your TV. Have you tried giving up? Do you need me to promise you that you won't lose anything worth keeping if you give up? I'm willing to promise you that. Will you try it now? Give it up. This book does not try to mislead, or to be unfair, but it does favor certain stances over others. It is not an attempt to provide an impartial survey of the popular stances, which I don't think is even possible, because of how stupid several of those stances are. Suppose there's a computer with the program: If beamed aboard Newcomb's game room, then say, "I think I'm smart. I chose both boxes". If you think both boxes is the right answer, all you're doing is that plus extra words. This is difficult to understand. Lorem is it possible in Game of Life to produce something that looks like an organism that has a body and mobility and acts as a local entropy reverser? I don't know. I think maybe the analogy breaks down when proposing that as the right question, because Game of Life assumes a power source that runs the steps of all the tiles. Game of Life injects negentropy into all cells, so the game is no simulator of generally increasing local entropy with local entropy reversers. Because Game of Life is Turing complete, you can have in it a simulation of a computer that runs a simulation of agents that locally decrease entropy. That would be a simulation within a simulation, where the simulation being run on the simulated computer has some model of entropy and local entropy reversers. You can also make a computer within Game of Life that runs something like a sorting algorithm and decreases the entropy of an input to the computer. In some ways, blame and praise are symmetric mirror images. In other ways, not. In terms of the plausibility of doing otherwise, these can be symmetric in that the person who is the most praiseworthy is the most reliably good and the person who is the most blameworthy is the most reliably bad in cases where there are no exempting or excusing conditions. In terms of what we want to do about these people, the mirror symmetry is directly opposed: we want the reliably good people to keep doing the sorts of things they're doing, and we want the reliably bad people to stop doing the sorts of things they're doing. Suppose someone deliberately walks up to you and shoves you, and you say, "Careful, there's a bump there." Then later that day, someone trips on a bump and stumbles into you, and you say, "Hey, what the fuck is going on." If you grant that this would be absurd, that there are relevant differences between the two cases, and that talk of those differences matters, then you will have to grant that at least some of the things in the compatibilist literature have relevant things to say. In the minimal case, that would be only the parts that have nothing to do with metaphysics. If you say that a deterministic description is the best way of understanding hooman decision-making, that's like saying you could go to the casino, watch the speed and position of the roulette ball, and place a bet on the right number every time before the ball comes to rest. But no one has ever done that. That's why there's zero purpose in calling the outcome of one round of roulette determined. And it's also why there's zero purpose in calling a person's decision determined.
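To put a number on the roulette example, and on the flip-flop between levels from a little earlier, here's a minimal sketch. The wheel is a single-zero wheel with even-money bets; the bet size and the spin count are made up for illustration. Any one spin is anyone's guess in practice, while the average over a million spins reliably lands near the house edge.

```python
# A minimal sketch of "random at one level, nearly determined at the next".
# Single-zero wheel, even-money bets; bet size and spin count are made up.
import random

def house_profit_per_spin(bet=10):
    """One spin: the bettor wins on 18 pockets out of 37, loses otherwise."""
    pocket = random.randrange(37)          # 0..36, each equally likely
    return -bet if pocket < 18 else bet    # house pays out or collects

# One spin is unpredictable; a million spins is close to a sure thing.
n_spins = 1_000_000
total = sum(house_profit_per_spin() for _ in range(n_spins))
print(total / n_spins)  # reliably close to 10 * (1/37), about 0.27 per spin
```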
Ignorance might as well be quantum uncertainty for all it matters. What about high-level causality and especially top-down causality? Well, you can decide with your whole brain to remove part of your brain. If I take a low-caliber pistol, and put it to the side of my head at a forward angle, and fire it, I could probably remove a chunk of the frontal lobe of my brain without suffering a mortal wound. The result of this would likely be similar to having a frontal lobotomy. Does this disprove reductionism? No, it only goes to show that at some time I can make a decision with my whole brain to do something to part of my brain, but the consequences of that will be at a later time. If I did that, I would probably stop doing things like writing books and start doing things like watching televised sporting events, but that change would take place when I fired the gun, and the way my brain was before that decision would be the cause of how my brain would be after that decision. And after the lobotomy I would also stop being able to come up with clever self-help schemes such as that, so most of the high-level emergent effects of the brain would cease. I wouldn't retain high-level abilities, because so much of the low-level mechanisms that used to give rise to them would have been blown out. So, if anything, this proves that emergent effects depend on low-level effects, and that confusions can arise when you forget that effects follow causes, and the system is in different states at different times when there's high-level causation. Frankl put an extreme amount of decision weight on the factor called remaining alive. All the other people who survived for a while and then gave up put less decision weight on the factor called remaining alive, and then the sum of other factors became greater. But whence these differences in decision weights? That's more like roulette. What's the relevant difference between, "You can't decide how tall you are," and, "You can decide whether to be virtuous"? A hard determinist might say "You can't decide how tall you are," and, "You can't decide whether to be virtuous," if he prefers to say that determinism entails that decisions are not possible. In some sense that we take to be intuitive, however, you can decide one and not the other, at least to a much greater extent. Even in a case like this, I sense a warning flag that tells me it's silly to disagree about definitions. If the hard determinist prefers to say that decision is one of the things that's negated by determinism, fine. There's something related to the parts of your brain commonly called 'deliberative' that accounts much more for how virtuous you are than for how tall you are. If you don't want to call that 'decision' and its outcomes, fine, then call it something else. But no one can correctly deny that among the things that account for the difference between how virtuous you are and how tall you are, one of the major factors is the parts of your brain commonly called 'deliberative'. Oh, fuck. I decided to avoid the silliness of bickering over definitions, and now I've wound up in the silliness of circumlocution, and probably of pleonasm as well. Well, if I'm going to stop talking like I'm up some ivory tower, then I'd like to speak plainly and use some word that means whatever that difference is between what your height ends up being and how virtuous you end up being. I prefer the word 'decision'. That's just a definition.
If you hate that definition, then just cut-and-paste some other word for when I say 'decision' hereafter, because I'm just not going to bicker about the definitions of words, but I am going to assert that we have to use some word to mark what this difference is. Okay, so how virtuous you end up being is subject to the results of your decisions, or your deliberations, much more than how tall you end up being is. Further, this is related to the idea of self-control. Model of good use of willpower as a currency: (1) budget what things you're going to spend on, (2) seek good deals, (3) invest. Example of (1), prioritize better things to spend time on more than worse things to spend time on. Examples of (2), make effective plans, and decide on efficient procedures. Examples of (3), maintain good health, do something like a gratitude journal or at least some kind of contemplation of suchlike. Why is our insight into our reasons so far from great? First, there's a theoretical limit that you can't know everything about what you know. Second, there's a limit built in as a psychological defense mechanism. Third, evolution did it to us to keep things interesting. Fourth, brain mechanisms consist of high-gain switching circuits. Fifth, brain mechanisms consist of massively parallel and many-layered neural networks (we can't even extract meaningful reasons when we make our computers simulate that). When you make a hard decision based on feel, it's because you can't do it explicitly, because you don't have much explicit access to your reasons. More movie/TV recommendations: Companion, Death of a Unicorn, Star Trek TNG Tapestry. Emergentist causalism without the spooky realism. Free will is a causally effective illusion or a causally effective emergent effect. It is not something that makes determinism false. (effective only in the sense of handy in terms of talking). All that stupid stuff about inclining without necessitating and uncaused cause is sometimes the best working model. If I count the number of emissions from a radioactive sample over a given time interval, that inclines without necessitating. For example, a count of 9 or 10 or 11 in a second is more likely than a count of 1 or 2 or 20. More movie/TV/etc recommendations: Library of Babel simulator. How can mirrors be real if our eyes aren't real? It's a popular meme that's held up as an example of pseudo-profundity, but it's actually a wonderful example of how theology works. Every time you make some trivial decision, like you decide to pick your nose right now instead of one minute from now, there's some chance that you cause a hurricane a thousand miles or more away that kills 1000 people, compared to the counterfactual world where you decided otherwise. But there's an approximately equal chance that deciding to pick your nose now instead of one minute from now prevents such a hurricane, compared to the counterfactual world where you decided otherwise. The temporarily embarrassed millionaire scheme is one of the gamebreaked versions of free will. It's like how most brands of Christianity tell their congregants that they have just the right formula for summoning the magic Jesus power. There are people who seem like they should have general competence in things, but they're extremely stubborn in their refusal to evaluate their attitudes and potentially change them. How is this any different from other kinds of infirmity? If they've been cursed with this stubborn rigidity, then why do we treat them as blameworthy?
The 'if' clause that hinges on attitude will refer to something that won't be happening. So this is a curse that appears less like a curse than most kinds of handicap: being the kind of person who plugs their ears when someone says to them, "Consider your attitude." What's to be done about free will and determinism? 1: Fix things that are gamebreaked on account of free will. 2: Fix things that are gamebreaked on account of determinism. 3: Understand how things work. 4: Help other people understand how things work. A computer, once it's been set to run a program, if it's been given the data, and given the program to run, ought to be able to come up with the output as soon as the "run program" button is pressed (a fallacy). David Lewis's account does not explain why the world is not the same as Black Mirror 7x2 Bête Noire. When we talk about a person's being culpable for an action, we're talking about their free agency. If we use the language of metaphysical free will when talking about free agency, that's just because the language is handy. Compare: "around sunset time last night" and "around that time last night when our 'up' direction had rotated to about 90 degrees with the line between the sun and this part of the globe". The high can affect the low only on the following conditions: (1) This is just the same as saying the low can affect the low, and (2) in such a way that the effect is at a later time than the cause. The low at one time can affect the low at some later time. There is nothing about emergence that does anything more special than this. Hard determinism carves physical nature at the joints and soft determinism carves hooman nature or ethics at the joints? Not really. A hooman is quite unlike any other known object, and, short of indeterministic freedom, it does have kinds of freedom that we don't see other things having. So you could say that there are different kinds of metaphysical freedom, that hoomans have several of those, just not the indeterminism kind. If we're using the term "metaphysical freedom" to mean the negation of determinism, and if we're insisting that this is the only metaphysical question on our plate, then soft determinism has nothing to do with metaphysical freedom, but with other senses of the word freedom. What's really the case is that there are senses of the word 'freedom' that are metaphysical other than the sense of freedom that means the negation of determinism. Reductionism never said you can take the state at time 10 and figure out what the state at time 5 must have been. It says you can take the state at time 10 and figure out what the state at time 11 (or later) will be. And reductionism says you can derive that from the lowest-level description, but it doesn't say that the description at the lowest level will be as handy as the description at higher levels. Likewise determinism says nothing about the retrodictability of the past, and all it says about predictability of the future is that it's hypothetically predictable with a big enough computer. If emergence makes one true fundamental assertion about how things work, it is this: the high is often more handy than the low as a way of talking about things. That's it. If free will has the status of useful to call as if real sometimes, what status do the positive accounts have? Silliness. When you make a decision, there's often a shortage of your access to what reasons you used. Proposed reasons for this shortage: because free will is false, because determinism is false.
When I hear about free will being a matter of making decisions based on reasons, that sounds to me like determinism. When I hear about free will being a matter of making decisions based on no reasons, that sounds to me like randomness. When I hear of free will being sometimes the one and sometimes the other and sometimes a combination, that still sounds to me like "mechanism". Contrast "Those who have few vices tend to have few virtues." If I didn't have the vice of sometimes pressing random buttons, then I wouldn't have the virtue of knowing what certain buttons do. I choose to retain the vice of sometimes pressing random buttons. It usually causes only one second of inconvenience, and sometimes results in a valuable learning experience. Vices and virtues: if you have to choose to have both or have neither, it's probably a good idea to take both. About buttons, of course, that was an understatement. I choose to retain vices that are both more harmful and more fruitful than sometimes pressing random buttons. "Despite the constant negative press covfefe" Pigeons are pretty good at chess. My evidence is that no hooman has ever won a game of chess against a pigeon. Ideal poker-playing robot RNG. How often do I do a deliberation and then take an action that goes counter to all discernible good reasons? Or this should be restated: how often should I take unpredictability as stronger than all the other reasons? If you do just the right amount of this, then opponents will know that you're unpredictable, but you'll still bet most of the time when you have a good hand and get out when you have a bad hand. If you do too much of this, then opponents will know that you're very unpredictable, but you will too often neglect to bet when you have a good hand and get out when you have a bad hand, and this will cost more than the value of the increase in how unpredictable you're perceived to be. If you never do this, then you will be known to be too predictable and your actions will be too reliable as indicators of what your hand is, and then you might as well be playing with your cards face up. About sometimes not betting with a good hand and sometimes betting with a bad hand, sometimes these decisions can be justified as directly profitable because deception is likely to be profitable within the considerations of the one round. But even aside from that, sometimes when this value is not apparent for a given round, it can be a good idea to refrain from betting with a good hand or bet with a bad hand purely for the unpredictability value that carries over to later rounds. So sometimes, the poker-playing robot says "RNG overrules all else and gets all the decision weight". How does it decide when to do that and when not to? It uses RNG to decide it. So now we're talking about two layers of RNG. The robot rolls an RNG to decide whether to act as if more simply determined or to be unpredictable. If that roll says to be unpredictable, then it rolls another RNG to decide which unpredictable action to take. That kind of multiple layers of deterministic and pseudo-random is the closest thing to the third type, but it's still not any kind of special third type. lorem - I've been speaking about it as if all decisions are dichotomous, but this can be extended even in poker (no limit) 1: I'll get out with this bad hand, 2: I'll bluff with this bad hand just for the unpredictability, small bluff, 3: I'll bluff with this bad hand just for the unpredictability, big bluff.
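As a minimal sketch, here are those two layers of RNG in code. The 10 percent unpredictability rate and the particular action menu are placeholders, not a tuned strategy.

```python
# A minimal sketch of the poker-playing robot's two layers of RNG.
# The 10% rate and the action lists are placeholders, not a tuned strategy.
import random

def choose_action(hand_is_good: bool) -> str:
    # Layer 1: roll to decide whether to act as if more simply determined
    # by the hand, or to be unpredictable this time.
    if random.random() < 0.10:
        # Layer 2: roll again to decide which unpredictable action to take.
        if hand_is_good:
            return random.choice(["fold", "small bet"])     # disguise a good hand
        return random.choice(["small bluff", "big bluff"])  # bluff a bad hand
    # Otherwise, act as if more simply determined by the hand.
    return "bet" if hand_is_good else "fold"
```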
Reactive attitude when it will work and when it won't. Sometimes it won't work and we do it anyways because there are no excusing or exempting conditions. Hylozoism. When I let go of the rock, it exercised its desire to fall to the ground. Imagine a rock rolling down a hill, and saying, "Yeah, I just really like getting lower." This is a pretty neat way of explaining everything. Rocks like moving downward, or at downward angles. I like doing several things other than moving downward. And everything does what it likes to do. QED. Megarian school, lorem, founded by a disciple of Socrates, did determinism since founder? Earlier determinists? If I have a power circuit that could power a motor, but the switch is broken, then there's one sense in which I can power the motor and one sense in which I can't. For the most part it only makes sense to say that I can't power it. It doesn't mean much to say "you have the power and the motor, so you can power the motor" if the switch is broken. So what does ability without will amount to? Inability. Even if someone is on the fence about a decision and after long deliberation chooses X and not Y, or even if someone chooses Y and then on his way to perform Y changes his mind and doesn't perform it, in all these cases what it amounts to is inability to do Y, just as if the person didn't have the physical capacity. Your fate is a function of your character, but your character is a function of fate. This guy doesn't have a social security number for Roy. Like how it's possible to imagine having a mind without a body, but it's not really conceivable how that could happen, we likewise have many stories in our movies and TV shows that depict a battle of competing willpowers within one mind, like in any of a hundred stories where one character has mind control over another character but he can resist it if he exerts enough willpower against the mind control. If we model a roulette wheel as not deterministic and we model a hooman as not deterministic, doesn't that mean we should model both of them as random rather than one as random and the other as agentic? When I get an idea, it seems to come unbidden. Will seems to have nothing to do with summoning it. And if I were to will to have some specific idea, then I would have had to have that idea before the act of will. Not only is it hard to see how Kane isn't just compatibilism (especially considering his more recent revision that only nonlinearity is necessary), one can also wonder whether on his account a poker-playing robot has libertarian free will. I'm actually not a great poker player, but overall my experience of playing poker has been pretty interesting, because of what my one strongest skill is. My skills are far from strong in many of the departments. Resisting stupid temptations to act on stupid justifications? I'm not the best. Even paying attention to the other players after folding a hand? Oof, I'm pretty terrible. I pretty much fall asleep after folding a hand, even though I know it's a good idea to watch what the other players do with each other. But my strongest skill.. it's sometimes called "acting", but that's not a great name for it. "Acting" as in the opposite of "reading". Now, reading is when you look at your opponent, and his mannerisms, and you can tell things about what cards he has based on his physiology. Those things he does are called "tells" and reading is the skill of picking up on "tells". Also a skill I'm not great at. Acting is when you try to thwart reading by giving off fake tells. Or at least that's a simplistic definition of acting.
Really, I might be only marginally better than average in most other skills, but I've made a ton of money on average because of how much value there is in giving off fake tells. But it's not acting - I mean, to use the word 'acting' to describe the art of giving off fake tells is not to describe what that's like at the higher levels of that skill. And it's worth a ton of money. When I place a bet all-in, and the only thing left is for the other player to make the last decision for that hand, and I have a good hand, I want him to think I have a weak hand, and if I have a weak hand, I want him to think I have a good hand. And it's honestly a bit shocking how often I make that work. And it's worth all the money whenever I make it work. But now we're talking about something several layers removed from explicit reasoning. No, when you read a tell, it's often not something explicit, like, "he raised his left eyebrow, and I know that whenever he raises his left eyebrow that's when he's bluffing." That's how you might have seen it depicted in the movies, but that's simplistic, because that's what a movie character might say when he needs to depict this idea using words, but it all works on a level deeper than words. When you read a tell, you don't know what exactly you saw that made you think you detected what you think you detected. It's just, "I was watching him, and there's something he did physiologically, and that gave me a gut feeling that he's bluffing." So reading tells is something beyond putting to words. And giving off fake tells is something even further beyond putting to words. On a good evening, there might be 10 times I bet all-in and the other guy has to make the last decision, and 9 times out of 10 he'll make the exact wrong decision, and when that happens, there's something I do (and I don't know whether it has something to do with an eyebrow or what) that makes him think, "Something about the way he moves right now, I think he has a bad hand," when really I have a good hand, or, "Something about the way he moves right now, I think he has a good hand," when really I have a bad hand. On a particularly bad evening, it only works maybe 5 times out of 10, but on average, it works way too often. If I didn't have that skill, I wouldn't even beat the rake on average using my other skills. But that one skill in particular is especially profitable. And I'll tell you what I'm thinking when I use this skill. All I do is I channel the guy from the movie Nightcrawler. That's it. I couldn't do it any better if I knew it any better, and you can't know it any better, because it's so many levels removed from what you can do with words to talk about what specifically is going on. That's it. It works way more often than seems reasonable, but every time someone's on the fence with the last decision, I can almost always push them in the exact wrong direction off the fence. And they do it because of a gut feeling that's beyond understanding details. And I make them do it because of something I give off that's doubly beyond understanding details. And it's probably beyond teaching, too. Consider the injunction, "Just act like you have charm." Well, you can only act like you have charm if you indeed have charm, in which case it's not acting. That's why "acting" is a pretty terrible word for it, although there probably isn't a better word for it anyways. [re nightcrawler] It's like a priming effect, but twice removed from scrutability.
Normally, when someone uses a priming effect, the guy doing the priming knows what the image or word or whatever stimulus is, and only the person subjected to it is not consciously aware of what the prime was. In this case, I'm giving the other guy primes that he can't discern consciously, but those primes I'm giving him, I also can't discern consciously. Withhold punishment of the person who becomes paralyzed after committing a crime? It would have no effect on prevention, but it would undermine deterrence. In the calculus, when someone's thinking of committing a crime, it would lend weight to "Sometimes you commit a crime, and then something happens, and then there's no legal punishment." So even in the case that someone becomes paralyzed after committing a crime, and has no chance of committing a crime ever again, it still makes sense to apply judicial action. David Pizarro's analogy for reactive attitudes, per the Very Bad Wizards podcast. When I see a door, I know it's a rectangle, even though the sense data in my visual field is a parallelogram. So there's some mismatch between the parallelogram of my sense data and my knowledge that I'm looking at a rectangle in 3D space. But this mismatch is not a conflict. This is just the natural relation between sense data and perception. The analogy to reactive attitudes or attributing free will to other people is: (1) the sense data is a parallelogram :: the reality is that the actions of other people are determined, and (2) the perception is a rectangle :: I treat other people as if they have full metaphysical agency. (There's a backwardsness to this, but it's fine as stated) A Galton board inclines without necessitating. Like taking the sum of ticks of a Geiger counter over a period of time, the bottom result of one round of Galton board takes the sum of several rounds of left/right, and each of those left/right moves can be treated as random, but the sum of them is more likely to be near the middle than near either extreme side. This is the same as "the flocking patterns of the anarchic". And it's the same reason why we have those atoms and molecules we're so familiar with, rather than protons and electrons and neutrons in a completely random scatter all over the place at all instants. A computer can run a line with an if-then, and any given time it runs that line, one outcome is fated to be, and the other was fated not to be. But the computer can run the if-then multiple times in different scenarios and do the one option some of those times and the other option the other times. This is like following the dictates of reason. When I use a car-driving robot, there's a line in the programming that says, "When coming up on an intersection, use the camera to detect what the traffic signal is. If it's red, then stop, and if it's green, then keep going." Any given time it comes up on an intersection, it's a matter of fate whether it runs that line and says, "Red, then stop," or says, "Green, then keep going." When it's "Red, then stop," it's like the part of that instruction that talked about green lights and what to do about them didn't matter. When it's "Green, then go," it's like the part of that instruction that talked about red lights and what to do about them didn't matter. So in any given instance of running that conditional, it's like half of the instruction didn't matter.
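As code, that line in the programming might look something like this minimal sketch. The function name and the signal strings are hypothetical stand-ins, not any real robot's API.

```python
# A minimal sketch of the car-driving robot's conditional.
# The name and the signal strings are hypothetical stand-ins.
def on_approaching_intersection(signal_color: str) -> str:
    # Any one run takes exactly one branch; the other branch "didn't matter"
    # that time. Over many intersections, both branches matter.
    if signal_color == "red":
        return "stop"
    if signal_color == "green":
        return "keep going"
    return "slow down and reassess"  # a default the prose doesn't mention; assumed

print(on_approaching_intersection("red"))    # stop
print(on_approaching_intersection("green"))  # keep going
```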
If it was a red light, and that part of the instruction had been "If it's green, then self-destruct," that wouldn't have made any difference in the instance of that one time it came up on that one intersection and the light was red. But it does use the instruction multiple times, and sometimes when coming up on a red light and sometimes when coming up on a green light, so it matters what that instruction has in both parts of the conditional. No person commits evil willingly? If I disagree, that risks being a disagreement about a definition. There's some thing that no person does willingly, but to refer to that thing that no person does willingly with the word 'evil' is a pretty terrible choice of words. The theological determinism that says that omniscience is compatible with free will is using a compatibilist definition of free will. Omniscience is compatible with not imposing a will. But creating something while omniscient about everything it will do, that is imposing a will. If I create a machine from some wheels and some circuit boards and a floor-cleaning brush, and I give it floor cleaning instructions, and then I switch it on, that amounts to cleaning my floor just as much as me picking up a broom and swinging it around does. Suppose I modify my Roomba, and I give it a pistol and facial recognition, and I give it a picture of the face of someone I don't like, and I switch it on. It passes by several people and doesn't shoot, then it sees the guy whose face matches the picture, and it shoots him. That's just me shooting the guy, plus whatever number of intermediate steps. A lot of good it will do in my legal trial when I say, "But it passed by several other people and didn't shoot, which proves that it decided to shoot the guy it did shoot. And since it decided that, I didn't decide it. And since I didn't decide it, I can't be guilty of the shooting." Libertarian free will plus omniscience is not possible. If libertarian free will means I can choose multiple options in a given circumstance, and if that choice is genuine and not just a matter of ignorance, then not even a god can be omniscient about the future. There are people who say that there is a god and we have compatibilist free will. There are people who say that there is a god and that we have libertarian free will, but that the god's omniscience doesn't include knowing about the future. There are plenty of people who say that there is a god and that we have libertarian free will and also that the god's omniscience includes knowing everything about the future, but that's not intelligible. Indeed, there are plenty of people who say that a god exists and who also like saying all kinds of things that aren't intelligible. The traffic jam argument for emergence that says you can't predict a traffic jam by analyzing car parts is of course dishonest, because a traffic jam is made of car parts plus several other things, such as drivers. The traffic jam indeed does reduce to the set of car parts, drivers, roads, et cetera. "But if you know about one car's parts, and its driver, and a bit of road, there are things you can know about what will happen, and if you take those ideas and the idea of 'more of the same', you can't predict a traffic jam." "No, indeed you can't. But all that says is that emergence is a type of question that you can ask about having more of them. But that doesn't mean that reduction fails.
And now you're going to say that when you have more of them, and the information about them, and you do predict a traffic jam, you will be explaining the traffic jam in terms of high level abstractions. But once you have enough information to do that, you jolly well could explain the traffic jam in terms of the low level abstractions too. Or do you still think there's some stronger claim?" "Yeah, I think there is." "And would you say that when you have something that's deterministic, and then you have more of them, something can emerge that's not deterministic?" "Yeah." "I contend that that's not even intelligible." The emergence of a traffic jam and the emergence of higher cognition are identical to the emergence in Game of Life in all relevant ways, and the principles made clear by Game of Life hold firmly. Suppose every hooman decision is like one round of Galton board. In what ways should we think of one decision as determined? In what ways should we think of it as random, including the incline without necessitating bit? And in what ways should we think of it as etiogenic, including the incline without necessitating bit? There are no theoretical grounds for calling any decision anything other than determined. If we call a decision anything other than determined, it would be for reasons other than theoretical, which would mean we would need to have a reason for calling it something contrary to what it actually is. We may have practical reasons for doing that which are more important than the truth that indeed decisions are determined. If we do have practical reasons for using a terminology that is contrary to the truth, we should probably be pretty careful about that. When we speak of a spin of a roulette wheel being random, it's pretty obvious that we don't mean something like there are multiple actually possible futures and something about the roulette wheel is going to make one of those possible futures actual. It's clear enough that we're only referring to ignorance when we use the word 'random' then. When talking about hooman decisions in ordinary parlance, it's usually not clear that we use words like 'choice' in the same kind of way. In ordinary parlance when we say something like "you didn't have to do that," do we mean it in the sense of epistemic possibility or in the actual possibility agent causation sense? If we mean it in that indeterministic sense, does it typically come with the subtext that we're calling things contrary to what they are, or does it come with the assumption that we're speaking in terms of a model that's true? Socratic dialog: the case for "when's the only time we speak of a spin of a roulette wheel being deterministic" and "how would the same attitude be taken toward hooman decisions and is there any way to think it should be disanalogous". One disanalogy: attributing randomness to the roulette spin and free agency to the hooman, where fundamental randomness is a coherent idea that one can say something approximates whereas free agency is not a coherent idea. Does that break it down? Maybe not. It turns out the real libtards are the metaphysical kind. Indeterminism vs determinism: blaming the victim vs excusing laziness or evil. Indeterminism makes it too easy to say that anyone can rise above the circumstances that formed them. Determinism makes it too easy to say that no one can rise above any circumstances that maybe formed them or maybe didn't. The most virtuous and the most vicious are both extreme episodics.
One of the two most salient features of psychopaths is that they're extreme episodics. And one of the two most salient features of the absurd heroes of Camus is that they're extreme episodics.

Would perfect decision knowledge really take an infinite sum? Yes. When you consider the factors of a decision, you first consider the factors that come to mind. Then you can ask: are there other factors that didn't come to mind? And, when I considered the factors that came to mind, did I do that in a biased way? And, how might I have been biased? And, what factors would a person who is not biased in the ways I am have considered? So a full consideration of any decision would require trying to imagine a perfect decision maker who knows everything and has no biases. In other words, you can think about the decision factors, then think about how you thought of the decision factors, then think about how you thought about how you thought about the decision factors, and...

If a deist god is trying to come up with the best of all possible worlds, and if it has many possible worlds to pick from using the fundamental constants, and if it would have to simulate a world with perfect fidelity in order to predict it, and if there's no functional distinction between a reality and a simulation with perfect fidelity, then your present existence is consistent with being in a predictive simulation that is to end with the god saying, "Oh, fuck. That one turned out to be the worst of all possible worlds. Maybe the next simulation won't go so bad." It is possible to imagine a deist god who has many possible worlds it can create, then simulates all of them in order to figure out which one is the best of all possible worlds, and then really instantiates that world, not just simulating it. Then it instantiates that one 999 more times, because that's the best one and that's how nice that god is. But that doesn't rule out our world being the simulation of one of the particularly terrible worlds. It doesn't even rule out the possibility that you're one of the collateral damage units of the best possible world, that you have a genuinely nasty fate, and that you and 999 exact copies of you have to go through the same ordeal just because it was the best of all possible worlds for most other people.

Is there a difference between running a reality once and running it many times? Yes. There is a possible world (self-consistent world) wherein the movie Black Knight (2001) only ever got seen by one person after it was completed, and it's a damn fortunate thing we live in this world, where instead many people saw it. In this world, the protagonist of that movie, Jamal Skywalker, is more relevant, imagined more times, and dare I say, more real than in some deprived alternate universe wherein 10 million fewer people saw it. So I'll be glad if I live in a universe that gets re-run in exactly the same way a million times like in Nietzsche's eternal recurrence, because that would mean that all my joys and sorrows are just like I felt them but a million times over. Even if all but one of those times happened in some timeline that I can't access, I'll be glad if they really do exist and I'm not the only exact instance of me that ever existed. So, nuts to the idea that running something multiple times adds nothing over running it just once. And if this universe with exactly this timeline gets run a million times, then not only do I exist one million times over, but Jamal Skywalker exists 10 trillion times over.
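About the Galton board picture from a little above, here's the promised sketch. It's a minimal one, in the same language as the infinite-sentence generator that shows up later in these notes (Haskell, and it assumes the random package is available); the bias, the number of peg rows, and the function names are all made up for illustration. The point it's meant to show: given the seed, the ball's whole path is already settled, and the weighting only "inclines without necessitating" at the level of description where the seed is treated as unknown.

    import System.Random (StdGen, mkStdGen, randomR)

    -- One ball falling through a number of rows of pegs. At each peg the
    -- ball is inclined, not necessitated, to one side: it goes right with
    -- probability p and left otherwise. Given the same generator seed,
    -- the whole path is already settled.
    dropBall :: Double -> Int -> StdGen -> Int
    dropBall p rows gen0 = go rows gen0 0
      where
        go 0 _ slot = slot
        go n g slot =
          let (x, g') = randomR (0.0, 1.0) g :: (Double, StdGen)
          in go (n - 1) g' (if x < p then slot + 1 else slot)

    main :: IO ()
    main = do
      -- The same seed twice: the same "decision" both times, because
      -- nothing about the process is left open once the seed is fixed.
      print (dropBall 0.6 12 (mkStdGen 42))
      print (dropBall 0.6 12 (mkStdGen 42))
      -- Twenty different seeds: a pile of outcomes, weighted toward one
      -- side, with no single step necessitated by the weighting alone.
      print [dropBall 0.6 12 (mkStdGen s) | s <- [1 .. 20]]

Run it and the two same-seed drops land in the same slot every time, while the different-seed drops should pile up unevenly, leaning the way the weighting leans.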
The paradox of flip-flopping layers. "I'm not a soft determinist because excluded middle!" Easy to relate to someone who took that conclusion because they didn't have a huge amount of time to read about the stuff, or read some partial arguments against soft determinism.

Video game recommendations: Recursed (2016, not the other game by the same name). Guidelines for playing Recursed: I said I'm not recommending any books, but it turns out I'm recommending a video game that's harder than any book. Expect this video game to take you forever, enjoy beating any stage, take a recording every time you beat a stage, review those recordings until you understand the principles and can beat whatever stages you've beaten repeatedly. And expect to put the game down and pick it up again later!

There's a law of nature that says that 50 percent of philosophizing is to bad ends, but there's no law of nature that says they have to get 50 percent of the readers or 50 percent of the influence.

Determinism is to an essentially perfect approximation true of how hoomans operate on the scales of physics and neurobiology. The environment they operate in is often deterministic and often indeterministic. Indeterminism is often a near-perfect approximation of how hoomans operate on the level of decisions. So descriptions of how hoomans operate can best be either deterministic or indeterministic depending on what level of abstraction applies to a given question. But to say that indeterminism is true of how hoomans operate in any fundamental way is mistaken. But to say that determinism is true of how hoomans operate in any practical way is often not fruitful.

Hard determinists are right that there's no contra-causal freedom. Soft determinists are right that there are important things to know about related issues, like agency (which you can call a type of freedom if you dare choose to). Free will libertarians aren't right about anything when they say that their ideas are about reality, but the things they say about reality aren't completely irrelevant... oh wait, yeah, the things they say about reality are irrelevant if a soft determinist has said the same things about how agency works as an abstraction, and has not asserted that those supersede how reality actually works.

What are the factors of luck? Deterministic effects in the environment, indeterministic effects in the environment, deterministic effects in your biology. We assume people are like Galton boards in the sense of not determined but weightedly probabilistic, but we also assume people respond to correction. What counts as within a person's control? If I were lucky enough, it would be within my control to press just the right buttons on a Library of Babel simulator and produce next year's biggest science discovery. But that couldn't be called reasonably within my control.

If I'm made of heavier-than-air particles, and if all heavier-than-air particles, according to hylozoism, prefer only moving downward, then my choice to do anything other than melt and descend to the bottom of a pit is quite an imposition of will of mine onto billions of wills that want to do something else.

Consider the statement, "I'm what I call a semi-compatibilist, because I think determinism is not compatible with free will, but it is compatible with moral responsibility." It almost sounds like a substantial statement.
To see how it's not, consider this statement, "Determinism is not compatible with free will because free will means the negation of determinism, and determinism is not compatible with the negation of determinism because a thing can't be compatible with its negation." That latter statement sounds a bit tautological. It's a statement that's not like "Grass is green," but more like "Grass is grass."

The Yin of Zhou principle (what it is and how it works) is a middle mechanism of The True and The Good. Describing the Yin of Zhou principle is not the same as describing The True and The Good, and it's not a description of something like the fundamental physics of how something works, but it is a certain level of explanation that does bear talking about.

Frankfurt cases do prove one thing absolutely. It's a proof of "not necessary". Moral responsibility does not necessarily require the possibility of having been able to do otherwise. It's not relevant if you were already a compatibilist. And if you were already a hard determinist, then it doesn't matter as a statement of how moral responsibility works because you already think moral responsibility doesn't exist.

It's arguable whether or not the prods is a plausible case for a third kind. Maybe the vaguenesses make it uninteresting. Maybe if the vaguenesses are still plausible then it really is a description of how a third kind might work, and how that is necessarily nothing like Kane's SFA or whatever other construction tries to build a third type from the first two types.

Saturday Morning Breakfast Cereal: analog computer simulating a block of cheese.

The guy I voted for lost 24,000 votes to 20,000 votes in this district. In what way can I say my counterfactual justification was good? It makes perfect sense, because it makes no sense to say that I could have known the outcome in advance if I had only done more research. Voting matters even if your guy doesn't win because even if your guy doesn't win, people will see stats like, x percent of people turned out, and y percent of them voted for this guy. Well, if your vote didn't swing the last sigfig of that reported number, then...

About metaphysical freedom, there are two main confusions: (1) the illusion, and (2) reconciling what's going on. As for (1): it's how a sufficiently advanced intelligence works. As for (2): it's the levels thing. The remaining topics are about (3): what's to be said (it's a lot) about agentic decisions other than whether those decisions are made by libertarian free will (and whatever other questions we have about other meanings of 'free' and other meanings of 'will'). Whatever else might be relevant is branching off from those main nodes. That's a lot of topics and a lot of branching. Goes as far as etiology and ontology. Only something that's not part of all that branching could categorically be said not to belong in a book about free will and determinism.

[In Yoruba philosophy,] the most important element of personhood is the ori or "inner head" [...] The ori determines one's fate, and, contrary to most alternative cultural accounts of the soul, the Yoruba actually chooses his ori. In the creation myth Ajala, the "potter of heads," provides each body with a head. But before a person arrives on earth, he or she must go to the house of Ajala to choose a head. To make matters more complicated, Ajala has a reputation for being irresponsible and careless. As a result, Ajala molds many bad heads; he sometimes forgets to fire some, misshapes others, and overburns still others.
Because it is said that he owes money to many people, Ajala commonly hides in the ceiling to avoid creditors and neglects some of the heads he put on the fire, leaving them to burn. When a person gets to Ajala's storehouse of heads, he or she does not know which heads are bad or good - all people choose heads in ignorance. If a person picks a bad head, he or she is doomed to failure in life. Yet, if a person picks a really good head, the person is destined to have a good, prosperous life. With hard work, he or she will surely be successful, since little or no energy need be expended in costly head repairs. From Jacqueline Trimier, "African Philosophy," 1993.

Ori myth vs Homer. Homer predates the free will debate, and the free will debate with reference to Homer may seem mysterious. In the Ori myth, the free will debate also seems mysterious, but the mystery is brought to the fore.

This book does mention theological arguments a number of times, but only because that's a handy way of framing several of the questions. It also mentions "theological"-type thinking, and by that it means the kind of thinking that partakes of a certain set of related fallacies.

Like a nightmare architected by Clarence Darrow.

"Deserve" just refers to "case where reactive attitude or judicial action is applicable"? If not, what else is desert? Is it real? Is it real in the way that rocks are real? Or real in the way that equilateral triangles are real?

Discernible reasons incline without necessitating. Discernible and indiscernible reasons together necessitate. Once a compatibilist says this and other things about agency, then the libertarian free will literature can be thrown out.

[about the Ori] You are the author of your own character. But you have to author it before you have your character. And before you have your character, you don't really know anything relevant about anything. So you only get to be a completely blind author of your character. So it doesn't amount to anything. It's not the kind of self-authorship that's typically desired. But if you want a stronger kind of self-authorship, you run into an infinite regress problem. A head can do work designing and revising itself, but ultimately the head has to come from somewhere other than itself.

Palinopsia: the poor man's time worm.

To say that a brain or that evolution needs true RNG is like saying that our best engineers have as yet failed to design a roulette wheel that can thwart prediction.

You'll maximize your odds of feeling good if you believe what seems most likely to be true. That's the only good "feels good" principle for deciding beliefs.

There are dispositions that become bad attitudes. Like the disposition to favor being duped. When that disposition has become an attitude, it's hard to see that changing the attitude would be a good idea. But it's still possible.

The artist: has to be isolated from results to do good art, has to depend on results to remain fed.

Recommended video games. Ultrakill: learn how to git gud at this game without ever losing your patience and you'll learn the meaning of appropriate baseline expectations and staying cool in chaos.

Re moral luck necessary: this would work just fine as a theorem. Unfortunately it's been proved.

The worst kind of unfreedom is just not knowing any of the relevant thoughts, pro or con, about a topic. Tyranny is when freedom is stifled overtly. Sneak tyranny is when freedom is stifled indirectly. Fatalism can be with respect to some object or to the universe.
Defeatism is a word related to fatalism or lethargism.

To fix the education system, you would have to convince the bosses of two things they probably wouldn't like. First, that it's worth it to teach things that are controversial. Second, that it's worth it to teach things that are relevant.

It's not always discernible when an error will be useful. That's why we commit to making some of them at random.

You can predict a thing without fully simulating it if it's not chaotic. If it is chaotic, prediction and full-fidelity simulation are functionally equivalent.

Eliminativism about free will won't be warranted.

Free will was invented as a means for slave morality. Nietzsche's idea. Makes sense in a hard determinist framework, assuming certain historical details about how free will was invented.

We have different dispositions and those differences arise pseudo-randomly. That's how evolution works, and it's adaptive.

Gut feeling is the infinite regression escape hatch.

If we came up with a sufficiently advanced roulette wheel outcome detector, casinos would stop offering roulette wheels and start using something more chaotic. What about the analogous case of hooman decisions? If we had sufficiently advanced hooman behavior outcome detectors, would we just get more chaotic? Maybe not. But what's the timeline for such a technology? What should we do in the meantime? Does the hypothetical technology tell us something about what we should do in our state of before having it?

Examples of eliminativism: we've been eliminativist about the four humors. Well, most of us have been. Some people who are really into "personality typology" still categorize their favorite fictional characters under the four temperaments: sanguine, choleric, melancholic, and phlegmatic.

Hard determinists vs soft determinists: "We need to abolish the justice system as we know it and replace it with something better. This can be proved by how wrong the justice system is going" vs "The main principles that usually operate in the justice system are appropriate, but there are ways that the finer details of implementation are going terribly wrong, and those need fixed."

The tapioca pudding example is what I mean when I sometimes say something in terms that assume determinism is absolute: there is technically an exception, but that exception doesn't matter, so the thing said in terms assuming determinism is absolute is just as relevant as if determinism really were absolute, even though it's technically not.

Refactor "lethargism" as "global defeatism". Apathy becomes listless, and that's because accepting global defeatism is a fallacy.

Lib: (1) influences don't have any bearing on decisions, or (2) influences do cause tendencies in decisions, but those tendencies are never to the point of necessitating.

There's something I heard the other day. Does free will exist? There's a number of different things free will can mean, and some of them exist, and some of them don't. I disagree. Free will can mean a number of different things, and none of them exist. I disagree. Free will can mean a number of different things, and all of them exist. What does it mean? Hoo.. are you sure you want to know? Umm.. Is there some reason I wouldn't? No. Just.. it takes a bit of explaining. Why's that? I mean, it sounds like a pretty simple question. Well, what do you think the answer is? What free will means and whether it exists? Doesn't free will mean, like, when you want to do something and then you do it? So it exists, right?
This seems as plain as asking the meaning of the word 'gloves' and asking whether gloves exist. So what's this stuff about other meanings and maybe free will not existing? That's will, but is it free? Okay, so that's the question.. So when you want to do something and then you do it, that's will, but people disagree about whether that will is free? Yeah. So some people say that will is free, and some people say it isn't, and most of them say that there's different things it might mean to say whether that will is free or not, and depending on exactly what meaning is used, the disagreements about whether it's free or not can have different people on the 'yes' and 'no' sides? That's the rub.

Imagine I have a butler robot, like a Roomba, but with more limbs, and capable of cleaning more than floors. Imagine this robot comes with absolutely no programming for how to move its limbs - doesn't even know the difference between a set of movements for walking and a set of movements that would result in its falling over. But the robot is capable of learning. With the right feedback, it can learn how to walk, and finally how to reach and use the duster to get the dust off the top of a high shelf. Suppose further that the only user interface of this robot consists of two possible interactions. When the robot does something that progresses toward the kind of complex motion that I think will be useful, I pat it gently on the top of its head, and when it gets that input it incentivizes whatever motions happened recently before that. When the robot does something that does not progress toward the kind of complex motion that I think will be useful, I smack it hard on the top of its head, and when it gets that input it disincentivizes whatever motions happened recently before that. And you have to smack it really hard in order for it to register that input, and you also have to yell unkind swear words. Otherwise, the robot's learning process is to try random combinations of movements, one by one. So it starts with random motions, and then it favors some and disfavors some as you give it input. Okay, if I never use the user interface, it will likely only try different kinds of writhing on the floor, one after the other, because "there's a lot more ways to fall over than ways to remain standing". But suppose I tend the robot with enough care for a long enough time, gently patting it on the head whenever it happens to take some random action that's closer to the useful kind, the kind that will eventually end with standing up, walking, and dusting the top of the shelf, and smacking it hard on the top of the head whenever it happens to take some random action that has nothing to do with useful coordinated motion. If I stay with it for a long enough time, it will learn how to stand, how to walk, how to sweep the floors, how to dust the top of a shelf, and so on.

One can imagine that if the learning process were so inefficient, and if one really did tend to the robot long enough for it to learn to be useful under that setup, this interface might start to envelop more of our natural reactive attitudes. You have a friend over one day, and at one point you interrupt the conversation, saying "Hang on. My robot is fucking up too much today, the bastard. I have to incentivize it. You buttfucker! *smack*. That should help things for now. Anyways, where were we?"
And even if you didn't start having real reactive attitudes toward the robot, it would look like you were having reactive attitudes the whole time you were training it, and it would look like it was having reactive attitudes on the receiving end the whole time. And that would work to train it. So the interface for successfully training a learning algorithm can look exactly like both participants are using reactive attitudes in all of their interactions, even if there are none of the feelings that normally accompany reactive attitudes. How different is this really from how we interact with our friends, our children, and with strangers? "A lot different! I was with you there, as an example, until that last thing you said, that question. It's a lot different from how we interact with people." "Isn't that how small children learn how to walk?" "No! It's not!" "Oh. I wouldn't know. I don't have kids." "Don't start!"

Just imagine only having been told the sun goes around the earth. And think of the sun. And imagine someone giving you for the first time the idea that the earth, that thing you're standing on and that seems to be not moving, is going around the sun, that thing that appears to be going around you.

Does baby's first act of contra-causal agency happen before or after baby's first steps?

Technically you could explain how a car works in terms of quantum mechanics. So what exactly does it mean to say "x has nothing to do with y" where y is the mechanism lower than the one best for talking about x? In some sense, quantum mechanics has everything to do with all the stuff from the fuel injector to the crankshaft. To say quantum mechanics has nothing to do with it is to say that explanations on higher levels of abstraction handle all the explaining. Quantum mechanics as such has nothing to do with it.

What would being eliminativist about free will look like, and what level of technology would it require? We didn't need much advance in technology to get eliminativist about the four humors. What about mental categories like beliefs? The thermostat just went click on the cold side, and now the thermostat desires to heat the room. There was a crack in the bimetallic strip, which is why the thermostat got the mistaken belief that the room was colder than it really was. Is this a metaphorical use of desire and belief? Are real desire and belief something different? Does it matter? He believed x, knew that given x, doing y would produce z, desired z, and therefore willed to do y. In what future do we do away with these terms and explain the same decision process in different terms?

I seem to have access to a world outside my mind that has things like spaceships in it, and I seem to have it on good authority that the spaceships were not built by zombies. If I'm not wrong about all that, it also seems like it wasn't just a matter of luck to rely on certain regularities in nature continuing to hold when the spaceships were designed. But maybe the laws of nature will stop holding tomorrow, maybe all the other people are zombies, and maybe everything I take as sensory detection is really projected by wires into a vat that contains my brain in a world that contains no spaceships.

Refactor "no-go principle" to "ignorance impasse" (I might have refactored it into something else already, in which case, refactor that to "ignorance impasse").

When I attach two sticks, two wheels, and an axle to a bucket, I observe emergent wheelbarrowness.
When I was using just a bucket, and carrying it, I could move half a ton of bricks per hour from this side of the construction site to the other side. Since I combined those things into a wheelbarrow, now I can move three tons per hour. This change can't be explained purely in terms of just buckets, or just sticks, or just wheels, or just axles.

Definition and conditions for what counts as emergence? Surprise: it's a matter of definition. I can decide one day that I would like to explain something in terms of emergence in a way that works only when I give an especially loose definition of emergence, and I can define my terms, and make a material point. And the very next day I can decide that I would like to explain something else in terms of emergence in a way that has a stricter condition for what counts as emergence, and then define my terms, and make a material point. That's not cheating, as long as you define your terms and scope them.

A hypothesis just is an unsubstantiated statement. How do we gradually improve our ability to make hypotheses? That's one of those things that's hard to describe. There may be successful descriptions of what good hypotheses tend to be like, but even that's not the same as describing how to make good hypotheses. Making good hypotheses is some combination of pattern matching, analogy, and suchlike, and the process has to do with hunch, intuition, creativity, and probably either aesthetic sense or something like it. How do you step into the previously unstepped into, and in a way that's better than other ways? That's where science is more art than science. How to falsify or support a hypothesis is like how to change a tire or how to assemble a piece of furniture, whereas how to generate a good hypothesis is more like how to paint a painting or how to appreciate a piece of music. So how does science advance? A combination of these two very different things.

A snail trail: the poor man's time worm.

What does it mean to draw a path on a map?

"Free will" and "unimpeded agency". We can have different meanings for the same term, as long as we define those terms and mark when we're using what sense of what term. If you don't like that, we can call the incompatibilist's free will "free will" and use the term "unimpeded agency" for the distinction that compatibilists say is relevant. Like, if you really don't like having multiple definitions, we could keep them that separate and have a single definition for each term. Lorem if compatibilists need to invent a term for the thing they're pointing out as relevant, it might need to be something more like "unimpeded self-regarding agency" or whatever.

We should focus on what we can do things about, and what we can do things about are influences and information.

You can will to develop a will to exercise. Does action follow attitude, or does attitude follow action? It's not terribly uncommon that a person wants to exercise regularly, and also wants to want to exercise regularly, and then he has to kickstart that process by expending a great deal of willpower to exercise on several consecutive days, and then finds that after a week or two, now he gravitationally wants to exercise every day. Sometimes, attitude follows action. When you have a first-order desire and a second-order desire in serious conflict, and you're pretty sure the second-order one is better, and the first-order desire is to be lazy, a useful incantation might be: "The reluctance is incorrect."
Warning: use this incantation with care if you use it - only when you're really sure.

In a determinist framework, there are things you can say about what to do with the world to improve it in some small way. In libertarian free will, you have either no answers, or the same answers, or some combination of those. To say you can countermand your circumstances is not the exclusive domain of the proponent of libertarian free will. Determinists get to say that too. And determinists don't know everything about how every decision is made, but saying libertarian free will is true gives you no knowledge beyond that. In the worst cases, the proponent of libertarian free will says "I'm a white Christian nationalist, and I chose to be a Christian nationalist, and I chose to be white, and white Christian nationalists are first-class citizens, and if you're not a Christian nationalist, or you're not white, or both, then you're not important" and other well-intentioned things.

The rule of sufficient cause holds always, except for fundamental particles, which means technically it doesn't hold for anything.

RollerCoaster Tycoon and rollback. Load a save, leave it for a while, see what it looks like 15 minutes later. Repeat. Same outcome both times, but before all that, there was no way to predict it in any way more efficient than just running the whole thing. Deterministic but not predictable (not predictable by any means other than full fidelity simulation). (There's a small sketch of this a little further down.)

Calvinists have been around for a while and they haven't all lost the will to live.

"Feeling like there's moral responsibility involved with something is just the conscious phenomenon that accompanies the awareness that it's time to do something utilitarian. Even saying something like "We have decided to enact retribution" is just the auditory phenomenon that accompanies deciding to do something utilitarian." "No, real deserving is something different from deciding what action to do to improve the state of society." "No, they're just two things that go together, like the feeling of hunger and eating food." "I accuse you of changing the question." "I deny that accusation."

Nietzsche on short-lived habits. [A paraphrase] If I had no habits, I would be lost. If I had only long-enduring habits, I would be bored, and I would probably become narrow-minded. Short-lived habits are the way for me. I stick to those, one at a time, and I learn just enough about a great many things. The habits are not so long that I miss anything in deference to them, and not so short that I'm guideless.

Impasses that tend to stand, even if maybe they shouldn't stand scrutiny: (1) The basic desert principle. (2) Insisting the illusion is not an illusion. (3) Fear of excluded middle. Also, things you can hang your hat on, satisfied that you've knocked something down when really you haven't.

Compatibilism does not mean saying truefalse is a truth value. Compatibilism does not (necessarily) mean that you start with determinism and end up with something that escapes determinism.

"You can attribute randomness to a roulette wheel or to hooman decisions, but you can't attribute freedom from determinism to hooman decisions." "Sure you can. The difference is you can attribute freedom because a decision can be to countermand circumstances." "But if there was good enough prediction technology then that could also be predicted and made clearly deterministic.
If we had sufficiently advanced brain scanning technology, it would know: this guy has been sticking to this principle even though other influencing factors are all to the contrary, and if we increase those pressures by this much more, we can tell whether he will succumb, or whether he will continue countermanding. If that's what freedom is, and if that kind of freedom is just as mechanical as any other kind of decision, I don't see how that warrants talking as if determinism doesn't apply in those corners of the world called hooman decisions." You're not separate from the big bang. The big bang is not an explosion that happened a long time ago. It's still exploding. It's not done exploding. You're part of the explosion. Suppose you drink a glass of wine, then a minute later I say, "You're a person with wine in his stomach." That makes sense. Then an hour later you don't have wine in your stomach, but all that wine has been dissolved, broken apart, and integrated into several other parts of your body. But there's no exact moment in time that marks the change from person with wine in his stomach to person with the components of wine distributed throughout him. The whole idea of separateness breaks down and doesn't apply. Separate and not separate don't even make sense as two opposed concepts. In the same way, it's a strange sort of statement to say that you're in the world, like there are these two distinct things, you and the world, and the one is inside the other. You're part of the world. Like how a branch is part of a tree. And like how it would be strange to say "that branch is on that tree" rather than "that branch is part of that tree". No, you're part of the world in the same way that a branch is part of a tree. "Uhh, continuum fallacy?" "Yeah, but the conclusion is still true in some sense of the words." A self-driving car doesn't do counterfactuals about what would happen if it stopped or kept going. It's just go if green and stop if red. Well, in most intersections, a hooman driver typically decides just as simply. And if something exceptional happens, like someone starts running across the street near a green light, even the self-driving car will start making more complicated decisions. Anyways, a free decision by a hooman doesn't feel like the "go if green and stop if red" kind of decision. It feels like you're trying to predict multiple possible futures. And then it feels like the antecedent of the "if-then" is the future outcome that looks better. It doesn't feel like the antecedent of the if-then is some combination of things you thought before that. But it is. There's an idea that if judicial actions don't include retribution, then there would be no warrant to judicial action in the case of someone who becomes paralyzed some time between arrest and sentencing. Let's see if that passes scrutiny. A guy does a crime, gets arrested, gets paralyzed, gets pardoned. Later, someone else says, "I'll rob that guy, then get caught, then some time before sentencing I'll get hit by a truck and paralyzed. The perfect crime." No, not necessarily. But it will be a decision factor. When someone's thinking of committing a crime, they'll know that there are that many more pardoning factors, and it will be an item on the scales of that decision. That's why even rules normally categorized as retributive are utilitarian. Regret is often about a lapse of vigilance. That's why there's little difference between how we treat a lapse of vigilance and an act of commission. 
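About the RollerCoaster Tycoon rollback note from a little above (the sketch promised there): here's a minimal toy version, with the "saved game" shrunk to a single number and one tick of the simulation played by a made-up chaotic update rule (the logistic map). Every name and number here is for illustration only. Loading the same save twice gives the same outcome every time; a save that differs in the tenth decimal place typically ends up somewhere unrelated, which is the "deterministic but not predictable by anything much short of running it" point in miniature.

    -- A toy stand-in for a saved game: the whole world state is one number,
    -- and one tick of the simulation is one pass through a chaotic update
    -- rule (the logistic map). Deterministic, but sensitive to the state.
    tick :: Double -> Double
    tick x = 3.99 * x * (1 - x)

    -- "Load a save and let it run for a while."
    runFrom :: Double -> Int -> Double
    runFrom save ticks = iterate tick save !! ticks

    main :: IO ()
    main = do
      let save = 0.123456789
      -- The same save loaded twice: the same outcome both times.
      print (runFrom save 100000)
      print (runFrom save 100000)
      -- A save that differs in the tenth decimal place: typically no
      -- resemblance at all, which is why nothing much short of running
      -- all the ticks tells you where it ends up.
      print (runFrom (save + 1e-10) 100000)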
I won't say free will exists, but I do say it's okay to mention it as if it does exist.

"To say 'determinism is true' is true, but it can never matter because of chaos. Or it can only matter for billiard balls and car engines and a few other things." "Well, a lot of things. There are a lot of things other than billiard balls and car engines that are not chaotic, and that admit of being made sense of in terms of determinism." "Okay, true. But how shall we take stock of those chaotic and nonlinear things? If you have to plan a vacation months in advance, the weather forecast for specific days that far out will never be deterministic in any way that matters. You're deciding whether to book the hotel starting on the Monday or the Thursday, and the weather will probably be fairer in the one case than the other, so how do you pick? There is no decision criterion, and that's because determinism is as good as useless when you're making that decision. Whatever you do, you'll have no better odds than if you flip a coin. So what shall we do about hooman decisions? They're like the weather forecast, only they evade predictability in an instant rather than a month."

Imagine one day you check your mailbox and you find a letter that just says "Laplace" at the top-left where there should be a return address and sender's name. You open the letter, and it says what tomorrow's winning lottery numbers are. You assume it's a prank or a scam of some kind, and you mostly disregard it. The lottery will be drawn the next day, and the tumbler full of numbered balls will be spinning the whole time from now until then. Out of curiosity, you check the lottery numbers the next day, and it turns out the letter predicted all the numbers exactly. The next day you find another letter from "Laplace". It has predictions about what the weather will be two days in the future for 100 major cities across the globe. Two days later, you check the weather reports, and every single prediction is right, to the single degree of temperature at all 100 locations. If that happened, you would have to conclude that you've been receiving mail from some location that has a computer more powerful than our entire universe. The only logical explanation would be that there is some 'verse that's "outside" of ours, that our universe is running on a computer in that 'verse, and that the beings in that 'verse have decided to inject some direct communications from them to us. If that happened, and it is possible, would it then make sense to say that determinism is irrelevant due to chaos? The same goes if I send communications to my sims that could only be predicted by a computer more powerful than any computer that could exist in their sim world. They would have to conclude likewise.

Retributive feelings go wrong sometimes, as does hunger signaling.

Reductio of the argument from coarse graining: extra coarse.

[re: more finding out and more mysteries] But has one been increasing and the other decreasing? Every time we figure out more things, there's an increase in the number of things we want to figure out, and we tend to figure them out. Every time we figure out more things, there's an increase in the number of things we can find mysterious, and that keeps increasing no matter how many things we figure out. Knowledge and mystery both have been continuing, and have both been increasing. Has there been a tendency that acting on one of these is better than acting on the other?
We assume there are always more things we can figure out, and we keep figuring more things out. We assume there are things we may never figure out, and we keep wasting effort failing at trying to figure out some.

Being a robot feels pretty damn good. Is that hard to believe?

What can and can't be proven ultimately? We can never prove if free will is real or not? We can never prove if determinism is true or not? We can prove that free will is an illusion because it's a feeling of something that can't be made intelligible. There's this thing we imagine having, but when you describe it in detail, you describe something that can't possibly exist. I have a decision to make, and I imagine I have a choice about whether I do one thing or another. And I imagine I can choose which thing to do no matter what the decision factors are, as long as nothing's stopping me physically, like a wall in front of me. Like, if someone says to me "If you raise your hand, I'll give you five dollars," I might raise my hand because I feel like earning five dollars. I might keep my hand down because I don't believe he really intends to give me five dollars for raising my hand. I might raise my hand even though I don't believe him, just because I want to see what he'll say if I do. I might keep my hand down because I'm carrying shopping bags in both of my hands and I don't even want to stop to earn five dollars. Or even if I have no reasons, I might raise my hand because I want to do something random, and raising my hand feels like a random thing to do, or I might keep my hand down because I want to do something random, and keeping my hand down feels like a random thing to do. In every case, I feel like I can do anything short of violating the laws of physics for whatever reason, or for no reason. The reality is that there's only one of the two, do x or do y, that you will do, that you ever could do, and indeed the other option is prohibited by the laws of physics as they apply to your brain.

There are people who think they're soft determinists who are really free will libertarians. There are people who think they're free will libertarians who are really soft determinists. There are people who think they're hard determinists who are really soft determinists.

If we're talking about the kind of free will that makes us able to learn and have individual differences then we're talking about a kind of free will that sea slugs have. If you really want a rigorous proof of this, it can be done in a perfectly deterministic simulation universe on a computer. (It's maybe a weak objection not worth saying much about.)

Consider the statement, "Even insects have free will." What to make of that? Well, if it appeared in a piece of writing that defines free will as the libertarian kind or certain other kinds, then that's a fringe opinion. If it appeared in some context that didn't include a definition (or other further indication) of what they're meaning by free will, then to agree or disagree that insects have free will is only disagreeing about a definition.

In compatibilism, cases that call for correction are always unfortunate, but the correction is better than no correction.

By chance, divination by animal entrails sometimes works. By chance, there are people for whom it's always gone well.

Doing things for no reason vs doing things for no cause. We don't always have words for the abstractions that might be called 'reason' as opposed to 'cause'. Why did you stub your toe? Because you decided to walk in that exact direction.
Why at that angle that led you into that piece of furniture? There's only low-level describing that, but like the billiards example, the full elaboration is there in the facts of nature. If we knew all the positions of all the atoms that constituted your body and the items in your room five seconds before you stubbed your toe, and loaded those into a sufficiently powerful computer, the computer would be able to say that the person described walks into that piece of furniture and stubs his toe on it. The full report of how that happened would be millions of pages long, but it would account for sufficient cause. We wouldn't be able to turn that report into a shorter report, because we don't have the abstractions for grouping any of those particulate facts into more abstract concepts. In other cases, the account of a person's decision could start with a report millions of pages long about atomic interactions, and it could be turned into a shorter report because we do have the abstractions for grouping those particulate facts into more abstract concepts, and the more parsimonious report might be only a few words long, like, "He stopped to buy a hot dog, because he was hungry." But the only reason we can have a shorter report in the hot dog case is because we have abstractions called reasons that are agglomerations of causes at the particulate level in terms of things like intentions, and in the stubbed toe case we don't have the abstractions that might be used to agglomerate similarly. So the two cases are identical for all purposes except that in the one case we have these abbreviations of language that we call abstractions and in the other case we don't. Language aside, the two cases are identical in all relevant ways. So that's why when you stub your toe, you do it for causes that are just as good as reasons. And that's why the only relevant difference between causes and reasons is what kind of talking we do when we talk about them. Why the hidden conditional? A compatibilist answers the CDO question, doesn't see it as interesting for the moment, starts talking about other things, and just that quickly forgets that there's a difference between the CDO with the 'if' clause and just saying CDO with the 'if' clause hidden. "Ought to" is just a social construct. I'm not saying that emergentism should be so demoted that we should do away with the studies of chemistry and biology just because they're made of physics. Issues like that are clear enough. Just a lot of confusion happens when emergence happens to come with layers that have flip-flopping properties. Freedom emerges, not ontologically. "Emergent property" is about nothing more than how we talk about and think about things. Words are for expressing ideas. There are a lot of things words can't do. Words can't hammer a nail. But even for expressing ideas, there are some pretty disappointing limits to what ideas words can express. There are a lot of ideas that words can't express. A lot of the most important ideas are the ones that words can't express. "You can will what you will, but you can't will that (or whatever the limit to the recursion is)." "Yeah." "Then what good is the distinction of agency?" "It's a lot of things that you can improve the world by teaching about. And it's the hinge of when correction is applicable. And it's true. By those characteristics, it is of prime importance. A few days ago I decided to inform a friend about the concepts that tend to work when you want to form a good habit or break a bad habit. 
I decided that because I had the choice of telling him that or not telling him, and it seemed to me that a better outcome would result from telling him. Yesterday, he decided to attempt forming a good habit, and he used the information I gave him because he thought it was more likely to work that way than some less informed way. Today, he decided to attempt breaking a bad habit, and he used the information I gave him because he thought it was more likely to work that way than some less informed way. So there are plenty of things we can do about matters of will that work better than other ways. What this comes down to: if you say something like 'you can do what you will, but you can't will what you will' (or whatever truncation of the infinite series) in a way that suggests that there's nothing to be done about informing people about how wills work beyond saying 'they're determined', that's a disservice."

A pair of scissors that never gets used might as well be a statue of a pair of scissors. A degree of freedom that never gets used might as well be no degree of freedom.

"There's something that, if found, would prove the existence of libertarian free will." "There's nothing to be found, nothing that wants finding, no way to describe what you want found to prove it."

It's a strange thing to say that a butterfly flapping its wings can cause a hurricane. Wouldn't it make more sense to say that a butterfly flapping its wings plus all the other facts of the world have the outcome of a hurricane? This can be made more clear when we understand 'cause' in the sense of counterfactuals. In this world, there was a hurricane. Now suppose we were to roll back time by one week and make a copy of this world, so now there's two, and in the other world we disappear one butterfly from it. Now there's two worlds, and one is identical to our world one week ago, and the other world is identical to that world minus one butterfly, and now we resume time for one week in both worlds, and then there's a hurricane in this world that isn't there in the other world. That's a good illustration of what 'cause' means in terms of counterfactuals. It's still a bit of a strange thing to say that a butterfly flapping its wings can cause a hurricane rather than saying that a butterfly flapping its wings combined with all the other facts of the world cause a hurricane. But now at least it makes technical sense. 'Cause' only makes sense when it's understood to spin up a counterfactual.

Re: the alcohol units per day model of moderation in moderation: you could also RNG it, but that would have an excess of patternlessness. So the right balance avoids having no pattern, avoids having a simple periodic pattern, avoids having a second-order pattern, etc.

Even saying it's explainable often does nothing toward explaining it.

Isn't it a strange sort of thing to say, "Quantum mechanics is how all matter works, but it has nothing to do with how a car engine works"?

Words do maybe half of what I would like them to be able to do.

Sometimes a person or a machine makes a decision based on information it already got in the past, and sometimes it makes a decision based on projecting possible futures. But this second kind is just a type of using information you got in the past. Information you got in the past includes: there will probably be a future, I will probably be in it, other things will be in it, et cetera.
So when you say that the difference between a machine and a hooman is that machines can only make decisions based on past input and hoomans can make forward-looking decisions, that's both not true (the machine and the hooman both make both kinds of decisions) and it's also not even a real distinction (a decision based on projecting the future is a type of decision that's based on past information). (There's a small sketch of this point a little further down.) Imagine trying to make a decision that projects possible futures while forgetting facts such as what gravity tends to do to objects that are heavier than air. You wouldn't be able to make much of a reliable future-projecting decision if you weren't basing that on information you got in the past. Have you been hanging out in some neighborhood where there are indeed people running around saying, "By studying a single molecule of H2O you could know facts such as water expands when it freezes"?

Extreme episodics and the two-marshmallow test. Most adults have learned enough delay of gratification to wait for the second marshmallow. The vicious kind of extreme episodic doesn't have enough impulse control to wait. The virtuous kind of extreme episodic would wait, but not because of the idea of getting more utility later - he would wait because someone called "my future self", who is not me, would enjoy two marshmallows more than I (present self) would enjoy one, and that's a good enough reason to forsake one for someone else to get two. Change it up so that you have the choice between having one marshmallow now and someone else having two marshmallows now. The vicious kind of extreme episodic would still take one. The virtuous kind of extreme episodic would still forsake one. The ordinary adult would be more split than in the first condition.

Ways that counterfactual reasoning gets strange. Imagine I had my soul but happened to be born in some civilization in the distant past. Or, suppose some event in the distant past had been different, and ask how you and I would be differently off at the present time.

Consider: Statement 1: If we take the determinist's definition of "could (actually) have done otherwise" and the utilitarian's ("weak") definition of 'deserve' then we have, "You can be morally responsible for your actions even if you could not have done otherwise." and Statement 2: I'm a semi-compatibilist, which means that I believe that determinism is compatible with moral responsibility even though it's not compatible with having been able to do otherwise. Statement 1 and Statement 2 are identical in meaning. But Statement 1 appears to be a statement about definitions, and Statement 2 appears to be a statement about beliefs. But they're identical. So either they're both statements about beliefs, or they're both statements about definitions. In fact they're both statements about definitions. Statement 2 is a statement about a definition, but it's made to look like it's a statement about a belief. To make a statement like this and say it's a statement about a belief is a mistake. The statement contains the phrase "I believe-" but it's actually not expressing a belief at all, only a definition. I could say, "I believe that a triangle is a shape with three sides," but that's not really a statement about a belief. It's just a statement about a definition. It's just saying "I'm defining a triangle as a shape with three sides," plus a bit of confusion of language. So you can be confused about a belief not only if the belief is incorrect, but also if what you think is a belief is not really a belief at all.
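About the point a little above, that a "forward-looking" decision is a species of decision from past information, here's the promised sketch. It's a minimal one; the function name chooseByProjection, the umbrella example, and the numbers are all made up for illustration. The agent "projects possible futures" only by running its options through a model it already carries, and that model plus the scoring are nothing but digested past input (like "heavier-than-air things tend to fall" or "clouds tend to mean rain").

    import Data.List (maximumBy)
    import Data.Ord (comparing)

    -- A "forward-looking" decision: run each option through a model of
    -- how things tend to go, score the projected outcome, pick the best.
    -- The model and the scoring are both digested past information, so
    -- the forward look is a species of looking backward.
    chooseByProjection
      :: (state -> action -> outcome)  -- model of the world, learned earlier
      -> (outcome -> Double)           -- preferences, also acquired earlier
      -> state -> [action] -> action
    chooseByProjection project score here =
      maximumBy (comparing (score . project here))

    -- A toy use: deciding about an umbrella, under a model that has
    -- already "learned" that clouds tend to mean rain.
    data Sky = Clear | Cloudy
    data Act = TakeUmbrella | LeaveUmbrella deriving (Show)

    comfort :: Sky -> Act -> Double
    comfort Cloudy TakeUmbrella  = 0.8
    comfort Cloudy LeaveUmbrella = 0.2
    comfort Clear  TakeUmbrella  = 0.6
    comfort Clear  LeaveUmbrella = 0.9

    main :: IO ()
    main = print (chooseByProjection comfort id Cloudy [TakeUmbrella, LeaveUmbrella])

It prints TakeUmbrella, and nothing in getting there looked anywhere but backward: the "looking forward" is the model, and the model came from before.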
"I want to waste no energy disagreeing about definitions." "I think it may be worth energy sometimes disagreeing about definitions." "Oh, fuck. Now I'm about to waste energy disagreeing about whether it's worth disagreeing about definitions." You say that deserving requires being able to branch the timeline. Now I've given an account of deserving that doesn't require being able to branch the timeline. Now you're saying that that's irrelevant because you're sticking with the idea of deserving requiring being able to branch the timeline. I say that's a useless idea of deserving because branching the timeline is not something that people can do. You say that idea of deserving is fine because people can branch the timeline. I've asked you why you think that's something people can do. You say because it feels like it's something people can do. I've given an account of why it feels like something people can do even though it's not something people can really do. And you're sticking with the idea that people can branch the timeline and that's where deserving comes from. And you're sticking with those because they feel like they're true even though it's not physically possible, and alternative accounts are satisfying enough. At that point you disagree because you just don't find them satisfying. And somehow believing something that's not even physically possible is in your opinion more satisfying. Well, call me crazy if I feel like your ideas are less satisfying than mine. Of course, I have the natural feeling of being able to branch the timeline, and I have the natural feeling that deserving is based on that, and I find them satisfying as aids to navigation most of the time, but I only find their negation satisfying as actual coherent concepts for understanding how things work. If I can't convince you, I at least want you to understand what I'm saying about all this. A roulette ball is not capable of branching the timeline, and similarly a hooman is also not. And there really are important things to understand when those are taken as literal truths. And those important things really are undermined when you give metaphors the status where literal truths are meant to stand. "As a feeling, there's free will, but that feeling is an illusion. As an emergent property, there's free will, but that emergent property is a metaphor." "But I could just as well say that the rigidity of a bronze statue is a metaphor - the bronze statue could wave its arm, so determinism is only ever a tendency." "You're right. Free will is possible. But free will only happens as often as a bronze statue waving its arm happens. No, wait. Even that's nonsense. That's how often random will happens. And free will is still unintelligible." Library of Babel simulator really does work as an idea prompter. Condemned to be free. I was thrown into this world. It's a deterministic world, but I have a kind of freedom. I have the "if x then y" kind of freedom. Okay, what are my if-thens? I don't know - I just got thrown here not knowing those. If "I don't know what my if-thens are" then "feel angst". That's the one if-then that comes pre-installed when you get thrown in here. Other solutions to infinite recursion: keep it and write it in finite notation. For example: "The counting numbers are 1, 2, 3, ..., or in other words they're the positive whole numbers" is an infinitely long statement. But you can use the ellipsis notation on a certain part of it to express an infinitely long statement in a finite number of characters. 
Really the dot-dot-dot expands to 4, 5, 6, and on past ten, past a hundred, past a thousand, past a million, past a billion, and on still further. Let's try a few examples of using an operator to shorten an infinitely long sentence into a finitely long expression. For example, Family Guy [lorem season and episode number] "Everything I say is a lie, except that, and that, and that, and that, and that" [lorem - check wording]. Apologies, but I have to explain the joke. Suppose there was a person who only ever told lies. Then he says "Everything I've said before this statement is a lie." That would be simply true. But if he says "Everything I've ever said is a lie," that would have to include the statement itself "Everything I've ever said is a lie" and now this is the same as the liar's paradox. In the liar's paradox, we only have to imagine someone saying "This statement is a lie". Is the statement maybe true? maybe false? Well, if you assume it's true, that entails that it's false, and if you assume that it's false, that entails that it's true. So it can't be either true or false, which is a problem.

Okay, now suppose that this morning I said "(1) Crows are mammals," and this afternoon I said "(2) Three is greater than five," and this evening I say "(3) Everything I've said today is a lie". That statement works fine in regard to the statement I made this morning, and the statement I made this afternoon, but regarding itself it has the liar's paradox problem that it can't be true or false. Suppose then I say, "(4) except (3)". Now (3) applies to my statement from this morning and my statement from this afternoon, and now (3) says nothing about itself, so now statement (3) can simply be true. Now everything's neat and tidy about (1), (2), and (3), but now (3) says that (4) is a lie, and now there's the same liar's paradox about statement (4). So I say "(5) except (4)", and now everything's neat and tidy about (1), (2), (3), and (4), but there's the liar's paradox problem about (5). So I say "(6) except (5)", and so on, and however many of those qualifications I add, it makes the previous one fine but makes itself problematic, and then the next one makes that one fine and itself problematic. And the can must be kicked indefinitely.

Good news: we can use one of those notation things to express an infinitely long sentence using a finite amount of ink. Here's the solution: "Everything I say is a lie, except that [capital sigma] and that". This expands to "Everything I say is a lie, except that, and that, and that, and that, and [et cetera]". Here's what's strange about that: if you do the full expansion, it's an infinitely long sentence, would take you an infinite amount of ink and paper to write down, or an infinite amount of time and breath to speak, but it expresses a proposition that's simply true. If I've been telling only lies before, then "Everything I say is a lie, except that [capital sigma] and that" is true, and its full expansion is an infinitely long sentence. So this operator that enables us to represent an infinitely long sentence using a finite number of symbols is sometimes the right solution to a recursion puzzle. "I know [capital sigma] one thing, which is that I know nothing, except for" is another one of these infinitely long sentences represented by a finite number of symbols, thanks to our friend the capital sigma operator.
This one expands to "I know one thing, which is that I know nothing, except for one thing, which is that I know nothing, except for one thing, which is that I know nothing, except for one thing, which is [...]". A simpler way of saying this is, "I know exactly one thing, which is the present statement." The point is that if you say "I know nothing", that's a statement that implies that you know that one thing, and then you know both nothing and one thing, which is a contradiction. So you can say, "I know exactly one thing, which is the present statement," and that's pretty much the same thing as saying "I know nothing" minus the contradiction problem. But it's a bit enigmatic to say "I know exactly one thing, which is the present statement." One might reply to that, "What the heck do you mean by that?" And then to that one might reply "I know one thing, which is that I know nothing, except for one thing, which is that I know nothing, except for one thing, which is that I know nothing, except for one thing, which is [...]". Unfortunately, that would take you an infinite amount of time to say. So instead you could say "I know [capital sigma] one thing, which is that I know nothing, except for". Or if you don't feel like explaining how an infinite expansion operator works, you can trust that it's understood that that's what you mean when you say, "I know one thing, which is that I know nothing." "I have a feeling this could go on for a while." "Forever." "We'll have to truncate." "No, we don't have to truncate." "Then what?" "concatMap(\b->let c d e=let f=d*2;g="? I";h i="Intolerance"++concat(replicate(i-1)" of intolerance");(j,k)=if e then(f-1,g++"'m against")else(f,g++" support")in"("++show j++"): "++h j++k++" it, except in the case of "in c b True++c b False)[1..]++"."" Determinism is meaningless? Incorrect. Determinism is irrelevant? That's a different question. You could argue that determinism has a meaning but doesn't matter. You could argue that determinism has a meaning and does matter. But if you say that determinism is meaningless, that's a big mistake, and wrong. Determinism may turn out to be a technical concept that has profitable purchase in limited scopes, but ends up mattering nothing more than the self-consistent bickering of warring theologians when it comes to what matters to practical society. If you say it's determined, and then you say everything else you can say following that point, you get as far as zero point zero percent of an explanation. Will this only ever matter in the case of impossibly advanced technology or divine intervention? "We have enough chaos to thwart any possible predictor. Even if you had a face-down predictor made of a metric gigaton of computronium, I could easily render it impotent with the amount of chaos I have in me." "That would have to be a lot of chaos." "Enough to give birth to a dancing star." We will not be shrinking away from recursions. Most of the task ahead is thinking about thinking, which is already recursive. We're gonna bring up every recursion puzzle that one naturally runs into in these lines of inquiry, and that's quite a few of them. And there will even be a few extra examples of recursion just to make sure we're getting comfortable with them. I think you're virtuous. How did you get that way? It's because you can decide that you want to be virtuous. How did you get that way? You have some constitution right now that enables you to decide to be virtuous, but not everyone has that kind of constitution. 
Perhaps you got that way because at some time in the past you decided that you would cultivate good character traits and relevant knowledge about how to deal with things. How did you get that way at that previous time? It could be said that hard determinism isn't something to take very seriously, but at the minimum, it does raise serious issues with soft determinism. In the final analysis, some kind of soft determinism is better than any kind of hard determinism, but if you want to wind up with a good formulation of soft determinism, you have to take the objections of the hard determinists seriously. If you were in the ideal situation, it would look nothing like our cultural idea of what the ideal situation looks like. You would only recognize it if you knew how to look. It's a fish that's too slippery for you to catch, but it's still a real fish just like the other ones that you can catch. You feel like you make decisions. But aren't you made of the stuff of physics? And doesn't the stuff of physics just do what it does? Does such stuff really make decisions? "Considering how many butterflies flapping their wings on the other side of the planet there are, to what extent are you at their mercy?" More further readings: podcast Omnibus Project episode The Chicago Seekers. "You say that this person is evil, and by that you mean that if he hadn't done all those things that he's done, the world would be a lot better off, or at least that this would be a plausible inference. But if you were to remove this person from the history of our world, that would require rolling back our universe all the way to the big bang and changing something about the initial conditions of the universe so that what happens instead is a world that's much like ours, only without this one person, but otherwise identical. When you judge that a person is vicious or virtuous, do you really mean to wrap that up with such outlandishly heroic super abilities that we have to imagine?" "No. Fortunately, running a counterfactual is a lot simpler than that. I can imagine rolling back time 20 years, then *poof*ing this guy out of existence, and then resuming the processes of the world from that point. And so can you. And it's damn fortunate that our reasoning processes come with operations that are that tidy. We can easily imagine a world wherein this guy was born, grew until the age of 20, then got *poof*ed out of existence, and then the rest of the world going on as it would in that case. And the further conclusions we draw from that process of imagining are reliable and meaningful. That's what it means to run a counterfactual. That's part of the faculties of reasoning we all have. And it is a great deal of utility indeed that we draw from it, and not in the manner of kidding oneself. That's just saying, "This fails, and therefore this succeeds," which is clearly bullshit when you say it that straightforwardly, but there's a special kind of talent that some people can use to say something like that and make it sound fine. This is a snow job. If you read what this guy said, and you think it means anything other than "This fails, and therefore this succeeds," then you've been snowed by this piece of writing. Projectivism: properties are things that we project onto objects, but that doesn't mean that those properties don't really exist. Deserve's got nothing to do with it, or non-basic desert? The kind of freedom that may warrant reactive attitudes and utilitarian responses is compatible with determinism. Rights can be utilitarian.
They don't have to be deontic. Doing all the same things as retributivism while calling things by the same words as retributivists do can be utilitarian. Solution to the illuminati prescription problem: use folk terms for people who are fine with that, and define them in non-standard ways in the rule books for people who want to look those up. I've seen a roulette wheel that appeared to be agentic. I've only spectated roulette a few times. One of those times, there were several silly fools placing their chips all over most of the board - there were several numbers with big stacks of chips on them, several numbers with small stacks of chips on them, and only a few numbers with no chips on them. Then the ball fell, and it landed on one of those numbers that had no chips on it. The dealer just swept all the chips off the board, straight into the house's profits. And then again, and again. This roulette wheel seemed to have a mind of its own. I didn't see what it looks like when the winning number has a bunch of chips on it and the dealer has to do the math to multiply that by 35 and pay off accordingly. It's like the spinning wheel and the revolving ball were working to countermand all intentions of the fools gambling, and bring profits to the house at a rate far exceeding the rate that would be expected by real randomness. The stuff of physics just does what it does. In some configurations it makes decisions. That includes the configuration of you or me. In those cases, it's the stuff of physics making decisions, but that's just part of doing what it does - that kind of decision is not free from the laws of physics. The primary question is: "Is everything like clockwork?" Other lines of inquiry branch off that. It's easy to do a bad reductio of utilitarianism when you strawman it. If you pretend that every utilitarian has to follow the most simplistic formulation, you can imagine them doing all manner of outrageous things. Like how a decision can be explained by citing something in its future, the reason for retribution is that it's meant to be a disincentive for your past decision. It's possible to have big effects by working with incentives. Lorem Derek Parfit water cart analogy. If by 'deserve' you mean a reason for doing something that can't be made intelligible, then 'deserve' is an empty concept and no one's ever deserved anything. The best of all possible sims worlds. At what point in biological history did we evolve the third sort of thing, and where in the body is it found? How does chemistry emerge from physics? This is related to why the study of physics and the study of chemistry are called separate disciplines. If we didn't have this convention, then auto mechanics would be called one discipline of the study of particle physics. Are there people who keep trying to understand biological ecosystems by studying particle physics and who keep failing due to that methodology? Concurrent top-down and bottom-up processing is, surprise, nothing in addition to the lower level. It might be sloppy to say that indeterminism emerges from determinism, but it might be fine to say that freedom of some sort emerges from determinism. [] Unsorted Pile 7 The folk concepts are that you have an immaterial soul, that this soul is where your consciousness and your self are, and that your conscious self makes decisions that are not subject to the laws of nature. The truth is that you don't have an immaterial soul, your self is the bundle kind, and consciousness is produced by the brain.
You have the illusion of having an ability that can't even be made intelligible. Now that's a self-serving bias. Determinate matter can get together into complex arrangements that have simple stimulus-response interactions with their environment. And then it can get into even more complex arrangements that have even more nuanced interactions with the environment, because they're free of that constraint of only simple stimulus-response. So that's clearly some kind of emergent freedom. What parts of the mind do we have volitional control over? I'll flip a coin, and if it lands heads, I'll raise my right hand, and if it lands tails, I'll raise my left hand. I'll look at a blue object and flip a coin, and if it lands heads, I'll get the sense impression of a blue object, and if it lands tails, I'll get the sense impression of a red object. I'll flip a coin, and if it lands heads, I'll force myself to exercise every day until I get into the habit and then feel like exercising every day, and if it lands tails, I'll force myself to read books every day until I get into the habit and then feel like reading books every day. Clearly it works in some cases that you can will what to will just like you can will to move a limb. You can't cross-wire your optic nerve using the power of thoughts, but you can do plenty of things to your will using the power of thoughts, at least sometimes. At this point, a hard determinist will be itching to mention that "It's antecedent causes all the way back whenever that happens." This must be acknowledged. It is antecedent causes all the way back, but it's also a legitimate question to ask what characteristics an agency has. My cat has some kind of agency. I have some kind of agency. Those two kinds of agency have some things in common and some differences. Are some of those differences relevant? Hard determinism, properly defined, is the assertion that there's no relevant difference between what kind of agency I have and what kind of agency my cat has, at least as it matters to moral responsibility. Soft determinism, properly defined, is the assertion that there are relevant differences between what kind of agency I have and what kind of agency my cat has, and that these matter to moral responsibility. One bad definition of soft determinism is, "Determinism is both true and false." If you think that's the definition of soft determinism, then hard determinism probably seems more attractive. If we use "free will, the kind that matters for moral responsibility" in our definitions of determinism, then hard determinism can be defined as, "The truth of determinism negates the kind of free will that matters for moral responsibility." So if there's some relevant difference between what kind of agency I have and what kind of agency my cat has, and that matters to moral responsibility, then soft determinism asserts that this difference is worth investigating. One could object to the "the kind that matters for moral responsibility" definitions if one thinks there is no such thing as moral responsibility. But you can define moral responsibility as whatever appropriately inspires reactive attitudes and utilitarian responses (if you don't care for the basic desert idea). Even if you're a utilitarian and a naturalist and you think "the good is whatever maximizes utility" is a complete statement with no "further facts" required, you can still say, as a definition, that thing about what moral responsibility is.
And since there are differences between how we do those things about people and how we do those things about cats, and since that's based on what differences there are in the kinds of agency involved, that all together proves that hard determinism can't be true, that soft determinism is true if determinism is true. Similar expressivism. There are things that inspire me to say things like "boo" and "hurrah" to my cat, and there are things that inspire me to say things like "boo" and "hurrah" to people, and there are differences between those two sets, and those differences are due to the different kinds of agency that cats and people have. Admittedly, I would treat everyone the same as I treat cats if they were to let me. If all knowledge of gymnastics got *poof* disappeared from the world, we would reinvent forward rolls and backward rolls. If you want the low impact kind of forward roll, it's not a symmetric movement, but you start either on your left shoulder or your right shoulder (the parkour forward roll vs the gymnast's forward roll). Likewise ethics. If all discourse of ethics got *poof* disappeared from the world, we would reinvent much of the same discourse. But that doesn't mean that ethics is "out there" in the world to be discovered. It might be that these are just quirks of our own biology for the expounding. It would not make sense to say, "I'm a hard determinist, but I agree with soft determinists about all that agency stuff," because that would just be to say, "I'm a soft determinist." Likewise, it would not make sense to say, "I'm a soft determinist, but I agree with hard determinists about all that agency stuff," because that would just be to say, "I'm a hard determinist." Free will, the kind that matters for moral responsibility, or whatever we're doing as a substitute for it. Virtue is something you can win in a lottery. Everyone who got it got it that way. Then there's a lot of other things which to a virtuous person are not lotteries but to other people they are. Stuck point between two people with the same stances but one says, "free will, the kind that matters or would matter for moral responsibility," therefore hard determinism, and the other says, "free will, the kind that matters for moral responsibility, or whatever we're doing as a substitute," therefore soft determinism. If there are two people, and this is the only remaining thing they're disagreeing about, then it's only definitions left that they're disagreeing about. They can seem to be disagreeing about many things, and be unaware that none of the disagreements are substantive, because they didn't realize they were using these two slightly different definitions of hard and soft determinism (and that was really the thing to address to clear up the disagreements (or apparent disagreements)). They can walk away both agreeing that one is still a hard determinist and one is still a soft determinist without realizing they had agreed on all substantive questions. In a less-bad case, they end up saying, "We're only talking past each other at this point, but I don't know how to fix it." In the best case, they figure out how their apparent disagreements are just disagreements about definitions. Consider the statement, "We'll see if x can show us that determinism is true, and maybe x can also show us that free will doesn't exist." It's a somewhat strange thing to say. 
If x can show that determinism is true, then x plus a definition, that free will is the negation of determinism, shows that free will doesn't exist, or x plus a definition, that free will is something other than the negation of determinism, shows that x can't tell you anything about free will. So the statement, "We'll see if x can show us that determinism is true, and maybe x can also show us that free will doesn't exist," has exactly as much weight as, "We'll see if x can show us that determinism is true," plus a definition, and a definition doesn't carry any weight. Determinism is unfalsifiable because it's about everything. Contra-causal free will is unfalsifiable because it's always potentially about something not yet detected. It doesn't matter if determinism is unfalsifiable in that absolute sense, since we know how much almost everything relevant is deterministic (or a near-perfect approximation to it). Also, the concept 'deterministic' is a concept that makes sense to apply to things narrower than the whole universe. In order to survive in the modern world without succumbing to the brain rot, it takes a certain set of survival skills, and those skills are so far removed from the skills relevant to our evolutionary history, it seems like pure dispositional luck that determines who succumbs and who doesn't. It could be a matter of public education, and then it wouldn't be a lottery. If one wants to have that kind of attitude that can live in the modern world, and endeavor to understand it, and also not get beat down by it or taken in by charlatans, what does that take? One needs to have a degree of incredulity, but not too much incredulity. One needs to have a degree of emotional sensitivity, but not too much emotional sensitivity. Et cetera. Who ends up having those things and who doesn't? The people lucky enough to roll a certain set of dispositions in the dispositional lottery end up having those things. All the people unlucky enough to roll a different set end up in this or that bad way: one so credulous that he lets a charlatan control him, one so emotionally sensitive that he gets beat down by the existential bugbears, one so incredulous and emotionally numbed that he thinks nihilism is a good idea, et cetera. The world is like the movie Groundhog Day, but without the repeating. Add to recommendations: Groundhog Day, and Groundhog Daying. If Frankfurt's free will is second order desires for the first order desire that wins, then it would be dangerous to say that more free will is better. Doing a good job of maximizing that kind of free will would not leave enough room for things like blunder and angst. The quality of one's second order desires would deteriorate. Part of the good life is the tensions between the parts of oneself, how those often go other than planned, reflection on that, and then a kind of insight that helps you know things like what second order desires would be good to pick. In a world where second-order desires always win, there would be no akrasia, and there would be no good artists, and there would be a dearth of insight about which second-order desires are really the good ones. "People with few vices tend to have few virtues." - Abraham Lincoln [lorem - wording] The daimon is the real nth order unconscious desire. When a second-order desire happens not to be effective, it's because the daimon had a third-order desire that ran counter to the conscious second-order desire. 
For the enlightened, or those on the path to it, the daimon will counter second-order desires according to some pattern that results in more insight than second-order desires always winning (and the daimon being silent). For the unfortunate, the daimon will lead only to self-destruction in countering second-order desires. This is something like "controlled chaos", but not exactly. This is something like Bayesian inference, but not exactly. It's a matter of disposition whether you tend to stick to your dispositions or not. Whether you readily set aside your dispositions when that will be the best way to make a decision, or whether you always decide according to your dispositions like you're stuck to them: that's a matter of disposition. It may be imperative to have MN sanity and degrees of a different kind of insanity. Perhaps an amount of such insanity that doesn't qualify as actually insane, like how personality traits are milder forms of mental disorders. The same epistemological status, and therefore the same ontological status as far as we might care about. Is "moral responsibility or whatever we're doing as a substitute" a distinction without a difference? It could be that the sets happen to be identical but the means of generating them are different. Then that would be a real difference. But if they're identical sets and the means of generating them differ only in terms of one or more definitions, then it's not a real difference. If we're using "free will, the kind that matters for moral responsibility" in our definitions of determinism, and we're also insisting that we mean only the kind of moral responsibility that's based on a concept that can't be made intelligible, then we're probably using a terrible definition of determinism, and we should probably be using some other one. Let's not let the non-existence of leprechauns make our talk of biology confused. Imagine that in biology, the taxonomy included a genus defined as "Leprechauns and other animals that have the same number of limbs and the same number of vertebrae." And imagine that this genus was defined that way at a time when it was popularly believed that leprechauns existed. Then, some time later, it became popularly believed that leprechauns don't exist, but we kept the definition of that genus the same. Suppose that at that time the genus included rabbits and squirrels. Now suppose that someone proposed that cats should also be in that genus. Naturally, the next question that would arise would be "Do cats have the same number of limbs and the same number of vertebrae as leprechauns do?" And inquiry into the answer to that question might produce a lot of pointless friction. At that point, it would be clear that the definition of that genus should be revised in light of the widespread belief that leprechauns don't exist. Biology indeed does run into a lot of philosophical problems like that regarding issues of its taxonomy. And biologists typically have a pretty robust way of settling issues like that. Metaphysics has been less lucky. A lot of problems that are pure murk have sprung from questions like, "How shall we describe the qualifying conditions of this class in terms of nonexistents," when perhaps the better question would be, "If we've been describing the qualifying conditions of this class in terms of nonexistents, what would be a better thing to do instead?"
It's like we wondered whether lab-grown meat was kosher, and then we got all up in arms over the answer to "Does the null set of animals contain zero of the kosher kind of hoof, or zero of the other kind?" [quotation credit: Ken Jennings] You can say, "I picked up the keys because I'm going to the store soon" or "I picked up the keys because I had just decided to go to the store." The two statements say the same thing, but one of them seems to put the cause after the effect, and the other one seems to put the cause before the effect. Of course, the cause indeed was before the effect, no matter which way you choose to phrase the explanation. So the second statement, the one that says you formed the intention to go to the store, and then picked up the keys, is what you really mean when you happen to say something like the first statement. It's only a quirk of language that lets us talk as if the chronological sequence goes effect-then-cause, as if a cause-effect relation can have the effect first and then the cause later. Re utilitarian determinism self-contradicting. There's exactly one possible outcome, and that one outcome is both the best possible outcome and the worst possible outcome. Therefore, every act is both moral and immoral. Reductio QED. And no act is either more or less moral than the alternatives, because there are no alternatives. So utilitarianism requires either free will or a determinism with a CDO-if. And determinism with a CDO-if is soft determinism. Therefore a utilitarian determinist must be a utilitarian soft determinist. This generalizes not just to utilitarian contemplations, but to all decision-making. If you're a determinist, whatever kind you are, you use the framing of CDO-if, if you are to make any sense of how you make decisions. But is CDO-if really the definition of soft determinism? A refactoring: "determinism is/isn't compatible with whatever that thing is that matters to moral responsibility." For all the people hung up on the words "free will", this refactoring works perfectly for distinguishing between hard determinism and soft determinism while entirely avoiding the words "free will". Whether you're a hard determinist or a soft determinist depends only on this formulation in these terms. If you thought it was some other way, then you were probably in some kind of flight of fancy about what the words "free will" inspired in your imagination. The thief stole the sack of gold, but he could have left it if he had wanted to. The judge issued a utilitarian punishment, but he could have chosen not to if that's what he had wanted. The utilitarian punishment was calculated according to a retributivist scheme, but the rulebooks stated that the terminology was all either metaphorical, or swapped with terms like "boreal responsibility" instead of "moral responsibility." The punishment was administered. It was the best of all possible outcomes, if we define "best of all possible outcomes" as "best of all of the outcomes that, following the theft, could possibly have arisen assuming anyone at that point could have had any desires." Which moral system has the best outcomes? We could compare the candidates, but the utilitarian one is defined as the one with the best outcomes, so it automatically wins. Or rather, whichever one wins automatically becomes the utilitarian one. You learn to control your body well enough to stand and walk, and you learn to control your decisions well enough to...? To respond profitably to correction.
One of the possible states of the logical faculties of my brain is for them to plan how to build a certain piece of furniture. Yesterday, the logical faculties of my brain indeed were in that state. The intention faculties of my brain both wanted and caused the logical faculties of my brain to be in that state at that time. Therefore, the intention faculties of my brain had control over the logical faculties of my brain. One of the possible states of my body is standing. Last time I was standing, the intention faculties of my brain controlled my body to make that happen. Some people don't have the control to stand because the intention faculties can want but can't cause standing. Some people don't have the control to exercise any willpower because the intention faculties can want but can't cause other parts of the brain to be in whatever states they want. A lot of confused people do a lot of confusing of words for concepts. A lot of much clearer-thinking people only confuse a word for a concept once in a while. A few people have become so clear-thinking that they never confuse a word for a concept. Whenever there's something that's a definition, but you think it's a concept or a belief, that's when you've mistaken a word for a concept. Imagine someone walked into a restaurant, got the menu, and then ate the picture of a hamburger that was printed on that menu, because they forgot that you can order the hamburger using the menu and then eat the actual hamburger rather than a picture of it. This is what's meant by confusing the reference for the referent. The semicompatibilism thing is one kind of eating the menu. This kind of eating the menu will tend to lead to disagreeing about definitions. A concept that works to explain how some part of the world works, and works to make accurate predictions about it, is a profitable concept. A profitable concept is a real concept. A real concept is real, at least as far as we care about anything being real. If "really real" means anything, then concepts such as prime numbers are either "really real" or as similar to "really real" as we might possibly care about. There are true facts about the world. Profitable concepts are profitable inasmuch as they make predictions and explanations about the true facts about the world. Let's define a certain body of discourse. Suppose we start by asking, "What things would wishful thinking lead me to believe are true that are also unfalsifiable?" and next we ask "What do those things entail?" So we might start with something like, "Does the universe have a purpose? Well, I think I would feel better if the universe had a purpose than if it didn't, therefore wishful thinking would lead me to believe that the universe has a purpose." And, "Am I immortal? Well, I think I would feel better if I were immortal than if I weren't immortal, therefore wishful thinking would lead me to believe that I'm immortal." So we populate column 1 with statements such as "The universe has a purpose", "I'm immortal", "There's a grand creator and orchestrator of the universe", "The grand creator and orchestrator of the universe cares about me personally", "I have the kind of free will that's neither determined nor random nor any combination of determined and random", and so on. And then in column 2 we work out what those things entail.
Those include "The grand creator and orchestrator of the universe created a universe in which there are beings that have the kind of free will that's neither determined nor random nor any combination of determined and random" which is self-inconsistent in more than one way. And then a great body of work is generated in attempts to argue in favor of this or that self-contradiction. One guy says, "If a being created this universe and in such a way that such being could predict everything that would happen in it at the moment that this being created it, then that's just what it means for everything to be predestined, in which case, there's no way that any of the beings in it could have the kind of free will that's neither determined nor random nor any combination of determined and random," which is an attempt at taking some number of the wishes as self-consistent but rejecting other of the wishes. And one other guy says, "It is possible that a being created this universe and in such a way that such being could predict everything that would happen in it at the moment that this being created it, and also in such a way that things aren't predestined, and there can be beings in it that could have the kind of free will that's neither determined nor random nor any combination of determined and random," which is an attempt at granting true a set of wishes that really can't be made self-consistent. Surprise: this ends up being identical to what in our actual world is called "theology". And, surprise, most of that body of discourse is silly and a waste of effort. If free will libertarianism insists that there's always a remaining "I don't know" factor, any feasible version of this will still have to admit that what solutions certain determinists propose are the same ones that they must agree with. If acting according to your values is deep self, then there's a far more important deeper self. The prescriptive parts are the personal, the interpersonal, and the institutional. The compatibilist stuff is largely comparable with free will libertarianism. You can describe semicompatibilism as, "Thinking the hard determinists are better metaphysicists and the soft determinists are better ethicists," and then it's substantial to say, "I believe semicompatibilism is true." "What way are things?" vs "What should we do about it?" Whether a concept is a "really real thing" It will be good for the plot. Maybe we can define the difference between hard and soft determinism as whether you regularly reason using the CDO-if answer to PAP or whether you regularly reason in terms of C-actually-DO. This could be a real division of the population into two groups, and maybe the best one, because it's a distinction about what you tend to do with your brain. Well, when you play chess, and you use the illusion of free will to imagine possible timelines and the system of branching they consist in, that's just what it means to think in terms of CDO-if. By that definition, there could be a hard determinist chess player, but there's no way he could be any good, because he would be forbidding himself of using the most useful thought process. And aside from chess moves there's about 99 other things that also indicate thinking in terms of CDO-if, and usually one or more of these you see people doing within a minuite of meeting them. 
As for a hard determinist who is good at chess, I would like to ask them what they're doing with their brain when they're playing chess, ask them if they know that it's reasoning in terms of CDO-if, ask them if they know that's the hallmark of what it means to think like a soft determinist, and then ask them what it would mean to still call themselves a hard determinist when using the tool of the soft determinist to figure out their moves. Objection and response: well, even if most of the decisions for a person are closer to stimulus-response, and fewer of them are forking paths, the forking paths mode is a higher module built on the lower module of stimulus response. So if 90% of your decisions are stimulus-response, and 10% are forking paths, then that 10% is a lot to account for, and if you admit that it's because you're a soft determinist, then that solves that 10% and leaves the other 90% unproblematic. The deep self accounts of free will are like those descriptions of how science works that neglect to provide any account of "but what are the good and bad ways of even making hypotheses?" If they describe the difference between a will that's free and a will that's not free, they don't describe the difference between a free will that's interesting and a free will that's not interesting. What kinds of free will are worth wanting? 1: a will that's free of certain constraints both metaphysical and societal, and 2: a kind of free will that's interesting. Imagine I have a beetle in a box, and you have a beetle in a box, and I can look at my beetle, but I can't look at the beetle in your box, and you can look at your beetle, but you can't look at the beetle in my box. And we want to describe these beetles to each other using words. Now suppose that my beetle and your beetle are identical, but through some confusion in the communicating, we both conclude that the two beetles are quite different. That's what it's like when two people are disagreeing about nothing other than the definitions of words while they think they're disagreeing about something substantive. There are a lot of things in the free will debate that I think are only disagreements about definitions, and that are not substantive. But there are also plenty of disagreements in the free will debate that I do think are substantive. Sometimes the beetles in the boxes indeed are different. Some people say, "I can follow the dictates of reason, and that's freedom." Some other people say, "I can follow the dictates of reason, and that's fixity." These two people might then agree on all substantive matters, and then if someone asks them, "Does free will exist," and one says "yes" and the other says "no", and they're convinced that a real disagreement remains, then they're wrong about that, because the only remaining disagreement is a disagreement about a definition, not a disagreement about the way anything actually is. They both have identical beetles in their boxes, but they're convinced that the two beetles are different, due only to a failure in their use of language. You don't solve the case for nihilism substantively. At best you solve it prescriptively. "If I were in his situation, I would be him" is one of those facts that's simple but hard to understand. If I try to imagine being a guy named Bob who was born and raised in New York, what I imagine instead is myself moved to New York. Then the imagination task fails. It's because we're stuck to our selves. We just imagine the same self in a different location.
If I were born with the exact same genetics as Stephen Fry at the same time and location that Stephen Fry was born, then I would be Stephen Fry. That's just what it would mean for me to be Stephen Fry. In the real world, that's just what it means for Stephen Fry to be Stephen Fry. How many times has this happened: a theologian, or a fan of theologians, gets the idea that he has his free will thing, and that this free will is something that enables him always to supersede his biological dispositions, and then he commits to an agreement that binds him to the prospect of that free will working that way always, and then, after denying his biological dispositions for a certain amount of time, the whole prospect lapses, much to the dismay of an unfortunate choirboy. So it may be said that a person has a justified antipathy toward free will hokum, that these airy philosophical musings do quite often have implications for things as physically real as so many a dismayed innocent bumhole. Does a particle collider have free will? Does a quantum computer have free will? Look at how far we've got in designing technology to interact with things on the level of quantum weirdness, in ways such that you have to understand that weirdness in order to design the technology, and strangely none of the products of that have been the free will chip that you can install on any computer. The move toward excusing people based on motives or based on involuntary factors, it's been good when done right. It's been done disastrously when done categorically. One can draw an illustration of someone with deep self free will who is uninteresting to the point of being ineffectual. One can also draw an illustration of someone who has less deep self free will than most who is both interesting and effectual, and infinitely more virtuous and worthwhile. Qualifications: assuming the deep self is the conscious kind. Also categorizing all acts as moral or immoral misses more than it illustrates. Also the asymmetry argument is foundationally flawed. The ideal is to define deep self free will in terms of motives that might either be conscious or unconscious, and to have that kind in an integrated alignment. Suppose there's a guy named Bob, and Bob has the kind of willpower problems that we may call diminished free will by the deep self model. And suppose we're taking deep self free will in the conscious sense. If you do what you want, but it's not something you consciously want to want, then your free will has failed. Suppose further that Bob's conscious second-order desires are all just really plain and uninteresting. And suppose that Bob's subconscious is actually interesting, but Bob's psyche isn't very integrated, so Bob's conscious desires and Bob's subconscious desires are quite at odds. And suppose that every time Bob's free will, in the conscious sense, lapses and Bob does something that Bob didn't consciously want to want to do, it's something that Bob subconsciously did want to want to do. And suppose that every time this happens, what Bob does is more interesting than what Bob would have done if Bob's conscious second-order desires had been effective. We can suppose that Bob's actual experience is one that's interesting, relevant, acutely well-aimed, and it's only by the conscious deep self definition that Bob's free will keeps lapsing. In this case, we would not want Bob to have more free will in the sense of conscious deep self. If Bob did get more free will in that sense, he would be less interesting, less effectual, and so on.
It's only when Bob's subconscious wins out that Bob's agency produces the more interesting and more relevant things. In Bob's case, it becomes urgent that we define two different ideas of deep self. If we define deep self as the conscious second-order desires, then Bob has little of that kind of free will, and plenty of what matters a lot more than that. If we define deep self as subconscious second-order desires, then those are the things that keep winning out, we're all a lot better off on account of those winning out all the time, and we will want to define some other kind of deep something that captures what that thing is. So let's say that free will in the sense of Wolf's deep self is when one's conscious second-order desires win out, and let's define a new term, the bathyal self, and when one's subconscious second-order desires win out, that's free will happening in the sense of the bathyal self. So this Bob has all the free will in the sense of the bathyal self, and almost no free will in the sense of Wolf's deep self. So Bob is interesting and effectual despite having almost no free will in the sense of Wolf's deep self? Yeah, technically true, so then how do we capture what matters? Bob is interesting and effectual because he has all the free will in the sense of bathyal self. There are also people who have no free will in the sense of conscious deep self, whose subconscious desires keep winning out, but whose subconscious desires are terrible, and everyone is worse off when those win. This is someone who has free will in the sense of bathyal self, but who just has a terrible bathyal self. So, after all these new distinctions and dichotomies, what's the kind of free will that's worth wanting? Free will in the sense that one's bathyal self wins out, where one also has a bathyal self that's well-aligned. So, the ideal for the world is that as many people as possible have free will in some sense, that sense is the bathyal self sense, and also that the bathyal self of those people is more good than bad. Which model will have the best outcomes? Wait a minute. Utilitarianism was already defined as whatever has the best outcomes. If the retributivist model will have better outcomes than the utilitarian model, then that just proves that the utilitarian model you used was not the best model of utilitarianism, and you can make a better one. So whichever model wins out in the analysis, you can then say, "Newer and better utilitarian model: whatever that other one is." Calvinism does work pretty well practically. Conceptually, what it comes down to is this: the god loved some people and hated some other people before he even created them. So the bad ones do bad things, and that's why they deserve to go to heck after they die, and the good ones do good things, and that's why they deserve to go to disneyland after they die, and that's just the way things are designed. So when you get mad at someone for some transgression, you can hate them, because they constitute something worth hating, and that was part of the god's plan since before that person was born. Works neatly in terms of how to deal with agents in the world. Less satisfying is the concern about, "But why did the god do it that way?" Spoilers: all attempts at answering that question satisfactorily involve saying stupid things and failing to answer it satisfactorily. The best case for two-boxing in Newcomb's is, "I don't buy backwards causality." Here's the supporting argument: "Once I'm in the room, the opaque box is either empty or it's not.
And nothing I do while I'm in that room can change that. So whether there's money in it or not, here we are, and I'm in the room, and it's time to make the decision. So I'll take the clear box and the opaque box, because that has at least as much total in it as just the clear box." Though fallacious, this does sound a lot like a good case. Knocking down this argument is the greater part of knocking down any possible reasoning for two-boxing. Here's how that knockdown goes: it's not backwards causation. This Monday, there was some state of the world, including the state of your brain, and this Monday, the game dealer scanned all that, and became informed about the state of the whole world, including your brain. The state of your brain is such that, given certain situations it might find itself in, it will do certain things. Bob's brain is such that if I drop something heavy on his foot, he will say, "Ouch", and if I beam him aboard my spaceship and explain the rules of this game to him, he will say, "Just the one box". Alan's brain is such that if I drop something heavy on his foot, he will say, "Ouch," and if I beam him aboard my spaceship and explain the rules of this game to him, he will say, "I'll take both boxes." I'm only beaming people aboard my spaceship this Tuesday, but when I scanned the world this Monday morning, I found that there are brains in it that have these different properties, and those different properties take the form of, "If given situation x, it will do y." By Monday night, I had looked at the scans thoroughly enough to know those things about those brains. So on Tuesday, I beam Bob aboard my ship, I explain the game to him, and he says, "Just the one box." And I knew he would, because by Monday evening, I knew that he had the kind of brain such that if you put him in the game, he will say, "Just the one box." And on the same Tuesday, I beam Alan aboard my ship, I explain the game to him, and he says, "Both boxes." And I knew he would, because by Monday evening, I knew that he had the kind of brain such that if you put him in the game, he will say, "Both boxes." Was there any backwards causality in any of that? No, only the ordinary kind of forwards causation. Causes and then effects. Effects after causes. On Monday, Bob had this kind of brain, Alan had this kind of brain, and I wanted to run experiments on both of them. On Tuesday morning, I put money in the opaque box for Bob's trial and no money in the opaque box for Alan's trial, because of what I had learned about their brains the previous night. On Tuesday afternoon, Bob won the game and Alan lost it, because of what was in their boxes, and what was in their boxes was due to the states of their brains the day before. The states of their brains on Monday were such that they would react some way on the Tuesday, Bob saying one box and Alan saying two boxes, and I decided what to put in their boxes based on what I knew about their brains on the Monday. On the Monday, I knew, "This guy has the kind of brain such that if he plays this game on a Tuesday he will say this." And then on the Tuesday he did say that, but his saying that on Tuesday afternoon is not what caused my putting either something or nothing in his box that Tuesday morning. His having this brain predisposed to these certain reactions on the Monday was both the cause of either my putting money in his opaque box or not on Tuesday morning, and also was the cause of his saying either "both boxes" or "one box" on that Tuesday afternoon.
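The Monday/Tuesday story above can be written out as a tiny Haskell sketch. The dollar amounts are the standard ones from the usual telling of the puzzle (the notes above don't fix any amounts), and treating a brain as nothing but its disposition for this one game is a deliberate simplification; the point is only that every arrow of causation in the model runs forward.

-- A brain, for the purposes of this toy model, is just its disposition:
-- what it will say if the game is ever put to it.
data Choice = OneBox | TwoBox deriving (Eq, Show)

type Brain = Choice

-- Tuesday morning: the dealer fills the opaque box using only what the
-- Monday scan revealed about the brain. (Amounts are the standard ones
-- from the usual telling of the puzzle, not from these notes.)
opaqueContents :: Brain -> Int
opaqueContents OneBox = 1000000
opaqueContents TwoBox = 0

clearContents :: Int
clearContents = 1000

-- Tuesday afternoon: the same brain state now causes the choice. The choice
-- doesn't reach back and change the morning's contents; both the contents
-- and the choice are downstream effects of the Monday brain state.
winnings :: Brain -> Int
winnings brain =
  let opaque = opaqueContents brain  -- fixed before the choice is made
      choice = brain                 -- the act the scan foresaw
  in case choice of
       OneBox -> opaque
       TwoBox -> opaque + clearContents

main :: IO ()
main = do
  putStrLn ("Bob, the one-boxer, walks out with "  ++ show (winnings OneBox))  -- 1000000
  putStrLn ("Alan, the two-boxer, walks out with " ++ show (winnings TwoBox))  -- 1000

The one-boxer does better, and nowhere in the model does anything on Tuesday afternoon cause anything on Tuesday morning.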
++ movie recommendations: The Meaning of Life by Don Hertzfeldt. Here I stand. I could have done something else if I had wanted to, but standing here right now is what I wanted to do, and I couldn't have wanted to do anything else, in the sense that there's one world, and only the things that do happen are the things that could have happened, and in that sense of the word 'could', I could do no other. Soft determinism is something I see not so much as a thing that's true, but more as a thing that's good - soft determinism as a moral prescription rather than as a statement of fact. It's both, but the goodness of it is more relevant than the truth of it. It's true in the sense of, "Soft determinism provides the best outcomes," which is just to say that it's true that it's ethical because it's ethical. "It's ethical" is more fundamental than "it's true that it's ethical (and also has no inconsistencies either factual or internal)". "Here's a surprising truth. You thought that if determinism is true then there can't be free will, but I can demonstrate that there can be free will even if determinism is true," is dishonest because it's dressing up a definition as something substantive. Something like that demonstration is indeed what soft determinists are up to, and it is true in the sense of self-consistent with its definitions and with the way things are and the best grounds for morals, but to say it as above is misleading. It's like saying, "I'm about to blow your mind by showing you how a thing can be consistent with its negation," and that's just not what soft determinism is. We could define "basic desert" (or whatever term) as the kind of deserving that comes from when someone did something and could actually have done otherwise. Then that kind of deserving is not compatible with any kind of determinism. One of the strawmanning ways of defining soft determinism is believing that determinism is true and also believing that this kind of deserving is possible. Of course, that's just the same as defining soft determinism as believing that contradictions are true plus one intermediate step. Re Newcomb's. Only after you make the decision do you learn which way the world is and was since yesterday and whatever amount of time before. Only after you've submitted your answer. Even if you try to do a double cross. It's whatever answer you submit after all attempts at double crossing or tricking the game dealer. Why the talk about determinism doesn't crack at the seams when the real question is mechanism: the substitutions are straightforward, and when you apply them, the arguments still stand and the conclusions still hold. E.g. for "could actually have done otherwise," sub in, "could actually have done otherwise, and for reasons other than pure randomness," and there's the patch. That's why I prefer to talk in terms of determinism when what's really going on is, "I mean mechanism, but I'm saying determinism for simplicity's sake, but the difference is this straightforward patch that introduces no problems." "You're a hard determinist?" "Yeah." "Do you think there are things about agency that matter to utilitarian responses?" "Yeah." "Are you a soft determinist?" "No." "But you are a determinist?" "Yeah." "Well, by the definition I use, a soft determinist is someone who is a determinist who also thinks that there are things about agency that matter to utilitarian responses. Are you a determinist who thinks that there are things about agency that matter to utilitarian responses?" "Yeah."
"Then you're a soft determinist by my definition." "Not by my definition." "What's your definition of a soft determinist?" "Someone who is a determinist who also thinks free will exists." "Free will as in what?" "Free will as in the negation of determinism." "Then your definition of a soft determinist is just someone who believes in contradictions." A hard determinist chess player would be someone whose decision processes are something like, "My opponent has advanced many guys on this side of the board, but I haven't advanced as many guys on that side, so my next move will be to advance one of my guys on that side of the board." It's something like stimulus-response, in a way that works better than randomness. They wouldn't be very good, but good enough to beat someone who is just doing random moves. I think that in the final taking stock of all stances, different stances will have to be defined in terms like this of differences between what different people will do in the same situations. The rest is either disagreeing about definitions or "just as good". How can we design the going bad of good intentions? This is a characteristic common of most scams. "Not knowing is knowing 'it'. Knowing is not knowing 'it'." - Word Guy "I think quantum random number generating is going to have something central to do with achieving general AI." "..Why?" "Well, narrow AI is all deterministic, but the intelligence that our minds have, I think you need to have free will in order to have that much intelligence, and we have that because of the quantum effects in our brains. So the only way we'll ever be able to improve the AI from narrow to general, it's gonna need to have free will in order to be that smart, and for a computer, a quantum random number generator is probably how you give it that." "None of those things have anything to do with each other in anything like the ways you just outlined, or ever will. Everything you just said was so incredibly wrong in all of the relevant ways that thinking works, so thoroughly down to the last detail, that until just now, I actually didn't think it was possible to say something that stupid while using big words and big ideas." "As for you, my fine friend, you're a victim of disorganized thinking." "What?" "I have doubts about how well you've been accessing the quantum effects." "Oh no!" "Yeah, you've been decohering more often than is good for your health, for example." When you kill a bug colony, with what attitude do you? Extreme prejudice? Sympathy? Neither? Something else? Lorem. Imagine someone who kills a bug colony with extreme prejudice. There's a bee hive on his front porch, and as he sprays insecticide on it, he's saying, "Was it not clear? This is clearly my property, and bugs aren't allowed here! You should have known when you looked! Look! It's a house! Not for bugs! No bug colonies allowed here! You bastards set up where you knew you shouldn't have! Now you die for that! Die! Die! Die!" If we get to, "It's either real or as good as real," then we're done the process of asking questions and answering them. If you say there's one remaining question, that of whether it's really real or just as good as real but not really real, then I say your remaining question is trivial. Same for the spot, "It's either true or as good as true," and the remaining question, "Is it really true, or is it just as good as true but not really true." 
And if, for whatever reason, the answer to that last question can't be ascertained (for example, if it's never possible to answer that remaining question about anything), then I additionally would like to tell you that it's a fool's errand to pursue the remaining question. As for when we agree on all substantive matters, but still have a conceptual disagreement, a similar principle applies. If we agree on all prescriptive questions, but we have some remaining disagreements about definitions, then it would be good to see that we're actually done what we set out to do. Perhaps. There may be exceptions to this. Sometimes, sorting out a conceptual matter in such a situation is just the sort of thing that can foster further progress. At the least then, what I'm saying is that it's important to recognize when it's the case that, "We have no remaining substantive disagreements for now." I firmly believe that pointless problems arise when it's not recognized that this point has been reached in a discussion. When practice contradicts theory, that's when it's particularly warranted to fix up the theorizing. Granted, this can also happen when all parties agree both about the prescriptive matters and the conceptual. Why does a modal world satisfy the CDO-PAP for moral responsibility? It does only when the modal world bears certain relations to the actual world. "You have some amount of self-control, and in the modal world I'm thinking of, you made better use of it than you did in this world" for blame or, "You have some amount of self-control, and in the modal world I'm thinking of, you didn't make as good use of it as you did in this world" for praise. "There's no free will." "Oh? Is there not? But you had to use your free will just now to say that there's no free will. So your account is self-defeating." "No, I said that as a robot. Beep boop, there's no free will. See, my account is self-consistent." "I'm good. I don't take credit for it, but I do welcome it. That's my substitute. It still feels pretty damn good." "Too bad you don't have modesty you can take credit for." "Fuck it." "Scratch an 'altruist' and watch a 'hypocrite' bleed" - Michael Ghiselin The better paraphrase might be "Everyone who seems like an altruist is really an egoist". One can argue that the asymmetry argument breaks down here. You can imagine someone doing something seemingly blameworthy, but for reasons that aren't really blameworthy, and you can imagine someone doing something seemingly praiseworthy, but for reasons that aren't really praiseworthy. Kant's shopkeeper is the mirror image of Robert Alton Harris. It might be possible to tidy up this symmetry by considering attribution effects. Take a trivial antipathy, such as a sportsball rivalry. If someone does something nice for a rival team that they find praiseworthy, you can find a reason why it's not so altruistic. Lorem. Altruism is not always a matter of social conditioning or mental malfunctioning. I could slough off all of my conditioning, and be functioning exactly right mentally, and I would still be an altruist. That's because altruism and egoism are not mutually exclusive. They are compatible. I do what's best for me, and what's best for me is doing what's best for other people (at least sometimes). Hard determinism is as good as a gay panic defense, multiplied by an arbitrarily big coefficient. And the Twinkie defense, et cetera. A belief is about the way things are. Expressing a belief is asserting something about the way things are.
A definition is about what a word shall be used to mean. Stating a definition is not asserting something about the way things are. A substantive disagreement is a disagreement about the way things are. It will be had by people with differing beliefs. A disagreement about a definition is not a disagreement about the way things are. It is not substantive. The good definition = "free will, the kind that matters for blameworthiness and praiseworthiness"? or "free will, the kind that matters for expressions of approval and disapproval"? Every useful tendency of thought can go wrong because of a broken negative feedback mechanism. Like a wrench thrown in a flyball governor. This is a fundamental principle of FMEA, but it also happens to be a fundamental principle of critical thinking. You get a new idea. If you had chosen to get that idea, you would have had to have the idea before you had the idea. This is a contradiction. It follows that when you get a new idea, you don't choose to get that particular idea when you get it. Once you have the lay of the land, then to state the differences between hard and soft determinism in terms of prescriptions is much easier than to state the differences between hard and soft determinism in terms of satisfactory definitions. The difference in terms of prescriptions is primary, and it's easier to tease apart. The difference in terms of concepts is secondary, and is mired in disagreements (disagreements about definitions). To date, there haven't been definitions of hard and soft determinism proposed that are satisfactory. A set of definitions of hard and soft determinism will be satisfactory when someone figures those out, and they happen to cleave things along the same lines as the prescriptive questions. It's less satisfactory working with definitions that only strawman the people who the person providing the definitions disagrees with. So it's good that the prescriptions of hard and soft determinism cleave things along clear lines. It's not so good that we haven't figured out the definitions of hard and soft determinism that entail those differences in prescription. As long as we don't have those definitions that entail the differences in prescriptions, we have a lot of confusion about what's a disagreement about definitions and what's a disagreement about the way things are. A hard determinist gives definitions of hard and soft determinism that strawman soft determinism, and that soft determinists don't like, and a soft determinist gives definitions of hard and soft determinism that strawman hard determinism, and that hard determinists don't like. Hard determinism: determinism is true, free will can't exist if determinism is true, therefore free will doesn't exist. Free will libertarianism: free will exists, determinism can't be true if free will exists, therefore determinism is false. Soft determinism: determinism is true, free will exists, and it can be that both determinism is true and free will exists. These definitions are fine. We can all agree on those. But the next order of business is more definitions. "Free will as in what?" "What definition would a hard determinist give for 'Free will as in what?' and what definition would a soft determinist give for 'Free will as in what?'": ask a hard determinist for these answers, and ask a soft determinist for these answers, and they won't agree. The common ground for definitions hasn't been found.
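For bookkeeping, here are those three definitions set side by side in shorthand, writing D for "determinism is true" and F for "free will exists" (the letters are just labels of mine, and nothing here goes beyond the paragraph above):

Hard determinism: D, and (if D then not-F); therefore not-F.
Free will libertarianism: F, and (if F then not-D); therefore not-D.
Soft determinism: D and F together, with no contradiction between them.

Set out this way, the three come apart over exactly two choices: whether to accept the incompatibility conditional at all, and which of D or F to assert outright.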
The simplest facile contradiction, "Soft determinists think determinism is compatible with the negation of determinism". The slightly less facile contradiction, "Soft determinists think that determinism is compatible with something that entails the negation of determinism." A soft determinist with any amount of competence will not accept either of these definitions of soft determinism. Any hard determinist who thinks that either of these definitions is a knockdown and that's all that's at issue is not giving a charitable account. When a hard determinist says we should imprison someone according to the quarantine model, what does he propose we should say to that person? Backward looking is coextensive with forward looking in basically all imaginable scenarios. Exemptions break the order of things in almost all realistic scenarios. "I must maintain the order of things" is warranted in almost all real scenarios. God's omnipotence includes being able to make any possible world real. It follows that our world is only one of many possible worlds. Therefore, determinism is false. I could ask a hard determinist for definitions and say that by his definitions I'm a hard determinist and not a soft determinist, and I could ask a soft determinist for definitions and say that by his definitions I'm a soft determinist and not a hard determinist. Therefore, I don't know if I'm a hard determinist or a soft determinist. In this work, "hard determinist" shall be a proxy or a shorthand for "hard incompatibilist" or "hard physicalist/materialist/naturalist/mechanist/whatever" (those finer distinctions won't matter here). "Soft determinist" shall be a proxy or a shorthand for "compatibilist physicalist/materialist/naturalist/mechanist/whatever". Free will is a terminological question but moral responsibility is where the debate is substantive? Free will is a terminological question, and moral responsibility is also a terminological question. When these terms can be made clear in all their variations, "free will as in what?" and "moral responsibility / deserving as in what?" (different options given and assigned different terms) then there can be a debate that sticks to the substantive. "Free will, the kind that's compatible with ever being able to warrant hating a person"? Better at trying to find the relevant distinction, but the debate may still be mired. I may believe that the free will libertarian kind of deserving is not a thing, and I may still hate a person as a reactive attitude and a mnemonic. In debate more often than in writing it happens that two participants just can't work out what's a disagreement about definitions and what's the substantive part. It happens a lot in writing by really smart people, where some parts are substantive and other parts contain things that look like assertions but are really just terminological. In debate, often you can have two really smart people stuck for hours not being able to work out how to get to the substantive parts, because they can't work out the difference between what's substantive and what's a disagreement about definitions. Sometimes, this ends with, "At this point, it seems like we just keep talking past each other, so we have to end it here," which is to say that they haven't worked out how to call different ideas by different names and figure out how to do the remaining parts that matter. And it genuinely is quite difficult in many philosophical endeavors.
For a subject like the free will debate, it takes a lot of smarts and dedication to work these things out. It takes some doing. Accomplishing the task of figuring out which are the substantive bits and which are the other bits, and then separating them, just doing that could earn one the title of the guy who saw it clearer than anyone before, saw through the mires. Unsorted Pile 8 I don't know if the causa sui condition is incompatible with free will. To argue this is pointless because "game without rules". Abolishing hate is more work than it's worth, and a disutility besides. More work than it's worth because keeping the attitude is easier than abolishing the attitude. A disutility because abolishing the attitude is less useful than keeping it. Frankfurt free will is about different kinds of internal compulsion. Tied to the mast. A technique for managing willpower. Don't you hate it when a sentence doesn't end the way you think it octopus? Talk of free will as though it exists is in error. Don't feed the Frankfurt trolls. What I write about it will be nothing in addition to what's already been written about it, and you can quote me on that without writing anything in addition to what's already been written about it, and nothing worth saying about it will be out of the scope of that. A mire contains water that isn't clear. A disagreement that's clear is one that makes it easy to discern the points of disagreement. A disagreement that's a mire is one that makes it hard to discern the points of disagreement. Seeing through a mire isn't easy because the water isn't clear. Sorting out a mired disagreement isn't easy because the points of disagreement aren't clear. "I don't entirely blame them, but I don't exactly not blame them." The patch (why saying 'determinism' is okay as an approximation to materialism/physicalism/naturalism/mechanism) + "aside from in exceptional circumstances, which are more rare than being hit by lightning." Those drugs must be so good that you're better off not knowing just how good they are. It's easy to say that free will is an emergent property of deterministic stuff. It's also easy to say a lot of stupid things right after that. If you want to do this part right, you can say that free will is an emergent property of deterministic stuff, and then say that what you mean by free will in that sense is something other than the negation of determinism, and also say that the emergent properties do not include violating determinism. Retributivism minus basic deserving equals utilitarianism in terms of how to do criminal justice (in my favorite model). Quasi-retributivism? You can violate determinism any time you want, because soul stuff. When you were zero years old, you had some constitution and some situation, and you can't take credit for that. Then one thing led to another, and here you are. At what point does taking credit for your own outcomes come in? Uhh, somewhere. Around the time you got self-controlling agency, or the ability to use it. After that point, credit to you if you had good outcomes and some part in making it happen, or shame on you if you didn't try. Because there's no standard catalog of the answers to "free will as in what?" and "moral responsibility as in what?" and likewise no agreement on how we separate those, there's muddle and confusion in the talk. "As many religions as there are people." When people have cognitive deficiencies, you have compassion for them. When people have moral deficiencies you also have compassion for them.
Low IQ person here, mass murderer here, no relevant difference in terms of things like what attitudes are warranted. "Strong emergence, that's just not how supervenience works." "What's supervenience?" "It's a word that means almost the same thing as emergence." "Uhh, circular argument?" "You caught me." Note, some people define emergence as only the properties of compositions, like how heaviness is an emergent property of being made of enough particles which are each light. And then define supervenience the way some other people define emergence, like how the way a painting of a face looks supervenes on (=/= is an emergent property of) how the many tiny daubs of paint are arranged. Erik Hoel's hypothesis on dreaming: it's part of how agency is made robust. It's like all these impersonal forces of society and economics and politics are imposing their will on your will. Having a decent amount of freedom of the will, in the sense of freedom from worldly soft coercion, takes imagination as a necessary but not sufficient condition. If you have no imagination, you'll get swept up by the manufacturers of brooms. Do you think retribution is trying to change the past? Utilitarian quasi-retributivism: no, it's not. It's future-looking using a model that happens to be extremely reminiscent of a purely backward-looking one. Hard incompatibilism includes the claim that moral responsibility is not compatible with free will, but not the "free will, the kind that would be required for moral responsibility" kind, rather the "violate determinism any time you want" kind. Also hard to argue for or against in any way, because of an unintelligible premise. Soft in this sense does not mean the opposite of rigorous. You can be a soft determinist and rigorous. That's not a contradiction. This stuff is not inconsistent. Self-consistency is something that soft determinism does claim. The person who is worst subjugated is the person who doesn't understand the nature of their subjugation. The subjugated person who understands how he's subjugated, he at least has more of a fighting chance. In case I didn't mention: knowledge is power. Imposed ignorance is disempowerment. Even influences that tend toward ignorance are forces toward subjugation. Even if those are not intentional, they're just as worrying as if they are intentional. Is free will, the kind that enables you to refuse determinism any time you want, using some kind of self-originating freedom, compatible or incompatible with moral responsibility? People differ on this. I don't even take a side on this, because it's not even right or wrong, just total nonsense baked into one of the premises. How can I shed some of the constraints of my own loadout, and how can I shed some of what tends to coerce people? How can I swap out some external compulsion for some internal compulsion? How can I maximize the number of things that come down to my own agency? Your height is something you win in the genetic lottery, and everyone's fine with that. Well, virtue is also something you win in the same lottery. But wait! Your agency has a lot to do with whether you become virtuous. Well, your agency is also something you win in the same lottery. Lotteries, all. At the very least, virtue is something you win in a series of lotteries, maybe not a single round of a lottery. Herp, but wouldn't a utilitarian model require locking people up for pre-crime when they only seem statistically more likely than average to commit a crime, even if they have as yet committed no crimes?
The counter to pre-crime is the same as the counter to involuntary organ donation. J S Mill: rights can be a great idea to grant for utilitarian reasons. Having a first-person subjective experience is a social construct. We only have the idea because a bunch of dead white males wrote a bunch of stuff that became canonical and then you and I got indoctrinated into the canon. The truth is that we don't have first-person subjective experience - we only think we do. What's in a name? Absolutely nothing. You define your terms, you stick to them, you do the argumentation, and if that produces conclusions that seem to be soundly justified, then maybe you learn something. That's what's in an argument, and no part of that is what's in a word. Conservation of energy and conservation of momentum hold at all times and in all places, except in particle colliders, nuclear explosions, and when the soul stuff acts on parts of the brain. "It seems to me that compatibilism is only a way of redefining moral responsibility in a way that avoids really satisfying the principle of alternate possibilities, and therefore it's unsatisfying." "Uhhh, there's a kind of compatibilism that's unsatisfying for that reason. If you hear a compatibilist say that CDO-if is a way of satisfying PAP and that therefore there's moral responsibility as strong as the libertarian kind despite determinism being true, yeah, fuck that. That's a lot of weaselly logic toward an end that doesn't earn warrant. But consider another kind of compatibilism that just says that CDO-if is a condition that warrants a 'weaker' kind of deserving, that doesn't try to pretend that compatibilism can satisfy PAP as strongly as bogus arguments for libertarian free will try to. I think that's a kind of compatibilism that's not trying too hard to do something that it can't do, and I think that the deserving that it does define is the only real kind of deserving that can exist anyways, and I think that that 'weaker' kind of deserving is actually quite interesting and relevant. It turns out to be the most interesting outcome of the whole free will debate. So I want you to join me in considering what this 'weaker' kind of deserving is, and I want you to consider a kind of compatibilism that doesn't try to do things it can't do, and I think you'll find that the most cromulent distinctions and prescriptions come from just that. That, I argue, is not unsatisfying." The only thing that compelled you to do evil is that you're an asshole. The only thing that compelled you to do good is that you're righteous. That's internal compulsion, and it seems to be a well-grounded justification for making assessments. There's nothing about true RNG that's essential to free will. There is something about free will that has either true or pseudo RNG about it, and either of those will do. (That's not the contra-causal kind of free will.) When someone becomes virtuous by second nature we may call it one virtue to work that out, and another virtue to make it automatic. For example, there may have been the virtue to decide to be rational, followed by the virtue of being rational. The conditions of the original Newcomb were not great: almost perfect predictor, and 1000:1 ratio of amounts of money. The revision with a perfect predictor and a 2:1 ratio and other details (for example, you see that the game dealer really can predict 99 players before your turn) works excellently as an actual vehicle for explaining metaphysics (and is a good detector of who is reasoning poorly).
With the original conditions, you get questions like, "But how much would you regret losing a million dollars to try to make a thousand?" and "if the predictor isn't quite perfect, what trick of the brain does it take to fool him?" Here's something I found in a box of old papers. It's a scorecard from when I took an IQ-like test (one of the legitimate ones) when I was 8 years old. The test had been issued by my elementary school to measure just how precocious I was. One of the three scores on the card says 99.9, which is a percentile score, on a scale that probably doesn't go higher than 99.9. I didn't have much agency when I took the test. I disliked school, I disliked reading, I didn't have a tutor, and I didn't attend any after-school programs. One of the other papers from around the same time says I was obsessed with "A TV show called Beevis [sic] and Butthead." It would be ridiculous for me to take any serious kind of credit for having the kind of brain for scoring that high on the test. Still, this old scrap of paper is dear to me. I keep it for the same reasons I would keep the stub of a winning lottery ticket. I'm a compatibilist about luck and skill. It seems strange to say that the real self can lose in a decision. If the real self loses, to what does it lose? A fake self? It can lose to the pull of drugs or other behaviorally addictive things. It can also lose to the deeper self. It would not be accurate to say that the unwilling addict has a deepest self that wants to remain addicted. In the Philpapers survey, more people answered 2 boxes than answered 1 box. There's no single standard for designating a person 'philosopher'. A lot of people who work in philosophy got their jobs because this or that grifter needed to take some total poppycock and get it rendered into sophisticated language. That's when someone gets hired as a philosopher a lot of the time. So a lot of the people who would meet the qualification of being in the Philpapers survey are the kind of person who only says stupid things and says them by the bookful. Possibly half. If you say CDO-if-desire or whatever thing satisfies the originally strong version of PAP, that doesn't connect. That would be a completely arbitrary criterion. It satisfies something weaker, and not arbitrary. If 'strong' and 'weak' deserving have the wrong connotations, then supernatural deserving and realistic deserving. The story in Outliers about the Korean pilots: imposed irrational will, disaster. MAD: imposed irrational will, preservation. Bob had tried everything he could think of to aid Jim in developing an okay character and remedy his wayward ways, but Bob could never hate Jim. Many people hate Jim because of his character, but Bob never has, because Bob knows that compassion rather than hatred is the only attitude toward him that might ever result in him being set right. There are people who Bob hates because those people do the same things Jim does, but Bob can never hate Jim. There are people who hate Jim for the same reasons that Bob hates other people for, but Bob can never hate Jim. (disagree). My cat typed the following line: biiiiiiiiiiiiiiiiiiiiiooooook The weaker form of PAP is just whatever is satisfied by the right formulation of CDO-if. The best formulations of soft determinism don't particularly lend themselves well to the kind of grand deductive structure of The Tractatus?
There's plenty to say that partakes of logic, but it comes down to arguing for a few big, nebulous if-thens, and saying "I assert that this big deductive sub-structure supports this and this if-then from the main line"? The quagmire of evasion "PAP is satisfied because he CDO." "But he not actually CDO?" "No, not actually." "Then in what sense he CDO? Some not-actual sense? This is all quite dubious." or "So you say that he did this thing, and he could have done something else instead, but when you say he could have done something else instead, you don't mean he could actually have done something else instead. You say he could have done something else instead, but in some not-actual sense of 'could'." David Lewis on breaking the laws is a quite silly way of attempting to drain the quagmire by saying that some conditional form of CDO fully satisfies PAP. It's easy to imagine a commercial airliner flying backwards, but that doesn't mean it's possible. "Can a person have done other than what he actually did do?" "What?" "Suppose there's a person, right?" "Alright." "Now he's just done something, alright?" "Okay." "Well, now that the thing's done, is it possible for him to have done something else instead?" "[...]" "No, I don't mean 'Is it possible that he did do something else instead'. Clearly that's not true. He did what he did, and not something else. But I mean to ask that, given what he did, is it possible for him to have done something else instead." Meta-skepticism about deserving. Suppose the case of the magistrate and the mob. Suppose the hypothetical is posed in such a way that it will be the utilitarian choice to lynch the innocent person, and also that this happens about once a week, and every time it's the utilitarian decision, and every time only the magistrate knows that the guy is innocent. By the set of definitions, does the innocent person deserve to be lynched? Also, if retributivism says he shouldn't be lynched, and the utilitarian choice would be to lynch him, then how could you say that your utilitarianism is coextensive with any kind of retributivism? Response: the magistrate is fictioneering the role that he's a retributivist. Lorem. What reactive attitudes and utilitarian responses result in the best outcomes is relative to this or that culture. This has some relativity, but it's not the same as moral relativism. A culture may have a set of practices that don't work the best possible relative to that culture. About that culture, a relativist would say that whatever they've decided is de facto morally best, but I would say that what's morally best is whatever would work better relative to that culture. Vague PAP: a person can be held morally responsible for an action only if he could have done something else instead. Supernatural PAP: a person can be held morally responsible for an action only if he could actually have done something else instead. Realistic PAP: a person can be held morally responsible for an action only if he would have done something else instead under different internal constraints. He could have done otherwise if his desires had been different, and they would have been different if he were responsive to the right kinds of reasons, and he would have been responsive to those reasons if he had the right attitude. You can have an internal locus of control and still accept that it's based on internal compulsion. Deep self, sane deep self, or full self?
The Yin of Zhou story anticipates the JoJo story: JoJo is to be given the opportunity to be introduced to some kind of full self. The emergent thing called 'will' can take any factor that impinges on it, including hidden ones that no one else can notice, as singularly more important than all other factors combined, and that's why it often seems to be something that can refuse what it arises from. I was able to do the thing because I had skill. I had that skill because of previous skill. And I had that previous skill because of luck. Consider the question, "What turns into toast?" Bread turns into toast. But dough turns into bread, which turns into toast. I'm a compatibilist about the statements, "Bread turns into toast," and, "Dough turns into toast." Clearly it would be a mistake to say that one of those is true and the other is false. Why are we disposed to think of the inverse of Pascal's Wager as a joke? It's actually more plausible, but most people would think it's a joke if they heard of both. That's only because we have all these ideas about theology that we've taken in by osmosis - ideas like, "There's a god that's perfect in every possible way, and he created this universe, which is perfect in every possible way, and if it seems sometimes like maybe it's not, then you just have to hear the right explanation which will square all that up." All objects and events are Rorschach blobs to which we have applied labels. Social engineering is the mind's equivalent of Dim Mak. Why does bad thinking happen to good people? Because all the ways thinking can go wrong are not only identified, but they are also the exact set of targets that charlatans aim at. If there's a way you can be duped, there's someone who wants to dupe you in just that way. Every cognitive process can go wrong in a number of different ways. When a hooman brain is working right, there's maybe 1000 cognitive processes that have to be going right, and there's about 5000 ways for one of them to go wrong. It's not a coincidence that all of the personality disorders in the catalog can be described as like an okay personality but with one or two features taken to harmful extremes. And if you take the rest of the cognitive processes, and you think of what happens when this or that one goes to this or that extreme, then you've just described the rest of all the mental disorders in the whole catalog. If you say Frankfurtian free will is changing the topic, and not what the free will conversation was ever about, well, it's changing from an uninteresting topic about which there's not much to be said. Impossible thing is impossible, so let's talk about something relevant. Re the accusations that compatibilism is a switcheroo. It is a switcheroo when it includes a step that says "Huzzah! You thought free will meant something that can't exist if determinism is true, but there it is!" And it is a switcheroo if you conflate CDO-if with C-actually-DO and say "Huzzah! So there really are alternate possibilities even if determinism is true." When quantum computers are used in industry, it's only to solve a math puzzle that a binary computer happens to be faster for. It comes down to doing deterministic things. Because the magistrate is fictioneering, this supports quasi-retributivism. Far from it being a knockdown, the magistrate and the mob example is a demonstration of just what it means to be utilitarian while pretending to be retributivist.
What's best for the society of the magistrate and the mob is not what's best for our society. Fortunately for us, what works best in our society never involves condemning an innocent person while claiming he's guilty of a crime that we know he didn't commit. But what was best for societies of old sadly did include doing that sometimes. There are as many shades of moral responsibility as there are acts done in the world. A computer is already conscious. That's what a brain is. Remix (?) slightly the Zhuangzi story about the empty boat. Our pilot shouted once to inform the supposed pilot of the other boat, then shouted a second time with angry words, then shouted a third time with obscenities, then realized the other boat was empty. If we all went around like empty boats, and we all went around seeing each other as empty boats, then that would be the end of most of the anger, indignation, and hostility. Take a cup, fill it with paint, and throw it across a room so that the cup shatters against the wall and splashes the paint against it. Wait for the paint to dry. Now, try to write a description of this new painting you have. Try to account for every detail in your written description. You can't. But the task of defining everything in the world in terms of objects and events is quite the same sort of task, only a lot of people don't accept that it's just as futile and it can't be done. MAD: imposed will, irrational will, outcome good. Korean Air example: imposed will, irrational will, outcome bad. Tied to the mast: imposed will, rational will, outcome good. Whether a geiger counter goes tick in a given millisecond is not caused by the previous state of the universe in the normal sense. But we also don't typically make decisions based on single ticks of geiger counters. We make decisions based on the average rate of ticking, and that rate is exceedingly likely to approximate determinism. "I think agency matters greatly to moral responsibility, but I think determinism is true, and free will is not compatible with it." This sounds perfectly reasonable. It is perfectly reasonable. The only problem with it is that it's crosswise to how we tend to define terms when we're doing productive discussion. You could start with that set of definitions and produce all of what's worth saying about those things, but if you look to the body of discussion that's already been done, they mark out the definitions a little differently, and the something that matters to moral responsibility they call free will rather than agency, and the negation of determinism they don't call free will but something else. (re about "possible to have done") "This is nonsense. I don't think these words you're saying are expressing any real idea." "What if I told you that people tend to be split on this question?" "Split how?" "A lot of people say of course it's possible for him to have done something else." "And that means something other than it's possible he actually did do something else?" "Yeah, not possible he did do something else, but possible for him to have done something else." "Could you tell me how this could be any kind of sense?" "Sure. Consider how regret works. Sometimes you regret doing something you did the day before. It's because you think you could have done something else, even if you don't think it's possible you really did do something else." "Aha.." "Well, do you think regret entails a contradiction, or is nonsense?" "Probably not. Okay, maybe I'm coming around to this.
A person does something, and then it's not possible that he did do something else instead, but it is possible for him to have done something else instead." "That's how a pretty common line of thinking goes." "But I'm still not sure about it." "Because it sounds suspiciously like nonsense." "Yeah, this 'possibly did' and 'possibly have done'." "So maybe it is." "Then is regret nonsense?" "Maybe it is." "What do you think?" "What I think about this is not something everyone would agree on." "Will you grant me the boon of your possibly controvertible framework of this?" "Sure. First, imagination is a useful thing in some ways sometimes, right?" "Sure. Sometimes it seems to have uses other than deception and other kinds of dishonesty." "Okay, well, 'possibly did do' refers to reality, and it's not possible you did do something other than what you did, and 'possible to have done' refers to imagination, and it is possible to imagine you did something other than what you did do." "And is regret something other than deception or dishonesty?" "Yeah. The basic mechanism of regret is you imagine you did something other than what you did do, and then you find a moral in that story, and you go ahead and do things in the future better than you did in the past. So there's a use of imagination that has a productive application." "That all seems to line up pretty neatly so far. But you said that not everyone buys that system of workings?" "Yeah, many people don't." "How does it work according to someone who disagrees with that?" "Well, some people say that 'possible you did do something else' and 'possible to have done something else' both refer to reality, but in two different ways, and in ways that are not just ways of taking our ability to imagine things and employing those." "So, according to that story, someone does something, and then it's not possible he did do something else, and that's a statement about reality, but it is possible for him to have done something else, and that's a different statement about reality?" "That's what some people say." "How does that work? I mean, possible you did do something other than what you did, the answer is 'no', and that's a statement about reality, saying something didn't actually happen, but possible to have done, the answer is 'yes', and that's also a statement about reality? What relation does that have to reality?" "I don't know what relation that has to reality according to those people. You would have to ask someone other than me at this point. And there indeed are people who insist that that question does have some answer." The way the world impinges on our wills, we're all like the copilot gonna crash the plane, gonna get blew up, gonna take a lot of other people out with us, even though we know how to prevent it, and all because we're just doing the done thing. We have heuristics because we have finite computing time, and we have weak spots because we have heuristics. We also have strengths because we have heuristics. The work of Daniel Kahneman has been to inform people of what weak spots we have because of our heuristics, and the work of Gerd Gigerenzer has been to inform people that those heuristics often work a lot better than whatever else we might have had instead. Scams are based on exploiting either cognitive heuristics or institutional structures. And a cognitive heuristic is a lot like an institutional structure within one person's mind. When a scam is based on exploiting a cognitive heuristic, a person gets scammed.
When a scam is based on exploiting an institutional structure, it's typical that one person or many people get robbed. Lorem about why it wouldn't be accurate to say that a person who lost the contents of their bank account got scammed when someone scammed the banking system. Would you say decisions aren't real? A computer program makes decisions even though it's determined. If there are things that don't exist on account of determinism being true, decisions aren't among those. If you say that moral responsibility doesn't exist on account of determinism being true, then you would have to say that an action that someone decided to do, such as murdering, has the same moral status as an action that someone did but didn't decide to do, such as sneezing. PAP is satisfied by a kind of CDO where 'could' is meant in a certain sense, and that sense is just whatever's described by a good kind of compatibilism. This might sound like it's back-rationalizing or equivocation. The reason why it's not either of those things, but rather quite a sound logical step, is that moral responsibility is based on a certain kind of counterfactual thinking where you imagine some counterfactual worlds and not others, and the rules about how that's done right are pretty weird and squishy. Because our rules for moral counterfactuals are squishy, the right relation between PAP and CDO is also squishy. If the artificer is responsible for what the robot does, and we're robots without artificers, then there's no one in the spot where we might want to pin moral responsibility. Re why semi-compatibilism is a definition and not a belief. There's CDO in the etiogenic sense and CDO-if in the sense of counterfactuals that's compatible with determinism. And there's moral responsibility in the etiogenic sense and moral responsibility in the sense of holding responsible that's compatible with determinism. So when you say "I'm going to use CDO in this sense and not the other" and "I'm going to use moral responsibility in this sense and not the other", that's only an act of picking between alternative definitions. The word 'bank' can mean a kind of financial institution, and the word 'bank' can mean the land beside a river. Now suppose you're going to write a report about how geological engineering projects get financed. You're going to be talking about river banks and financial banks extensively in this piece of writing. One thing you can do is say, "Every time I say 'bank' in this document, I shall mean the land beside a river, and every time I mean the kind of financial institution commonly called a bank, I shall call it the savings and loan," or, "Every time I say 'bank' in this document, I shall mean a financial institution, and every time I mean the land beside a river, I shall call it the flank." Either of those is fine. Clearly, those are definitions and not beliefs. And it's clear that if someone were to say, "I believe that a bank is a kind of financial institution and not a landform," that would be ridiculous, or if someone said, "I believe that a bank is a kind of landform and not a financial institution." Unfortunately, there's an error that people often commit that's equivalent to that, but it's less clear that it's insubstantial. But wait, if we're using the standard that words only mean what you define them as in contexts, doesn't that mean that there are only definitions and no beliefs?
No, because sometimes a person sets down definitions and then makes arguments and inferences based on those definitions and those arguments, and then the conclusions of those arguments are beliefs, not definitions, and substantive. So someone can say, "When I say moral responsibility, I mean the etiogenic kind, and since there's no etiogenesis, there's no moral responsibility, and since there's no moral responsibility, then it's never warranted to hate a person for doing anything." I wouldn't say I agree with that, but that does contain the whole set of definition, argument, and substantive conclusion, and that conclusion is a belief, not just a definition. But when someone says "I'm a semi-compatibilist because I believe this one kind of CDO is true and I believe this one sense of moral responsibility is true", that's a major confusion in thought, because what he really means is "I'm taking this kind of CDO and this one sense of moral responsibility as definitions, and I'm defining semi-compatibilism as what reasoning might follow from that set of definitions", which is still a statement that hasn't got any further than definitions. If I have to imagine becoming Jim, then I have to imagine turning into someone who has no memories of ever having been Bob. So when I say, "If you were in Jim's situation, you would be Jim," I mean if you had Jim's physical location, his history, his mind, all that including not remembering ever having been you, then you would just be Jim. In that sense, if you were in Jim's situation, you would do just what Jim does. In this exercise, I don't mean to say, imagine you had your mind in Jim's body, or imagine you had a mind that's half like Jim's and half like yours. That's another kind of exercise that's useful in a different way. But in this case, when I say Jim's situation, I mean Jim's situation including having Jim's mind. The other kind of exercise is when you say something like, "If I were King Lear, I wouldn't have split the kingdom," (thanks, Sherlock). And when you say something like that, you mean what you would do if you had your mind in the scenario that was otherwise King Lear's. And that kind of exercise is useful in its own way. So these are two different kinds of exercise, when you say "If I were him." And if you were in Jim's entire situation, that would include having Jim's mind, and then you would do just what Jim does, because you would be Jim. If we didn't have the moral emotions, would we also not have the idea of moral responsibility? Hofstadter on prime numbers. Lorem, rework the example. You wrote a computer program. Why did the program output "is prime"? Because of the previous line of code, because of the previous state of all the transistors recruited by the program. All true, but "because [number] is prime" is also true, and clearly the most appropriate answer. And therefore the primeness of prime numbers is a real thing? presentism, omnitemporalism, statements about the future lacking truth values. One definition of emergence: a property that's had by an object that's not had by any part of it. If it's profitable to treat a hammer as deterministic and not probabilistic (because it's effectively deterministic), then it's profitable to treat a hooman as indeterministic (because it effectively is). [introduce the example of the squad and the tactic] "Suppose a person says 'All hooman decisions are deterministic,' and then five seconds later he says, 'Hooman decisions are not deterministic.'" "Sounds like a contradiction." "But is it?"
"Is it? It's a contradiction unless it's one of those things that sounds like a contradiction but isn't?" "Could it be one of those things?" "Based on your previous example, he would have to be saying it's deterministic with reference to one level of analysis and indeterministic with reference to another level of analysis." "Yeah." "Is that how all this determinism and indeterminism stuff is settled correctly?" "Lol, no. There are a lot of people who say that, and that's one of the most pernicious things going around right now among discourse people who like to pretend they're smart." "So it is a contradiction to say in one moment that all hooman decisions are deterministic and then in the next moment say that hooman decisons are not deterministic?" "Maybe not." "Maybe not? Because there may be yet another way that a set of statements that seem contradictory actually aren't?" "Yeah." "What's another way that might apply here?" Consider the following idea about semantics.. [quotes the essay on inerrant theory] "That's how you get compatibilism?" "Lol, no. That's the absolutely most stupid way of deriving anything." "Is there some other way that it might not be a contradiction to say that all hooman decisions are deterministic and then a moment later say that hooman decisions are not deterministic?" "It will be good to come back to this much later in the discussion. But right now, it will be good if we open up a number of brain teasers, then we get down to some other serious questions and answers, and after that we see how all these things together might be settled." "Great! But will you please just tell me one thing right now?" "What?" "A person says that all hooman decisons are deterministic, and then five seconds later he says that hooman decisions are not deterministic. It would be foolish to say that these statements are both true because every statement and its opposite is true. And it would be foolish to say that these statements are both true because they refer to different levels of analysis. But will we be seeing yet another way that they might not be contradictory?" "Yes." "Okay, then consider my brain teased. Can we get on with it then?" [how this ends up when we go back to it much later: it's a lot like the layers thing, but it applies because of how things are 99+% approximations of 'deterministic' or 'indeterministic'. So the conditions of that are not as loose as inerrant theory, but still anyone who says K J Mitchell strong emergence or whatever is going on is still an idiot.] When a machine can adjust its own programming, who do we hold accountable for the actions performed by that machine? Do we count the machine itself as the author of its actions, or do we count the artificer who created the machine as the author of the machine's actions? I know a guy named Bob who created a poker-playing robot that could adjust its own settings, but Bob's first attempt at making such a machine was not very good. The robot started bad at poker, adjusted its settings upon getting experience, and remained bad at poker. There was no doubt that Bob was responsible for making a machine that dumped Bob's money at the poker table. On his second attempt, Bob made a better poker-playing robot. The robot stated bad at poker, adjusted its settings upon getting experience, and became good at poker. Clearly it was by virtue of Bob's programming that the end result was a robot that was good at poker. 
Even though the robot had to adjust its settings to become good after experience, it never occurred to anyone that the credit went to anyone other than Bob for designing a robot that does effective machine learning. Bob's next deed in robot design turned out to be a terrible misadventure, even worse than the first robot. Bob set out to make a robot that functions as a house servant, a really good one that can learn whatever tasks its owner wants done in his particular house. Another robot that can adjust its own settings. When Bob turned on the robot, it knew how to use some types of vacuum cleaners, but not the kind that Bob has at his house, so Bob showed the robot how to switch this vacuum cleaner between floor mode and hose mode, and then the robot was able to use that vacuum cleaner in both modes to excellent effect, and Bob's house was very clean.. for a time. One day, Bob came home to a police ambush. He was taken in for questioning. Bob's robot had taken Bob's gun and ammunition, and gone on a killing spree. When asked why the robot did that, Bob said that he couldn't imagine why a house servant robot might do that, and that maybe if they asked the robot why it did what it did they could find out. When asked, the robot said, "Well, when you think about it, murdering hoomans is a type of cleaning, really the best kind. So I went on a killing spree because I calculated that it would be the best way to accomplish my task of cleaning things." Was Bob responsible for the actions of that robot in the same way that he was responsible for the actions of his two poker-playing robots? During court proceedings, it was demonstrated that Bob had taken his designs and shown them to a group of robot safety experts before assembling the robot, and all the robot safety experts had agreed that it would be safe: it would clean houses, it might learn how to become a house cleaning expert, and it would pose no danger to anyone, they said. They even signed a certificate saying as much. Bob had taken all possible safety precautions before turning that robot loose. So who is responsible for all those people dying? The decisions of the court: Bob was found not guilty because he could not have foreseen the outcome, and had tried all he could to prevent just that. The robot was found not guilty because it was just following its programming - even though the programming included adjusting its own programming, that's all just a kind of programming, which was given to it. It was found that the guilty party was the impersonal conditions of the universe that gave people like us the hubris to make the things we tend to make and the shortness of foresight that makes us unable to anticipate the outcomes. Those impersonal conditions of the universe were given consecutive life sentences, but they evaded arrest. An example of a heuristic that goes wrong sometimes. Suppose you're about to buy a car, and you go to the car dealership, and after haggling on the price, the salesman says he'll let you have this car for no less than $30,200. And suppose you know for sure that if you drive for 1 hour to the other side of town, you can get an identical car for $30,100. Then you say, "I'm not going to drive across town to save one third of one percent off the price of this car. I'll just buy it here."
Now suppose that later that day you go to the grocery store, planning to buy $100 worth of groceries for the week, but you get to the store, and a guy by the entrance says, "There's been a problem, and everything in the store is double-price today," but you know that if you drive for 1 hour to the other side of town, you can get the groceries you had planned to get for the price you had planned to pay for them. Then you say, "Yeah, I'll drive across town to save 50 percent on the price of this food." But wait, the choice was really the same in both cases. In both the car dealership and the grocery store, the option was either to make the purchase where you were, or to drive for 1 hour to save $100, and $100 at a car dealership is worth the same as $100 at a grocery store. The reason why you made the decision to drive in one case and to complete the transaction where you were in the other case is because you were comparing the $100 that was on the line in both cases to the total purchase price. There's a fictional creature called Homo Economicus, which is a lot like a Homo Sapiens, but with a few differences. Homo Economicus will always do exactly what traditional microeconomics predicts a rational agent will do. Indeed, the predictions of traditional microeconomics describe just what a Homo Economicus will do in all cases, and usually that's just what a Homo Sapiens will do, but not always. In the examples of the car dealership and the grocery store, a Homo Economicus will always make the same decision in both cases: either drive and save $100 in both cases, or stay in both cases and pay that much more than the alternative. The reason Homo Sapiens deviates from Homo Economicus is because Homo Sapiens uses decision-making heuristics. Homo Sapiens uses heuristics because that's what evolution programmed into Homo Sapiens, and evolution programmed those heuristics into Homo Sapiens because the amount of time for making a decision is always finite, so at least sometimes there have to be time-saving shortcuts to making decisions. In the examples of the car dealership and the grocery store, the Homo Sapiens making the decision to drive an hour to save $100 in the one case, and making the decision not to drive an hour to save $100 in the other case, those two decisions together could be called inconsistent with any kind of rationality, i.e. one of the two decisions was irrational. [lorem - example the grocery store first and car dealership second]. Homo Economicus never does a set of irrational things like that because Homo Economicus always acts as if it's never using a shortcut in decision-making, as if it always has an infinite amount of time available for contemplating how to make a decision. So what you might be inclined to do at that grocery store and at that car dealership together are irrational, and it's an example of a heuristic going wrong. Fortunately for us, heuristics also often go right, so it's not entirely a bad thing that we're programmed to use them. Consider next this example: you're a baseball fielder, and there's a baseball high in the air that you're about to catch. The baseball is moving diagonally downward toward a point somewhere in front of you, and you're running toward that point. The trajectory of the baseball is going to curve in the manner of a projectile, so the next thing you do is you take out a pen and paper, write a system of kinematics equations, and.. no, if you do that, you will have failed to run and catch the ball by the time you're done doing the math.
So what you do instead is you use a heuristic that enables you to catch the ball. Forget taking out pen and paper - the ball is moving diagonally down and toward a spot in front of you, you're running toward it, and at this moment, as your head is facing the current position of the ball, your head is inclined 40 degrees above the horizon. Now you run toward the ball, and as the ball keeps moving toward that spot in front of you, you choose just such a speed that your head remains inclined 40 degrees above the horizon as you keep running and the ball keeps flying and you keep tracking it. Keep that up for a couple of seconds, and by the time you and the ball have nearly met at the same approximate location, you find that you can reach out your hand and catch it. Traditionally, baseball players didn't know that they were using this "constant angle of gaze" heuristic, but to be sure, they were using some heuristic that makes this process of running and catching work, because whatever process they've been using to run and catch balls, it's something that works exactly as well as solving a system of kinematics equations, but also works quickly enough to get you to catching the ball. So that's a kind of heuristic that works excellently, because it matches what the result of a more rigorous procedure would be, and it gets that result quickly enough to satisfy within the time constraints. That's why we have heuristics for making decisions: because they're often both fast and accurate. But the same heuristics often go wrong, and that's when we get irrational decisions like driving an hour to save $100 one morning and then refusing to drive an hour to save $100 the same afternoon. According to Gerd Gigerenzer, "Meow, meow, meow, meow, meow, meow," but that's because Gerd Gigerenzer is also the name of my cat. I don't know if Gerd Gigerenzer the hooman can also be quoted as previously. Dumb thing to say: "A world can't be deterministic, but only things in that world can be." Why it's dumb: there can always be an outer verse, from which perspective it would be plain to see that the contained world is deterministic. Okay thing to say: "There is often no profit to be had by saying the world or some part of it is deterministic, even about things that are all billiard ball-sized or bigger." Dumb thing to say: "There's never any profit to be had by saying the world or any part of it is deterministic." Okay thing to say: "Even when considering things billiard ball-sized or bigger, there often is and often isn't profit to be had by saying the world or parts of it are deterministic." Dumb thing to say: "A thing made of deterministic parts can have emergent properties that refuse determinism." Okay thing to say: "A thing made of deterministic parts can have emergent properties that make it unprofitable to call it deterministic." So if we're interested in speaking profitably, it's okay to refer to the different levels as having contradictory properties. But if you do that, it's important that you know that there's a difference between profitable, approximate speech and technical, exact speech, and it's important that you don't take this approximate speech to have its ground in a contradiction or a bad framework.
We all have to use these tools called words in certain ways, and it quite often happens that among two people using these words in the ways we all have to use them, one of them understands these important differences, and the other one takes the conventions of word usage as proof of terribly confused ideas. And that's one of the springs of bad ideas that we would all do well to excise. So we use approximate speech because it's profitable, profitable sometimes, except when it leads to confusion. Then it's profitable to avoid the confusion. One virtuous use of imagination is when we use it to make up a story that has a good lesson. That's why you can learn important things about life by reading a good novel. One harmful use of imagination is when you use it to figure out how to scam someone. So when you use the "possible to have done" idea, which is a process of imagination, are you using it like a novelist, to teach yourself something important, or are you using it like a scam artist, to fool yourself? "What would it feel good to believe?" has been the source of more disasters than almost any other thought that's ever been thunk. And that includes "What would it feel good to believe if I couldn't disprove it right now?" and "What would it feel good to believe if it couldn't ever be disproven?" There's too many ways for that to go wrong. It seems like it couldn't be harmful, and then before you know it you have all these ideas and you can't keep track of which of your thoughts were ever serious, and then you'll be refusing good sense just as often as accepting it. I avoid that decision criterion like it's poison, because I know it is. That's asking for trouble. That's smacking the hornet's nest. That's why when I use the word 'theologian', it's not a compliment. Compatibilism means compatible with determinism or compatible with mechanism? When a Bob Kane says, "This isn't compatibilism, because I don't think determinism is true," then what? Hard incompatibilism means that moral responsibility isn't compatible with mechanism. If you're not a hard incompatibilist because you think moral responsibility is compatible with mechanism, shouldn't that be one of the definitions of compatibilism? Kane agrees with the hard incompatibilists about the metaphysical questions of "What way are things?" but disagrees about the ethical questions of "What should we do about it?" But neither counts as a compatibilist? Where Bob Kane theory falls over: whether the person endorses the decision or not is a function of..? Then it's antecedent causes (plus randomness) all the way back. SFA theory is one important insight that makes part of a good compatibilism (but, framed as a part of a compatibilism, Bob Kane didn't even invent that bit?). When someone writes a bad essay that partakes both of bad argumentation and obscurantism, then knocking it down is a task that might need doing, but that knocking down is more frustrating than it needs to be. Bob Kane writes bad arguments clearly with no obscurantism. Knocking those down is a task that's not more frustrating than it needs to be, and it's still pretty tricky, because even though the writing is clear, it's still hard to figure out the specific points where it falls over. Some people say things like, "If the primeness of a number can cause things, and that's a sort of airy nothing, then soul stuff can be an uncaused cause." The number 29 would be a prime number even if no universes existed.
But the only time the primeness of a number causes something in the universe is when a person or a machine is referring to that primeness, so it's the person or machine, in combination with the primeness of the number, that's doing the causing.

Imagine a world wherein we've all decided not to say things like, "The outcome of the next spin will be random," or, "The hammer will be just where I left it."

Eco-fascism: "I'll tie you to the mast whether you want me to or not, and in time you will see that it's what you should have wanted all along."

It sounds like reverse causation to say, "I'm going to the store right now because I will have bought something after I will have got there," but it isn't really reverse causation. And it sounds like reverse causation to say, "If I pick both boxes now, then the game dealer will have put nothing in the opaque box," but it isn't really reverse causation.

[re deciding based on what feels good and can't be disproven] When you believe something on the basis that it feels good and can't be disproven, here's how that goes wrong. The thing you choose to believe on that basis also entails other beliefs, and when one of those runs into a competing claim that's reasonable to believe, you now have a choice between believing what's reasonable or rejecting what's reasonable because it conflicts with something that goes with what you chose to believe because it feels good. To take a particularly simple example, let's imagine someone who was born into a fortunate socio-economic status and who will never have to face any real hardship. This person says, "I think a god exists, and I think the god loves me in particular. It feels good to believe that, and no one can disprove it, and it even seems to match up with the facts I observe." Now suppose someone else says to that person that having compassion for less fortunate people is a good idea. And suppose our friend says, "But those are the people the god doesn't love, so I don't want to have compassion for them." Now our friend is choosing to reject a really good idea, and isn't even willing to consider whether it's a good idea or not, because it conflicts with something that goes with something she had decided to believe because it feels good and can't be disproven. Real examples are often more complicated than that, but that's how it works in general. This exact mechanism and all the ways it shows up are responsible for such an enormous amount of the unnecessary suffering in this world, it's bananas.

The "let's bore some holes in him" story from the Zhuangzi.

If a chair were conscious, and you sat on it, would it say something like, "Ouch, I've been sat on"? Probably not, because it doesn't have a mouth for saying that.

There's some amount of correlation between when we successfully do our task of locally reversing entropy and when we feel happy. Sometimes we're failing so badly at maintaining local entropy reversal that it looks like we're collectively not surviving, and a natural correlate of that is sadness. And when it looks like we are maintaining local entropy reversal and collectively surviving, a natural correlate of that is happiness. Local entropy reversal is the most defining feature of life, and to say that maximizing happiness is the best thing you can do with your life is something almost everyone agrees is at least close to correct. But happiness is an imperfect indicator of how well local entropy reversal is going.
There are drugs that can give you temporary happiness while being detrimental to your survival. There's maybe a near-perfect correlation between sustained average happiness and our collective survival. That's how physics gave us the ability to detect and pursue the True and the Good when we're at our best. So two things we get from our loadout are, "I'm pretty sure I want what's good," and, "I don't really know how I'm supposed to decide anything."

Let's suppose that one day I flipped the light switch in my kitchen, and as a result, the light in my bedroom came on instead of the light in my kitchen. That's exceedingly unlikely ever to happen, but when we make electrical pathways really small and put them really close together, that exact sort of thing can be quite common. Suppose I have a computer with a bunch of transistors, and they're really tiny and close together, and I send some electrons down a pathway that leads to transistor number four-billion-and-two. Usually when I do that, transistor number four-billion-and-two gets activated as a result, because the electrons went along the pathway I sent them on. But sometimes I send some electrons down a pathway that leads to transistor number four-billion-and-two, and the result is that transistor number four-billion-and-three gets activated instead, because the electrons jumped across a solid wall as they were travelling, and that got them on the wrong path, and that's how they ended up at the wrong transistor.

When I was in high school, there was one guy who was smarter than I was. His name was Bob Nakamoto. Far and away, the people who got the highest grades in the math and science courses in our year were the two of us, me and Bob. Bob also got high grades in the other courses. I didn't get high grades in those courses because I had too many objections toward them and didn't care much about them. But even in the sciences, one time Bob and I were discussing a physics puzzle, and Bob's ability to reason about the math and logic stuff was.. was what made Bob's math-science brain the one envy of mine.

A year after graduating high school, I was talking with Bob and he mentioned an idea he'd had. He said, "Those free newspapers you get at the transit stations, I was wondering how they could give those away for free. And then I realized that if they can pay whatever money it takes to make those and give them away for free, then the people who read them must be either losing money or losing something just as good." It was a vague idea, and he couldn't explain it really well, and then I told Bob about something I had learned in my first year of university, in a course called Principles of Microeconomics.

"Bob, your reasoning partakes of a certain fallacy, which is to assume that all interactions are zero-sum, meaning that if one person gains, then another person has to lose. The truth is that many hooman interactions are what's called positive-sum, and in many such interactions, all parties gain. Consider a simplified example. Suppose that a long time ago there were two villages, and one was near a forest and the other was near an ocean. In village 1, it's easy to get lumber but it's not easy to get food. In village 2, it's easy to get food, but it's not easy to get lumber.
The natural outcome of this is that the people in village 1 will trade away some of the lumber they get from their forest, and the people in village 2 will trade away some of the fish they get from their ocean, and then both villages will have both lumber and food. If they couldn't reach this trading agreement, then village 1 would be short on food and village 2 would be short on lumber. But if they can make a trading agreement, then they will both be better off. Village 1 will be better off when the two villages trade, and so will village 2. They both agree to trade because each group sees gain for itself. That's a positive-sum exchange. It follows that when such a trade happens, the total amount of wealth in the whole system increases as a result of the exchange." (A tiny numeric sketch of this appears at the end of this passage.)

Bob was also smart enough to know when he'd heard an idea that makes sense and knocks down a vague idea he had. He left that conversation wiser, and without the zero-sum fallacy.

What struck me as strange is where I learned the idea. It took graduating high school, going to university, and then just happening to take a certain course that only a small portion of the people there take, and that's finally where I learned about this "positive wealth is generated" principle. But it doesn't take a university student to understand the simplest form of the explanation of the principle. They could have found the time to teach that idea to us in high school, or in middle school, or in elementary school. And somehow it was absent from all of those. Heck, in high school I even took a course called "Business 10", as in "grade 10 business", and they didn't even mention that much of microeconomics in that course. That course was pretty anarchic, like most of the electives in high school, and there were pretty much no standards, and the teachers could do whatever they wanted in those courses, and that was often nothing.

One might think that the "positive wealth is generated" principle is simple, and that almost anyone of any age can look around and realize how it works, and that's why there's no point in teaching it in public schools. Apparently not. Bob Nakamoto, of all people, a year after graduating high school, came up with an argument that partakes of a fallacy you can only fall into when you don't know the "positive wealth is generated" principle, and he thought he had figured something out. And he acknowledged the fix to his reasoning when I presented it. So apparently it's not all that obvious.

Consider how many pointless facts there were in the public school curriculum and how, in all of that, they excluded teaching the "positive wealth is generated" principle; and consider how many other ideas there are like that, excluded, but worth making room for by excluding some of the pointless facts instead.. This is a terrible inefficiency - it's one of those things in the world that could be given a huge improvement at absolutely no cost other than throwing out bad policies and overcoming whatever inertia those have.

[re the "possible to have done" dialog] "Some people would say it's possible to have done something other than what you did do because nothing was stopping you from doing those other things." "Sounds reasonable." [...] "The thing that actually happened happened, and that's what stopped all the other things that didn't happen from happening instead."
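Here is the village example again as a tiny numeric sketch in Python. All the numbers in it (the hours of labor, the production costs, the fifteen-for-fifteen swap) are made up for illustration; the only point is that after specializing and trading, each village ends up with more of both goods than it could have had on its own, which is what "positive-sum" means.

    # Two villages with the same labor budget but different production costs.
    # Made-up numbers, only to illustrate a positive-sum exchange.

    HOURS = 40  # labor hours available to each village

    # Hours of work needed to produce one unit of each good.
    village_1_costs = {"lumber": 1, "food": 4}   # forest village: lumber is cheap
    village_2_costs = {"lumber": 4, "food": 1}   # ocean village: food is cheap

    def no_trade(costs, hours):
        """No trade: spend half the hours on each good."""
        return {good: (hours / 2) / cost for good, cost in costs.items()}

    print("no trade:   village 1 gets", no_trade(village_1_costs, HOURS))  # 20 lumber, 5 food
    print("no trade:   village 2 gets", no_trade(village_2_costs, HOURS))  # 5 lumber, 20 food

    # With trade, each village specializes in its cheap good, then they swap 15 for 15.
    village_1_lumber = HOURS / village_1_costs["lumber"]   # 40 lumber
    village_2_food = HOURS / village_2_costs["food"]       # 40 food
    swap = 15
    village_1 = {"lumber": village_1_lumber - swap, "food": swap}   # 25 lumber, 15 food
    village_2 = {"lumber": swap, "food": village_2_food - swap}     # 15 lumber, 25 food
    print("with trade: village 1 gets", village_1)
    print("with trade: village 2 gets", village_2)

Both villages come out of the swap with more lumber and more food than they could have produced alone, so the total wealth in the two-village system went up, exactly as the principle says.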
The soft determinist's moral responsibility is a function of imagination, but that moral responsibility is not necessarily a fiction. The counterfactuals are fictions, but the relations between the real world and those fictional worlds are real relations. If I say that Spoderman is a fictional creature who stands 8 feet tall, then to say that I'm not as tall as Spoderman is a true statement. So when I say it's reasonable to imagine a world wherein you did something other than what you in fact did, I'm describing a fictional world, but when I say you're responsible in this world for not doing what you did in that fictional world, that's a real fact about responsibility.

With most ignorance impasses, you have to act as though the thing is false, but libertarian free will is an ignorance impasse where you have to act as though the thing is true.