Jan 17, 2003
 

A while back I took Objectivism to task for its argument in favor of free will. That argument is still lousy, for the reasons I supplied. But in its stead I proposed an equally bad argument, that Newcomb’s Paradox renders incoherent the concept of a superbeing with the ability to predict human behavior:

Consider the following thought experiment, known after its inventor as Newcomb’s Paradox: You have two boxes, A and B. A contains a thousand dollars. B contains either a million dollars or nothing. If you choose A, you get the contents of A and B. If you choose B, you get the contents of B only.

Imagine there is something — a machine, an intelligence, a mathematical demon — that can predict your choice with, say, 90% accuracy. If it predicts you choose A, it puts nothing in B. If it predicts you choose B, it puts the million in B. Which do you choose? (Just so you don’t get cute, if the machine predicts you will decide by some random method like a coin flip, it also leaves B empty.)

The paradox lies in the absolutely plausible arguments for either alternative. Two accepted principles of decision theory conflict. The expected utility principle argues for Box B: if you calculate your payoff you will find it far larger if the predictor is 90%, or even 55%, accurate. But the dominance principle, that if one strategy is always better you should choose it, argues for Box A. After all, the being has already made its decision. Why not take the contents of Box B and the extra thousand dollars?
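To put numbers on the expected-utility claim, here is a rough sketch of the arithmetic in Python, using the payoffs described above and treating the predictor’s accuracy p as a free parameter:

# Expected payoffs under the setup above: choosing A takes both boxes,
# choosing B takes only B, and the predictor is right with probability p.
def expected_values(p):
    ev_a = p * 1000 + (1 - p) * (1000 + 1000000)  # predictor right: B is empty
    ev_b = p * 1000000                            # predictor right: B holds the million
    return ev_a, ev_b

for p in (0.90, 0.55):
    ev_a, ev_b = expected_values(p)
    print(p, round(ev_a), round(ev_b))  # 0.9: 101000 vs 900000; 0.55: 451000 vs 550000

At either accuracy the expected payoff of choosing B alone dwarfs that of choosing A, which is exactly the half of the conflict that the dominance principle then pulls against.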

I would argue that paradoxes cannot exist and that the predictor (and therefore, determinism) is impossible.

I ran this past the estimable Julian Sanchez, a far better philosopher than I, who answered as follows:

You are posed the problem of predicting the output of [a computer] program TO the program. The program asks you: predict what I will respond. And the program is perfectly deterministic, but structured such that once it takes your prediction as an input, the (stated) prediction will be false, even though, of course, you can PRIVATELY know that given your input, it will respond in some different way. (For simplicity’s sake, assume the only outputs are “yes” and “no” — if you “predict” yes to the program, you know that in actuality, it will output “no” — the opposite of your stated “prediction.”) This isn’t perfectly analogous to Newcomb’s paradox (with the two boxes, etc.) but I think the point holds good. It looks like the problem is something about free will — if some superbeing predicted our behavior, we could always deliberately act to falsify the prediction once we knew it. But as the example with the simple computer program shows, that’s not an issue of free will, it’s a problem with feedback loops — with making a projected future state of a system an input into that state.

The light dawned: it’s the interaction that’s the problem, not the superbeing himself. Julian is just as good with other people’s bad arguments and you ought to read him regularly.
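For concreteness, here is a minimal sketch of the kind of program Julian describes, assuming (as he stipulates) that “yes” and “no” are the only outputs:

# A perfectly deterministic program that falsifies whatever prediction
# is stated to it: feed it your prediction of its output, get the opposite.
def contrarian(stated_prediction):
    return "no" if stated_prediction == "yes" else "yes"

print(contrarian("yes"))  # prints "no"  -- the stated prediction fails
print(contrarian("no"))   # prints "yes" -- fails again, even though from the
                          # outside its behavior is entirely predictable

Nothing about the program is mysterious or free; it only looks unpredictable once the prediction is fed back in as an input, which is Julian’s point about feedback loops.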

  7 Responses to “Bad Argument Clinic: Free Will”

  1. Both Aaron’s and Julian’s arguments rest on the notion of the algorithm, that there is some method that can PREdict what decision will be made, if the decision is deterministic.

    We are used to algorithms. We like them. We use them a lot. We know what phase the moon will be in on this date in the year 2010 because the mathematics required to predict it follows guidelines that give us a shortcut to the solution. In this way algorithms are time machines.

    But I would argue that algorithms don’t exist for certain phenomena, like decision-making. Even if an equation could be produced to "predict" a decision, the actual computation would take longer than getting the real-time result (a sketch of this contrast appears at the end of this comment).

    So what happens when no algorithm is available because the process itself is computationally irreducible?

    What happens when the shortest path to discovering the outcome is actually living through the outcome with no resort to Oracular mathematics?

    The answer is that we lose any standard for judging whether a decision is made by free will (whatever that means) or in some deterministic manner. The test, prediction, is a physical impossibility; thus the answer is forever beyond our ken.

    So long as the test is prediction, the free-will v. determinism argument is, in this interpretation, "metaphysics" in the logical positivist meaning of the word — the unknowable.

    What we cannot speak about we must pass over in silence.
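    A minimal illustration of the contrast, assuming a doubling sequence as the shortcut case and Wolfram’s Rule 30 cellular automaton as the standard example of a process with no known shortcut:

    # Shortcut case: the n-th term of a doubling sequence can be jumped to directly.
    def nth_doubling(n):
        return 2 ** n  # closed form; no need to simulate n doublings

    # No known shortcut: Rule 30 is the stock example of apparent computational
    # irreducibility; its center column seems obtainable only by running every step.
    def rule30_center_column(steps):
        cells = {0: 1}  # a single "on" cell at position 0
        column = []
        for _ in range(steps):
            column.append(cells.get(0, 0))
            new = {}
            for i in range(min(cells) - 1, max(cells) + 2):
                left, center, right = cells.get(i - 1, 0), cells.get(i, 0), cells.get(i + 1, 0)
                new[i] = left ^ (center | right)  # the Rule 30 update
            cells = new
        return column

    print(nth_doubling(50))            # reached in one jump, no simulation
    print(rule30_center_column(10))    # found only by living through the steps

    The moon’s phase belongs to the first kind; the claim above is that decision-making belongs to the second.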

  2. You failed to grasp the Objectivist argument against determinism in the first place. If determinism is true, then not just your eye color but all of your beliefs are determined by something other than thought. Why? Because of the nature of volition as the self-regulator of man’s cognition. The primary nature of thought and belief is to be volitional, as John Galt observed. If not, no proposition could ever be verified, including determinism. It is not a tautological argument, but a factual one. Determinism is a "self-contradiction" only in the same sense that any other "stolen concept" fallacy is; perhaps, as is often the case, Branden was misleading. But in OPAR, Peikoff could not be more clear. I’m trying to be polite, but I am truly shocked at the low level of understanding you convey regarding Objectivism. Do your homework!!

  3. Jim: The Objectivist argument assumes, but never tries to prove, that knowledge is volitional. Rand often asserts that consciousness (and thus knowledge) is volitional; so do Branden and Peikoff. But if anything beyond sense-perception were truly volitional, wouldn’t some people opt out? Wouldn’t some people be stuck seeing only qualia? Yet everyone passes out of this allegedly volitional stage. It seems to me that consciousness at some level is volitional; to assert that consciousness itself is volitional is, to borrow a phrase from Will Wilkinson, to unpack the concept tendentiously.

  4. Hmm… let me get this straight: "doing his homework" is supposed to entail checking that a fictional character said that thought is volitional? If he fails to "grasp the Objectivist argument," I suspect it’s because there’s no argument there to grasp. The fact that our actions "feel" free — though I wonder about the extent to which that’s really the case if you pay close attention — should hardly be decisive for Objectivists, of all people. And the assertion that somehow knowledge and belief formation require free will is… just that. An assertion. And one that seems at the very least to be in tension with the fact that computers make valid but perfectly deterministic inferences all the time.

  5. That abstract consciousness is volitional is as self-evident as the fact that thinking (as opposed to seeing) takes an effort. Of this, we are directly, introspectively aware. As with any perceptual "given" it is a mistake to "argue" for it or to try to "prove" it. Like all observations, it is the starting point of awareness. (Julian Sanchez knows that "feelings" have nothing to do with it.)

    When Rand (through Galt) describes this with metaphors like "focus," this is simply the best that can be done: point and describe in words as best as you can what you are directly observing. The "proof" Aaron demands is precisely what Rand says is unnecessary and impossible. If one does not recognize such descriptions introspectively, there is nothing anyone can do for you. Sorry.

    Her "fictional character" explains why proof is unnecessary and impossible for all axioms, as I am sure you recall. It is a "stolen concept" fallacy to deny any of them.

    Because of its status as a perceptual given, volition is axiomatic. As axiomatic as consciousness itself is, and for the same reason. All abstraction involves decision-making, and, again, if one does not recognize this, there is nothing more to be said.

    If human consciousness is volitional, how on earth would you "prove" it except with consciousness and volition? Once you introspectively grasp that all abstraction is decision-making, then any "proof" becomes circular–as with any other axiomatic relationship.

    As with all axioms, you can prove that it is an axiom–even if you cannot prove the substance of the axiom itself–by showing it to be a necessary predicate to further thought, as Rand also does in Galt’s Speech. Only a certain KIND of consciousness is volitional, but it is the kind that involves abstractions and arguments, which is where the "stolen concept" occurs. It is not merely that determinism makes a hash out of morality, which it does–it makes a hash out of objectivity, too.

    Some people ARE "stuck" (by choice) just perceiving the sensory–it depends upon the issue–right? Many do SELECTIVELY "opt out" of reason, all too commonly, but to consistently opt out would mean a quick death. It is the selective "opting out" of abstract thought, as Aaron puts it, that is the very mechanism of free will, or, in this case, evasion. At a low level, abstraction, while still taking some effort, is relatively easy and very habituated. It is in NEW and/or challenging areas of thought that one most frequently observes this process of evasion. Consistent evasion, of course, would–and does–lead to death.

    No, Galt’s Speech is all that Aaron needed in order to know better, and, by skipping it, he did a grave injustice to one short Branden essay taken out of its wider context. (And I can hardly be described as a Branden defender.)

    Integration is the Objectivist mandate, as you know.

  6. Jim says we shouldn’t try to prove axioms, which I don’t dispute. If, however, volition is an axiom, then Branden and Peikoff have something to answer for on this score. To try to prove a contradiction in determinism is to try to prove volition, no? So, for that matter, is Jim’s complaint about the consequences of determinism, its "making a hash" of morality and objectivity.

    I grant the correction of Julian’s word "feelings" to "introspection." Jim’s argument remains: we have volition because it appears that way to us. I cannot regard this as dispositive.

    And for all the pixels expended here I still don’t know at precisely what stage consciousness is supposed to become volitional. Nor, I suspect, will introspection shed much light on this question, since by the time we’re introspecting, we’re already well into the abstract realm.

  7. I tire of repeating myself to no effect, so this will be my last post on this subject.

    Volition is as obvious as existence itself, for there is no reason to discriminate between the direct observations of introspection and extrospection. All primary observations are of "axiomatic" status, and they are all equally "self-evident." Aaron completely fails to address my previous post on just this point. Volition is validated by direct observation. Neither Branden nor Peikoff, despite his erroneous assertion, has ever attempted to "prove" volition. What they have proven is that volition is axiomatic, which is all that can be "proven" here. By pointing out that something is the base of all subsequent knowledge, one proves that something is an axiom, not that the axiom is itself true. Rand said this; Branden said this; Peikoff said this; I said this in my post. Aaron might want to address their actual position.

    Introspection is the ONLY way to know "when consciousness is supposed to become volitional." Again, as I already indicated, it takes no effort to see when I open my eyes. It takes little effort to know what I’ve already figured out. But figuring out in the first place always takes an effort. That’s "where" free will begins, Aaron–exactly.

    Despite all of these concerns, it is always good to see Aaron putting 110% into his intellectual efforts, straining his last synapse to get his position right.

    It’s the right choice.
