Jan 17, 2003
 

A while back I took Objectivism to task for its argument in favor of free will. That argument is still lousy, for the reasons I supplied. But in its stead I proposed an equally bad argument, that Newcomb’s Paradox renders incoherent the concept of a superbeing with the ability to predict human behavior:

Consider the following thought experiment, known after its inventor as Newcomb’s Paradox: You have two boxes, A and B. A contains a thousand dollars. B contains either a million dollars or nothing. If you choose A, you get the contents of A and B. If you choose B, you get the contents of B only.

Imagine there is something — a machine, an intelligence, a mathematical demon — that can predict your choice with, say, 90% accuracy. If it predicts you choose A, it puts nothing in B. If it predicts you choose B, it puts the million in B. Which do you choose? (Just so you don’t get cute, if the machine predicts you will decide by some random method like a coin flip, it also leaves B empty.)

The paradox lies in the absolutely plausible arguments for either alternative. Two accepted principles of decision theory conflict. The expected utility principle argues for Box B: if you calculate your payoff you will find it far larger if the predictor is 90%, or even 55%, accurate. But the dominance principle, that if one strategy is always better you should choose it, argues for Box A. After all, the being has already made its decision. Why not take the contents of Box B and the extra thousand dollars?

I would argue that, since paradoxes cannot exist, the predictor (and therefore determinism) is impossible.
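To make the quoted figures concrete, here is a quick sketch of the expected-utility arithmetic under the payoffs above. It assumes the predictor is equally accurate about either choice, which the quoted argument doesn't spell out; the function and variable names are my own.

```python
# Payoffs from the quoted setup: Box A holds $1,000; Box B holds $1,000,000
# only if the predictor foresaw you taking B alone. Choosing A takes both
# boxes; choosing B takes B only. "accuracy" is the predictor's hit rate,
# assumed the same for either prediction.

def expected_payoffs(accuracy):
    ev_a = accuracy * 1_000 + (1 - accuracy) * 1_001_000  # right guess leaves B empty
    ev_b = accuracy * 1_000_000                            # right guess fills B
    return ev_a, ev_b

for p in (0.90, 0.55, 0.5005):
    ev_a, ev_b = expected_payoffs(p)
    print(f"accuracy {p:.2%}: choose A -> ${ev_a:,.0f}, choose B -> ${ev_b:,.0f}")
```

Box B comes out ahead at any accuracy above 50.05 percent, which is why even a barely competent predictor is enough for the expected-utility principle to recommend B.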

I ran this past the estimable Julian Sanchez, a far better philosopher than I, who answered as follows:

You are posed the problem of predicting the output of [a computer] program TO the program. The program asks you: predict what I will respond. And the program is perfectly deterministic, but structured such that once it takes your prediction as an input, the (stated) prediction will be false, even though, of course, you can PRIVATELY know that given your input, it will respond in some different way. (For simplicity’s sake, assume the only outputs are “yes” and “no” — if you “predict” yes to the program, you know that in actuality, it will output “no” — the opposite of your stated “prediction.”) This isn’t perfectly analogous to Newcomb’s paradox (with the two boxes, etc.) but I think the point holds good. It looks like something is a problem with free will — if some superbeing predicted our behavior, we could always deliberately act to falsify the prediction once we knew it. But as the example with the simple computer program shows, that’s not an issue of free will, it’s a problem with feedback loops — with making a projected future state of a system an input into that state.
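Julian’s toy program is easy to write down. A minimal sketch, assuming a yes/no interface (the function name and encoding are my own):

```python
def contrarian(stated_prediction: str) -> str:
    """Deterministically answer the opposite of whatever you predicted aloud."""
    return "no" if stated_prediction == "yes" else "yes"

# Privately, you can know exactly what it will do given your input...
assert contrarian("yes") == "no"
assert contrarian("no") == "yes"
# ...but no prediction you state to it can come out true, because the stated
# prediction is itself an input the program is built to falsify.
```

Nothing here is indeterministic; the defeat comes entirely from feeding the prediction back into the thing being predicted.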

The light dawned: it’s the interaction that’s the problem, not the superbeing himself. Julian is just as good with other people’s bad arguments, and you ought to read him regularly.