Possibility in Newcomb's Paradox

Reference Material:

            Newcomb’s paradox is a philosophical thought experiment involving rationality, possibility, free will, and determinism. It’s been analyzed and reanalyzed way too many times, but it’s a perfect playground for the ideas involving possibility from the last post and major topics from still earlier posts. Let’s go.
            You need a million dollars, so you do what any self-respecting rationalist would do – head to the casino to win it. In a less-travelled corner of the casino floor, you find an open door into a darkened room. A sign above the doorway reads “Win A Million Dollars!”. Perfect. Inside, a mad scientist presents you with 2 boxes. Box A is clear and contains $1000 cash. Box B is opaque, and the mad scientist claims it contains either $1M or nothing. She then offers you a seemingly bizarre choice – you’re allowed to choose either box B or both boxes, and then leave carrying whatever box[es] you’ve chosen. Strange, to be sure, but taking both boxes seems like an easy way to walk away with a sure $1000 and potentially the $1M you set out for. But of course, there’s a catch. As you walked through the door, a brain scanner imaged your brain and created a detailed simulation of your cognition. It then predicted whether you would choose box B or both boxes. If it predicted you would choose only box B, the mad scientist went ahead and put the $1M inside before presenting the boxes to you. If the cognition simulation predicted you’d take both boxes, the mad scientist left box B empty. The mad scientist claims the prediction is ironclad. After all, it’s never been wrong, and she’s been doing this her whole life.
            How does knowing about the brain scanner affect your decision? You might reason as you briefly did before. The $1M is either in box B or it isn’t, and there’s no changing that now. You’ve already walked through the scanner. If box B has the money, taking both boxes nets you $1M + $1000, slightly better than the $1M in box B alone. If box B is empty and you take both boxes, you wind up with $1000 – obviously better than the $0 box B would get you. It seems whatever the status of the money in box B, taking both boxes leaves you better off.
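            To see why this reasoning feels so compelling, here’s a minimal sketch of the payoff comparison it relies on (the dollar amounts come from the scenario; the labels and code are just my illustration). In each case, two-boxing comes out exactly $1000 ahead:

```python
# Payoffs (in dollars) for each combination of your choice and the actual
# contents of box B, exactly as the take-both-boxes argument frames them.
payoffs = {
    ("one-box", "empty"): 0,
    ("one-box", "$1M inside"): 1_000_000,
    ("two-box", "empty"): 1_000,
    ("two-box", "$1M inside"): 1_001_000,
}

for contents in ("empty", "$1M inside"):
    one = payoffs[("one-box", contents)]
    two = payoffs[("two-box", contents)]
    print(f"If box B is {contents}: two-boxing beats one-boxing by ${two - one:,}")
```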
            Except this seems to completely ignore the predictive accuracy of the cognition simulation. If you choose both boxes, it would have predicted you would choose both boxes and, if the mad scientist is to be trusted, box B would be empty. If you believe the mad scientist’s claims about the scanning technology and her manipulation of the $1M, it seems you should choose only box B. Only then will you get the $1M.
            How should you reconcile these rational-sounding but conflicting ways of thinking? One approach is to consider the possible outcomes of the scenario using the language of possible worlds. Consider all the possible worlds in which the scenario above plays out as described. What we’ll vary across this set of possible worlds is anything the scenario doesn’t explicitly mention, as long as what we vary doesn’t produce a contradiction. For example, in one possible world, you visit the mad scientist only once, on a Tuesday at age 44. In another, you go once a week on a random weeknight for your entire life. In still another, you only go when raging drunk.
            Now, consider each possible world in this set and note the choice you made and the money you wound up with. If the brain scanner and cognition simulation work as described, and we believe the mad scientist about how she manipulates the $1M, then in the possible worlds in which you choose box B, you’ll get $1M, and in those where you choose both boxes, you’ll get just $1000. In none of these possible worlds do you choose box B and get nothing, and perhaps more importantly, in none of these worlds do you choose both boxes and get $1M. This is simply the description of the scanner, cognition simulation, and mad scientist translated into the language of possible worlds. The only possible outcomes of your choice are: take box B and get $1M, or take both boxes and get $1000.
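            A quick way to see that these really are the only outcomes is to enumerate them directly. Here’s a minimal sketch, assuming the predictor is exactly as advertised (the labels are mine, not part of the scenario):

```python
# Enumerate choice/prediction combinations and keep only the worlds consistent
# with a perfect predictor, i.e. the prediction always matches the choice.
choices = ("one-box", "two-box")

for choice in choices:
    for prediction in choices:
        if prediction != choice:
            continue  # ruled out by the scenario: the predictor is never wrong
        box_b = 1_000_000 if prediction == "one-box" else 0
        winnings = box_b if choice == "one-box" else box_b + 1_000
        print(f"choice = {choice}: winnings = ${winnings:,}")
```

            Only two lines print: one-boxing wins $1,000,000 and two-boxing wins $1,000. The combination “take both boxes and find $1M in box B” never appears.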
            If this is right, taking both boxes seems quite foolish. But what are we to make of the totally sane-sounding reasoning that leads many to choose both boxes? “The $1M is either in box B or it isn’t, and there’s no changing that now. You’ve already walked through the scanner.” So far, so good. “If box B has the money, taking both boxes nets you $1M + $1000.” Wait, what? There are no possible worlds in which you take both boxes and box B has $1M in it. Considering this as a live possibility flatly contradicts the claimed efficacy of the scanner and cognition simulation. It’s this move – considering possible what is in fact impossible – that leads people astray here.
            But why is the mistake so common? This is where free will and determinism enter the picture. It seems like the reasoning above rests on the following simple idea – I’m free to choose whichever box[es] I like. If this is true, then the game-theoretic reasoning works. I compare the result of choosing both boxes to the result of choosing only box B in the cases with and without the money in box B. Regardless of the status of the money in box B, choosing both boxes leaves me better off. It’s the dominant strategy. The problem is that the scanner, cognition simulation, and mad scientist combine to undermine my freedom. It’s no longer possible for me to choose both boxes when the money is in box B. This possibility is crucial to the strategic dominance of choosing both boxes. Without it, choosing both boxes is not only no longer dominant, but much worse. Of course, the idea that I’m not free to make either choice is very difficult to accept. It seems to me that the resistance to this idea, this lack of freedom, is what motivates the 2-box choice.
            It could be said that your freedom is only undermined because of the claim that the cognition simulation yields perfect predictions. Predicting human behavior that accurately is impossible, you could argue. So make the prediction 90% accurate. Now in 90% of the possible worlds in which you choose both boxes, box B will be empty, and in 90% of the possible worlds where you choose box B, you’ll get $1M. You don’t know which of these worlds you’re actually in, but you know the probabilities over these possible worlds of winning $1M conditional on your choices. If you agree that the rational decision is to maximize the expected value of your winnings, it’s a no-brainer. Your expected winnings choosing box B are $900k, whereas choosing both boxes yields just $101k on average. So even if the cognition simulation is more realistic (fallible), the game-theoretic reasoning that suggests taking both boxes fails. It fails, I believe, because a core assumption – that you’re free to take either box – is, at best, misleading.
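            That expected-value arithmetic is easy to check. Here’s a minimal sketch of the calculation, using the 90% accuracy and dollar amounts from the scenario (the variable names are just my illustration):

```python
# Expected winnings with a 90%-accurate predictor.
accuracy = 0.9  # probability the prediction matches your actual choice

# One-boxing: 90% of the time the predictor foresaw it, so box B holds $1M.
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Two-boxing: 90% of the time box B is empty and you get just the $1000;
# 10% of the time the predictor erred and you get $1,001,000.
ev_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000

print(f"EV(one-box) = ${ev_one_box:,.0f}")  # $900,000
print(f"EV(two-box) = ${ev_two_box:,.0f}")  # $101,000
```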
            Fascinating, confusing stuff. More on this next time.