Tuesday, April 9, 2013

Notes on Our Ninth Meeting

We are back from hiatus now. Here are some notes from our discussion of Nick Bostrom's "Are You Living in a Computer Simulation?":


Bostrom's simulation argument is very similar to Cartesian skepticism and brain in a vat cases, but it’s not clear what more it adds.

Perhaps it adds some detail and a narrative.

But it does not seem to differ in any significant way from the earlier, familiar skepticism.

Bostrom aims to establish the following disjunction: either (1) humanity will very likely not reach a posthuman stage; or (2) posthumans are very unlikely to run ancestor simulations; or (3) we are very likely living in a computer simulation.

The claim that seems to be at the heart of Bostrom's argument for (3): if it’s possible that posthumans will run ancestor simulations, then it’s probable that we are in a simulation. This rests on the supposedly high number of simulations that would be run and the high number of individuals in each simulation.

(NB: this is just a consideration in favor of (3), not his overall conclusion, which is that the disjunction of (1) or (2) or (3) is true.)
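
To put rough numbers behind that consideration, here is a minimal sketch of the arithmetic in Python, based on the fraction-of-simulated-observers formula from Bostrom's paper; the particular input values below are illustrative assumptions, not figures from the paper or our discussion.

```python
# Sketch of the arithmetic behind Bostrom's core claim: if posthuman
# civilizations run many ancestor simulations, almost all observers with
# human-type experiences will be simulated ones.
#
#   f_p : fraction of human-level civilizations that reach a posthuman stage
#   n   : average number of ancestor simulations run by a posthuman civilization
#
# Bostrom's fraction of simulated observers reduces to f_p*n / (f_p*n + 1),
# since the average population per civilization cancels out.

def fraction_simulated(f_p: float, n: float) -> float:
    """Expected fraction of human-type observers who live in simulations."""
    return (f_p * n) / (f_p * n + 1)

# Illustrative (made-up) inputs: even if only 1 in 1,000 civilizations becomes
# posthuman, a million simulations each drives the fraction toward 1.
print(fraction_simulated(f_p=0.001, n=1_000_000))  # ~0.999
```

The point of the sketch is just that, on assumptions like these, simulated observers vastly outnumber non-simulated ones unless (1) or (2) is true.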

The disjunction is interesting because the three disjuncts are independently interesting. It is also interesting because those who write on these topics seem to generally hold that both (1) and (2) are false, which then suggests that we should take (3) very seriously.

Why an “ancestor simulation” as opposed to a simulation of intelligent creatures more generally?
Perhaps because posthumans would be motivated by a desire for self-knowledge.

But: what about simulating other intelligences that are discovered but not one’s own ancestors?

Anyway, taking such additional simulations into account would seem to strengthen the argument, especially the case for assigning a high credence to the belief that we live in a simulation.

What probability are we to assign each disjunct?

"Stacked" simulations (simulations embedded in other simulations) put enormous pressure on the base computers (the computers that, in reality, are running the simulations), which threatens the entire structure. If the base computer crashes, then the whole thing crashes.

See p. 11: if they are running an ancestor simulation, then how could the actual laws diverge from those that hold in the simulation?
Perhaps there are multiple universes that are not governed by the same laws, some of which are more fundamental than others. Posthumans might come to live in a universe more fundamental than our own and then simulate their ancestors, who would only be able to observe our actual universe (at least at some points in the simulation).

But: it’s not clear that this is even feasible, given current views about theoretical physics.

Even if posthumans want to understand their own workings, why would this lead them to create a large number of ancestor simulations?

Some interesting conclusions:
1. It’s more likely than not that we are simulations (this seems doubtful).
2. It is possible that we are simulations (this probably stands, just as it is possible that we are brains in vats).

The evidential basis for us being computer simulations seems stronger than that for us being brains in vats; but the epistemological consequences might be the same.

The disjuncts are themselves claims about probability, but that is not yet to assign a probability to any of the disjuncts. You could accept Bostrom's conclusion (that the disjunction is true) while denying any one of the disjuncts. Indeed, this seems to be one reason why the argument is interesting--many seem inclined to deny (1) and (2), so should accept (3).

How does this all relate to immortality?
Would recurrence in infinite (or a huge number of) simulations amount to immortality?

There are issues of personal identity: is a simulated me identical to the actual me? There may be some amount of information that must be captured before we can claim that it is the same individual, even if we do not capture all of the information relevant to what constitutes that individual's mind.

Consider the film we watched during our first meeting, “Life Begins at Rewirement,” in which a simulation runs indefinitely. Does this count as a kind of immortality?

It seems that a simulated individual A might be identical to a simulated individual B, even if we grant that a simulated individual C could not be identical to a non-simulated individual D. In other words, it seems easier to see how to get from a simulated individual to an identical simulated individual, than from a non-simulated individual to an identical simulated individual. In the former case, we can sidestep issues related to Bostrom's "substrate independence thesis."

(Notice: Bostrom simply brushes off Searle’s critique of strong AI.)

Some possible criteria for individuating simulated individuals that are qualitatively identical:

Location on a computer chip: qualitatively identical individuals would still depend on functional operations that occur in different parts of the physical substrate that constitutes the computer running the simulation.

Relational properties: B might have the property 'being a simulation of A,' which A would lack, and so this property might distinguish B from A.
