Thursday, April 18, 2013

Notes on Our Tenth Meeting

We read a paper discussing a brain-machine interface (BMI) in which macaques learned to manipulate a mechanical arm via implanted electrodes. Here is some of what we talked about.


Does the BMI involve a physical connection with the brain?

There are different methods of measuring brain activity, with different profiles of temporal and spatial precision. This study used implanted probes that measure electrical activity, which has the disadvantage of damaging or killing brain tissue.

Multi-unit recorders last longer than single-unit recorders.

They also showed that larger samples of neurons yielded more accurate predictions.
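To make that concrete, here is a minimal sketch of why more recorded units tend to give better predictions. It is my own toy model, not the paper's decoder: it assumes linear tuning and Gaussian noise, simulates a population whose firing rates track a 2-D hand position, fits a least-squares decoder, and watches the error fall as the population grows.

```python
# A toy population decoder -- purely illustrative, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
T = 500                                 # number of time steps
pos = rng.uniform(-1, 1, size=(T, 2))   # hypothetical 2-D hand trajectory

def decoding_error(n_units: int) -> float:
    """Mean squared error of a least-squares decoder using n_units neurons."""
    tuning = rng.normal(size=(2, n_units))        # each unit's (made-up) tuning
    rates = pos @ tuning + rng.normal(scale=2.0, size=(T, n_units))  # noisy rates
    weights, *_ = np.linalg.lstsq(rates, pos, rcond=None)            # fit decoder
    return float(np.mean((rates @ weights - pos) ** 2))

for n in (4, 16, 64, 256):
    print(f"{n:4d} units -> MSE {decoding_error(n):.4f}")
```

The improvement is just noise averaging: each additional unit contributes another noisy view of the same underlying position.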

They show that brain activity is not as localized as previous models suggested, at least with respect to these tasks.

Individual cell firings seem to be important, even though no one cell or set of cells is always implicated in a specific behavior, and different cell units can underwrite the same behavior. We just don't know enough about what is going on in every case: we don't always know if there is redundancy; we don't always know if the cells firing in a given case are merely transferring information, as opposed to originating a sequence of information transfer; etc.

Three things:
1. The brain does not always make up for missing parts.
2. Redundancy: multiple sets of neurons can perform (roughly) the same function.
3. Plasticity: the remaining parts of the brain re-learn how to do things they were not previously engaged in.
Age, etc., matters for plasticity (e.g., infants can have half of the brain removed and still develop relatively normally).

Ablation studies: they inject something really nasty into the brain to kill a local area of neurons. They then want to say that killing these neurons had some effect, so that we can infer that this region does a certain thing. But this only underwrites the inference that the relevant area is implicated in the process that issues in the behavior, not that the behavior originates in or is localized to that area.
           
It’s much easier to establish that a region is not necessary for something than that it is sufficient for something.

An interesting point from the 'Discussion' section of the paper: the brain learns to use the artificial arm by engaging with and using it, and this engagement does not rely on the same informational inputs as in the normal case. With the artificial arm there is no proprioceptive input, just the informational input from vision and the representation of the goal. The brain is shaped by the way it receives information about the location of the arm and by the goals of the situation, which makes the representation of the goal more relevant to brain structure once the input from proprioception is eliminated.

Proprioception is important in the normal case, but it is not essential. The brain can still learn to manipulate the arm in the absence of proprioceptive input; the representation of the goal then becomes more important than in the normal case.
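As a cartoon of that vision-only loop (my illustration, with made-up numbers, not the paper's model): the controller below never receives proprioceptive state, only a visual estimate of the arm's position and a representation of the goal, and it still converges on the target.

```python
# A cartoon of the vision-only loop: no proprioceptive state anywhere,
# just a visual estimate and a goal. Gain and target are made-up numbers.
target = 10.0      # the represented goal
position = 0.0     # true arm position (never read directly by the controller)
gain = 0.3         # hypothetical feedback gain

for step in range(20):
    visual_estimate = position                   # vision stands in for proprioception
    command = gain * (target - visual_estimate)  # goal representation drives the command
    position += command                          # the arm responds

print(round(position, 2))  # converges toward the target (~9.99)
```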

But: Is vision part of proprioception?

Not ‘proprioception’ in the technical sense. That term refers to a specific set of nerves that are responsive to stretching, firing more when stretched and less when not; how much they are stretched usually depends on the position of your limbs.

This is interesting in relation to issues raised by the work of Merleau-Ponty and others. The exciting part here is that there is evidence of informational input to action (in the normal case) that comes from the body to the mind controlling the action.

Two questions:
1. What part of the brain is causing the body to move?
2. Why did someone do something, where the answer is given in terms of the mind, conceived of as something other than simply identical to the brain?

The important idea for the M-P picture is that inputs and outputs are not distinct and distinctly directional in the way that the Cartesian picture (the ghost in the machine) envisions.

There is a connection here to old-school cybernetics, understood as the rigorous study of machines as information-transforming systems. On this conception, a machine is something that takes a vector of information as input and produces a vector of information as output: a machine transforms a vector of information.
           
On this view, there could be no ghost distinct from the machine.

(Nowadays, 'cybernetics' means something more like computer science in general, or the implanting of devices into the human body.)

This view entails that anything that the body responds to becomes a part of the system, which seems to be a claim that M-P would like.
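A toy rendering of that composition claim (my gloss in code, not anything from the discussion): once machines are just maps from input vectors to output vectors, coupling a brain to a prosthetic arm yields another machine, and whatever feeds the loop becomes part of the system.

```python
# A machine as a map from an input vector to an output vector; composing
# machines yields another machine. The "brain" and "arm" here are stand-ins.
from typing import Callable, List

Vector = List[float]
Machine = Callable[[Vector], Vector]

def compose(f: Machine, g: Machine) -> Machine:
    """Feed one machine's output into another: the coupling is itself a machine."""
    return lambda v: g(f(v))

brain: Machine = lambda v: [2 * x for x in v]   # made-up transformation
arm: Machine = lambda v: [x + 1 for x in v]     # made-up transformation
system = compose(brain, arm)                    # brain + arm as a single system

print(system([1.0, 2.0]))  # [3.0, 5.0]
```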

From the biologist’s point of view, it is important to distinguish between where you end and where the car begins. From this perspective, a BMI is better thought of as brain expansion. But there are other points of view on which it is not necessary to make this distinction.

Tuesday, April 9, 2013

Notes on Our Ninth Meeting

We are back from hiatus now. Here are some notes from our discussion of Nick Bostrom's "Are You Living in a Computer Simulation?":


Bostrom's simulation argument is very similar to Cartesian skepticism and brain-in-a-vat cases, but it’s not clear what more it adds.

Perhaps it adds some detail and a narrative.

But it does not seem to differ in any significant way from the earlier, familiar skepticism.

Bostrom aims to establish the following disjunction: either (1) humanity will very likely not reach a posthuman stage; or (2) posthumans are very unlikely to run ancestor simulations; or (3) we are very likely living in a computer simulation.

The claim that seems to be at the heart of Bostrom's argument for (3): if it’s possible that posthumans will run ancestor simulations, then it’s probable that we are in a simulation. This has to do with the supposedly high number of simulations that would be run and the high number of individuals in each simulation.

(NB: this is just a consideration in favor of (3), not his overall conclusion, which is that the disjunction of (1) or (2) or (3) is true.)
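For what it's worth, the paper puts this consideration in rough quantitative form: the fraction of human-type observers who are simulated is f_sim = (f_p · N · H) / (f_p · N · H + H), where f_p is the fraction of civilizations that reach a posthuman stage, N is the average number of ancestor simulations such a civilization runs, and H is the number of individuals in a pre-posthuman history. H cancels, so a large N swamps even a tiny f_p. A quick check, with input numbers that are pure guesses of mine:

```python
# Bostrom-style fraction of simulated observers; H cancels out of the ratio.
def f_sim(f_p: float, n_sims: float) -> float:
    """Fraction of human-type observers who are simulated."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

print(f_sim(0.001, 1_000_000))  # ~0.999: a tiny f_p, swamped by many simulations
print(f_sim(0.001, 0))          # 0.0: if no simulations are run, nobody is simulated
```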

The disjunction is interesting because the three disjuncts are independently interesting. It is also interesting because those who write on these topics seem to generally hold that both (1) and (2) are false, which then suggests that we should take (3) very seriously.

Why an “ancestor simulation” as opposed to a simulation of intelligent creatures more generally?
            Perhaps because of the motivation for self-knowledge.

But: what about simulating other intelligences that are discovered but not one’s own ancestors?

Anyway, taking more kinds of simulations into account would seem to strengthen the argument, especially for the conclusion that we should assign a high credence to the belief that we live in a simulation.

What probability are we to assign each disjunct?

"Stacked" simulations (simulations embedded in other simulations) put enormous pressure on the base computers (the computers that, in reality, are running the simulations), which threatens the entire structure. If the base computer crashes, then the whole thing crashes.

See p. 11: if they are running an ancestor simulation, then how could the actual laws diverge from those that hold in the simulation?
Perhaps there are multiple universes, not all governed by the same laws, some more fundamental than others. Posthumans might come to live in a universe more fundamental than our own and then simulate their ancestors, who would only be able to observe our actual universe (at least at some points in the simulation).

But: it’s not clear that this is even feasible, given current views about theoretical physics.

Even if posthumans want to understand their own workings, why would this lead them to create a large number of ancestor simulations?

Some interesting conclusions:
1. It's more likely than not that we are simulations (this seems doubtful).
2. It is possible that we are simulations (this probably stands, just as it is possible that we are brains in vats).

The evidential basis for us being computer simulations seems stronger than that for us being brains in vats; but the epistemological consequences might be the same.

The disjuncts are themselves claims about probability, but that is not yet to assign a probability to any of the disjuncts. You could accept Bostrom's conclusion (that the disjunction is true) while denying any one of the disjuncts. Indeed, this seems to be one reason the argument is interesting: many seem inclined to deny (1) and (2), and so should accept (3).

How does this all relate to immortality?
Would recurrence in infinite (or a huge number of) simulations amount to immortality?

There are issues of personal identity: is a simulated me identical to the actual me? There may be some amount of information that must be captured in order for us to claim that it is the same individual, even if we do not capture all of the information relevant to what constitutes their mind.

Consider the film we watched during our first meeting, “Life Begins at Rewirement,” where we have a simulation that runs indefinitely long. Does this count as a kind of immortality?

It seems that a simulated individual A might be identical to a simulated individual B, even if we grant that a simulated individual C could not be identical to a non-simulated individual D. In other words, it seems easier to see how to get from a simulated individual to an identical simulated individual than from a non-simulated individual to an identical simulated individual. In the former case, we can sidestep issues related to Bostrom's "substrate-independence thesis."

(Notice: Bostrom simply brushes off Searle’s critique of strong AI.)

Some possible criteria for individuating simulated individuals that are qualitatively identical:

Location on a computer chip: qualitatively identical individuals would still depend on functional operations that occur in different parts of the physical substrate that constitutes the computer running the simulation.

Relational properties: B might have the property 'being a simulation of A,' which A would lack, and so this property might distinguish B from A.
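A loose programming analogy for the chip-location criterion (mine, not the group's): two objects can be qualitatively identical, equal attribute for attribute, while remaining numerically distinct, individuated by where they sit in memory.

```python
# Qualitative vs. numerical identity in miniature: equal attribute for
# attribute, yet two distinct objects at two different memory locations.
from dataclasses import dataclass

@dataclass
class SimPerson:
    name: str
    memories: tuple

a = SimPerson(name="A", memories=("beach", "exam"))
b = SimPerson(name="A", memories=("beach", "exam"))

print(a == b)   # True: qualitatively identical
print(a is b)   # False: numerically distinct, individuated by location in memory
```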