Wednesday, May 1, 2013

Notes on Our Twelfth Meeting


We read a paper that criticizes several contemporary theories of the self and offers a different view based on the phenomenological tradition, according to which the self essentially involves one's felt experience in the world.

Abstractions are constructs built on something more fundamental; in this case, the abstract behavioral components are abstractions from contextualized behavior. This may cause problems for familiar discussions of mind-uploading.

First- and third-person perspectives:
The thesis: shifting between the first- and third-personal perspectives may distort one or the other. If you start reflecting, trying to make yourself into an object (the third-personal stance), this distorts subjectivity.

Two takes:
The third-personal perspective as the view from nowhere—totally abstracts away from one’s particularity. Detachment is what distinguishes the third-personal from the first-personal.

Phenomenologists’ view: think of the third-personal in terms of intersubjectivity, i.e., seeing how one’s particular perspective fits in a law-governed way with other possible (and some actual) perspectives.

The first-person point of view, and the way it is experienced, has been neglected by the tradition. It is often treated in a third-personal way, abstracting away from the particularities that make it the particular perspective it is.

The metaphor of a point of view: a point of view from nowhere is incoherent, because a point of view involves a particular perspective on a situation.

Cases in which action, intention and bodily ownership can come apart:

“My hand is moving, but I don’t know why.” In the Anarchic Hand Syndrome case, there is recognition that the behavior is intentional, but no recognition of the intention guiding the arm. Ownership of the arm, but not the action. No felt connection between the intention and the action.

“I have to do this, but I don’t want to.” In OCD, there is recognition of the intention, but it is experienced as foreign. Ownership of the action, but not the intention. A felt connection between the intention and the action.

It seems, then, that felt connection between the intention and the action is sufficient for ownership of an action (where ownership is not endorsement).

Merleau-Ponty: habits are common ways of organizing ourselves in the environment in meaningful ways. It is important that the action we are doing habitually is our own, even if we do not endorse the springs of the action.

Three phases of integration: integrating my bodily senses, integrating basic gestures tied up with sensation (motion-sensation interplay), reflection on intentions guiding an action.

The perspective of ownership is active, sensory awareness. As long as you can be actively aware through sensation of the purpose guiding a piece of behavior, then you own that action.
This makes sense of the Anarchic Hand cases: the person cannot experience the purpose guiding the behavior, and so does not own the action.

And also the OCD cases: the person can experience the purpose guiding the behavior, and so owns the action, even though she does not endorse it.

But when they say that the OCD patient owns the action but not the intention, it seems they must have a different sense of ‘ownership’ here.
            Better to talk of ‘endorsement’ of the intention here.

A suggestion: perhaps there are fewer problems with mind-uploading if the mind and the environment are created at the same time. This goes along nicely with the point that mental attitudes cannot be fully decontextualized.
There is no clear distinction between the mind and the environment, because there is no mental content in abstraction from the context given by the environment.

Notes on Our Eleventh Meeting

We read a paper on the inability to predict the behavior of a leech (swimming vs. crawling) from its neural activity just prior to a stimulus to which these different behaviors are responses.

The paper brought to mind a famous study by Libet, which tried to measure when decision-making happens and its relationship to conscious awareness. Libet claimed to have shown that most decisions happen prior to conscious awareness of them, and he argued that this shows we lack the kind of control required for free will. The picture seemed to suggest that conscious awareness is epiphenomenal.

Many people in action theory think that the Libet study did not show what Libet said it did.

Different theories of attention:
Bottom-up: what we pay attention to is out of our conscious control; we have developed (evolutionarily) mechanisms that determine what grabs our attention.

But must admit that memory, not just evolution, plays a role (e.g., if you remember that something is dangerous, it will grab your attention).

Top-down: sure, there are some things that can grab your attention, but there are things you can do to block them out (e.g., remembering that you are looking for someone in a red sweater in a crowded room); and there must be some top-down control that alters the mechanism determining what you are paying attention to.
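To make the contrast a bit more concrete, here is a toy sketch (our own illustration, with invented objects, feature values, and weights, not a model from anything we read) of how a top-down goal might re-weight a bottom-up salience map:

```python
# Toy sketch of bottom-up salience plus top-down re-weighting.
# The objects, feature values, and weights are all invented for illustration.

scene = {
    "flashing sign": {"motion": 0.9, "red": 0.1},
    "person in red sweater": {"motion": 0.2, "red": 0.7},
    "grey pigeon": {"motion": 0.5, "red": 0.0},
}

def salience(features, weights):
    """Weighted sum of feature salience; the weights encode top-down goals."""
    return sum(weights.get(f, 1.0) * v for f, v in features.items())

bottom_up = {"motion": 1.0, "red": 1.0}   # no particular goal in mind
top_down = {"motion": 0.3, "red": 3.0}    # goal: find the person in the red sweater

for label, weights in [("bottom-up", bottom_up), ("top-down", top_down)]:
    winner = max(scene, key=lambda obj: salience(scene[obj], weights))
    print(f"{label}: attention goes to the {winner}")
# bottom-up: the flashing sign wins; top-down: the person in the red sweater wins.
```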

If the leech had top-down control, it could alter the way it responded to the stimulus.

Some people claim that it is purely top-down (e.g., a kid with no experience with fire will be drawn to it, not repelled from it, until she learns it is dangerous).

The main issue is how the bottom-up and top-down processes relate to each other in terms of working memory.

Does this issue relate to David Marr’s analysis, which we talked about a while ago?
Perhaps. Proponents of the bottom-up approach skip a role for working memory; the top-down approach claims a big role for working memory (i.e., the algorithm goes through your working memory).

The sticking point is whether you have any conscious control over the relevant processes. (e.g., conscious awareness affecting decisions).

How does this connect with what we are talking about?
The issues raised in this article and the previous one point to the conclusion that the problem of reverse engineering the human brain is not simply a matter of big data. There are enormous complexities—plasticity, figuring out the role of single and groups of neurons in behavioral outputs, etc.

The upshot is this: there are many complex issues to be worked out about the role of neurons and groups of neurons in the functional outputs of our brain, even given a complete map of the neuronal structure of the brain.

In this article: we have a central pattern generator, such that given a simple stimulus, we have a response. Once the choice is made, it goes on its own. But what choice it makes cannot be predicted from the central pattern generator. So it is unclear what the choice depends on. Once the mechanism is kick-started, we can tell what will happen. But what kick-starts the mechanism?
This generalizes: what accounts for our decision to begin walking with our right foot or left foot?

The paper seems to support a top-down approach: there is some control over when the mechanism becomes engaged, even though the behavior unfolds without need for conscious control after the mechanism has been engaged (e.g., chewing is like this; so is walking: once started, you'll go until your brain tells you to stop).

In the leech case: it seems from this study that which choice is made (which mechanism gets selected, swimming or crawling) is not determined by neurons internal to the mechanism that produces these behaviors. There is something else that determines which choice gets made (perhaps the rest state prior to the stimulus).

But remember: the neurons internal to the mechanism could very well overlap with other systems, involved in multiple mechanisms.

What we have here is a very simple brain, a well-defined, simple mechanism, a choice between two behaviors given a single stimulus, and yet we still cannot predict with accuracy what will result.
This makes it look very doubtful that we will be able to predict human behavior from a good understanding of the structure of the human brain anytime soon. Those predicting uploading in the near future seem to be way too optimistic.

The conclusion of this paper: either (i) choice depends on rest state prior to stimulus or (ii) the system is reset each time and then behaves non-deterministically after stimulus.

If the hypothesis is correct that the behavioral output depends on the rest state prior to the stimulus, then it seems in principle possible to acquire the information required for predictive success.

But how do you define rest state? Of the whole system? Of the mechanism?

What about plasticity and changes in connective patterns? When does one neuron inhibit another?

But, given enough trials, shouldn’t we be able to rule out different possibilities and fine-tune our predictive models?
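To give a sense of what such a predictive model might look like, here is a rough sketch with entirely synthetic data (not the paper's analysis): if the swim/crawl choice really were a function of the pre-stimulus rest state, a simple classifier trained on recorded rest-state activity should, given enough trials, predict the choice better than chance.

```python
# Minimal sketch of the "predict the choice from the rest state" idea.
# All data here are synthetic; this is not the paper's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_neurons = 200, 30
# Pretend these are rest-state firing rates recorded just before each stimulus.
rest_state = rng.normal(size=(n_trials, n_neurons))
# Pretend the swim(1)/crawl(0) choice depends weakly on a few of these neurons.
hidden_weights = np.zeros(n_neurons)
hidden_weights[:5] = 1.5
p_swim = 1 / (1 + np.exp(-(rest_state @ hidden_weights)))
choice = rng.binomial(1, p_swim)

# If the rest-state hypothesis is right, accuracy should climb above chance (0.5)
# as trials accumulate.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, rest_state, choice, cv=5).mean()
print(f"Cross-validated accuracy: {accuracy:.2f}")
```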

It is amazing that these studies even give us useful data. They involve slicing open live leeches and interfering with their bodies, brains, neurons, etc. Wouldn't we expect these interventions to interfere with the normal functioning of the systems?

Thursday, April 18, 2013

Notes on Our Tenth Meeting

We read a paper discussing a brain-machine interface (BMI) involving macaques who learned to manipulate a mechanical arm via implanted electrodes. Here is some of what we talked about.


Does the BMI involve a physical connection with the brain?

There are different methods of measuring brain activity, with different profiles in terms of temporal and spatial precision. This one used implanted probes measuring electrical activity. This has the disadvantage of killing or damaging brain tissue.

Multi-unit recorders last longer than single-unit recorders.

They also showed that larger samples of recorded neurons yielded more accurate predictions.
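Here is a hedged sketch of why larger samples tend to help, assuming a simple linear decoding model with synthetic data (the paper's actual decoding methods are more involved): each added unit carries some independent information about the arm's position, so predictions improve as units are added.

```python
# Toy sketch of linear population decoding: predict a 1-D "arm position"
# from the firing rates of N recorded units. Synthetic data only; not the
# paper's decoder or its data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, max_units = 500, 100

arm_position = rng.uniform(-1, 1, size=n_samples)
# Each unit is noisily tuned to arm position with its own gain.
gains = rng.normal(size=max_units)
rates = np.outer(arm_position, gains) + rng.normal(scale=1.0, size=(n_samples, max_units))

for n_units in (5, 20, 100):
    X_train, X_test, y_train, y_test = train_test_split(
        rates[:, :n_units], arm_position, random_state=0)
    model = LinearRegression().fit(X_train, y_train)
    print(f"{n_units:3d} units -> R^2 on held-out data: {model.score(X_test, y_test):.2f}")
# Typically the held-out R^2 rises as more units are included in the decoder.
```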

They show that brain activity is not as localized as previous models suggest—at least with respect to these tasks.

The event of an individual cell firing seems to be important, even though no one cell or set of cells is always implicated in a specific behavior and different cell units can underwrite the same behavior. We just don't know enough about what is going on in every case: we don't always know if there is redundancy; we don't always know if the cells firing in a given case are merely transferring information, as opposed to originating a sequence of information transfer; etc.

3 things:
1.     The brain does not always make up for missing parts.
2.     Redundancy: multiple sets of neurons can perform (roughly) the same function.
3.     Plasticity: the remaining parts of the brain re-learn how to do things they were not engaged in previously.
Age, etc. matters for plasticity (e.g., infants having half of the brain removed but developing relatively normally)

Ablation studies: they inject something really nasty into the brain to kill a local area of neurons. They then want to say that killing these neurons had some effect, so we can infer that this region does a certain thing. But this only underwrites the inference that the relevant area is implicated in the process that issues in the behavior, not that the behavior originates in or is localized there.
           
It’s much easier to establish that a region is not necessary for something than that it is sufficient for something.

An interesting portion of the 'Discussion' section of the paper noted this: The way the brain learns to use the artificial arm is by engaging with it and using it, and this engagement and use does not rely on the same informational inputs as in the normal case. In the case where there is an artificial arm, there is no proprioceptive input, just the informational input from vision and the representation of the goal. The brain is shaped by the way that it receives information about the location of the arm and by the goals of the situation. This is interesting because it makes the representation of the goal more relevant to brain structure once the input from proprioception is eliminated.

Proprioception is important in the normal case, but it is not essential. The brain can still learn and manipulate in the absence of input from proprioception. Then the representation of the goal becomes more important than in the normal case.

But: Is vision part of proprioception?

Not ‘proprioception’ in the technical sense. The term refers to a specific set of nerves that are responsive to stretching, firing more when stretched and less when not. How much they are stretched usually depends on the positioning of your limbs.

This is interesting in relation to issues raised by the work of Merleau-Ponty and others. The exciting part here is that there is evidence of informational input to action (in the normal case) that comes from the body to the mind controlling the action.

2 questions:
1.     What part of the brain is causing the body to move?
2.     Why did someone do something? (where the answer is given in terms of the mind, conceived of as something other than simply identical to the brain)

The important idea for the M-P picture is that inputs and outputs are not distinct and distinctly directional in the way that the Cartesian picture (ghost in machine) envisions.

There is a connection here to old-school cybernetics, understood as the rigorous study of machines as information-transforming systems. A machine is something that takes a vector of information as input and produces a vector of information as output; that is, a machine transforms a vector of information.
           
On this view, there could be no ghost distinct from the machine.
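A minimal sketch of this cybernetic picture (our own formulation, not from any of the readings): a machine is just a mapping from input vectors to output vectors, and composing machines yields another machine of the same kind.

```python
# A "machine" in the old cybernetic sense: a transformation of an
# information vector. Nothing in this description leaves room for a
# ghost distinct from the transformation itself.
from typing import Callable, List

Vector = List[float]
Machine = Callable[[Vector], Vector]

def scale(factor: float) -> Machine:
    return lambda v: [factor * x for x in v]

def threshold(level: float) -> Machine:
    return lambda v: [1.0 if x > level else 0.0 for x in v]

def compose(*machines: Machine) -> Machine:
    """Feeding one machine's output into the next yields another machine."""
    def combined(v: Vector) -> Vector:
        for m in machines:
            v = m(v)
        return v
    return combined

reflex = compose(scale(2.0), threshold(1.0))
print(reflex([0.2, 0.6, 0.9]))   # -> [0.0, 1.0, 1.0]
```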

(Nowadays, ‘cybernetics’ means something more like the study of computer science in general, or the implanting of devices into the human body.)

This view entails that anything that the body responds to becomes a part of the system, which seems to be a claim that M-P would like.

From the biologist’s point of view, it is important to distinguish between where you end and where the car begins. From this perspective, the BMI is better thought of as brain expansion. But there are other points of view that do not see it as necessary to make this distinction.

Tuesday, April 9, 2013

Notes on Our Ninth Meeting

We are back from hiatus now. Here are some notes from our discussion of Nick Bostrom's "Are You Living in a Computer Simulation?":


Bostrom's simulation argument is very similar to Cartesian skepticism and brain in a vat cases, but it’s not clear what more it adds.

Perhaps it adds some detail and a narrative

But it does not seem to be in any significant way different from the earlier, familiar skepticism

Bostrom aims to establish the following disjunction: either (1) humanity will very likely not reach a posthuman stage; or (2) posthumans are very unlikely to run ancestor simulations; or (3) we are very likely living in a computer simulation.

The claim that seems to be at the heart of Bostrom's argument for (3): if it’s possible that posthumans will run ancestor simulations, then it’s probable that we are in a simulation. This has to do with the supposed high number of simulations that would be run and the high number of individuals in each simulation.

(NB: this is just a consideration in favor of (3), not his overall conclusion, which is that the disjunction of (1) or (2) or (3) is true.)
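A back-of-envelope illustration of the counting intuition behind (3), with invented numbers (Bostrom's argument turns on the structure of this ratio, not on any particular values):

```python
# Back-of-envelope illustration of the counting intuition behind (3).
# Every number here is invented; the argument turns on the structure of
# the ratio, not on particular values.

real_histories = 1                # our (possibly) non-simulated human history
sims_per_posthuman_civ = 1_000    # assumed number of ancestor simulations run
people_per_history = 100e9        # rough count of humans in one full history

simulated_minds = sims_per_posthuman_civ * people_per_history
non_simulated_minds = real_histories * people_per_history

fraction_simulated = simulated_minds / (simulated_minds + non_simulated_minds)
print(f"Fraction of human-like minds that are simulated: {fraction_simulated:.4f}")
# -> 0.9990: if even a modest number of full ancestor simulations get run,
#    almost all human-like experiences are simulated ones.
```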

The disjunction is interesting because the three disjuncts are independently interesting. It is also interesting because those who write on these topics seem to generally hold that both (1) and (2) are false, which then suggests that we should take (3) very seriously.

Why an “ancestor simulation” as opposed to a simulation of intelligent creatures more generally?
            Perhaps because of motivation for self-knowledge

But: what about simulating other intelligences that are discovered but not one’s own ancestors?

Anyway, taking more simulations into account would seem to strengthen the argument, especially for the conclusion that we should give a high level of credence to the belief that we live in a simulation.

What probability are we to assign each disjunct?

"Stacked" simulations (simulations embedded in other simulations) put enormous pressure on the base computers (the computers that, in reality, are running the simulations), which threatens the entire structure. If the base computer crashes, then the whole thing crashes.

See p. 11: if they are running an ancestor simulation, then how could the actual laws diverge from those that hold in the simulation?
Perhaps there are multiple universes, not governed by the same laws, some of them more fundamental than others. Posthumans could come to live in a different universe, more fundamental than our own, and then simulate their ancestors, who would only be able to observe our actual universe (at least at some points in the simulation).

But: it’s not clear that this is even feasible, given current views about theoretical physics.

Even if posthumans want to understand their own workings, why would this lead them to create a large number of ancestor simulations?

Some interesting conclusions:
1.     it’s more likely than not that we are simulations (this seems doubtful)
2.     it is possible that we are simulations (this probably stands, just as it is possible that we are brains in vats)

The evidential basis for us being computer simulations seems stronger than that for us being brains in vats; but the epistemological consequences might be the same.

The disjuncts are themselves claims about probability, but that is not yet to assign a probability to any of the disjuncts. You could accept Bostrom's conclusion (that the disjunction is true) while denying any one of the disjuncts. Indeed, this seems to be one reason why the argument is interesting--many seem inclined to deny (1) and (2), so should accept (3).

How does this all relate to immortality?
Would recurrence in infinite (or a huge number of) simulations amount to immortality?

There are issues of personal identity: is a simulated me identical to actual me? There may be an amount of information that must be captured in order for us to claim that it is the same individual, even if we do not capture all of the information relevant to what constitutes their mind.

Consider the film we watched during our first meeting, “Life Begins at Rewirement,” where we have a simulation that runs indefinitely long. Does this count as a kind of immortality?

It seems that a simulated individual A might be identical to a simulated individual B, even if we grant that a simulated individual C could not be identical to a non-simulated individual D. In other words, it seems easier to see how to get from a simulated individual to an identical simulated individual, than from a non-simulated individual to an identical simulated individual. In the former case, we can sidestep issues related to Bostrom's "substrate independence thesis."

(Notice: Bostrom simply brushes off Searle’s critique of strong AI.)

Some possible criteria for individuating simulated individuals that are qualitatively identical:

Location on a computer chip: qualitatively identical individuals would still depend on functional operations that occur in different parts of the physical substrate that constitutes the computer running the simulation.

Relational properties: B might have the property 'being a simulation of A,' which A would lack, and so this property might distinguish B from A.

Wednesday, February 27, 2013

Hiatus

We will be taking a break until the first week in April, when spring quarter begins. In the meantime, I will try to figure out a good time for everyone to meet and what we would like to take up when we reconvene. Suggestions most welcome.

Have a great end of the quarter and spring break.

Notes on our eighth meeting

We continued our discussion of Chalmers' singularity essay, beginning with Patrick's comment on the blog post from last week's meeting.


Patrick’s comment: How are we supposed to conceive of the extensions of intelligence and/or abilities that Chalmers talks about in sec 3?
            The idea is that the AI+(+) is an intelligence of a different kind

The way that AI+ will come about seems deeply dependent on what the abilities are.

One theme in phenomenology: consciousness/the mind is destined for the world; they are tied up in the context in which they make sense. For example, consider a proper functioning view: we get an ability that distinguishes us from animals and that functions properly in a certain context.

But it’s not clear (a) how we can be said to extend these same abilities to new contexts and (b) how these extended abilities might be said to be better.

Success is always success in a context. But we do not have access to the stage relevant to the success of AI+. This is significant because it blocks our ability to predict success relevant to AI++.

A related point (perhaps the same point put another way): the Wittgensteinian idea that our concepts are built for this world, and certain kinds of counterfactuals cannot be properly evaluated because they outstrip the context of our language game

Perhaps: pick a very simple measure for evaluation (e.g., ability to generate wealth, efficiency)

Bergson: has an argument that every creature is the best example of its kind (Matter and Memory, at the end)

Is there a distinction to be made between a difference in degree and a difference in kind?
Perhaps we are responsible for assigning differences in kind given various differences in degree.

            But does this make the distinction irrelevant or uninteresting?

There are interesting issues here about reality, whether we can experience an objective reality or only ever a subjectively conditioned reality.

Will we ever reach a consensus regarding where to draw the line for a difference in kind? Perhaps, so long as we agree to some background presuppositions—e.g., whether to take a functional perspective or a perspective regarding material constitution.

What constitutes progress?
            Paradigm shifts, death of ideas, (greater or lesser) consensus?

Bostrom (2012) just defines intelligence as something like instrumental rationality
Are bacteria intelligent in the same way as modern AI? Yes, if we define reasoning behaviorally. And this definition of intelligence is easily measurable.

But is it safe to assume that the desire to have power over oneself and one’s environment is a prerequisite for success at survival?
            Is this what we think intelligent people have?

All living things modify their internal environment in order to better survive (bacteria, plants, humans, etc.)

Gray goo: a nanobot that builds copies of itself; the apocalypse comes about because it replicates in an uncontrolled fashion, consuming all life on Earth to feed its end of copying itself.

A problem: We have AI, then pick the capacities we most care about, extend them into AI+, and then the extension to AI++ would no longer be a sort of being we would value. The idea is that the set of things extended comes to include fewer things we care about, to the point that AI++ does not contain anything that we care about.

If we assume that intelligence is instrumental rationality, then this will be ramped up to the exclusion of other interests. But we have a system of interconnected interests—we have cognitive interests, say, in individuating objects in perception. But this might not be maintained in the pursuit of maximizing instrumental rationality.

What does it mean to give a machine values? Give them ends, in the sense relevant to means-ends reasoning.

An argument that a superintelligence might be both moral and extinguish humanity:
Suppose consequentialism is right and AI++ discovers the true conception of well-being. It might be that in order to achieve this they need to wipe out human beings. This would result in a better state of affairs, but extinction for us.

How should we feel about this?

Many of these issues come to a similar problem: The production of an AI++ will involve a loss of some things we find very valuable, and this presents us with a problem. Should we pursue or should we inhibit or constrain the relevant progress in intelligence?
This is probably closely related to Chalmers’ claim that motivational obstacles are the greatest.

What sort of control do we have over the singularity?
            We could delay it, but for how long?
            We could stop it from happening on Earth, say, by blowing up the planet.
We could constrain the ways in which the possibility of the singularity occurring unfolds.

Friday, February 22, 2013

Notes on our seventh meeting


We discussed David Chalmers' "The Singularity: A Philosophical Analysis," which we will continue to discuss next time.

We began by noting Chalmers’ moderate sense of ‘singularity’ (p. 3): referring to an intelligence explosion by a recursive mechanism, where successively more intelligent machines arise that are better at producing even more intelligent machines.
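As a toy numerical sketch of that recursive mechanism (our own illustration, not Chalmers' formal argument): whether repeated self-improvement "explodes" or levels off depends on how the per-generation gain behaves.

```python
# Toy sketch of the recursive "intelligence explosion" idea; not Chalmers'
# formal argument. Assume each generation designs a successor whose
# intelligence is the current level times some gain factor.

def run_generations(gain, n=10):
    """gain(i) gives the multiplicative improvement achieved at generation i."""
    intelligence = 1.0
    trajectory = [intelligence]
    for i in range(n):
        intelligence *= gain(i)
        trajectory.append(intelligence)
    return trajectory

# Constant proportional gains: runaway growth, the "explosion" scenario.
explosive = run_generations(lambda i: 1.5)

# Diminishing returns: gains shrink each generation, so growth levels off
# (one of the structural obstacles mentioned later in these notes).
diminishing = run_generations(lambda i: 1 + 0.5 / (i + 1) ** 2)

print([round(x, 2) for x in explosive])     # 1.0, 1.5, 2.25, ... up to ~57.7
print([round(x, 2) for x in diminishing])   # 1.0, 1.5, 1.69, ... levels off near 2
```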

We also noted a nice distinction Chalmers makes (in the spirit of Parfit): identity vs survival

Parfit on Personal Identity: identity doesn’t really matter that much; trying to persuade us to get less attached to notions of identity
Eric Schwitzgebel’s view: it is convenient for us to have a logic with clean lines between people (we don’t fission, duplicate, or upload), but in weird cases this logic does not model things well, so we should switch to modeling what we care about (e.g., survival).

But practical issues remain (e.g., who pays the mortgage).

Enhancement: much of this has already happened
The Flynn effect: increasing IQs across generations, requires re-calibrating the IQ test to keep the norms in a certain range
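A toy illustration of what the re-calibration amounts to (invented numbers): an IQ score is defined relative to the current cohort, so as average raw performance rises, the same raw score maps to a lower IQ.

```python
# Toy illustration of IQ re-norming; the numbers and the raw-score scale
# are invented. IQ is defined relative to the current population:
# 100 + 15 * (raw - population mean) / population standard deviation.
def iq(raw_score, population_mean, population_sd=15.0):
    return 100 + 15 * (raw_score - population_mean) / population_sd

raw = 105.0
print(iq(raw, population_mean=100.0))  # against older norms: 105
print(iq(raw, population_mean=105.0))  # after re-norming to a higher-scoring cohort: 100
```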

There is room for skepticism about measuring general intelligence: (i) perhaps we are better test-takers; (ii) there are multiple intelligences, and IQ-style tests don't test for many (or even most) of them.

In sec 3 of Chalmers' essay: notice the embedding of ‘what we care about’ in the characterization of the relevant capacities. This is in line with the Parfitian approach to identity.

Values: There are many, complex issues here
            How to define them
            How to identify them
            Subjective vs objective
            Universal values (e.g., cross-cultural, across times)

3 different senses of ‘objectivity’ for values: judgment-independent, choice-independent, human nature-independent

Kant vs Hume:
            An issue about whether mistakes in value are mistakes in rationality (Hume: no; Kant: yes).
            And what does this entail about the moral behavior of AI+(+)?

See the new Pinker book, where he argues that we have become both more intelligent and more moral over time.

Two senses of morality over time: across generations vs. over the course of an individual’s life
            It seems that older people have more sophisticated moral reasoning, but this is a distinct
            question from whether different cultures have more or less sophisticated moral reasoning and
            also from the issue whether one culture is more correct in its moral practices than another.

There are important things that transcend a particular context: e.g., math, logic
            Perhaps the survival instinct is another one

A distinction: one's moral beliefs vs. one's behavior

Another distinction: immortality vs longevity

Obstacles: Chalmers claims that motivational obstacles are the most plausible candidates for stopping the singularity from coming
            Is this right? Why, exactly, does he think this?
Perhaps there are structural obstacles: the intelligence growth becomes too hard, diminishing returns
Energy needs: can be a situational obstacle, but can also be tied to a motivational obstacle
And as the energy requirements become greater, this can push toward there being a single system, a single entity, and then it would all depend on its motivation

Some related issues:
Matrioshka brain: concentric shells around the sun, using all of its energy; a Dyson-sphere brain

Kurzweil’s sixth epoch

The Fermi paradox: the odds are not good that we would be the first to reach superintelligence, so we should see evidence of others, but we don’t, so perhaps the process will stall out

Take-home messages from Chalmers' essay:
1.     a broadly functionalist account of the mind, such that we could be instantiated in a computer
              -So long as you have a nomologically possible world, conscious properties go with
              functional properties
2.     the real take-home: there’s a significant enough possibility of something like the singularity that we should seriously worry about it and consider how we are going to handle it