Wednesday, May 1, 2013

Note on Our Twelfth Meeting


We read a paper that criticizes several contemporary theories of the self and offers a different view based on the phenomenological tradition, according to which the self essentially involves one's felt experience in the world.

Abstractions are constructs built on something more fundamental: in this case, the abstract behavioral components are abstractions from contextualized behavior. This may cause problems for familiar discussions of mind-uploading.

First- and third-person perspectives:
The thesis is that shifting between the first- and third-person perspectives may distort one or the other. If you start reflecting, trying to make yourself into an object (the third-personal stance), this distorts subjectivity.

Two takes:
The third-personal perspective as the view from nowhere: it totally abstracts away from one's particularity. Detachment is what distinguishes the third-personal from the first-personal.

Phenomenologists’ view: think of the third-personal in terms of intersubjectivity—seeing how one’s particular perspective fits in a law-governed way with other possible (and some actual) perspectives.

The first-person point of view, and the way it is experienced, has been neglected by the tradition. It is often treated in a third-personal way, abstracting away from the particularities that make it the particular perspective it is.

The metaphor of a point of view: a point of view from nowhere is incoherent, because a point of view involves a particular perspective on a situation.

Cases in which action, intention and bodily ownership can come apart:

“My hand is moving, but I don’t know why.” In the Anarchic Hand Syndrome case, there is recognition that the behavior is intentional, but no recognition of the intention guiding the arm. Ownership of the arm, but not the action. No felt connection between the intention and the action.

“I have to do this, but I don’t want to.” In OCD, there is recognition of the intention, but it is experienced as foreign. Ownership of the action, but not the intention. A felt connection between the intention and the action.

It seems, then, that a felt connection between the intention and the action is sufficient for ownership of an action (where ownership is not endorsement).

Merleau-Ponty: habits are common ways of organizing ourselves in the environment in meaningful ways. It is important that the action we are doing habitually is our own, even if we do not endorse the springs of the action.

Three phases of integration: integrating my bodily senses, integrating basic gestures tied up with sensation (the interplay of motion and sensation), and reflecting on the intentions guiding an action.

The perspective of ownership is active, sensory awareness. As long as you can be actively aware through sensation of the purpose guiding a piece of behavior, then you own that action.
This makes sense of the Anarchic Hand cases: the person cannot experience the purpose guiding the behavior, and so does not own the action.

And also the OCD cases: the person can experience the purpose guiding the behavior, and so owns the action, even though she does not endorse it.

But when they say that the OCD patient owns the action but not the intention, they must be using a different sense of ‘ownership’ here. It would be better to talk of ‘endorsement’ of the intention.

A suggestion: perhaps there are fewer problems with mind-uploading if the mind and the environment are created at the same time. This goes along nicely with the point that mental attitudes cannot be fully decontextualized.
There is no clear distinction between the mind and the environment, because there is no mental content in abstraction from the context given by the environment.

Notes on Our Eleventh Meeting

We read a paper on the inability to predict the behavior of a leech (swimming vs. crawling) from its neural activity just prior to a stimulus to which these different behaviors are responses.

The paper brought to mind a famous study by Libet, which tried to measure when decision-making happens and how it relates to conscious awareness. Libet claimed to have shown that most decisions happen prior to conscious awareness of them. One thing he argued was that this shows we lack the kind of control required for free will. The picture seemed to suggest that conscious awareness is epiphenomenal.

Many people in action theory think that the Libet study did not show what Libet said it did.

Different theories of attention:
Bottom-up: what we pay attention to is out of our conscious control; we have developed (evolutionarily) mechanisms that determine what grabs our attention.

But must admit that memory, not just evolution, plays a role (e.g., if you remember that something is dangerous, it will grab your attention).

Top-down: sure, some things can grab your attention, but there are things you can do to block them out (e.g., remembering that you are looking for someone in a red sweater in a crowded room), and there must be some top-down control that alters the mechanism determining what you pay attention to.

If the leech had top-down control, it could alter the way it responded to the stimulus.

Some people claim that it is purely top-down (e.g., a kid with no experience with fire will be drawn to it, not repelled from it, until she learns it is dangerous).

The main issue is how they relate to each other in terms of working memory.

Does this issue relate to David Marr’s analysis, which we talked about a while ago?
Perhaps. Proponents of the bottom-up approach skip over any role for working memory. The top-down approach claims a big role for working memory, i.e., the algorithm runs through your working memory.
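To make the contrast concrete, here is a minimal toy sketch (not from either paper we read) of how bottom-up salience and a top-down template held in working memory might jointly determine what gets attended. All names, weights, and the combination rule are invented for illustration.

```python
# Toy sketch: bottom-up salience plus a top-down goal held in working memory.
# Everything here (feature names, weights, combination rule) is hypothetical.

def select_attention_target(stimuli, working_memory_goal=None, top_down_gain=2.0):
    """Pick the stimulus with the highest combined priority score.

    stimuli: list of dicts with 'name', 'salience' (bottom-up, 0..1),
             and 'features' (a set of labels, e.g. {'red', 'sweater'}).
    working_memory_goal: optional set of features currently held in working
             memory (the top-down search template).
    """
    def priority(stimulus):
        score = stimulus['salience']                 # bottom-up contribution
        if working_memory_goal:
            overlap = len(working_memory_goal & stimulus['features'])
            score += top_down_gain * overlap         # top-down boost from the goal
        return score

    return max(stimuli, key=priority)


crowd = [
    {'name': 'loud noise', 'salience': 0.9, 'features': {'loud'}},
    {'name': 'person in red sweater', 'salience': 0.3, 'features': {'red', 'sweater'}},
]

# With nothing in working memory, the loud noise captures attention (bottom-up).
print(select_attention_target(crowd)['name'])
# With a search template in working memory, the red sweater wins (top-down).
print(select_attention_target(crowd, working_memory_goal={'red', 'sweater'})['name'])
```

The sketch simply makes the sticking point visible: on a purely bottom-up story the second argument never matters, while the top-down story needs something like it.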

The sticking point is whether you have any conscious control over the relevant processes (e.g., whether conscious awareness affects decisions).

How does this connect with what we are talking about?
The issues raised in this article and the previous one point to the conclusion that reverse engineering the human brain is not simply a matter of big data. There are enormous complexities: plasticity, figuring out the role of single neurons and of groups of neurons in behavioral outputs, etc.

The upshot is this: there are many complex issues to be worked out about the role of neurons and groups of neurons in the functional outputs of our brain, even given a complete map of the neuronal structure of the brain.

In this article, we have a central pattern generator: given a simple stimulus, there is a response. Once the choice is made, it runs on its own. But which choice gets made cannot be predicted from the central pattern generator, so it is unclear what the choice depends on. Once the mechanism is kick-started, we can tell what will happen. But what kick-starts the mechanism?
This generalizes: what accounts for our decision to begin walking with our right foot or left foot?
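A minimal sketch of this picture (purely illustrative, not a model from the paper): once a behavior is selected, the pattern unfolds deterministically, and nothing inside the generator explains what made the selection. The cycle contents are invented.

```python
# Toy central-pattern-generator picture: the selection input is exogenous,
# and the unfolding of the chosen behavior is deterministic.
# The cycle contents below are purely illustrative.

CYCLES = {
    'swim':  ['dorsal contraction', 'ventral contraction'],
    'crawl': ['elongate', 'anchor front', 'contract', 'anchor rear'],
}

def run_cpg(selection, steps=8):
    """Deterministically unfold the chosen behavior for a fixed number of steps."""
    cycle = CYCLES[selection]
    return [cycle[t % len(cycle)] for t in range(steps)]

# Given the selection, the output is fully predictable...
print(run_cpg('swim'))
# ...but nothing inside run_cpg explains why 'swim' rather than 'crawl' was chosen.
```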

The paper seems to support a top-down approach: there is some control over when the mechanism becomes engaged, even though the behavior unfolds without any need for conscious control after the mechanism has been engaged (e.g., chewing is like this; so is walking: once started, you’ll go until your brain tells you to stop).

In the leech case: it seems from this study that which choice is made—which mechanism gets selected, swimming or crawling—is not determined by neurons internal to the mechanism that produces these behaviors. There is something else that determines which choice gets made (perhaps the rest state prior to the stimulus).

But remember: the neurons internal to the mechanism could very well overlap with other systems, being involved in multiple mechanisms.

What we have here is a very simple brain, a well-defined, simple mechanism, a choice between two behaviors given a single stimulus, and yet we still cannot predict with accuracy what will result.
This makes it look very doubtful that we will be able to predict human behavior from a good understanding of the structure of the human brain anytime soon. Those predicting uploading in the near future seem to be way too optimistic.

The conclusion of this paper: either (i) choice depends on rest state prior to stimulus or (ii) the system is reset each time and then behaves non-deterministically after stimulus.

If the hypothesis that the behavioral output depends on the rest state prior to the stimulus is correct, then it seems in principle possible to acquire the information required for predictive success.
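A toy sketch of what testing that hypothesis might look like, using entirely synthetic data and invented rest-state features: if the choice really depends on the pre-stimulus rest state, even a simple classifier trained on rest-state measurements should predict swim vs. crawl better than chance.

```python
# Synthetic test of hypothesis (i): predict the behavior from pre-stimulus
# rest-state features. Data and features are invented for illustration only.
import random

random.seed(0)

def synthetic_trial():
    """One fake trial: (rest-state features, eventual behavior)."""
    behavior = random.choice(['swim', 'crawl'])
    # Assume (for illustration) one rest-state feature shifts with the upcoming
    # behavior, plus noise; a second feature is pure noise.
    bias = 1.0 if behavior == 'swim' else -1.0
    features = [bias + random.gauss(0, 1.5), random.gauss(0, 1.0)]
    return features, behavior

trials = [synthetic_trial() for _ in range(400)]
train, test = trials[:300], trials[300:]

# Nearest-centroid classifier: average the rest-state features per behavior.
def centroid(label):
    rows = [f for f, b in train if b == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

centroids = {label: centroid(label) for label in ('swim', 'crawl')}

def predict(features):
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

accuracy = sum(predict(f) == b for f, b in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")  # well above 0.5 if hypothesis (i) holds
```

Of course, the sketch assumes we already know which measurements count as the rest state, which is exactly the question raised next.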

But how do you define rest state? Of the whole system? Of the mechanism?

What about plasticity and changes in connective patterns? When does one neuron inhibit another?

But, given enough trials, shouldn’t we be able to rule out different possibilities and fine-tune our predictive models?

It is amazing that these studies even give us useful data. They involve slicing open live leeches, interfering with their bodies, brains, neurons, etc. Wouldn’t we expect these interventions to interfere with the normal functioning of the systems?