The paper brought to mind a famous study by Libet, which tried to measure when decision-making happens
and how it relates to conscious awareness. Libet claimed to have shown that
most decisions happen prior to conscious awareness of them. One thing he argued
was that this showed that we lack the kind of control required for free will.
The picture seemed to suggest that conscious awareness is epiphenomenal.
Many people in action theory think that the Libet study did not show what Libet said it did.
Different theories of attention:
Bottom-up: what we pay attention to
is out of our conscious control; we have developed (evolutionarily)
mechanisms that determine what grabs our attention.
But must admit that memory, not
just evolution, plays a role (e.g., if you remember that something is
dangerous, it will grab your attention).
Top-down: sure there are some
things that can grab your attention, but there are things you can do to block
out things that grab your attention (e.g., remembering that you are looking for
someone in a red sweater in a crowded room); and there must be some top-down
control that alters the mechanism that determines what you are paying attention
to.
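The interplay described above can be sketched as a toy model: a stimulus-driven salience score plus a top-down gain on goal-relevant features, as in the red-sweater example. Everything here (the item names, feature sets, and numbers) is an illustrative assumption, not anything from the paper or from attention research.

```python
# Toy model of attention selection: bottom-up salience plus a
# top-down gain on goal-relevant features. All names and numbers
# below are illustrative assumptions.

def attention_score(salience, features, goal_features, gain=3.0):
    """Combine stimulus-driven salience with a goal-driven boost."""
    boost = gain * len(features & goal_features)
    return salience + boost

# A crowded room: each item has an innate salience and some features.
items = {
    "loud noise":   (5.0, {"loud"}),
    "red sweater":  (1.0, {"red", "sweater"}),
    "blue sweater": (1.0, {"blue", "sweater"}),
}

goal = {"red", "sweater"}  # "looking for someone in a red sweater"

scores = {name: attention_score(s, f, goal) for name, (s, f) in items.items()}
attended = max(scores, key=scores.get)
# With the goal active, the red sweater outcompetes the innately
# salient loud noise; with an empty goal, the loud noise would win.
```

The point of the sketch is only that a purely bottom-up model would stop at the salience term, while a top-down model lets a remembered goal rescale what the mechanism selects.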
If the leech had top-down control,
it could alter the way it responded to the stimulus.
Some people claim that it is purely
top-down (e.g., a kid with no experience with fire will be drawn to it, not repelled
from it, until she learns it is dangerous).
The main
issue is how they relate to each other in terms of working memory.
Does this issue relate to David Marr’s analysis, which we
talked about a while ago?
Perhaps. Proponents of the bottom-up
approach skip a role for working memory. The top-down approach claims a big role
for working memory; i.e., the algorithm goes through your working memory.
The sticking point is whether you
have any conscious control over the relevant processes. (e.g., conscious awareness
affecting decisions).
How does this connect with what we are talking about?
The issues raised in this article
and the previous one point to the conclusion that the problem of reverse
engineering the human brain is not simply a matter of big data. There are
enormous complexities—plasticity, figuring out the role of single and groups of
neurons in behavioral outputs, etc.
The upshot is this: there are many
complex issues to be worked out about the role of neurons and groups of neurons
in the functional outputs of our brain, even given a complete map of the
neuronal structure of the brain.
In this article: we have a central pattern generator, such
that given a simple stimulus, we have a response. Once the choice is made, it
goes on its own. But what choice it makes cannot be predicted from the central
pattern generator. So it is unclear what the choice depends on. Once the
mechanism is kick-started, we can tell what will happen. But what kick-starts
the mechanism?
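The picture of a central pattern generator can be made concrete with a minimal sketch: once a motor program is selected and kick-started, it unfolds on its own, but the selection itself comes from outside the generator. The program names and step sequences are invented for illustration; they are not taken from the paper.

```python
# Toy central pattern generator: the choice of program is an input
# the generator itself cannot predict; after the kick-start, the
# behavior unfolds deterministically. Illustrative sketch only.

PROGRAMS = {
    "swim":  ["bend left", "bend right"] * 3,
    "crawl": ["elongate", "anchor", "contract"] * 2,
}

def run_cpg(choice):
    """Deterministic unfolding once the choice has been made."""
    for step in PROGRAMS[choice]:
        yield step

# The stimulus triggers *a* behavior, but which one is decided
# upstream of the generator (here, simply passed in from outside).
steps = list(run_cpg("swim"))
```

Nothing inside `run_cpg` determines whether `"swim"` or `"crawl"` was passed in; that is the analogue of the open question about what kick-starts the mechanism.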
This generalizes: what accounts for
our decision to begin walking with our right foot or left foot?
The paper seems to support a
top-down approach: there is some control over when the mechanism becomes
engaged, even though the behavior unfolds without need for conscious control
after the mechanism has been engaged (e.g., chewing is like this. So is
walking—once started, you’ll go until your brain tells you to stop).
In the leech case: it seems from this study that what choice
is made, which mechanism gets selected, swimming or crawling, is not determined
by neurons internal to the mechanism that produces these behaviors. There is
something else that determines which choice gets made (perhaps rest state prior
to stimulus).
But remember: the neurons internal
to the mechanism could very well overlap with other systems, involved in
multiple mechanisms.
What we have here is a very simple brain, a well-defined,
simple mechanism, a choice between two behaviors given a single stimulus, and
yet we still cannot predict with accuracy what will result.
This makes it look very doubtful
that we will be able to predict human behavior from a good understanding of the
structure of the human brain anytime soon. Those predicting uploading in the
near future seem to be way too optimistic.
The conclusion of this paper: either (i) choice depends on
rest state prior to stimulus or (ii) the system is reset each time and then
behaves non-deterministically after stimulus.
If the hypothesis is correct that the behavioral output
depends on the rest state prior to the stimulus, then it seems in principle
possible to acquire the required information for predictive success.
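The predictive strategy under hypothesis (i) can be sketched as follows: record (rest state, behavior) pairs over many trials, then check whether even a crude model beats chance. The data below are simulated under the rest-state hypothesis; the scalar "rest state," the probabilities, and the threshold are all assumptions for illustration, not measurements from the study.

```python
# Sketch of hypothesis (i): if the choice depends on the rest state
# just before the stimulus, a simple model fit over many trials
# should predict behavior above chance. Simulated data only.

import random

random.seed(0)

def simulate_trial():
    """One trial: a scalar 'rest state' that biases the choice."""
    rest = random.random()          # pre-stimulus activity level (assumed)
    p_swim = 0.1 + 0.8 * rest       # higher rest -> more likely to swim
    behavior = "swim" if random.random() < p_swim else "crawl"
    return rest, behavior

trials = [simulate_trial() for _ in range(2000)]

# The crudest possible predictive model: a single threshold.
threshold = 0.5
correct = sum(
    ("swim" if rest > threshold else "crawl") == behavior
    for rest, behavior in trials
)
accuracy = correct / len(trials)
```

Under hypothesis (ii), where the system resets and then behaves non-deterministically, no such model could do better than the base rate, which is one way repeated trials could in principle discriminate between the two conclusions.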
But how do you define rest state? Of the whole system? Of the mechanism?
What about plasticity and changes in connective patterns?
When does one neuron inhibit another?
But, given enough trials, shouldn’t we be able to rule out
different possibilities and fine-tune our predictive models?