Wednesday, May 1, 2013

Notes on Our Twelfth Meeting


We read a paper that criticizes several contemporary theories of the self and offers a different view based on the phenomenological tradition, according to which the self essentially involves one's felt experience in the world.

Abstractions are constructs built on something more fundamental—in this case, the abstract behavioral components are abstractions from contextualized behavior. This may cause problems for familiar discussions of mind-uploading.

First- and third-person perspectives:
The thesis: shifting between the first- and third-personal perspectives may distort one or the other. If you start reflecting, trying to make yourself into an object (the third-personal stance), this distorts subjectivity.

Two takes:
The third-personal perspective as the view from nowhere—totally abstracts away from one’s particularity. Detachment is what distinguishes the third-personal from the first-personal.

Phenomenologists’ view: think of third-personal in terms of intersubjectivity—seeing how one’s particular perspective fits in a law-governed way with other particular (and some actual) perspectives.

The first-person point of view, and the way it is experienced, have been neglected by the tradition. It is often treated in a third-personal way, abstracting away from the particularities that make it the particular perspective it is.

The metaphor of a point of view: a point of view from nowhere is incoherent, because a point of view involves a particular perspective on a situation.

Cases in which action, intention and bodily ownership can come apart:

“My hand is moving, but I don’t know why.” In the Anarchic Hand Syndrome case, there is recognition that the behavior is intentional, but no recognition of the intention guiding the arm. Ownership of the arm, but not the action. No felt connection between the intention and the action.

“I have to do this, but I don’t want to.” In OCD, there is recognition of the intention, but it is experienced as foreign. Ownership of the action, but not the intention. A felt connection between the intention and the action.

It seems, then, that felt connection between the intention and the action is sufficient for ownership of an action (where ownership is not endorsement).

Merleau-Ponty: habits are common ways of organizing ourselves in the environment in meaningful ways. It is important that the action we are doing habitually is our own, even if we do not endorse the springs of the action.

Three phases of integration: integrating my bodily senses, integrating basic gestures tied up with sensation (motion-sensation interplay), reflection on intentions guiding an action.

The perspective of ownership is active, sensory awareness. As long as you can be actively aware through sensation of the purpose guiding a piece of behavior, then you own that action.
This makes sense of the Anarchic Hand cases: the person cannot experience the purpose guiding the behavior, and so does not own the action.

And also the OCD cases: the person can experience the purpose guiding the behavior, and so owns the action, even though she does not endorse it.

But when they say that the OCD patient owns the action but not the intention, it seems they must have a different sense of ‘ownership’ here.
            Better to talk of ‘endorsement’ of the intention here.

A suggestion: perhaps there are fewer problems with mind-uploading if the mind and the environment are created at the same time. This goes along nicely with the point that mental attitudes cannot be fully decontextualized.
There is no clear distinction between the mind and the environment, because there is no mental content in abstraction from the context given by the environment.

Notes on Our Eleventh Meeting

We read a paper on the inability to predict the behavior of a leech (swimming vs. crawling) from its neural activity just prior to a stimulus to which these different behaviors are responses.

The paper brought to mind a famous study by Libet: tried to measure when decision-making happens and its relationship to conscious awareness. Libet claimed to have shown that most decisions happen prior to conscious awareness of them. One thing he argued was that this showed that we lack the kind of control required for free will. The picture seemed to suggest that conscious awareness is epiphenomenal.

Many people in action theory think that the Libet study did not show what Libet said it did.

Different theories of attention:
Bottom-up: what we pay attention to is out of our conscious control; we have developed (evolutionarily) mechanisms that determine what grabs our attention.

But must admit that memory, not just evolution, plays a role (e.g., if you remember that something is dangerous, it will grab your attention).

Top-down: sure, some things can grab your attention, but there are things you can do to block them out (e.g., keeping in mind that you are looking for someone in a red sweater in a crowded room); and there must be some top-down control that alters the mechanism determining what you are paying attention to.

If the leech had top-down control, it could alter the way that it responded to the stimulus.

Some people claim that it is purely top-down (e.g., a kid with no experience with fire will be drawn to it, not repelled from it, until she learns it is dangerous).

The main issue is how they relate to each other in terms of working memory.

Does this issue relate to David Marr’s analysis, which we talked about a while ago?
Perhaps. Proponents of the bottom-up approach skip a role for working memory. The top-down approach claims a big role for working memory—i.e., the algorithm goes through your working memory.
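
A toy way to see how the two approaches might combine (our own sketch, not anything from the readings): give each candidate object a bottom-up salience score and a top-down gain set by what is held in working memory, and let attention go to whichever combination is highest.

```python
# Toy "priority" model: bottom-up salience plus a top-down gain supplied by
# working memory (e.g., "look for the red sweater"). All numbers are invented.

items = {
    "loud noise":      {"salience": 0.9, "goal_relevance": 0.1},
    "red sweater":     {"salience": 0.3, "goal_relevance": 1.0},
    "moving stranger": {"salience": 0.6, "goal_relevance": 0.2},
}

def priority(item, top_down_weight):
    """Combine bottom-up salience with goal-driven (top-down) gain."""
    return item["salience"] + top_down_weight * item["goal_relevance"]

# Pure bottom-up (no top-down control): the loud noise wins.
print(max(items, key=lambda k: priority(items[k], top_down_weight=0.0)))

# With a goal held in working memory: the red sweater wins.
print(max(items, key=lambda k: priority(items[k], top_down_weight=1.5)))
```

On this toy picture, the sticking point below is whether the top-down weight is ever under conscious control.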

The sticking point is whether you have any conscious control over the relevant processes (e.g., conscious awareness affecting decisions).

How does this connect with what we are talking about?
The issues raised in this article and the previous one point to the conclusion that the problem of reverse engineering the human brain is not simply a matter of big data. There are enormous complexities—plasticity, figuring out the role of single neurons and groups of neurons in behavioral outputs, etc.

The upshot is this: there are many complex issues to be worked out about the role of neurons and groups of neurons in the functional outputs of our brain, even given a complete map of the neuronal structure of the brain.

In this article: we have a central pattern generator, such that given a simple stimulus, we have a response. Once the choice is made, it goes on its own. But what choice it makes cannot be predicted from the central pattern generator. So it is unclear what the choice depends on. Once the mechanism is kick-started, we can tell what will happen. But what kick-starts the mechanism?
This generalizes: what accounts for our decision to begin walking with our right foot or left foot?

The paper seems to support a top-down approach: there is some control over when the mechanism becomes engaged, even though the behavior unfolds without need for conscious control after the mechanism has been engaged (e.g., chewing is like this; so is walking—once started, you'll go until your brain tells you to stop).

In the leech case: it seems from this study that what choice is made (which mechanism gets selected, swimming or crawling) is not determined by neurons internal to the mechanism that produces these behaviors. There is something else that determines which choice gets made (perhaps the rest state prior to the stimulus).

But remember: the neurons internal to the mechanism could very well overlap with other systems, involved in multiple mechanisms.

What we have here is a very simple brain, a well-defined, simple mechanism, a choice between two behaviors given a single stimulus, and yet we still cannot predict with accuracy what will result.
This makes it look very doubtful that we will be able to predict human behavior from a good understanding of the structure of the human brain anytime soon. Those predicting uploading in the near future seem to be way too optimistic.

The conclusion of this paper: either (i) choice depends on rest state prior to stimulus or (ii) the system is reset each time and then behaves non-deterministically after stimulus.

If the hypothesis is correct that the behavioral output depends on the rest state prior to the stimulus, then it seems in principle possible to acquire the information required for predictive success.
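
As a rough illustration of what "in principle possible" might amount to (our own sketch with synthetic data, not the paper's analysis): if the rest state carries information about the upcoming choice, a simple classifier trained on pre-stimulus activity should predict swim vs. crawl better than chance.

```python
# Sketch: can a classifier predict swim vs. crawl from the pre-stimulus
# "rest state"? The data are synthetic; dimensions and features are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 20

# Synthetic pre-stimulus firing rates; one hidden direction biases the choice.
rest_state = rng.normal(size=(n_trials, n_neurons))
bias_direction = rng.normal(size=n_neurons)
p_swim = 1 / (1 + np.exp(-rest_state @ bias_direction))
choice = rng.random(n_trials) < p_swim   # True = swim, False = crawl

scores = cross_val_score(LogisticRegression(max_iter=1000),
                         rest_state, choice, cv=5)
print("cross-validated accuracy:", scores.mean())
```

The hard part, of course, is the next question: what counts as the rest state, and of which system.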

But how do you define rest state? Of the whole system? Of the mechanism?

What about plasticity and changes in connective patterns? When does one neuron inhibit another?

But, given enough trials, shouldn’t we be able to rule out different possibilities and fine-tune our predictive models?

It is amazing that these studies even give us useful data. They involve slicing open live leeches and interfering with their bodies, brains, neurons, etc. Wouldn't we expect these interventions to interfere with the normal functioning of the systems?

Thursday, April 18, 2013

Notes on Our Tenth Meeting

We read a paper discussing a brain-machine interface (BMI) involving macaques who learned to manipulate a mechanical arm via implanted electrodes. Here is some of what we talked about.


Does the BMI involve a physical connection with the brain?

There are different methods of measuring brain activity, with different profiles in terms of temporal and spatial precision. This one used implanted probes measuring electrical activity. This has the disadvantage of killing or damaging brain tissue.

Multi-unit recorders last longer than single-unit recorders.

They also showed that larger samples were yielding more accurate predictions.
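
A minimal sketch of that point (synthetic data and a toy linear decoder of our own, not the paper's actual pipeline): predict a two-dimensional arm position from population spike counts, and watch the error shrink as more recorded units are included.

```python
# Toy linear population decoder: hand position from spike counts.
# Synthetic data; the point is only that more units -> better predictions.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_units = 500, 64

true_weights = rng.normal(size=(n_units, 2))           # units -> (x, y)
spikes = rng.poisson(lam=5.0, size=(n_samples, n_units)).astype(float)
hand_pos = spikes @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

def decoding_error(n_used):
    X = spikes[:, :n_used]
    W, *_ = np.linalg.lstsq(X, hand_pos, rcond=None)   # fit linear decoder
    return np.mean((X @ W - hand_pos) ** 2)            # mean squared error

for n_used in (8, 16, 32, 64):
    print(n_used, "units -> MSE", round(decoding_error(n_used), 1))
```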

They show that brain activity is not as localized as previous models suggest—at least with respect to these tasks.

The event of an individual cell firing seems to be important, even though no one cell or set of cells is always implicated in a specific behavior and different cell units can underwrite the same behavior. We just don't know enough about what is going on in every case: we don't always know if there is redundancy; we don't always know if the cells firing in a given case are merely transferring information, as opposed to originating a sequence of information transfer; etc.

3 things:
1.     the brain does not always make up for missing parts.
2.     Redundancy: multiple sets of neurons that perform (roughly) the same function
3.     Plasticity: the remaining parts of the brain re-learn how to do stuff (that they were not engaged in previously)
Age, etc. matters for plasticity (e.g., infants having half of brain removed but developing relatively normally)

Ablation studies: they inject something really nasty into the brain to kill a local area of neurons. They then want to say that killing these neurons had some effect, so we can infer that this region does a certain thing. But this only underwrites inferring that the relevant area is implicated in the process that issues in the behavior, not that the behavior originates in or is localized there.
           
It’s much easier to establish that a region is not necessary for something than that it is sufficient for something.

An interesting portion of the 'Discussion' section of the paper noted this: The way the brain learns to use the artificial arm is by engaging with it and using it, and this engagement and use does not rely on the same informational inputs as in the normal case. In the case where there is an artificial arm, there is no proprioceptive input, just the informational input from vision and the representation of the goal. The brain is shaped by the way that it receives information about the location of the arm and by the goals of the situation. This is interesting because it makes the representation of the goal more relevant to brain structure once the input from proprioception is eliminated.

Proprioception is important in the normal case, but it is not essential. The brain can still learn and manipulate in the absence of input from proprioception. Then the representation of the goal becomes more important than in the normal case.

But: Is vision part of proprioception?

Not ‘proprioception’ in the technical sense. This references a specific set of nerves that are responsive to stretching, firing more when stretched and less when not. How much they are stretched usually depends on the positioning of your limbs.

This is interesting in relation to issues raised by the work of Merleau-Ponty and others. The exciting part here is that there is evidence of informational input to action (in the normal case) that comes from the body to the mind controlling the action.

2 questions:
1.     What part of the brain is causing the body to move?
2.     Why did someone do something, where the answer is given in terms of the mind, conceived of as something other than simply identical to the brain?

The important idea for the M-P picture is that inputs and outputs are not distinct and distinctly directional in the way that the Cartesian picture (the ghost in the machine) envisions.

There is a connection here to old-school cybernetics, understood as the rigorous study of machines as information-transforming systems. A machine is something that takes a vector of information as input and produces a vector of information as output; it transforms one vector into another.
           
On this view, there could be no ghost distinct from the machine.
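
A minimal sketch of that picture (our own illustration, not anything from the reading): the machine just is a transformation from input vectors to output vectors, possibly with internal state, and there is nothing further "inside" for a ghost to occupy.

```python
# Old-school cybernetics, as a toy: a machine is nothing over and above a
# (possibly stateful) transformation of input vectors into output vectors.

from typing import Callable, List, Tuple

Vector = List[float]
Transform = Callable[[Vector, Vector], Tuple[Vector, Vector]]

class Machine:
    def __init__(self, transform: Transform, initial_state: Vector):
        self.transform = transform
        self.state = initial_state

    def step(self, inputs: Vector) -> Vector:
        outputs, self.state = self.transform(inputs, self.state)
        return outputs

# Example machine: outputs a running sum of everything it has received.
def running_sum(inputs: Vector, state: Vector) -> Tuple[Vector, Vector]:
    new_state = [state[0] + sum(inputs)]
    return new_state, new_state

m = Machine(running_sum, initial_state=[0.0])
print(m.step([1.0, 2.0]))  # [3.0]
print(m.step([4.0]))       # [7.0]
```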

(Nowadays, 'cybernetics' means something more like the study of computer science generally, or the implanting of devices into the human body.)

This view entails that anything that the body responds to becomes a part of the system, which seems to be a claim that M-P would like.

From the biologist's point of view it is important to distinguish between where you end and where the car begins. From this perspective, the BMI is better thought of as brain expansion. But there are other points of view that do not see it as necessary to make this distinction.

Tuesday, April 9, 2013

Notes on Our Ninth Meeting

We are back from hiatus now. Here are some notes from our discussion of Nick Bostrom's "Are You Living in a Computer Simulation?":


Bostrom's simulation argument is very similar to Cartesian skepticism and brain in a vat cases, but it’s not clear what more it adds.

Perhaps it adds some detail and a narrative

But it does not seem to be in any significant way different from the earlier, familiar skepticism

Bostrom aims to establish the following disjunction: either (1) humanity will very likely not reach a posthuman stage; or (2) posthumans are very unlikely to run ancestor simulations; or (3) we are very likely living in a computer simulation.

The claim that seems to be at the heart of Bostrom's argument for (3): if it’s possible that posthumans will run ancestor simulations, then it’s probable that we are in a simulation. This has to do with the supposed high number of simulations that would be run and the high number of individuals in each simulation.

(NB: this is just a consideration in favor of (3), not his overall conclusion, which is that the disjunction of (1) or (2) or (3) is true.)
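
One way to see the force of this (a reconstruction of the paper's bookkeeping from memory, so treat the notation as a sketch): let $f_P$ be the fraction of human-level civilizations that reach a posthuman stage, $\bar{N}$ the average number of ancestor simulations such a civilization runs, and $\bar{H}$ the average number of individuals who lived in a civilization before it became posthuman. The fraction of human-type observers who are simulated is then

\[
f_{\text{sim}} = \frac{f_P\,\bar{N}\,\bar{H}}{f_P\,\bar{N}\,\bar{H} + \bar{H}} = \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1},
\]

which is close to 1 whenever $f_P\,\bar{N}$ is large. That is the sense in which denying (1) and (2) puts pressure on us to accept (3).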

The disjunction is interesting because the three disjuncts are independently interesting. It is also interesting because those who write on these topics seem to generally hold that both (1) and (2) are false, which then suggests that we should take (3) very seriously.

Why an “ancestor simulation” as opposed to a simulation of intelligent creatures more generally?
            Perhaps because of motivation for self-knowledge

But: what about simulating other intelligences that are discovered but not one’s own ancestors?

Anyway, taking more simulations into account would seem to strengthen the argument, especially for the conclusion that we should give a high level of credence to the belief that we live in a simulation.

What probability are we to assign each disjunct?

"Stacked" simulations (simulations embedded in other simulations) put enormous pressure on the base computers (the computers that, in reality, are running the simulations), which threatens the entire structure. If the base computer crashes, then the whole thing crashes.

See p. 11: if they are running an ancestor simulation, then how could the actual laws diverge from those that hold in the simulation?
Perhaps there are multiple universes, not governed by the same laws, and such that some are more fundamental than others, and posthumans would come to live in a different universe, more fundamental than our own, and then simulate their ancestors, who would only be able to observe our actual universe (at least at some points in the simulation).

But: it’s not clear that this is even feasible, given current views about theoretical physics.

Even if posthumans want to understand their own workings, why would this lead them to create a large number of ancestor simulations?

Some interesting conclusions:
1.     it’s more likely than not that we are simulations (this seems doubtful)
2.     it is possible that we are simulations (this probably stands, just as it is possible that we are brains in vats)

The evidential basis for us being computer simulations seems stronger than that for us being brains in vats; but the epistemological consequences might be the same.

The disjuncts are themselves claims about probability, but that is not yet to assign a probability to any of the disjuncts. You could accept Bostrom's conclusion (that the disjunction is true) while denying any one of the disjuncts. Indeed, this seems to be one reason why the argument is interesting: many seem inclined to deny (1) and (2), and so should accept (3).

How does this all relate to immortality?
Would recurrence in infinite (or a huge number of) simulations amount to immortality?

There are issues of personal identity: is a simulated me identical to the actual me? There may be an amount of information that must be captured in order for us to claim that it is the same individual, even if we do not capture all of the information relevant to what constitutes that individual's mind.

Consider the film we watched during our first meeting, “Life Begins at Rewirement,” where we have a simulation that runs indefinitely long. Does this count as a kind of immortality?

It seems that a simulated individual A might be identical to a simulated individual B, even if we grant that a simulated individual C could not be identical to a non-simulated individual D. In other words, it seems easier to see how to get from a simulated individual to an identical simulated individual, than from a non-simulated individual to an identical simulated individual. In the former case, we can sidestep issues related to Bostrom's "substrate independence thesis."

(Notice: Bostrom simply brushes off Searle’s critique of strong AI.)

Some possible criteria for individuating simulated individuals that are qualitatively identical:

Location on a computer chip: qualitatively identical individuals would still depend on functional operations that occur in different parts of the physical substrate that constitutes the computer running the simulation.

Relational properties: B might have the property 'being a simulation of A,' which A would lack, and so this property might distinguish B from A.

Wednesday, February 27, 2013

Hiatus

We will be taking a break until the first week in April, when spring quarter begins. In the meantime, I will try to figure out a good time for everyone to meet and what we would like to take up when we reconvene. Suggestions most welcome.

Have a great end of the quarter and spring break.

Notes on our eighth meeting

We continued our discussion of Chalmers' singularity essay, beginning with Patrick's comment on the blog post from last week's meeting.


Patrick’s comment: How are we supposed to conceive of the extensions of intelligence and/or abilities that Chalmers talks about in sec 3?
            The idea is that the AI+(+) is an intelligence of a different kind

The way that AI+ will come about seems deeply dependent on what the abilities are.

One theme in phenomenology: consciousness/the mind is destined for the world—they are tied up in the context in which they make sense. For example, consider a proper-functioning view: we get an ability that distinguishes us from animals and that functions properly in a certain context.

But it’s not clear (a) how we can be said to extend these same abilities to new contexts and (b) how these extended abilities might be said to be better.

Success is always success in a context. But we do not have access to the stage relevant to the success of AI+. This is significant because it blocks our ability to predict success relevant to AI++.

A related point (perhaps the same point put another way): the Wittgensteinian idea that our concepts are built for this world, and certain kinds of counterfactuals cannot be properly evaluated because they outstrip the context of our language game

Perhaps: pick a very simple measure for evaluation (e.g., ability to generate wealth, efficiency)

Bergson: has an argument that every creature is the best example of its kind (Matter and Memory, at the end)

Is there a distinction to be made between a difference in degree and a difference in kind?
Perhaps we are responsible for assigning differences in kind given various differences in degree.

            But does this make the distinction irrelevant or uninteresting?

There are interesting issues here about reality, whether we can experience an objective reality or only ever a subjectively conditioned reality.

Will we ever reach a consensus regarding where to draw the line for a difference in kind? Perhaps, so long as we agree to some background presuppositions—e.g., whether to take a functional perspective or a perspective regarding material constitution.

What constitutes progress?
            Paradigm shifts, death of ideas, (greater or lesser) consensus?

Bostrom (2012) just defines intelligence as something like instrumental rationality
Are bacteria intelligent in the same way as modern AI? Yes, if we define reasoning behaviorally. And this definition of intelligence is easily measurable.

But is it safe to assume that the desire to have power over oneself and one’s environment is a prerequisite for success at survival?
            Is this what we think intelligent people have?

All living things modify their internal environment in order to better survive (bacteria, plants, humans, etc.)

Gray goo: a nanobot that builds a copy of itself and the apocalypse comes about because it replicates itself in an uncontrolled fashion, eating all life on earth to feed its end of copying itself.

A problem: We have AI, then pick the capacities we most care about, extend them into AI+, and then the extension to AI++ would no longer be a sort of being we would value. The idea is that the set of things extended comes to include fewer things we care about, to the point that AI++ does not contain anything that we care about.

If we assume that intelligence is instrumental rationality, then this will be ramped up to the exclusion of other interests. But we have a system of interconnected interests—we have cognitive interests, say, in individuating objects in perception. But this might not be maintained in the pursuit of maximizing instrumental rationality.

What does it mean to give a machine values? Give them ends, in the sense relevant to means-ends reasoning.

An argument that a superintelligence might be both moral and extinguish humanity:
Suppose consequentialism is right and AI++ discovers the true conception of well-being. It might be that in order to achieve this they need to wipe out human beings. This would result in a better state of affairs, but extinction for us.

How should we feel about this?

Many of these issues come to a similar problem: The production of an AI++ will involve a loss of some things we find very valuable, and this presents us with a problem. Should we pursue or should we inhibit or constrain the relevant progress in intelligence?
This is probably closely related to Chalmers’ claim that motivational obstacles are the greatest.

What sort of control do we have over the singularity?
            We could delay it, but for how long?
            We could stop it from happening on Earth, say, by blowing up the planet.
We could constrain the ways in which the possibility of the singularity occurring unfolds.

Friday, February 22, 2013

Notes on our seventh meeting


We discussed David Chalmers' "The Singularity: A Philosophical Analysis," which we will continue to discuss next time.

We began by noting Chalmers’ moderate sense of ‘singularity’ (p. 3): referring to an intelligence explosion by a recursive mechanism, where successively more intelligent machines arise that are better at producing even more intelligent machines.

We also noted a nice distinction Chalmers makes (in the spirit of Parfit): identity vs survival

Parfit on Personal Identity: identity doesn’t really matter that much; trying to persuade us to get less attached to notions of identity
Eric Schwitzgebel’s view: it is convenient to have a logic with clean lines between people for us (we don’t fission, duplicate, upload), but in weird cases, this logic does not model well, so should switch to modeling what you care about (e.g., survival).

But practical issues remain (e.g., who pays the mortgage).

Enhancement: much of this has already happened
The Flynn effect: IQs increase across generations, which requires re-calibrating the IQ test to keep the norms in a certain range
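
For reference (this is the standard deviation-IQ convention, not something in the reading): each cohort's scores are re-normed so that

\[
\mathrm{IQ} = 100 + 15\,\frac{x - \mu_{\text{cohort}}}{\sigma_{\text{cohort}}},
\]

so the published mean sits at 100 by construction; the Flynn effect only shows up when raw scores on the same items are compared across cohorts.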

There is room for skepticism about measuring general intelligence: (i) perhaps we are better test-takers; (ii) there are multiple intelligences, and IQ-style tests don't test for many (or even most) of them.

In sec 3 of Chalmers' essay: notice the embedding of ‘what we care about’ in the characterization of the relevant capacities. This is in line with the Parfitian approach to identity.

Values: There are many, complex issues here
            How to define them
            How to identify them
            Subjective vs objective
            Universal values (e.g., cross-cultural, across times)

3 different senses of ‘objectivity’ for values: judgment-independent, choice-independent, human nature-independent

Kant vs Hume:
            An issue about whether mistakes in value are mistakes in rationality (Hume: no; Kant: yes).
            And what does this entail about the moral behavior of AI+(+)?

See the new Pinker book, where he argues that we have become both more intelligent and more moral over time.

Two senses of morality over time: across generations vs. over the course of an individual’s life
            It seems that older people have more sophisticated moral reasoning, but this is a distinct
            question from whether different cultures have more or less sophisticated moral reasoning, and
            also from the issue of whether one culture is more correct in its moral practices than another.

There are important things that transcend a particular context: e.g., math, logic
            Perhaps the survival instinct is another one

A distinction: one's moral beliefs vs. one's behavior

Another distinction: immortality vs longevity

Obstacles: Chalmers claims that motivational obstacles are the most likely to stop the singularity from coming
            Is this right? Why, exactly, does he think this?
Perhaps there are structural obstacles: the intelligence growth becomes too hard, diminishing returns
Energy needs: can be a situational obstacle, but can also be tied to a motivational obstacle
And because the energy requirements become greater, this can lead to a single system, a single entity, and then it would all depend on its motivation

Some related issues:
Matrioshka brain: concentric shells around the sun, using all of its energy; a Dyson-sphere brain

Kurzweil’s sixth epoch

The Fermi paradox: the odds are not good that we would be the first to reach superintelligence, so we should see evidence of others, but we don’t, so perhaps the process will stall out

Take-home messages from Chalmers' essay:
1.     a broadly functionalist account of the mind, such that we could be instantiated in a computer
              -So long as you have a nomologically possible world, conscious properties go with
              functional properties
2.     the real take-home: there’s a significant enough possibility of something like the singularity that we should seriously worry about it and consider how we are going to handle it

Wednesday, February 13, 2013

Notes on our sixth meeting

For this meeting, we read two more chapters in Kurzweil's The Singularity Is Near. Our discussion was rather wide-ranging and did not follow the text very closely. But it was interesting nonetheless.


We began with this question: Recall that Vinge distinguishes between AI and IA. In which of these ways does Kurzweil envision the Singularity coming about? That is, does Kurzweil think that the Singularity will arise in combination with our minds (IA), or else as a result of an artificial intelligence we produce (AI)?

The significance of this question has to do with the issue of mind-uploading. Why would we have to upload our minds to the Singularity, as Kurzweil suggested in the reading from last week, if the Singularity arises in combination with our minds?

An Answer: Kurzweil envisions a combination of the two: AI will lead to IA (e.g., Google), which will lead to strong AI in the future, which will then come back and beam us up to the heavens. In any case, the two approaches very much complement each other.

Kurzweil is suggesting that there will be an AI that is smarter than humans before the uploading. But not certain how it will occur.

Might IA involve uploading in the process of the Singularity coming about? The uploading enters the equation before the Singularity.

What exactly is uploading? A transfer. When a blow to the head no longer matters. A change in substrate. Technically: uploading means that one makes a copy, and then a copy of a copy. Not just plugging in.

One consideration against thinking that Kurzweil envisions a certain version of the IA route to the Singularity: Kurzweil doesn’t like the single global consciousness idea, because he thinks that it would preclude him being there. He assumes that his individual self would not persist.

This brings up issues about where to draw the boundary of the individual mind: These are salient not only for the picture where we are plugged in to a growing intelligence that eventually becomes the Singularity, but also for the picture according to which we are uploaded to a pre-existing Singularity.

How is Kurzweil using the term ‘the Singularity’? And how does this relate to Vinge’s use?: Kurzweil uses the term to refer to an event in human history, not necessarily a particular intelligence that comes into existence, as Vinge does. But Kurzweil does seem to have the arrival of this intelligence in mind.

Kurzweil’s focus on progress in intelligence seems myopic. There have been other periods of advancement in human history that have seen the same pattern of change (perhaps not quite as fast) in different areas of human experience. Why privilege the type of change that interests Kurzweil?

Kurzweil seems to greatly underestimate two things: (1) the limits of technology (we need more hardware as well as more code) and (2) the power of biology (he assumes that technology is better because our chemical synapses slow down our thinking—but there is more going on than just the transfer of electrical signals: there is a trade-off between speed and fine control, and it is not just signal transfer but also what goes on inside neurons).

Many of the signals required for higher thought don’t transfer info but rather change the way neurons behave—and even the nanobots might not be able to tell us all the ways in which the neurons are functioning

Because of the many complexities of how our brains work, it may be that in thought the robot person is slower than the human person, even though the robot is faster at transferring the electrical signals that carry information. For example, what look like limitations given our biology might be mechanisms that help to achieve optimum speed, given the various operations implicated in our minds' functioning.

Articles on creating a baby robot (one that they teach):
Stuck on certain tasks: e.g., trying to pay attention to what it is holding, and this is because its eyesight is too good and doesn’t discriminate enough
The key was to make its eyes worse

The process of life as it is may not be the most efficient way to do things, but it is hard to establish the stronger claim that it is not the most efficient way to do things.

Record to MP3 analogy, or live music to recording analogy: Music recorded on a record (in analog) has no gaps and so has a sound quality that cannot be matched by digital means (e.g., MP3).
Might the new medium be missing some qualitative characteristics of the old medium? And might these be essential to the experience? Can the same be said for different substrates for purported conscious experience?

The challenge is to 'the substrate independence thesis' (e.g., invoked by Bostrom).

Need to be careful: need to be aware if and when nostalgia plays a role in evaluation

Is evolution slow?
            Well it might seem so, only if one assumes that the environment changes slowly

Is there a good distinction to be made between biological advancement/evolution vs technological advancement/evolution?

The main consideration in favor of the distinction is that technological advancement/evolution essentially involves intentions and design by an intelligence. Biological evolution is normally considered to be a 'blind' process in that it is not guided by an intelligent hand.
 
In biology: random mutations give rise to new features that are more or less adaptive in the environment.

How does the environment influence the mutations?: by changing the rate, but not the kind—they are still random.

What is randomness in this context? Seems to be not by intelligent design.

So “evolution” cannot begin with an intentionally produced mutation

What exactly is evolution?

What is the difference between the other tool using animals and us, such that advancements according to our intentions are of a different category than advancements according to their intentions?

Humans make tools by reproducing things we’ve seen and making them better.

And other animals don’t pass down the acquired knowledge to future generations

In biological evolution: we are talking about the traits of a species.

In technological evolution: we can also talk about traits (e.g., a computer having wifi), but then we can distinguish between the processes that selected those traits.

There is a different set of useful predictions from intentional vs. unintentional adaptations. We use the label 'biological evolution' in certain contexts, and we use the label 'technological evolution' in others, and this distinction is useful. It is useful to talk about these two processes differently, because it makes certain things easier to discuss: (1) the extreme differences in the observed rates and (2) certain other predictions (e.g., the vastly increased capability of technology to make large jumps that break out of local maxima: a small change is detrimental, but a large change is possibly beneficial).
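
A small sketch of the local-maxima point (a toy example of our own, not from the reading): blind hill-climbing by small mutations stays stuck on a lower peak, while a large, designed jump can land on a higher one.

```python
# A fitness landscape with two peaks. Small accepted-only-if-better mutations
# from the lower peak cannot cross the valley; a large designed jump can.

import numpy as np

def fitness(x):
    return np.exp(-(x - 1.0) ** 2) + 2.0 * np.exp(-(x - 5.0) ** 2 / 0.5)

rng = np.random.default_rng(2)
x = 1.0                                   # start on the lower peak

for _ in range(10_000):                   # small-mutation "evolution"
    candidate = x + rng.normal(scale=0.05)
    if fitness(candidate) > fitness(x):   # keep only improvements
        x = candidate
print("after small mutations:", round(x, 2), round(fitness(x), 2))

designed_jump = 5.0                       # a large, intentional redesign
print("after a large jump:   ", designed_jump, round(fitness(designed_jump), 2))
```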

In Darwinian evolution: there is no such thing as revolutions, only evolutions; Darwinian evolution predicts unnecessary/inefficient intermediary steps that are not predicted by technological evolution. And Darwinian evolution is normally considered biological evolution.

The view in favor of the distinction seems to be that technological evolution originates in an intention. But stopping the causal chain at the intention can seem arbitrary from a certain point of view. The intention, after all, may just be a part of the event-causal order, and so it will have causes, and they will have causes, and so on. Thus, it seems to be an arbitrary stopping point from the perspective of causal explanation.

Friday, February 8, 2013

Notes on our fifth meeting


We started out with Patrick’s nice comment on the blog about Nietzsche. You can read it, below. This led to a discussion of related issues:

Is the Singularity a continuation of human existence? A particular human’s (i.e., Kurzweil’s) existence?

What constitutes 'fndamental' change? When is a change in degree a change in kind?

Are there limits to human progress and development?
It seems so: we can only think and extend our ideas in a human way, along a restricted range of options. These limits might not be known or knowable to us, but they are there all the same.

But: if we assume that we are essentially limited in certain ways, where do we draw the line? Before vaccines, we might have claimed that we are essentially subject to certain diseases. But now we do not think that.

One clear fundamental difference between humans and the Singularity: the Singularity will not be carbon-based.

But: There still must be matter that is a prerequisite for any existence. This is so, even if the Singularity stands to the matter that underlies it in a different relation than we stand to the matter that underlies us. (Is 'underlie' the right relation here?)
The Singularity can move through the space of information in a different way than we can move through physical space.

But this does not mean that the relation of Singularity to matter is different than that of human to matter. It seems to be a matter of salience.

Could envision, not the Singularity, but a collection of superhuman consciousnesses

A difference between the relation of the Singularity to its physical instantiation and me to my body: the Singularity can transfer to a different physical instantiation in a way I cannot (when one portion of the computer network goes down, a very different portion can keep the consciousness that is the Singularity going—perhaps even has been all along: multiple, parallel realization).


Recall from the Chomsky piece that there are different conceptions of underlying principles: behaviorism (copying) vs Chomsky (understanding). Perhaps Kurzweil is just using the copying conception. And perhaps he is getting mileage out of trading on the ambiguity between the two interpretations of ‘capturing underlying principles'.

An objection to the Input/output picture: it treats the mind as a black-box.

Views that call for filling in the black box: don’t need to appeal to a soul.

One might claim that mental states are strongly historical: they do not supervene on mere time-slices of functional organization; allows that physical systems count as minds partly in virtue of their past (cf. Dennett).

This is, perhaps, illustrated by Jason’s sprinx case: one imagines a sprinx thousands of years before evolution creates one. Have I seen a sprinx?

Distinction: the content of a mental state vs. something being a mental state
Less controversial to claim relevance of history to content (content externalism) than to say the same for being a mental state

A claim in physics: the universe is a state function
For any given state, future states can be predicted from it in ignorance of past states
All future time moments would be predicted the same, regardless of the past states leading to the given state
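
One way to put the claim formally (a sketch in generic notation, not a quotation from any physics text): the dynamics are memoryless, so

\[
P(S_{t+\Delta} \mid S_t, S_{t-1}, \dots) = P(S_{t+\Delta} \mid S_t),
\qquad\text{or, deterministically,}\qquad S_{t+\Delta} = F(S_t, \Delta).
\]

The earlier history matters only insofar as it is already encoded in the current state.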

Two issues:
1.     The rise of the Singularity
2.     Its enabling us to achieve immortality

There are many sub-issues for each of these two issues.

Just given a qualitative change in the intelligence, it does not follow that it cannot be us who survive.

In the personal identity literature, there are some who think it is not a matter of whether I continue, but whether there is the right kind of continuity for me to care about the one who continues.

Kurzweil is trying to live as long as he can, so that he can be around for the Singularity in order to achieve immortality

If it is a leap to a new form of intelligence, one that transcends human limitations, then it couldn’t be me, because it would be a different form of life. (Perhaps this was Russell’s point from earlier, or in the background of what he said.)

Varia:
A different view of uploading: not me in a computer, but a child of mine in a computer.

A good distinction: logical possibility vs natural possibility

The way the brain works (parallel processing) vs the way the computer processes (logic trees, etc.)

Didn’t the IA Singularity already occur?

Thursday, January 31, 2013

Notes on Our Fourth Meeting

I thought our discussion of the Vinge and Moravec pieces was really great. Thank you everyone for such interesting comments and questions. Since we will be continuing with this topic for at least a week or two longer, I hope the discussion continues to excite everyone.

Here are some of the highlights, as I recall, from this past Tuesday:

Both pieces ended on what seemed like different notes: Moravec sounded like something of a mystic or along the lines of a Buddhist or Hindu, with a much more positive slant to what he was saying, whereas Vinge seemed to express a sense of impending doom, or at least a worrisome outlook.

Some questions about motivation: What would the motivation of a superintelligent being (of the sort that the Singularity is characterized to be) be like? Human and animal motivation is shaped in large part by the need to find food and take care of other basic needs. What about an artificial superintelligence?

Some questions about intelligence: How do we define intelligence? What characteristics are essential for a recognizable form of intelligence (e.g., creativity, inspiration, nostalgia)? Could the Singularity possess these characteristics? In what way is the form of intelligence characteristic of the Singularity supposed to be beyond our ken? The form of intelligence of a mature adult human is beyond the ken of a baby human. Is there supposed to be a difference in the case of the Singularity's being beyond our ken? What is this difference?

Some questions pertaining to our supposed inability to predict what the Singularity would be like:
1.     With a new sort of intelligence, the Turing test won’t apply. What sort of continuity is there between them?
2.     Epistemological claim about our predictions: there will be an event beyond which we cannot predict where things will go. Might the ignorance be connected to question 1?
3.     What makes the Singularity unique? We cannot predict future theories of our own even now. So what’s the difference between the uncertainties we face everyday and the ones this possibility presents?

How is the concept of the singularity already a projection into the future of what we already know? How would we recognize it? Might it already exist, and we don’t know yet?

On some conceptions, the Singularity seems to transcend individuality. Is this a difference between our conception of ourselves as humans and the kind of entity that the Singularity is supposed to be? Does it factor into issues about the desirability of the coming of the Singularity?

Why the Singularity might scare us: A future where people aren’t running things anymore is fundamentally different from our present. We might no longer be at the center of things. AI would be scary because it has no continuity with our current existence. A future superintelligence might be hostile toward humans.

But is the Singularity to be feared? Would a superintelligence (necessarily, or most likely) respect biodiversity, the rights of other creatures, and so on? Would it recognize moral values? Would it be a moral exemplar?

The contrast between Artificial Intelligence (AI) and Intelligence Amplification (IA), in Vinge, was very interesting: Which is the more plausible route to the Singularity? Which is the most desirable, from the perspective of our own well-being as humans? How discontinuous would the Singularity be with human existence if it arose in this way, as opposed to through more traditional AI? Does IA lead to something like a hive mind, or to a superintelligence that takes a cue from the Gaia hypothesis?

Would the Singularity (or any other superintelligence) become bored? What characteristics might cause or prevent this? What sort of immortality would it have? What importance does the fact that even a superintelligence has a physical base have with respect to its longevity prospects?


Some different issues:
1.     Could there be a different kind of entity that is super-intelligent?
2.     Could it be immortal?
3.     Could I be immortal in the sense that I have these super-enhanced capabilities?

An irony: Psychology teaches us that those who are deeply religious live longest, so, ironically, the people who live the longest would not believe in a Singularity (on the assumption that this is not something that the religious believe in).

Nietzsche came up a few times: How does he describe the Ubermensch? How does the Ubermensch relate to the Singularity, if at all?

The notion that it might be our function to enable the development of the Singularity also came up: What sense of 'function' is in play here? What does this imply about our relationship to the Singularity (causal, normative)? What about the Singularity's relationship to us (ancestor worship, fuel)?