Wednesday, February 27, 2013

Hiatus

We will be taking a break until the first week in April, when spring quarter begins. In the meantime, I will try to figure out a good time for everyone to meet and what we would like to take up when we reconvene. Suggestions most welcome.

Have a great end of the quarter and spring break.

Notes on our eighth meeting

We continued our discussion of Chalmers' singularity essay, beginning with Patrick's comment on the blog post from last week's meeting.


Patrick’s comment: How are we supposed to conceive of the extensions of intelligence and/or abilities that Chalmers talks about in sec 3?
            The idea is that the AI+(+) is an intelligence of a different kind

The way that AI+ will come about seems deeply dependent on what the abilities are.

One theme in phenomenology: consciousness/the mind is destined for the world—they are tied up in the context in which they make sense. For example, consider a proper functioning view: we get an ability that distinguishes us from animals and that functions properly in a certain context.

But it’s not clear (a) how we can be said to extend these same abilities to new contexts and (b) how these extended abilities might be said to be better.

Success is always success in a context. But we do not have access to the stage relevant to the success of AI+. This is significant because it blocks our ability to predict success relevant to AI++.

A related point (perhaps the same point put another way): the Wittgensteinian idea that our concepts are built for this world, and certain kinds of counterfactuals cannot be properly evaluated because they outstrip the context of our language game

Perhaps: pick a very simple measure for evaluation (e.g., ability to generate wealth, efficiency)

Bergson: has an argument that every creature is the best example of its kind (Matter and Memory, at the end)

Is there a distinction to be made between a difference in degree and a difference in kind?
Perhaps we are responsible for assigning differences in kind given various differences in degree.

            But does this make the distinction irrelevant or uninteresting?

There are interesting issues here about reality, whether we can experience an objective reality or only ever a subjectively conditioned reality.

Will we ever reach a consensus regarding where to draw the line for a difference in kind? Perhaps, so long as we agree to some background presuppositions—e.g., whether to take a functional perspective or a perspective regarding material constitution.

What constitutes progress?
            Paradigm shifts, death of ideas, (greater or lesser) consensus?

Bostrom (2012) just defines intelligence as something like instrumental rationality
Are bacteria intelligent in the same way as modern AI? Yes, if we define reasoning behaviorally. And this definition of intelligence is easily measurable.

But is it safe to assume that the desire to have power over oneself and one’s environment is a prerequisite for success at survival?
            Is this what we think intelligent people have?

All living things modify their internal environment in order to better survive (bacteria, plants, humans, etc.)

Gray goo: a nanobot that builds a copy of itself, and the apocalypse comes about because it replicates in an uncontrolled fashion, consuming all life on earth to fuel its end of copying itself.
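A rough back-of-the-envelope sketch (our own, not from the reading) of why uncontrolled replication is so fast: the nanobot mass and total biomass figures below are assumed orders of magnitude, chosen only for illustration, but the conclusion barely depends on them, since doubling closes enormous gaps in very few steps.

import math

nanobot_mass_kg = 1e-15      # assumed mass of a single nanobot (illustrative)
target_mass_kg = 1e15        # assumed order of magnitude for Earth's biomass (illustrative)

doublings = math.log2(target_mass_kg / nanobot_mass_kg)
print(f"doublings needed: {doublings:.0f}")    # roughly 100

hours_per_doubling = 1.0     # assumed replication time (illustrative)
print(f"time to consume the target mass: {doublings * hours_per_doubling:.0f} hours")

On these made-up numbers, a single nanobot is only about a hundred doublings away from planetary-scale mass, which is why the scenario is usually framed as a runaway rather than a gradual process.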

A problem: We have AI, then pick the capacities we most care about, extend them into AI+, and then the extension to AI++ would no longer be a sort of being we would value. The idea is that the set of things extended comes to include fewer things we care about, to the point that AI++ does not contain anything that we care about.

If we assume that intelligence is instrumental rationality, then this will be ramped up to the exclusion of other interests. But we have a system of interconnected interests—we have cognitive interests, say, in individuating objects in perception. But this might not be maintained in the pursuit of maximizing instrumental rationality.

What does it mean to give a machine values? Give them ends, in the sense relevant to means-ends reasoning.

An argument that a superintelligence might be both moral and extinguish humanity:
Suppose consequentialism is right and AI++ discovers the true conception of well-being. It might be that, in order to realize this conception, it needs to wipe out human beings. This would result in a better state of affairs, but extinction for us.

How should we feel about this?

Many of these issues come to a similar problem: The production of an AI++ will involve a loss of some things we find very valuable, and this presents us with a problem. Should we pursue or should we inhibit or constrain the relevant progress in intelligence?
This is probably closely related to Chalmers’ claim that motivational obstacles are the greatest.

What sort of control do we have over the singularity?
            We could delay it, but for how long?
            We could stop it from happening on Earth, say, by blowing up the planet.
            We could constrain the ways in which the possibility of the singularity occurring unfolds.

Friday, February 22, 2013

Notes on our seventh meeting


We discussed David Chalmers' "The Singularity: A Philosophical Analysis," which we will continue to discuss next time.

We began by noting Chalmers’ moderate sense of ‘singularity’ (p. 3): referring to an intelligence explosion by a recursive mechanism, where successively more intelligent machines arise that are better at producing even more intelligent machines.
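As a way of making the recursive mechanism concrete, here is a minimal toy model in Python (our own illustration, not something from Chalmers’ paper): assume each machine designs a successor whose intelligence is a constant multiple k of its own. Whether an explosion occurs then turns entirely on whether k stays above 1, which also gives one way to frame the later worry about diminishing returns.

def intelligence_levels(k: float, generations: int, start: float = 1.0) -> list[float]:
    """Toy model: each generation designs a successor k times as intelligent."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * k)
    return levels

print(intelligence_levels(k=1.5, generations=10))   # k > 1: runaway growth
print(intelligence_levels(k=0.8, generations=10))   # k < 1: the "explosion" fizzles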

We also noted a nice distinction Chalmers makes (in the spirit of Parfit): identity vs survival

Parfit on Personal Identity: identity doesn’t really matter that much; trying to persuade us to get less attached to notions of identity
Eric Schwitzgebel’s view: it is convenient to have a logic with clean lines between people for us (we don’t fission, duplicate, upload), but in weird cases, this logic does not model well, so should switch to modeling what you care about (e.g., survival).

But practical issues remain (e.g., who pays the mortgage).

Enhancement: much of this has already happened
The Flynn effect: IQs increase across generations, which requires re-calibrating the IQ test to keep the norms in a certain range
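To make the re-calibration point concrete, here is a minimal sketch of what re-norming amounts to: raw scores from a current norming sample are mapped onto a scale with mean 100 and standard deviation 15, so that as raw performance drifts upward across generations, the reported IQs stay centered. The raw scores below are made up for illustration.

import statistics

def renorm_to_iq(raw_scores):
    """Map raw test scores onto an IQ scale with mean 100 and SD 15."""
    mean = statistics.mean(raw_scores)
    sd = statistics.stdev(raw_scores)
    return [100 + 15 * (score - mean) / sd for score in raw_scores]

print(renorm_to_iq([38, 42, 45, 50, 55]))   # hypothetical raw scores from a norming sample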

There is room for skepticism about measuring general intelligence: (i) perhaps we are better test-takers; (ii) there are multiple intelligences, and IQ-style tests don't test for many (or even most) of them.

In sec 3 of Chalmers' essay: notice the embedding of ‘what we care about’ in the characterization of the relevant capacities. This is in line with the Parfitian approach to identity.

Values: There are many, complex issues here
            How to define them
            How to identify them
            Subjective vs objective
            Universal values (e.g., cross-cultural, across times)

Three different senses of ‘objectivity’ for values: judgment-independent, choice-independent, human nature-independent

Kant vs Hume:
            An issue about whether mistakes in value are mistakes in rationality (Hume: no; Kant: yes).
            And what does this entail about the moral behavior of AI+(+)?

See the new Pinker book, where he argues that we have become both more intelligent and more moral over time.

Two senses of morality over time: across generations vs. over the course of an individual’s life
            It seems that older people have more sophisticated moral reasoning, but this is a distinct question from whether different cultures have more or less sophisticated moral reasoning, and also from the issue of whether one culture is more correct in its moral practices than another.

There are important things that transcend a particular context: e.g., math, logic
            Perhaps the survival instinct is another one

A distinction: one's moral beliefs vs. one's behavior

Another distinction: immortality vs longevity

Obstacles: Chalmers claims that motivational obstacles are the most plausible candidates for preventing the singularity from coming about
            Is this right? Why, exactly, does he think this?
Perhaps there are structural obstacles: the intelligence growth becomes too hard, diminishing returns
Energy needs: can be a situational obstacle, but can also be tied to a motivational obstacle
And because the energy requirements become greater, this can push toward there being a single system, a single entity, and then everything would depend on its motivation

Some related issues:
Matrioshka brain: concentric shells around the sun, capturing all of its energy for computation; a Dyson-sphere brain

Kurzweil’s sixth epoch

The Fermi paradox: the odds are not good that we would be the first to reach superintelligence, so we should see evidence of others, but we don’t, so perhaps the process will stall out

Take-home messages from Chalmers' essay:
1.     a broadly functionalist account of the mind, such that we could be instantiated in a computer
              -So long as you have a nomologically possible world, conscious properties go with functional properties
2.     the real take-home: there’s a significant enough possibility of something like the singularity that we should seriously worry about it and consider how we are going to handle it

Wednesday, February 13, 2013

Notes on our sixth meeting

For this meeting, we read two more chapters in Kurzweil's The Singularity Is Near. Our discussion was rather wide-ranging and did not follow the text very closely. But it was interesting nonetheless.


We began with this question: Recall that Vinge distinguishes between AI and IA. In which of these ways does Kurzweil envision the Singularity coming about? That is, does Kurzweil think that the Singularity will arise in combination with our minds (IA), or else as a result of an artificial intelligence we produce (AI)?

The significance of this question has to do with the issue of mind-uploading. Why would we have to upload our minds to the Singularity, as Kurzweil suggested in the reading from last week, if the Singularity arises in combination with our minds?

An Answer: Kurzweil envisions a combination of the two: AI will lead to IA (e.g., Google), which will lead to strong AI in the future, which will then come back and beam us up to the heavens. In any case, the two approaches very much complement each other.

Kurzweil is suggesting that there will be an AI that is smarter than humans before the uploading. But it is not certain how this will occur.

Might IA involve uploading in the process of the Singularity coming about? The uploading enters the equation before the Singularity.

What exactly is uploading? A transfer. When a blow to the head no longer matters. A change in substrate. Technically: uploading means that one makes a copy, and then a copy of a copy. Not just plugging in.

One consideration against thinking that Kurzweil envisions a certain version of the IA route to the Singularity: Kurzweil doesn’t like the single global consciousness idea, because he thinks that it would preclude him being there. He assumes that his individual self would not persist.

This brings up issues about where to draw the boundary of the individual mind: These are salient, not only for the picture where we are plugged in to a growing intelligence that eventually becomes the Singularity, but also for the picture according to which we are uploaded to a pre-existing Singularity.

How is Kurzweil using the term ‘the Singularity’? And how does this relate to Vinge’s use?: Kurzweil uses the term to refer to an event in human history, not necessarily a particular intelligence that comes into existence, as Vinge does. But Kurzweil does seem to have the arrival of this intelligence in mind.

Kurzweil’s focus on progress in intelligence seems myopic. There have been other periods of advancement in human history that have seen the same pattern of change (perhaps not quite as fast) in different areas of human experience. Why privilege the type of change that interests Kurzweil?

Kurzweil seems to greatly underestimate two things: (1) the limits of technology (we need more hardware as well as more code) and (2) the power of biology (he assumes that technology is better because our chemical synapses slow down our thinking, but there is more going on than just the transfer of electrical signals: there is a trade-off between speed and fine control, and what goes on inside neurons matters, not just signal transfer).

Many of the signals required for higher thought don’t transfer info but rather change the way neurons behave—and even the nanobots might not be able to tell us all the ways in which the neurons are functioning

Because of the many complexities of how our brains work in thought, it may be that the robot person would be slower than the human person, even though the robot is faster at transferring the electrical signals that carry information. For example, what look like limitations given our biology might be mechanisms that help to achieve optimum speed, given the various operations implicated in our minds' functioning.

Articles on creating a baby robot (one that researchers teach):
It got stuck on certain tasks, e.g., trying to pay attention to what it is holding, because its eyesight was too good and did not discriminate enough.
The key was to make its eyes worse.

The process of life as it is may not be the most efficient way to do things, but it is hard to establish the stronger claim that it is in fact not the most efficient way to do things.

Record to MP3 analogy, or live music to recording analogy: Music recorded on a record (in analog) is continuous rather than sampled, and so, the thought goes, has a sound quality that cannot be matched by digital means (e.g., MP3).
Might the new medium be missing some qualitative characteristics of the old medium? And might these be essential to the experience? Can the same be said for different substrates for purported conscious experience?

The challenge is to 'the substrate independence thesis' (e.g., invoked by Bostrom).

Need to be careful: need to be aware if and when nostalgia plays a role in evaluation

Is evolution slow?
            Well, it might seem so, but only if one assumes that the environment changes slowly

Is there a good distinction to be made between biological advancement/evolution vs technological advancement/evolution?

The main consideration in favor of the distinction is that technological advancement/evolution essentially involves intentions and design by an intelligence. Biological evolution is normally considered to be a 'blind' process in that it is not guided by an intelligent hand.
 
In biology: random mutations give rise to new features that are more or less adaptive to the environment.

How does the environment influence the mutations?: by changing the rate, but not the kind—they are still random.

What is randomness in this context? Seems to be not by intelligent design.

So “evolution” cannot begin with an intentionally produced mutation

What exactly is evolution?

What is the difference between the other tool using animals and us, such that advancements according to our intentions are of a different category than advancements according to their intentions?

Humans make tools by reproducing things we’ve seen and making them better.

And other animals don’t pass down the acquired knowledge to future generations

In biological evolution: we are talking about the traits of a species.

In technological evolution: we can also talk about traits (e.g., a computer having wifi), but we can then distinguish between the processes that selected those traits.

There is a different set of useful predictions from intentional vs. unintentional adaptations. We use the label 'biological evolution' in certain contexts and the label 'technological evolution' in others, and this distinction is useful. It is useful to talk about these two processes differently because it makes certain things easier to discuss: (1) the extreme differences in the observed rates and (2) certain other predictions (e.g., the vastly increased capability of technology to make large jumps that break out of local maxima: a small change may be detrimental while a large change may be beneficial).
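A small sketch of the local-maxima point (the fitness landscape and parameters are invented purely for illustration): a search restricted to small steps, accepting only improvements, gets stuck on the nearer, lower peak, while a search allowed occasional large jumps can reach the higher peak.

import math
import random

def fitness(x: float) -> float:
    # Two peaks: a low one near x = 2 and a higher one near x = 8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def hill_climb(step: float, start: float = 0.0, iters: int = 10000) -> float:
    """Random local search that only ever accepts improvements."""
    random.seed(0)
    x = start
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

print("small steps end near:", round(hill_climb(step=0.1), 2))   # stuck at the lower peak (x ~ 2)
print("large jumps end near:", round(hill_climb(step=6.0), 2))   # reaches the higher peak (x ~ 8)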

In Darwinian evolution: no such things as revolutions, only evolutions; Darwinian evolution predicts unnecessary/inefficient intermediary steps that are not predicted by technological evolution. And Darwinian evolution is normally considered biological evolution.

The view in favor of the distinction seems to be that technological evolution originates in an intention. But stopping the causal chain at the intention can seem arbitrary from a certain point of view. The intention, after all, may just be a part of the event-causal order, and so it will have causes, and those will have causes, and so on. Thus, it seems to be an arbitrary stopping point from the perspective of causal explanation.

Friday, February 8, 2013

Notes on our fifth meeting


We started out with Patrick’s nice comment on the blog about Nietzsche. You can read it, below. This led to a discussion of related issues:

Is the Singularity a continuation of human existence? A particular human’s (i.e., Kurzweil’s) existence?

What constitutes 'fundamental' change? When is a change in degree a change in kind?

Are there limits to human progress and development?
It seems so: we can only think and extend our ideas in a human way, along a restricted range of options. These limits might not be known or knowable to us, but they are there all the same.

But: if we assume that we are essentially limited in certain ways, where do we draw the line? Before vaccines, we might have claimed that we are essentially subject to certain diseases. But now we do not think that.

One clear fundamental difference between humans and the Singularity: the Singularity will not be carbon-based.

But: There still must be matter that is a prerequisite for any existence. This is so, even if the Singularity stands in a different relation to the matter that underlies it than we stand in to the matter that underlies us. (Is 'underlie' the right relation here?)
The Singularity can move through the space of information in a different way than we can move through physical space.

But this does not mean that the relation of Singularity to matter is different than that of human to matter. It seems to be a matter of salience.

Could envision, not the Singularity, but a collection of superhuman consciousnesses

A difference between the relation of the Singularity to its physical instantiation and me to my body: the Singularity can transfer to a different physical instantiation in a way I cannot (when one portion of the computer network goes down, a very different portion can keep the consciousness that is the Singularity going—perhaps even has been all along: multiple, parallel realization).


Recall from the Chomsky piece that there are different conceptions of underlying principles: behaviorism (copying) vs Chomsky (understanding). Perhaps Kurzweil is just using the copying conception. And perhaps he is getting mileage by trading on the ambiguity between the two interpretations of ‘capturing underlying principles’.

An objection to the Input/output picture: it treats the mind as a black-box.

Views that call for filling in the black box: don’t need to appeal to a soul.

One might claim that mental states are strongly historical: they do not supervene on mere time-slices of functional organization; this view allows that physical systems count as minds partly in virtue of their past (cf. Dennett).

This is, perhaps, illustrated by Jason’s sprinx case: one imagines a sprinx thousands of years before evolution creates one. Have I seen a sprinx?

Distinction: the content of a mental state vs. something being a mental state
Less controversial to claim relevance of history to content (content externalism) than to say the same for being a mental state

A claim in physics: the universe is a state function
For any given state, future states can be predicted from it in ignorance of past states
All future time moments would be predicted the same, regardless of the past states leading to the given state
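A minimal sketch of the claim (the update rule is an arbitrary toy, chosen only to illustrate history-independence): if the dynamics take only the current state as input, then two different pasts that happen to arrive at the same present state are guaranteed the same future.

def step(state: float) -> float:
    """Toy deterministic dynamics: the next state depends only on the current state."""
    return 3.7 * state * (1 - state)   # logistic map, purely illustrative

def predict_future(state: float, n: int) -> list:
    out = []
    for _ in range(n):
        state = step(state)
        out.append(state)
    return out

history_a = [0.10, 0.90, 0.42]   # one made-up past
history_b = [0.73, 0.05, 0.42]   # a different made-up past ending in the same present state
print(predict_future(history_a[-1], 5) == predict_future(history_b[-1], 5))   # True: the past is irrelevant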

Two issues:
1.     The rise of the Singularity
2.     Its enabling us to achieve immortality

There are many sub-issues for each of these two issues.

Just given a qualitative change in the intelligence, it does not follow that it cannot be us who survive.

In the personal identity literature, there are some who think it is not a matter of whether I continue, but whether there is the right kind of continuity for me to care about the one who continues.

Kurzweil is trying to live as long as he can, so that he can be around for the Singularity in order to achieve immortality

If it is a leap to a new form of intelligence, one that transcends human limitations, then it couldn’t be me, because it would be a different form of life. (Perhaps this was Russell’s point from earlier, or in the background of what he said.)

Varia:
A different view of uploading: not me in a computer, but a child of mine in a computer.

A good distinction: logical possibility vs natural possibility

The way the brain works (parallel processing) vs the way the computer processes (logic trees, etc.)

Didn’t the IA Singularity already occur?