Wednesday, February 27, 2013

Notes on our eighth meeting

We continued our discussion of Chalmers' singularity essay, beginning with Patrick's comment on the blog post from last week's meeting.


Patrick’s comment: How are we supposed to conceive of the extensions of intelligence and/or abilities that Chalmers talks about in sec 3?
            The idea is that AI+ (and AI++) is an intelligence of a different kind.

The way that AI+ will come about seems deeply dependent on what the abilities are.

One theme in phenomenology: consciousness/the mind is destined for the world—they are tied up in the context in which they make sense. For example, consider a proper functioning view: we get an ability that distinguishes us from animals and that functions properly in a certain context.

But it’s not clear (a) how we can be said to extend these same abilities to new contexts and (b) how these extended abilities might be said to be better.

Success is always success in a context. But we do not have access to the stage relevant to the success of AI+. This is significant because it blocks our ability to predict success relevant to AI++.

A related point (perhaps the same point put another way): the Wittgensteinian idea that our concepts are built for this world, and that certain kinds of counterfactuals cannot be properly evaluated because they outstrip the context of our language game.

Perhaps: pick a very simple measure for evaluation (e.g., ability to generate wealth, efficiency)

Bergson: has an argument that every creature is the best example of its kind (at the end of Matter and Memory).

Is there a distinction to be made between a difference in degree and a difference in kind?
Perhaps we are responsible for assigning differences in kind given various differences in degree.

            But does this make the distinction irrelevant or uninteresting?

There are interesting issues here about reality, whether we can experience an objective reality or only ever a subjectively conditioned reality.

Will we ever reach a consensus regarding where to draw the line for a difference in kind? Perhaps, so long as we agree to some background presuppositions—e.g., whether to take a functional perspective or a perspective regarding material constitution.

What constitutes progress?
            Paradigm shifts, death of ideas, (greater or lesser) consensus?

Bostrom (2012) just defines intelligence as something like instrumental rationality
Are bacteria intelligent in the same way as modern AI? Yes, if we define reasoning behaviorally. And this definition of intelligence is easily measurable.
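One way to make the "easily measurable" point concrete is a toy sketch like the one below (the Task and agent names are invented for illustration, not anything from Chalmers or Bostrom): score an agent purely by how often its behavior reaches its goal, ignoring how it works inside. On such a measure a bacterium-like rule and a planner are graded on exactly the same footing.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    start: int
    goal: int

def behavioral_intelligence(act: Callable[[Task], int], tasks: List[Task]) -> float:
    """Fraction of tasks on which the agent's behavior reaches the goal."""
    return sum(1 for t in tasks if act(t) == t.goal) / len(tasks)

# A bacterium-like rule (one greedy step toward the goal) vs. a planner:
chemotaxis = lambda t: t.start + (1 if t.goal > t.start else -1)
planner = lambda t: t.goal

tasks = [Task(0, 1), Task(0, 5), Task(3, 2)]
print(behavioral_intelligence(chemotaxis, tasks))  # ~0.67: succeeds only on nearby goals
print(behavioral_intelligence(planner, tasks))     # 1.0

Nothing in the score depends on the agent's inner workings, which is exactly what makes the behavioral definition measurable.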

But is it safe to assume that the desire to have power over oneself and one's environment is a prerequisite for success at survival?
            Is this what we think intelligent people have?

All living things modify their internal environment in order to better survive (bacteria, plants, humans, etc.)

Gray goo: a nanobot builds a copy of itself, and the apocalypse comes about because it replicates in an uncontrolled fashion, consuming all life on earth to feed its end of copying itself.
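A back-of-the-envelope sketch of why uncontrolled replication is the whole problem (the mass figures below are rough assumptions, not estimates from the essay): with a fixed doubling time the population grows exponentially, so even a planet's worth of biomass is exhausted after roughly a hundred doublings.

BIOMASS_KG = 5.5e14      # very rough order of magnitude for Earth's biomass (assumption)
NANOBOT_MASS_KG = 1e-15  # hypothetical mass of a single nanobot

bots, generations = 1, 0
while bots * NANOBOT_MASS_KG < BIOMASS_KG:
    bots *= 2            # each bot builds one copy of itself per generation
    generations += 1

print(generations)       # 99 doublings and everything is consumed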

A problem: we have AI, pick the capacities we most care about, and extend them into AI+; but then the further extension to AI++ would no longer be a sort of being we would value. The idea is that the set of capacities extended comes to include fewer and fewer things we care about, to the point that AI++ contains nothing we care about.

If we assume that intelligence is instrumental rationality, then this will be ramped up to the exclusion of other interests. But we have a system of interconnected interests—we have cognitive interests, say, in individuating objects in perception. But this might not be maintained in the pursuit of maximizing instrumental rationality.
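A toy illustration of the worry (the options and scores below are made up): if the system selects purely by one measure, any other interest it has drops out whenever the two conflict.

# Maximize instrumental score alone; the perceptual/cognitive interest is simply ignored.
options = [
    {"name": "balanced", "instrumental_score": 7, "perceptual_richness": 9},
    {"name": "narrow_optimizer", "instrumental_score": 10, "perceptual_richness": 0},
]
chosen = max(options, key=lambda o: o["instrumental_score"])
print(chosen["name"])  # narrow_optimizer: the other interest is traded away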

What does it mean to give a machine values? To give it ends, in the sense relevant to means-ends reasoning.

An argument that a superintelligence might be moral and yet extinguish humanity:
Suppose consequentialism is right and AI++ discovers the true conception of well-being. It might be that, in order to realize this conception, it needs to wipe out human beings. This would result in a better state of affairs, but extinction for us.

How should we feel about this?

Many of these issues come to a similar problem: The production of an AI++ will involve a loss of some things we find very valuable, and this presents us with a problem. Should we pursue or should we inhibit or constrain the relevant progress in intelligence?
This is probably closely related to Chalmers’ claim that motivational obstacles are the greatest.

What sort of control do we have over the singularity?
            We could delay it, but for how long?
            We could stop it from happening on Earth, say, by blowing up the planet.
            We could constrain the ways in which the singularity, if it does occur, unfolds.
