Friday, February 22, 2013

Notes on our seventh meeting


We discussed David Chalmers' "The Singularity: A Philosophical Analysis," which we will continue to discuss next time.

We began by noting Chalmers’ moderate sense of ‘singularity’ (p. 3): an intelligence explosion driven by a recursive mechanism, in which successively more intelligent machines arise that are better at producing even more intelligent machines.

We also noted a nice distinction Chalmers makes (in the spirit of Parfit): identity vs survival

Parfit on Personal Identity: identity doesn’t really matter that much; he is trying to persuade us to become less attached to notions of identity
Eric Schwitzgebel’s view: it is convenient for us to have a logic with clean lines between people (we don’t fission, duplicate, or upload), but in weird cases this logic does not model things well, so we should switch to modeling what we care about (e.g., survival).

But practical issues remain (e.g., who pays the mortgage).

Enhancement: much of this has already happened
The Flynn effect: IQ scores have been rising across generations (roughly three points per decade on older norms), which requires re-calibrating the IQ test to keep the population mean at 100

There is room for skepticism about measuring general intelligence: (i) perhaps we are better test-takers; (ii) there are multiple intelligences, and IQ-style tests don't test for many (or even most) of them.

In section 3 of Chalmers' essay, notice the embedding of ‘what we care about’ in the characterization of the relevant capacities. This is in line with the Parfitian approach to identity.

Values: There are many complex issues here
            How to define them
            How to identify them
            Subjective vs objective
            Universal values (e.g., cross-cultural, across times)

3 different senses of ‘objectivity’ for values: judgment-independent, choice-independent, human nature-independent

Kant vs Hume:
            An issue about whether mistakes in value are mistakes in rationality (Hume: no; Kant: yes).
            And what does this entail about the moral behavior of AI+(+)?

See the new Pinker book (The Better Angels of Our Nature), where he argues that we have become both more intelligent and more moral over time.

Two senses of morality over time: across generations vs. over the course of an individual’s life
            It seems that older people have more sophisticated moral reasoning, but this is a distinct question from whether different cultures have more or less sophisticated moral reasoning, and also from the issue of whether one culture is more correct in its moral practices than another.

There are important things that transcend a particular context: e.g., math, logic
            Perhaps the survival instinct is another one

A distinction: one's moral beliefs vs. one's behavior

Another distinction: immortality vs longevity

Obstacles: Chalmers claims that motivational obstacles are the most plausible ones to stop the singularity from coming
            Is this right? Why, exactly, does he think this?
Perhaps there are structural obstacles: intelligence growth becomes too hard, with diminishing returns
Energy needs: can be a situational obstacle, but can also be tied to a motivational obstacle
And because the energy requirements become greater, this can push things toward a single system; once there is a single entity, it would all depend on its motivation

Some related issues:
Matrioshka brain: a Dyson-sphere brain made of concentric shells around the sun, using all of its energy output

Kurzweil’s sixth epoch

The Fermi paradox: the odds are not good that we would be the first to reach superintelligence, so we should see evidence of others having done so; but we don’t, so perhaps the process tends to stall out

Take-home messages from Chalmers' essay:
1.     a broadly functionalist account of the mind, such that we could be instantiated in a computer
              -So long as you have a nomologically possible world, conscious properties go with functional properties
2.     the real take-home: there’s a significant enough possibility of something like the singularity that we should seriously worry about it and consider how we are going to handle it

2 comments:

  1. Chalmers's piece has got me thinking about what abilities would need to be extended to get to AI+. When discussing emulation, he suggests that the emulation itself might not lead to AI+ but that connecting several emulations or improving hardware might. When discussing direct programming, he claims that improving pieces of the program might lead to AI+ (pp. 12-13). Two questions came up in my efforts to think through what exactly we are supplementing: 1) Is the shift from AI to AI+ a difference in kind? Chalmers says that the shift from AI+ to AI++ would be like the difference between a mouse and a human. I think that the relevant sense in which humans are more intelligent than mice is a difference in kind of intelligence, not just degree, but I'm not sure. If the relevant difference is not a difference in kind, then it is not clear to me why the move would take us to a point beyond which we cannot comprehend.

    2) What is the relevant ability that must be improved in order to arrive at AI+? Maybe this isn't even the right question, but it seems to me that improving certain abilities would not amount to more intelligence. Certain animals, for example, have perceptual abilities vastly superior to ours but are not more intelligent in the relevant respect. This matters because improving some abilities without improving the ability that matters in the right kind of way (whatever that is) won't amount to AI+. For example, we can imagine synthetic supplements to our brains that improve the degree of detail we are capable of perceiving or the amount of information we can remember with photographic accuracy. I'm not convinced that such improvements would amount to AI+.

    One way of thinking about what the right ability is would be to consider philosophical accounts of the difference between animal and human consciousness. A theme in the phenomenological tradition is that whatever the difference is, it is "destined" for a certain kind of world. To make this less opaque, a simplification: the difference that emerges in humans has a function, and it functions properly only in the context for which it was "designed/adapted." If this is the case, then our grip on comparisons of intelligence consists in how well a system performs the function that the human mind properly serves in the context of living a life. If the sorts of enhancements that Chalmers talks about take the function of intelligence out of this context, then it is not clear to me that the result is "better" intelligence; for the context that provides the set of standards for evaluating "better" is lost.

    I'm not convinced that this is the case, but I do think that, given certain understandings of the relevant ability, "enhancement" would not amount to a true improvement of intelligence. Thus, I think we should think more about what it is relevant to improve in order to get to AI+.

  2. I forgot to mention that section 3 in particular animated these questions.
