We discussed David Chalmers' "The Singularity: A Philosophical Analysis," which we will continue to discuss next time.
We began by noting Chalmers’ moderate sense of ‘singularity’ (p. 3): an intelligence explosion driven by a recursive mechanism, in which successively more intelligent machines arise that are better at producing even more intelligent machines.
We also noted a nice distinction Chalmers makes (in the spirit of Parfit): identity vs survival
Parfit on personal identity: identity doesn’t really matter that much; he tries to persuade us to become less attached to notions of identity.
Eric Schwitzgebel’s view: it is convenient for us to have a logic with clean lines between persons (we don’t fission, duplicate, or upload), but in weird cases this logic does not model the situation well, so we should switch to modeling what we care about (e.g., survival).
But practical issues remain (e.g., who pays the mortgage).
Enhancement: much of this has already happened
The Flynn effect: increasing IQs across generations, requires re-calibrating the IQ test to keep the norms in a certain range
There is room for skepticism about measuring general intelligence: (i) perhaps we are better test-takers; (ii) there are multiple intelligences, and IQ-style tests don't test for many (or even most) of them.
In sec. 3 of Chalmers’ essay: notice the embedding of ‘what we care about’ in the characterization of the relevant capacities. This is in line with the Parfitian approach to identity.
Values: There are many, complex issues here
How to define them
How to identify them
Subjective vs objective
Universal values (e.g., cross-cultural, across times)
Three different senses of ‘objectivity’ for values: judgment-independence, choice-independence, human-nature-independence
Kant vs Hume:
An issue about whether mistakes in value are mistakes in rationality (Hume: no; Kant: yes).
And what does this entail about the moral behavior of AI+(+)?
See the new Pinker book, where he argues that we have become both more intelligent and more moral over time.
Two senses of moral change over time: across generations vs. over the course of an individual’s life
It seems that older people have more sophisticated moral reasoning, but this is a distinct question from whether different cultures have more or less sophisticated moral reasoning, and also from the issue of whether one culture is more correct in its moral practices than another.
There are important things that transcend a particular context: e.g., math, logic
Perhaps the survival instinct is another one
A distinction: one's moral beliefs vs. one's behavior
Another distinction: immortality vs longevity
Obstacles: Chalmers claims that motivational obstacles are the most plausible ones that could stop the singularity from coming.
Is this right? Why, exactly, does he think this?
Perhaps there are structural obstacles: the intelligence growth becomes too hard, with diminishing returns.
Energy needs: can be a situational obstacle, but can also be tied to a motivational obstacle
And as energy requirements become greater, this can push toward a single system; once there is a single entity, everything would depend on its motivation.
Some related issues:
Matrioshka brain: concentric shells around the sun, using all of its energy; a Dyson-sphere brain
Kurzweil’s sixth epoch
The Fermi paradox: the odds are not good that we would be the first to reach superintelligence, so we should see evidence of others; but we don’t, so perhaps the process tends to stall out.
Take-home messages from Chalmers’ essay:
1. a broadly functionalist account of the mind, such that we could be instantiated in a computer
-So long as we stay within nomologically possible worlds, conscious properties go along with functional organization
2. the real take-home: there’s a significant enough possibility of something like the singularity that we should seriously worry about it and consider how we are going to handle it