We started out with Patrick’s nice comment on the blog about
Nietzsche. You can read it, below. This led to a discussion of related issues:
Is the Singularity a continuation of human existence? A
particular human’s (i.e., Kurzweil’s) existence?
What constitutes 'fundamental' change? When is a change in
degree a change in kind?
Are there limits to human progress and development?
It seems so: we can only think and extend our ideas in a human way, along a
restricted range of options. These limits might not be known or knowable to us,
but they are there all the same.
But: if we assume that we are essentially limited in certain ways, where do we draw the line? Before vaccines, we
might have claimed that we are essentially subject to certain diseases. But now
we do not think that.
One clear fundamental difference between humans and the Singularity:
the Singularity will not be carbon-based.
But: there still must be matter that is a prerequisite for any existence. This is so even if the Singularity stands in a different relation to the matter that underlies it than we stand to the matter that underlies us. (Is 'underlie' the right relation here?)
The Singularity can move through
the space of information in a different way than we can move through physical
space.
But this does not mean that the relation of the Singularity to matter differs from that of a human to matter. It seems to
be a matter of salience.
One could envision, not the Singularity, but a collection of superhuman
consciousnesses.
A difference between the relation
of the Singularity to its physical instantiation and me to my body: the Singularity
can transfer to a different physical instantiation in a way I cannot (when one portion
of the computer network goes down, a very different portion can keep the
consciousness that is the Singularity going—perhaps even has been all along:
multiple, parallel realization).
An objection to the input/output picture: it treats the mind as a black box.
Views that call for filling in the black box don’t need to
appeal to a soul.
One might claim that mental states are strongly historical: they
do not supervene on mere time-slices of functional organization; this allows that
physical systems count as minds partly in virtue of their past (cf. Dennett).
This is, perhaps, illustrated by Jason’s sprinx case: one imagines a sprinx thousands of years
before evolution creates one. Have I seen a sprinx?
Distinction: the content of a mental state vs. something’s
being a mental state.
It is less controversial to claim the
relevance of history to content (content externalism) than to say the same for
something’s being a mental state.
A claim in physics: the universe is described by a state function.
For any given state, future states
can be predicted from it in ignorance of past states.
All future moments would be predicted
the same, regardless of the past states leading to the given state.
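The determinism claim above can be sketched with a toy example (mine, not from the discussion): a hypothetical update rule that depends only on the current state, so any two histories that arrive at the same state agree on every future state.

```python
def step(state):
    """Hypothetical update rule: a function of the current state alone."""
    return (3 * state + 1) % 17

def future(state, n):
    """The next n states, computed from the current state alone."""
    states = []
    for _ in range(n):
        state = step(state)
        states.append(state)
    return states

# Two different pasts that happen to end in the same state, 5:
history_a = [2, 7, 5]
history_b = [9, 11, 5]

# Their futures are identical, regardless of how each arrived at 5.
assert future(history_a[-1], 10) == future(history_b[-1], 10)
```

The point is only that prediction here needs no memory: the past drops out once the present state is fixed.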
Two issues:
1. The rise of the Singularity
2. Its enabling us to achieve immortality
There are many sub-issues for each of these two issues.
Given just a qualitative change in intelligence, it does
not follow that it cannot be us who survive.
In the personal identity literature, there are some who
think it is not a matter of whether I continue, but whether there is the right
kind of continuity for me to care about the one who continues.
Kurzweil is trying to live as long as he can, so that he can
be around for the Singularity in order to achieve immortality
If it is a leap to a new form of intelligence, one that transcends
human limitations, then it couldn’t be me, because it would be a different form of life.
(Perhaps this was Russell’s point from earlier, or in the background of what he
said.)
Varia:
A different view of uploading: not me in a computer, but a
child of mine in a computer.
A good distinction: logical possibility vs natural
possibility
The way the brain works (parallel processing) vs the way the
computer processes (logic trees, etc.)
Didn’t the IA Singularity already occur?
I saw this review for a book on the ethics of AI, robots, and related things we have been talking about:
http://ndpr.nd.edu/news/37494-the-machine-question-critical-perspectives-on-ai-robots-and-ethics/
One interesting point is that the literature on "machine ethics" has (apparently) focused on moral agency but neglected the "moral patiency" of machines. So if we program our machines not to eat us, we might also want to write laws that protect whatever rights they may have.