Thursday, January 31, 2013

Notes on Our Fourth Meeting

I thought our discussion of the Vinge and Moravec pieces was really great. Thank you everyone for such interesting comments and questions. Since we will be continuing with this topic for at least a week or two longer, I hope the discussion continues to excite everyone.

Here are some of the highlights, as I recall, from this past Tuesday:

Both pieces ended on what seemed like different notes: Moravec sounded like something of a mystic or along the lines of a Buddhist or Hindu, with a much more positive slant to what he was saying, whereas Vinge seemed to express a sense of impending doom, or at least a worrisome outlook.

Some questions about motivation: What would the motivation of a superintelligent being (of the sort that the Singularity is characterized to be) be like? Human and animal motivation is shaped in large part by the need to find food and take care of other basic needs. What about an artificial superintelligence?

Some questions about intelligence: How do we define intelligence? What characteristics are essential for a recognizable form of intelligence (e.g., creativity, inspiration, nostalgia)? Could the Singularity possess these characteristics? In what way is the form of intelligence characteristic of the Singularity supposed to be beyond our ken? The form of intelligence of a mature adult human is beyond the ken of a baby human. Is there supposed to be a difference in the case of the Singularity's being beyond our ken? What is this difference?

Some questions pertaining to our supposed inability to predict what the Singularity would be like:
1.     With a new sort of intelligence, the Turing test won’t apply. What sort of continuity is there between human intelligence and this new sort?
2.     Epistemological claim about our predictions: there will be an event beyond which we cannot predict where things will go. Might the ignorance be connected to question 1?
3.     What makes the Singularity unique? We cannot predict future theories of our own even now. So what’s the difference between the uncertainties we face every day and the ones this possibility presents?

How is the concept of the singularity already a projection into the future of what we already know? How would we recognize it? Might it already exist, and we don’t know yet?

On some conceptions, the Singularity seems to transcend individuality. Is this a difference between our conception of ourselves as humans and the kind of entity that the Singularity is supposed to be? Does it factor into issues about the desirability of the coming of the Singularity?

Why the Singularity might scare us: A future where people aren’t running things anymore is fundamentally different from our present. We might no longer be at the center of things. AI would be scary because it has no continuity with our current existence. A future superintelligence might be hostile toward humans.

But is the Singularity to be feared? Would a superintelligence (necessarily, or most likely) respect biodiversity, the rights of other creatures, and so on? Would it recognize moral values? Would it be a moral exemplar?

The contrast between Artificial Intelligence (AI) and Intelligence Amplification (IA), in Vinge, was very interesting: Which is the more plausible route to the Singularity? Which is the more desirable, from the perspective of our own well-being as humans? How discontinuous would the Singularity be with human existence if it arose in this way, as opposed to through more traditional AI? Does IA lead to something like a hive-mind or a superintelligence that takes a cue from the Gaia hypothesis?

Would the Singularity (or any other superintelligence) become bored? What characteristics might cause or prevent this? What sort of immortality would it have? What importance does the fact that even a superintelligence has a physical base have with respect to its longevity prospects?

Some different issues:
1.     Could there be a different kind of entity that is super-intelligent?
2.     Could it be immortal?
3.     Could I be immortal in the sense that I have these super-enhanced capabilities?

An irony: Psychology teaches us that those who are deeply religious live longest. So, on the assumption that belief in the Singularity is not something the religious share, the people who live the longest would be just those who do not believe in a Singularity.

Nietzsche came up a few times: How does he describe the Ubermensch? How does the Ubermensch relate to the Singularity, if at all?

The notion that it might be our function to enable the development of the Singularity also came up: What sense of 'function' is in play here? What does this imply about our relationship to the Singularity (causal, normative)? What about the Singularity's relationship to us (ancestor worship, fuel)?

1 comment:

  1. I thought I would make some connections between Nietzsche and the singularity, since this came up at the end of discussion last time.

    There are a lot of different views on who Nietzsche's figure of the Uebermensch or overman actually is. The understanding that I find most plausible, however, might relate to our discussion of the singularity. The overman is supposed to be a figure that emerges in history after traditional morality has been “overcome.” To arrive at this figure, Nietzsche traces a genealogy or historical account of the development of morality (in On the Genealogy of Morality). He argues that animal impulses that characterize interactions among individuals in early societies developed into our contemporary conception of morality. The story is complicated, but he shows how human beings who were originally only motivated by their instincts come to be motivated by a sense of moral obligation. He does this by making a distinction between the original purpose of social practices and the good that they end up serving. He argues that Christian morality, and the feelings of guilt that it involves, originally serves the purpose of expressing an aggressive animal instinct in the form of self-cruelty. This way of expressing instinct, however, also ends up allowing individuals to recognize a sense of obligation. Human beings end up with the ability to live in an ethical community, where each person can respect his obligation to others. The original function of morality is “overcome” when moral action is not pervaded by self-cruelty, but by a sense of obligation. The overman, on this picture, is the individual who overcomes the self-cruelty of traditional morality, but retains the good of respecting obligations that emerged from it.

    I see Vinge’s picture of the singularity along similar lines, in terms of the distinction between original purpose on the one hand, and a separate function or good that emerges from this practice on the other. Computers were originally conceived as calculators, “computing” a certain output on the basis of human input and a set of rules. As their complexity and capacity grow, the output becomes farther removed from the input. The idea of an intelligence emerging from a sophisticated input-output relation is a kind of unforeseen function of the sort that Nietzsche saw in ethical life. I see Nietzsche’s ideas most clearly in Vinge’s notion of IA, where the practice of using computers for human tasks ends up producing a new form of activity altogether—a new kind of intelligence that is distinct from the human and computer activities that make it up. A kind of IA version of the overman would involve overcoming the human-input/computer-output model of interaction, in order to produce a new form of activity—data creation, hyperlinking, systematic connection, …? What Nietzsche’s overman might tell us about the singularity is that it will not be something entirely new, but rather it will involve a new interpretation of the practices that we are already engaged in.