Wednesday, February 13, 2013

Notes on our sixth meeting

For this meeting, we read two more chapters in Kurzweil's The Singularity Is Near. Our discussion was rather wide-ranging and did not follow the text very closely. But it was interesting nonetheless.


We began with this question: Recall that Vinge distinguishes between AI and IA. In which of these ways does Kurzweil envision the Singularity coming about? That is, does Kurzweil think that the Singularity will arise in combination with our minds (IA), or else as a result of an artificial intelligence we produce (AI)?

The significance of this question has to do with the issue of mind-uploading. Why would we have to upload our minds to the Singularity, as Kurzweil suggested in the reading from last week, if the Singularity arises in combination with our minds?

An Answer: Kurzweil envisions a combination of the two: AI will lead to IA (e.g., Google), which will lead to strong AI in the future, which will then come back and beam us up to the heavens. In any case, the two approaches very much complement each other.

Kurzweil is suggesting that there will be an AI that is smarter than humans before the uploading. But it is not certain how the uploading will occur.

Might IA involve uploading as part of the process by which the Singularity comes about? On that picture, uploading enters the equation before the Singularity does.

What exactly is uploading? A transfer: a change in substrate, after which a blow to the head no longer matters. Technically, uploading means that one makes a copy, and then a copy of a copy; it is not just plugging in.

One consideration against thinking that Kurzweil envisions a certain version of the IA route to the Singularity: Kurzweil doesn't like the single-global-consciousness idea, because he thinks that it would preclude him being there. He assumes that his individual self would not persist.

This brings up issues about where to draw the boundary of the individual mind: These are salient, not only for the picture where we are plugged in to a growing intelligence that eventually becomes the Singularity, but also for the picture according to which we are uploaded to a pre-existing Singularity.

How is Kurzweil using the term 'the Singularity'? And how does this relate to Vinge's use? Kurzweil uses the term to refer to an event in human history, whereas Vinge uses it to refer to a particular intelligence that comes into existence. But Kurzweil does seem to have the arrival of such an intelligence in mind.

Kurzweil’s focus on progress in intelligence seems myopic. There have been other periods of advancement in human history that have seen the same pattern of change (perhaps not quite as fast) in different areas of human experience. Why privilege the type of change that interests Kurzweil?

Kurzweil seems to greatly underestimate two things: (1) the limits of technology (we need more hardware as well as more code) and (2) the power of biology (he assumes that technology is better because our chemical synapses slow down our thinking, but there is more going on than the transfer of electrical signals: there is a trade-off between speed and fine control, and there is what goes on inside neurons, not just signal transfer between them).

Many of the signals required for higher thought don't transfer information but rather change the way neurons behave, and even the nanobots might not be able to tell us all the ways in which the neurons are functioning.
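The point just made can be given a toy rendering (our own sketch, not anything from the reading): think of a 'driving' input that carries a message versus a 'modulatory' input that carries no message of its own but changes how the unit responds to messages.

```python
# A toy contrast (purely illustrative): driving vs. modulatory inputs.
# The modulatory input transfers no information of its own; it changes
# how the unit behaves toward the information it does receive.

def unit(drive: float, gain: float = 1.0) -> float:
    """A toy neuron: rectified output of the driven signal, scaled by a
    gain that modulatory signals can raise or lower."""
    return max(0.0, gain * drive)

print(unit(drive=0.8))             # transfer alone: output 0.8
print(unit(drive=0.8, gain=0.2))   # same message, damped response: 0.16
print(unit(drive=0.8, gain=2.5))   # same message, amplified response: 2.0
```

A map of which units connect to which would capture none of this gain-setting traffic.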

Because of the many complexities of how our brains work, it may be that the robot person would be slower in thought than the human person, even though the robot is faster at transferring the electrical signals that carry information. For example, what look like limitations given our biology might be mechanisms that help to achieve optimum speed, given the various operations implicated in our minds' functioning.

Articles on creating a baby robot (one that researchers teach): it got stuck on certain tasks, e.g., trying to pay attention to what it was holding, because its eyesight was too good and didn't discriminate enough. The key was to make its eyes worse.

The process of life as it is may not be the most efficient way to do things, but it is hard to be certain of the stronger claim that it is not the most efficient way to do things.

Record-to-MP3 analogy, or live-music-to-recording analogy: music recorded on a record (in analog) has no gaps, and so has a sound quality that cannot be matched by digital means (e.g., MP3).
Might the new medium be missing some qualitative characteristics of the old medium? And might these be essential to the experience? Can the same be said for different substrates for purported conscious experience?
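As a toy illustration of the analogy (our own sketch; the sample rate, bit depth, and test signal are all made-up assumptions), here is what digitization does to a continuous signal: it samples at discrete times and quantizes each sample to a fixed grid of levels, and each step discards information:

```python
import math

# Digitize an "analog" signal by sampling and quantizing.
# Deliberately coarse settings make the loss easy to see.

SAMPLE_RATE = 8000   # samples per second
HALF_LEVELS = 8      # 4-bit quantization: 16 levels spanning [-1, 1]

def signal(t: float) -> float:
    """Stand-in for the analog source: a 440 Hz sine wave."""
    return math.sin(2 * math.pi * 440 * t)

def quantize(x: float) -> float:
    """Snap a value in [-1, 1] to the nearest discrete level."""
    return round(x * HALF_LEVELS) / HALF_LEVELS

samples = [quantize(signal(n / SAMPLE_RATE)) for n in range(8)]
gaps = [abs(s - signal(n / SAMPLE_RATE)) for n, s in enumerate(samples)]

print(samples)    # the digital copy: a finite list of coarse values
print(max(gaps))  # the largest per-sample deviation from the original
```

Whether such losses matter to conscious experience, rather than merely to audio fidelity, is of course the open question.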

The analogy poses a challenge to the 'substrate-independence thesis' (e.g., as invoked by Bostrom).

We need to be careful here: we should be aware of if and when nostalgia plays a role in such evaluations.

Is evolution slow? It might seem so, but only if one assumes that the environment changes slowly.

Is there a good distinction to be made between biological advancement/evolution vs technological advancement/evolution?

The main consideration in favor of the distinction is that technological advancement/evolution essentially involves intentions and design by an intelligence. Biological evolution is normally considered to be a 'blind' process in that it is not guided by an intelligent hand.
 
In biology: random mutations give rise to new features that are more or less adaptive relative to the environment.

How does the environment influence the mutations?: by changing the rate, but not the kind—they are still random.

What is randomness in this context? Seems to be not by intelligent design.

So “evolution” cannot begin with an intentionally produced mutation.

What exactly is evolution?

What is the difference between the other tool using animals and us, such that advancements according to our intentions are of a different category than advancements according to their intentions?

Humans make tools by reproducing things we've seen and making them better.

And other animals don't pass down acquired knowledge to future generations.

In biological evolution: we are talking about the traits of a species.

In technological evolution: we can also talk about traits (e.g., a computer having wifi), but then we can distinguish between the processes that selected those traits.

Intentional and unintentional adaptations support different sets of useful predictions. We use the label 'biological evolution' in certain contexts and the label 'technological evolution' in others, and this distinction is useful: it makes certain things easier to discuss, namely (1) the extreme differences in the observed rates of change and (2) certain other predictions, e.g., the vastly increased capability of technology to make large jumps that break out of local maxima (where a small change is detrimental but a large change may be beneficial; see the sketch below).
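To make the local-maximum point concrete, here is a small sketch of our own (the landscape and step sizes are invented for illustration): a blind hill-climber whose small mutations strand it on a low peak, while large jumps can escape to a higher one.

```python
import math
import random

def fitness(x: float) -> float:
    """A landscape with a low hill at x = 2 and a higher hill at x = 8."""
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def evolve(x: float, step: float, generations: int = 10_000) -> float:
    """Blind search: keep a random mutation only if fitness does not drop."""
    for _ in range(generations):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

random.seed(0)
print(evolve(2.0, step=0.1))   # small mutations: stuck near the hill at 2
print(evolve(2.0, step=10.0))  # large jumps allowed: ends near the hill at 8
```

On this toy picture, a technological 'revolution' is just a mutation size that blind Darwinian search rarely has available.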

In Darwinian evolution: no such thing as revolutions, only evolutions. Darwinian evolution predicts unnecessary/inefficient intermediary steps that are not predicted by technological evolution. And Darwinian evolution is normally considered biological evolution.

The view in favor of the distinction seems to be that technological evolution originates in an intention. But stopping the causal chain at the intention can seem arbitrary from a certain point of view. The intention, after all, may just be a part of the event-causal order, and so it will have causes, and they will have causes, and so on. Thus, it seems to be an arbitrary stopping point from the perspective of causal explanation.

Friday, February 8, 2013

Notes on our fifth meeting


We started out with Patrick's nice comment on the blog about Nietzsche. You can read it below. This led to a discussion of related issues:

Is the Singularity a continuation of human existence? A particular human’s (i.e., Kurzweil’s) existence?

What constitutes 'fundamental' change? When is a change in degree a change in kind?

Are there limits to human progress and development?
It seems so: we can only think and extend our ideas in a human way, along a restricted range of options. These limits might not be known or knowable to us, but they are there all the same.

But: if we assume that we are essentially limited in certain ways, where do we draw the line? Before vaccines, we might have claimed that we are essentially subject to certain diseases. But now we do not think that.

One clear fundamental difference between humans and the Singularity: the Singularity will not be carbon-based.

But: there still must be matter that is a prerequisite for any existence. This is so even if the Singularity stands in a different relation to the matter that underlies it than we stand in to the matter that underlies us. (Is 'underlie' the right relation here?)
The Singularity can move through the space of information in a different way than we can move through physical space.

But this does not mean that the relation of Singularity to matter is different than that of human to matter. It seems to be a matter of salience.

One could envision, not the Singularity, but a collection of superhuman consciousnesses.

A difference between the Singularity's relation to its physical instantiation and my relation to my body: the Singularity can transfer to a different physical instantiation in a way that I cannot. When one portion of the computer network goes down, a very different portion can keep the consciousness that is the Singularity going; perhaps it even has been doing so all along (multiple, parallel realization).


Recall from the Chomsky piece that there are different conceptions of capturing underlying principles: behaviorism (copying) vs Chomsky (understanding). Perhaps Kurzweil is just using the copying conception. And perhaps he is getting mileage out of trading on the ambiguity between the two interpretations of 'capturing underlying principles'.

An objection to the input/output picture: it treats the mind as a black box.

Views that call for filling in the black box need not appeal to a soul.

One might claim that mental states are strongly historical: they do not supervene on mere time-slices of functional organization; allows that physical systems count as minds partly in virtue of their past (cf. Dennett).

This is, perhaps, illustrated by Jason’s sprinx case: one imagines a sprinx thousands of years before evolution creates one. Have I seen a sprinx?

Distinction: the content of a mental state vs. something's being a mental state. It is less controversial to claim that history is relevant to content (content externalism) than to say the same for something's being a mental state.

A claim in physics: the universe is a state function. For any given state, future states can be predicted from it in ignorance of past states. All future time moments would be predicted the same, regardless of the past states leading to the given state.
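A minimal sketch of our own (purely illustrative) of what the state-function claim amounts to: a deterministic update rule whose forecasts consult only the current state, never the history that produced it.

```python
def step(x: float) -> float:
    """One tick of a toy deterministic system (the logistic map)."""
    return 3.7 * x * (1.0 - x)

def predict(state: float, ticks: int) -> float:
    """Forecast from the current state alone; no past states are consulted."""
    for _ in range(ticks):
        state = step(state)
    return state

# However the system arrived at state 0.592, via one history or another,
# the forecast from that state is exactly the same.
print(predict(0.592, ticks=10))
```

If mental states are strongly historical, as suggested above, minds would resist being treated this way.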

Two issues:
1. The rise of the Singularity
2. Its enabling us to achieve immortality

There are many sub-issues for each of these two issues.

From a qualitative change in the intelligence alone, it does not follow that it cannot be us who survive.

In the personal identity literature, there are some who think it is not a matter of whether I continue, but whether there is the right kind of continuity for me to care about the one who continues.

Kurzweil is trying to live as long as he can, so that he can be around for the Singularity in order to achieve immortality.

If it is a leap to a new form of intelligence, one that transcends human limitations, then it couldn't be me, because it would be a different form of life. (Perhaps this was Russell's point from earlier, or in the background of what he said.)

Varia:
A different view of uploading: not me in a computer, but a child of mine in a computer.

A good distinction: logical possibility vs natural possibility

The way the brain works (parallel processing) vs the way the computer processes (logic trees, etc.)

Didn’t the IA Singularity already occur?

Thursday, January 31, 2013

Notes on Our Fourth Meeting

I thought our discussion of the Vinge and Moravec pieces was really great. Thank you everyone for such interesting comments and questions. Since we will be continuing with this topic for at least a week or two longer, I hope the discussion continues to excite everyone.

Here are some of the highlights, as I recall, from this past Tuesday:

Both pieces ended on what seemed like different notes: Moravec sounded like something of a mystic, along the lines of a Buddhist or Hindu, with a much more positive slant to what he was saying, whereas Vinge seemed to express a sense of impending doom, or at least a worried outlook.

Some questions about motivation: What would the motivation of a superintelligent being (of the sort that the Singularity is characterized to be) be like? Human and animal motivation is shaped in large part by the need to find food and take care of other basic needs. What about an artificial superintelligence?

Some questions about intelligence: How do we define intelligence? What characteristics are essential for a recognizable form of intelligence (e.g., creativity, inspiration, nostalgia)? Could the Singularity possess these characteristics? In what way is the form of intelligence characteristic of the Singularity supposed to be beyond our ken? The form of intelligence of a mature adult human is beyond the ken of a baby human. Is there supposed to be a difference in the case of the Singularity's being beyond our ken? What is this difference?

Some questions pertaining to our supposed inability to predict what the Singularity would be like:
1. With a new sort of intelligence, the Turing test won't apply. What sort of continuity is there between them?
2. Epistemological claim about our predictions: there will be an event beyond which we cannot predict where things will go. Might the ignorance be connected to question 1?
3. What makes the Singularity unique? We cannot predict future theories of our own even now. So what's the difference between the uncertainties we face every day and the ones this possibility presents?

How is the concept of the Singularity already a projection into the future of what we already know? How would we recognize it? Might it already exist without our knowing it yet?

On some conceptions, the Singularity seems to transcend individuality. Is this a difference between our conception of ourselves as humans and the kind of entity that the Singularity is supposed to be? Does it factor into issues about the desirability of the coming of the Singularity?

Why the Singularity might scare us: a future where people aren't running things anymore is fundamentally different from our present. We might no longer be at the center of things. AI would be scary because it has no continuity with our current existence. A future superintelligence might be hostile toward humans.

But is the Singularity to be feared? Would a superintelligence (necessarily, or most likely) respect biodiversity, the rights of other creatures, and so on? Would it recognize moral values? Would it be a moral exemplar?

The contrast between Artificial Intelligence (AI) and Intelligence Amplification (IA), in Vinge, was very interesting: Which is the more plausible route to the Singularity? Which is the more desirable, from the perspective of our own well-being as humans? How discontinuous would the Singularity be with human existence if it arose through IA, as opposed to through more traditional AI? Does IA lead to something like a hive mind, or to a superintelligence that takes a cue from the Gaia hypothesis?

Would the Singularity (or any other superintelligence) become bored? What characteristics might cause or prevent this? What sort of immortality would it have? What is the importance, for its longevity prospects, of the fact that even a superintelligence has a physical base?


Some different issues:
1. Could there be a different kind of entity that is super-intelligent?
2. Could it be immortal?
3. Could I be immortal in the sense that I have these super-enhanced capabilities?

An irony: Psychology teaches us that those who are deeply religious live longest, so, ironically, the people who live the longest would not believe in a Singularity (on the assumption that this is not something that the religious believe in).

Nietzsche came up a few times: How does he describe the Ubermensch? How does the Ubermensch relate to the Singularity, if at all?

The notion that it might be our function to enable the development of the Singularity also came up: What sense of 'function' is in play here? What does this imply about our relationship to the Singularity (causal, normative)? What about the Singularity's relationship to us (ancestor worship, fuel)?

Sunday, January 27, 2013

Notes on Our Third Meeting

We discussed an interview with Noam Chomsky, where he articulated some criticisms of the current state of several fields, including neuroscience, connectomics and AI.

One distinction we drew was between these questions:
1. Is the mimicking of some behavior produced by statistical analysis ever going to be fully accurate?
2. Even if it were, would it provide us with an understanding of the internal processing of the relevant agent of this behavior?

It seemed that Chomsky's main criticism was of the latter kind. Predicting behavior on the basis of statistical analysis does not provide understanding of why that behavior was actually produced--it does not provide insight into the general principles according to which the relevant system functions and the ways in which those principles are instantiated. (This criticism draws on Marr's three levels for understanding a complex biological organism.)
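A toy contrast of our own (not Chomsky's example): a bigram model that predicts the next word purely from co-occurrence counts. It can be predictively useful while encoding nothing about grammar, meaning, or why the sequence was produced.

```python
from collections import Counter, defaultdict

# "Train" on a tiny corpus by counting which word follows which.
corpus = "the dog chased the cat the cat chased the mouse".split()
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor seen in training."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))      # 'cat': a statistically decent guess
print(predict_next("chased"))   # 'the': still no grasp of why
```

In Marr's terms, such a model may predict behavior without saying anything about the underlying algorithm or its implementation.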

We also distinguished between these four projects:

1. Create a machine that performs a function that we used to need humans for.
2. Create something that can perform the full range of human functions at least as well as humans can.
3. Create something that does things the way humans do.
4. Create something that does things exactly the way a specific human does.

Each project is much more difficult than the one that came before it. And each project has an intelligible goal. But we might disagree about the relative values of these goals.

We also talked about the goal of unification in science: a single theory to understand everything.


What is Chomsky's notion of success in science?
It does not seem to be mere predictive success, but rather predictive success across a wide range of contexts. This would rule out the sort of predictive success achieved by big data projects (such as the fictional one he characterizes in the case of the video camera looking out the window) and help to motivate the appeal he finds in a concept of science as seeking to understand the general principles guiding the overt behavior of different systems.

But what about outliers? How do general principles capture their behavior?

What difference does it make to this notion of success whether behavior is instinctual or not?



We also discussed Searle's Chinese Room thought experiment and the objection it raises for strong AI (and for functionalism about the mind/consciousness). [Searle coins "strong AI" for the view that an appropriately programmed computer would not merely simulate a mind but would literally have one.] Roughly (and we will read this piece later on, hopefully), the idea is that the mere ability to pass the Turing test (to produce behavioral output that is indistinguishable, by other humans, from a human being's) is not sufficient for understanding (or consciousness).
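A toy rendering of the setup (our own, and much cruder than Searle's): a 'room' that answers by rulebook lookup, passing a narrow behavioral test with no comprehension inside.

```python
# The "rulebook": match incoming symbols, emit the prescribed reply.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫约翰。",  # "What's your name?" -> "I'm John."
}

def room(symbols: str) -> str:
    """Look up the squiggles; emit the squoggles. Nothing is understood."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # fluent output from pure symbol manipulation
```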
One observation was that not all AI is "strong AI" in Searle's sense.

A question: Might Searle's argument suggest that predictive success across all contexts may still leave out something essential? That is, might we be able to articulate general principles that govern the mental behavior characteristic of us, and be able to specify which areas of the brain instantiate these algorithms, and yet still not understand consciousness (say, because consciousness essentially depends on instantiating these algorithms in the type of material that makes up our brains)?

Thursday, January 17, 2013

Note on our second meeting

Last meeting we talked about connectomics--a research program that aims to map the neural structure of the human brain. There seemed to be widespread skepticism within the group about the ability of connectomics to contribute to the goal of uploading a human mind onto a computer. Three key complaints were:

1. It is not clear that it would be feasible to handle the vast amount of data required to describe the complete neural structure of the human brain (see the back-of-envelope sketch after this list).

2. It is not clear that describing the complete neural structure of the human brain is sufficient to describe the complete functional structure of the human brain (i.e., it leaves out glia, leaves out other important components of the organism of which the brain is a part, ignores plasticity and development over time, etc.).

3. It is not clear that describing the complete functional structure of the human brain is sufficient to describe the human mind.
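Complaint 1 can be given a rough scale. Here is a back-of-envelope sketch using commonly cited order-of-magnitude figures (our assumptions, not numbers from the discussion):

```python
# Rough size of a bare human connectome: neuron count and synapses per
# neuron are commonly cited orders of magnitude; bytes per synapse is a
# guessed minimal encoding (target id plus connection strength).

NEURONS = 8.6e10           # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron
BYTES_PER_SYNAPSE = 8      # say, 4-byte target id + 4-byte weight

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e15:.1f} petabytes")  # ~6.9 PB, connectivity only
```

And that is before recording any of the dynamics that complaints 2 and 3 point to.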

Apart from (but sometimes connected to) these worries were the following issues:

-A neural map of the brain does not by itself describe the interactions between the different areas of the brain, the rules that govern these interactions, or the purposes served by these interactions.

-A neural map of a given brain at a given time does not describe the ways that brain has changed in the past or the ways it is disposed to change in the future (i.e., it is a time-slice snapshot of an entity that is in flux).

-The human brain is connected to many other parts of the human organism, and these connections are important to understanding the brain's functioning and development.

-A map of the neural structure of the brain does not describe the ways that individual neurons that are a part of the map function or change over time (e.g., whether they are "on" or "off," what changes they may be subject to over time, etc.).

-Connectomics is modeled on genomics, and one lesson from the latter is that we do not know much at all about the related issues. While there is a sense in which we are our genes, it is not clear what lessons to draw from this. Similarly, even while there may be a sense in which we are our connectomes, it remains unclear what lessons to draw from this. In particular, it remains unclear how the truth of this claim might underwrite mind-uploading of the sort that might allow for continued human life.

-It is unclear whether we should aim at artificially modeling the human mind by trying to copy actual human minds (as connectomics suggests) or by trying to develop increasingly sophisticated artificial intelligences that resemble human intelligence.

If there are points I have missed, please feel free to add them in the comments. Also, if there is something more you would like to say, please do so in the comments here.

Monday, January 14, 2013

Notes on Our First Meeting

Sorry for posting this so late in the week between our meetings. I have been caught up with other things that took more time than expected, and this first meeting was mostly a matter of laying out issues to discuss in more detail as we go on. In any case, here are some of the main issues raised in our discussion this past Tuesday:

-The self: we talked a lot about what constitutes the self (e.g., memories, genes), whether this may be contextual and/or socially constructed, and what constraints there may be on a view about the constitution of the self (e.g., numerical identity).

-Personal identity: related to issues about the self, we talked about the constraints on personal identity over time, difficult cases (e.g., fission), the importance of personal identity (e.g., as opposed to some other sense of self that might persist over time), the role of subjective experience in determining whether one persists (e.g., the experience of being trapped in what used to seem like one's own body), the importance of consciousness and whether consciousness requires an organic substrate.

-Conceptions of the future: we talked about the prevalence of dystopias in science fiction treatments of the future, whether certain technologies are coherent (e.g., the way the visitation room worked in the film) or desirable (e.g., the jewel in the short story).

These seem to me to be the main themes of our discussion, ones we came back to multiple times. The general desirability of immortality was also mentioned, and the role and relevance of normal human development was implicit in some of the discussion (e.g., the case of identical twins, the normal development of and changes in the self over time). Perhaps I missed something. Feel free to add to the list.

See you tomorrow,

Ben

Thursday, January 3, 2013

Some Proposed Readings

This reading group will span both winter and spring quarters. (But don't panic! You are not required to stick around if you can't or don't want to.)

Here is a list of proposed readings (with links to the text where possible) that would span 18 sessions. We can add, delete, replace, restructure as we see fit. And we can hold more or fewer sessions (as things are, we would be able to take 10th week and finals week off each quarter). The list is broken down into several themes. Let's discuss this at our first meeting.



1. Introduction:
“Life Begins at Rewirement” (short film)
Futurestates.tv

“Learning to Be Me”
Egan pdf

Connectomics:
2. “The Strange Science of Immortality”

“Mapping the Human Connectome”

3. “Chomsky on Where Artificial Intelligence Went Wrong”

The Singularity:
4. “Today’s Computers, Intelligent Machines and Our Future”
http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html

“What is the Singularity?”

5. “Achieving the Software of Human Intelligence: How to Reverse Engineer the Human Brain”
Kurzweil(1) pdf

6. “The Singularity: A Philosophical Analysis”
consc.net/papers/singularity.pdf

Functionalism:
7. Selections from The Conscious Mind
Chalmers pdf

8. “Minds, Brains, and Programs”
https://docs.google.com/viewer?a=v&q=cache:sOMI4SIuVNkJ:www.class.uh.edu/phil/garson/MindsBrainsandPrograms.pdf+&hl=en&gl=us&pid=bl&srcid=ADGEESh-jdV6MGa5c93Eqwx5YLqn8CjZdARsmVe01P9B-pI7uYGTQebx_V7gXt3To_MGA6hGR2769QerLTA0i_QZjKF_X4ScBLpojCUEYPxp8Xp0mOsnnoFkfa7aWeIj6l-VwbmMKkjU&sig=AHIEtbTc__qF2IQDdwYYItHbXofTyPDmQA

“Kurzweil’s Chinese Room”
Kurzweil(2) pdf

9. “Human Immortality”

10. “Are You Living in a Computer Simulation?”

Personal Identity and Human Agency:
11. “Personal Identity”

12. “Moral Responsibility and the Self”
Shoemaker pdf

13. “Where Am I?”
http://www.newbanner.com/SecHumSCM/WhereAmI.html

14. “Bodies, Selves”

The Desirability of Immortality:
15. “The Immortal”
Borges pdf

16. “Why Immortality Is Not So Bad”