Thursday, January 31, 2013

Notes on Our Fourth Meeting

I thought our discussion of the Vinge and Moravec pieces was really great. Thank you everyone for such interesting comments and questions. Since we will be continuing with this topic for at least a week or two longer, I hope the discussion continues to excite everyone.

Here are some of the highlights, as I recall, from this past Tuesday:

Both pieces ended on what seemed like different notes: Moravec sounded something like a mystic, along Buddhist or Hindu lines, with a much more positive slant to what he was saying, whereas Vinge seemed to express a sense of impending doom, or at least a worrisome outlook.

Some questions about motivation: What would the motivation of a superintelligent being (of the sort that the Singularity is characterized to be) be like? Human and animal motivation is shaped in large part by the need to find food and take care of other basic needs. What about an artificial superintelligence?

Some questions about intelligence: How do we define intelligence? What characteristics are essential for a recognizable form of intelligence (e.g., creativity, inspiration, nostalgia)? Could the Singularity possess these characteristics? In what way is the form of intelligence characteristic of the Singularity supposed to be beyond our ken? The form of intelligence of a mature adult human is beyond the ken of a baby human. Is there supposed to be a difference in the case of the Singularity's being beyond our ken? What is this difference?

Some questions pertaining to our supposed inability to predict what the Singularity would be like:
1.     With a new sort of intelligence, the Turing test won’t apply. What sort of continuity is there between it and human intelligence?
2.     Epistemological claim about our predictions: there will be an event beyond which we cannot predict where things will go. Might the ignorance be connected to question 1?
3.     What makes the Singularity unique? We cannot predict our own future theories even now. So what’s the difference between the uncertainties we face every day and the ones this possibility presents?

How is the concept of the Singularity already a projection into the future of what we already know? How would we recognize it? Might it already exist, and we don’t know yet?

On some conceptions, the Singularity seems to transcend individuality. Is this a difference between our conception of ourselves as humans and the kind of entity that the Singularity is supposed to be? Does it factor into issues about the desirability of the coming of the Singularity?

Why the Singularity might scare us: A future where people aren’t running things anymore is fundamentally different from our present. We might no longer be at the center of things. AI would be scary because it has no continuity with our current existence. A future superintelligence might be hostile toward humans.

But is the Singularity to be feared? Would a superintelligence (necessarily, most likely) respect biodiversity, the rights of other creatures, and so on? Would it recognize moral values? Would it be a moral exemplar?

The contrast between Artificial Intelligence (AI) and Intelligence Amplification (IA), in Vinge, was very interesting: Which is the more plausible route to the Singularity? Which is the more desirable, from the perspective of our own well-being as humans? How discontinuous would the Singularity be with human existence if it arose in this way, as opposed to through more traditional AI? Does IA lead to something like a hive-mind or a superintelligence that takes a cue from the Gaia hypothesis?

Would the Singularity (or any other superintelligence) become bored? What characteristics might cause or prevent this? What sort of immortality would it have? What importance does the fact that even a superintelligence has a physical base have with respect to its longevity prospects?


Some different issues:
1.     Could there be a different kind of entity that is super-intelligent?
2.     Could it be immortal?
3.     Could I be immortal in the sense that I have these super-enhanced capabilities?

An irony: Psychology teaches us that those who are deeply religious live longest, so, ironically, the people who live the longest would not believe in a Singularity (on the assumption that this is not something that the religious believe in).

Nietzsche came up a few times: How does he describe the Ubermensch? How does the Ubermensch relate to the Singularity, if at all?

The notion that it might be our function to enable the development of the Singularity also came up: What sense of 'function' is in play here? What does this imply about our relationship to the Singularity (causal, normative)? What about the Singularity's relationship to us (ancestor worship, fuel)?

Sunday, January 27, 2013

Notes on Our Third Meeting

We discussed an interview with Noam Chomsky, where he articulated some criticisms of the current state of several fields, including neuroscience, connectomics and AI.

One distinction we drew was between these questions:
1. Is the mimicking of some behavior produced by statistical analysis ever going to be fully accurate?
2. Even if it were, would it provide us with an understanding of the internal processing of the relevant agent of this behavior?

It seemed that Chomsky's main criticism was of the latter kind. Predicting behavior on the basis of statistical analysis does not provide understanding of why that behavior was actually produced--it does not provide insight into the general principles according to which the relevant system functions and the ways in which those principles are instantiated. (This criticism draws on Marr's three levels for understanding a complex biological organism.)

We also distinguished between these four projects:

1.     Create a machine that performs a function that we used to need humans for.
2.     Create something that can perform the full range of human functions at least as well as humans can.
3.     Create something that does things the way humans do.
4.     Create something that does things exactly the way a specific human does.

Each project is much more difficult than the one that came before it. And each project has an intelligible goal. But we might disagree about the relative values of these goals.

We also talked about the goal of unification in science: a single theory to understand everything.


What is Chomsky's notion of success in science?
It does not seem to be mere predictive success, but rather predictive success across a wide range of contexts. This would rule out the sort of predictive success achieved by big data projects (such as the fictional one he characterizes in the case of the video camera looking out the window) and help to motivate the appeal he finds in a concept of science as seeking to understand the general principles guiding the overt behavior of different systems.

But what about outliers? How do general principles capture their behavior?

What difference does it make to this notion of success whether behavior is instinctual or not?



We also discussed Searle's Chinese Room thought experiment and the objection it raises for strong AI (and functionalism about the mind/consciousness). [Searle coins "strong AI" to refer to the program of trying to understand our minds by creating artificial intelligences that can behave like us.] Roughly (and we will read this piece later on, hopefully), the idea is that the mere ability to pass the Turing test (to have a behavioral output that is indistinguishable (by other humans) from a human being's) is not sufficient for understanding (or consciousness).
          One observation was that not all AI is "strong AI" in Searle's sense.

A question: Might Searle's argument suggest that predictive success across all contexts may still leave out something essential? That is, might we be able to articulate general principles that govern the mental behavior characteristic of us and be able to specify what areas of the brain instantiate these algorithms and yet still not understand consciousness (say, because consciousness essentially depends on instantiating these algorithms in the type of material that makes up our brains)?

Thursday, January 17, 2013

Note on our second meeting

Last meeting we talked about connectomics--a research program that aims to map the neural structure of the human brain. There seemed to be widespread skepticism within the group about the ability of connectomics to contribute to the goal of uploading a human mind onto a computer. Three key complaints were:

1. It is not clear that it would be feasible to handle the vast amount of data required to describe the complete neural structure of the human brain.

2. It is not clear that describing the complete neural structure of the human brain is sufficient to describe the complete functional structure of the human brain (i.e., it leaves out glia, leaves out other important components of the organism of which the brain is a part, ignores plasticity and development over time, etc.).

3. It is not clear that describing the complete functional structure of the human brain is sufficient to describe the human mind.

Apart from (but sometimes connected to) these worries were the following issues:

-A neural map of the brain does not by itself describe the different interactions between the different areas of the brain, the rules that govern these interactions, or the purposes served by these interactions.

-A neural map of a given brain at a given time does not describe the ways that brain has changed over time in the past nor the ways it is disposed to change in the future (i.e., it is a time-slice snapshot of an entity that is in flux).

-The human brain is connected to many other parts of the human organism, and these connections are important to understanding the brain's functioning and development.

-A map of the neural structure of the brain does not describe the ways that individual neurons that are a part of the map function or change over time (e.g., whether they are "on" or "off," what changes they may be subject to over time, etc.).

-Connectomics is modeled on genomics, and one lesson from the latter is that we do not know much at all about the related issues. While there is a sense in which we are our genes, it is not clear what lessons to draw from this. Similarly, even while there may be a sense in which we are our connectomes, it remains unclear what lessons to draw from this. In particular, it remains unclear how the truth of this claim might underwrite mind-uploading of the sort that might allow for continued human life.

-It is unclear whether we should aim at artificially modelling the human mind by trying to copy actual human minds (as connectomics suggests) or by trying to develop increasingly sophisticated artificial intelligences that resemble human intelligence.

If there are points I have missed, please feel free to add them in the comments. Also, if there is something more you would like to say, please do so in the comments here.

Monday, January 14, 2013

Notes on Our First Meeting

Sorry for posting this so late in the week between our meetings. I have been caught up with other things that took more time than expected, and this first meeting was mostly a matter of laying out issues to discuss in more detail as we go on. In any case, here are some of the main issues raised in our discussion this past Tuesday:

-The self: we talked a lot about what constitutes the self (e.g., memories, genes), whether this may be contextual and/or socially constructed, and what constraints there may be on a view about the constitution of the self (e.g., numerical identity).

-Personal identity: related to issues about the self, we talked about the constraints on personal identity over time, difficult cases (e.g., fission), the importance of personal identity (e.g., as opposed to some other sense of self that might persist over time), the role of subjective experience in determining whether one persists (e.g., the experience of being trapped in what used to seem like one's own body), the importance of consciousness and whether consciousness requires an organic substrate.

-Conceptions of the future: we talked about the prevalence of dystopias in science fiction treatments of the future, whether certain technologies are coherent (e.g., the way the visitation room worked in the film) or desirable (e.g., the jewel in the short story).

These seem to me to be the main themes of our discussion, ones we came back to multiple times. The general desirability of immortality was also mentioned, and the role and relevance of normal human development was implicit in some of the discussion (e.g., the case of identical twins, the normal development of and changes in the self over time). Perhaps I missed something. Feel free to add to the list.

See you tomorrow,

Ben

Thursday, January 3, 2013

Some Proposed Readings

This reading group will span both winter and spring quarters. (But don't panic! You are not required to stick around if you can't or don't want to.)

Here is a list of proposed readings (with links to the text where possible) that would span 18 sessions. We can add, delete, replace, restructure as we see fit. And we can hold more or fewer sessions (as things are, we would be able to take 10th week and finals week off each quarter). The list is broken down into several themes. Let's discuss this at our first meeting.



1. Introduction:
“Life Begins at Rewirement” (short film)
Futurestates.tv

“Learning to Be Me”
Egan pdf

Connectomics:
2. “The Strange Science of Immortality”

“Mapping the Human Connectome”

3. “Chomsky on Where Artificial Intelligence Went Wrong”

The Singularity:
4. “Today’s Computers, Intelligent Machines and Our Future”
http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html

“What is the Singularity?”

5. “Achieving the Software of Human Intelligence: How to Reverse Engineer the Human Brain”
Kurzweil(1) pdf

6. “The Singularity: A Philosophical Analysis”
consc.net/papers/singularity.pdf

Functionalism:
7. Selections from The Conscious Mind
Chalmers pdf

8. “Minds, Brains, and Programs”
https://docs.google.com/viewer?a=v&q=cache:sOMI4SIuVNkJ:www.class.uh.edu/phil/garson/MindsBrainsandPrograms.pdf+&hl=en&gl=us&pid=bl&srcid=ADGEESh-jdV6MGa5c93Eqwx5YLqn8CjZdARsmVe01P9B-pI7uYGTQebx_V7gXt3To_MGA6hGR2769QerLTA0i_QZjKF_X4ScBLpojCUEYPxp8Xp0mOsnnoFkfa7aWeIj6l-VwbmMKkjU&sig=AHIEtbTc__qF2IQDdwYYItHbXofTyPDmQA

“Kurzweil’s Chinese Room”
Kurzweil(2) pdf

9. “Human Immortality”

10. “Are You Living in a Computer Simulation?”

Personal Identity and Human Agency:
11. “Personal Identity”

12. “Moral Responsibility and the Self”
Shoemaker pdf

13. “Where Am I?”
http://www.newbanner.com/SecHumSCM/WhereAmI.html

14. “Bodies, Selves”

The Desirability of Immortality:
15. “The Immortal”
Borges pdf

16. “Why Immortality Is Not So Bad”


Wednesday, January 2, 2013

Welcome!

This blog will provide a space for us to communicate with each other outside of our regular meetings.

After each meeting I plan on posting some of the main points that came up in our discussion. The hope is that those interested in doing so can continue the discussion here, by commenting on the post for that week. (It should go without saying (but I will make it explicit anyway) that online discussions should meet the same standards of decency, etc. as our in-person discussions.)

I will also post announcements for the group here, including announcements about the reading schedule. The first such post lists some proposed readings for the group. This is really just a proposal. I would love feedback on what to add to the list and what to replace. I think we should also take this to be something of a fluid reading list, reserving the right to change things as we go along.

I am really looking forward to this reading group. I think we have an excellent group of people coming together to discuss an interesting set of topics. It should be fun and enlightening.