Sunday, January 27, 2013

Notes on Our Third Meeting

We discussed an interview with Noam Chomsky in which he articulated some criticisms of the current state of several fields, including neuroscience, connectomics, and AI.

One distinction we drew was between these questions:
1. Can the mimicking of some behavior produced by statistical analysis ever be fully accurate?
2. Even if it could, would it provide us with an understanding of the internal processing of the agent that produces the behavior?

It seemed that Chomsky's main criticism was of the latter kind. Predicting behavior on the basis of statistical analysis does not provide understanding of why that behavior was actually produced; it gives no insight into the general principles according to which the relevant system functions or the ways in which those principles are instantiated. (This criticism draws on Marr's three levels for understanding a complex biological system: the computational level, which says what problem the system is solving; the algorithmic level, which says by what procedure; and the implementational level, which says how that procedure is physically realized.)
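To make the distinction vivid, here is a small toy sketch in Python (our own illustration, not anything from the interview or from Marr): two procedures whose input-output behavior is identical, so that no record of their overt behavior, however complete, could tell you which algorithm a given system is actually running.

```python
# Toy illustration (ours, not from the interview or from Marr): two systems
# with identical input-output behavior but different internal algorithms.
# A purely behavioral or statistical description cannot tell them apart,
# which is the gap between what a system computes and how it computes it.

def sort_by_insertion(xs):
    """Builds the result by inserting each item into its place, one at a time."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result


def sort_by_merging(xs):
    """Recursively splits the list and merges the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2]
    # Identical observable behavior, different internal processing.
    assert sort_by_insertion(data) == sort_by_merging(data) == sorted(data)
    print("Behaviorally indistinguishable output:", sort_by_merging(data))
```

The sketch is only meant to show that even a complete record of behavior underdetermines the algorithm, let alone its physical implementation.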

We also distinguished between these four projects:

1. Create a machine that performs a function that we used to need humans for.
2. Create something that can perform the full range of human functions at least as well as humans can.
3. Create something that does things the way humans do.
4. Create something that does things exactly the way a specific human does.

Each project is much more difficult than the one that came before it. And each project has an intelligible goal. But we might disagree about the relative values of these goals.

We also talked about the goal of unification in science: a single theory to understand everything.


What is Chomsky's notion of success in science?
It does not seem to be mere predictive success, but rather predictive success across a wide range of contexts. This would rule out the sort of predictive success achieved by big-data projects (such as his hypothetical example of a video camera pointed out the window, with statistical analysis run over the footage) and help to motivate the appeal he finds in a conception of science as seeking the general principles that guide the overt behavior of different systems.
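A hedged toy illustration of that contrast (our own, not Chomsky's; the falling-body scenario, the noise level, and the polynomial degree are arbitrary choices for the sketch): a flexible statistical fit can predict well in the context it was trained on yet fail badly in a new one, while a model built on a general principle keeps working.

```python
# Sketch (assumptions: a falling body, noisy measurements over 0-2 seconds,
# and an arbitrary degree-7 polynomial standing in for a "big data" model).
import numpy as np

g = 9.8  # general principle: distance fallen = 0.5 * g * t**2

rng = np.random.default_rng(0)
t_observed = np.linspace(0.0, 2.0, 30)                       # observed context
d_observed = 0.5 * g * t_observed**2 + rng.normal(0, 0.5, t_observed.size)

# Statistical model: a high-degree polynomial fit to the observations.
coeffs = np.polyfit(t_observed, d_observed, deg=7)

t_new = 4.0                                                   # a new context
print("true distance:        ", 0.5 * g * t_new**2)
print("principled prediction:", 0.5 * g * t_new**2)
print("curve-fit prediction: ", np.polyval(coeffs, t_new))    # usually far off
```

Within the observed range both models predict well; the difference only shows up when the context changes.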

But what about outliers? How do general principles capture their behavior?

What difference does it make to this notion of success whether behavior is instinctual or not?



We also discussed Searle's Chinese Room thought experiment and the objection it raises for strong AI (and for functionalism about the mind/consciousness). [Searle coins "strong AI" for the view that an appropriately programmed computer would not merely simulate a mind but would literally have one, and that such programs would thereby explain our own understanding.] Roughly (and we will read this piece later on, hopefully), the idea is that the mere ability to pass the Turing test, i.e., to produce behavioral output that other humans cannot distinguish from a human being's, is not sufficient for understanding (or consciousness).
One observation was that not all AI is "strong AI" in Searle's sense.
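For those who have not read the paper yet, a very rough toy sketch of the intuition (ours, not Searle's own setup; the phrases and canned replies are invented for the illustration): the room behaves like a program that matches incoming symbol strings against a rule book and returns whatever the book dictates.

```python
# Toy sketch of the Chinese Room intuition (our illustration, not Searle's
# text). The "rule book" below is a lookup table: incoming strings of symbols
# are matched and a canned string is returned. Nothing in the procedure
# involves understanding what any of the symbols mean.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",     # "What's your name?" -> "I have no name."
}

def chinese_room(incoming: str) -> str:
    # Pure symbol manipulation: match the squiggles, return the prescribed squiggles.
    return RULE_BOOK.get(incoming, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # a plausible-looking reply, with no understanding anywhere
```

Of course a lookup table would not actually pass the Turing test; the sketch only separates rule-following over symbols from understanding them.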
         

A question: Might Searle's argument suggest that predictive success across all contexts may still leave out something essential? That is, might we be able to articulate the general principles that govern the mental behavior characteristic of us, and to specify which areas of the brain instantiate the corresponding algorithms, and yet still not understand consciousness (say, because consciousness essentially depends on those algorithms being instantiated in the kind of material that makes up our brains)?
