Thursday, July 21, 2011

We already have good broad theories of intelligence

In Taking "Singularity" Apart, Stuart Staniford writes:
"The idea that intelligence will accelerate in future does not follow from the possibility of developing human-level machine intelligence. It assumes a completely unproven 'law of intelligence' that the smarter you are, the easier it is to produce an intelligence even greater. Maybe it works the other way altogether - the smarter you are, the more complex and difficult it is to produce an even greater intelligence. Perhaps there's some fundamental limit to how intelligent it's possible for any agent to be. We have no clue. We haven't so far seen even a single generation of intelligences (us) producing a more intelligent entity, so the whole intelligence explosion idea consists of extrapolating from less than one data point. It's utter speculation"
I'd like to make a distinction between the hypothetical 'law of intelligence' Stuart states and the idea that intelligence is lawful: that there are patterns linking the makeup of an entity to how good it will be at some particular mental task. In a simplified case, you could put certain properties of an entity's brain into a set of equations and get back scores for its learning, focus, creativity, and so on.
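
To make "lawful" concrete, here is a toy sketch in Python. Every property, weight, and the linear form itself is invented for illustration; the claim is only that some such mapping could exist and be tested, not that this one is right.

# Toy illustration of a hypothetical "law of intelligence":
# a mapping from properties of an entity's brain to scores on
# mental tasks. The properties, weights, and linear form are
# all made up; the point is only that such a mapping would be
# lawful and checkable.

def predicted_scores(speed, memory, connectivity):
    """Map (hypothetical) brain properties to (hypothetical) task scores."""
    return {
        "learning":   0.5 * speed + 0.3 * memory + 0.2 * connectivity,
        "focus":      0.7 * speed + 0.1 * memory + 0.2 * connectivity,
        "creativity": 0.2 * speed + 0.3 * memory + 0.5 * connectivity,
    }

print(predicted_scores(speed=1.0, memory=0.8, connectivity=0.6))
# A lawful theory means equations like these exist and can be
# tested against reality, not that this particular form is right.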

It is this kind of 'Laws of Intelligence', laws as scientific theory, that would lead to higher intelligences being better able to apply those laws. I think there is a vast amount of evidence that intelligence is lawful in exactly this way.

One law of intelligence is that on certain measures you can score higher just by using more raw speed or memory. You might not paint a better painting or understand a deeper metaphor, but with a proper architecture you can certainly make more paintings, consider more metaphors, and learn faster. Even with no further understanding of its own intelligence, if a digital being could copy itself to another system, it could do twice the work.
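
A crude sketch of that point in Python: the task below is an invented stand-in for any fixed unit of mental work, and the only "improvement" is running more copies of the same code in parallel.

import time
from multiprocessing import Pool

def unit_of_work(i):
    # Stand-in for any fixed mental task: consider a metaphor,
    # evaluate a move, and so on. No insight required.
    return sum(k * k for k in range(100_000))

if __name__ == "__main__":
    tasks = range(200)
    for copies in (1, 2, 4):
        start = time.perf_counter()
        with Pool(copies) as pool:
            pool.map(unit_of_work, tasks)
        print(copies, "copies:", round(time.perf_counter() - start, 2), "seconds")
    # Same algorithm, no new understanding: n copies do roughly
    # n times the work per unit of wall-clock time, up to the
    # limits of the hardware they run on.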

This alone pushes the questions of lawfulness and improvement via intelligence into the realm of manufacturing. It seems clear that 100 engineers would be better than 1 at designing systems that acquire resources, process them, and finally produce the designers themselves faster.

Going back to raw algorithms, the degree to which higher intelligence helps design still higher intelligence is likely to be domain-specific. Designing a version of yourself that is better at genetic engineering is not that different from writing an automatic tool that solves certain molecular problems quickly, in that both rely on the lawfulness of the problem. It will probably be much easier to improve an AI's ability to program than to improve its ability to paint, as coding is very lawful and painting less so.
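
One way to see why coding is so lawful: whether a rewrite is an improvement can be checked mechanically. A minimal sketch in Python, with an invented baseline and candidate standing in for whatever rewrite an AI might propose:

import time

def baseline(n):
    # Sum 0..n-1 the slow way.
    total = 0
    for k in range(n):
        total += k
    return total

def candidate(n):
    # Proposed rewrite: closed form for the same sum.
    return n * (n - 1) // 2

def improves(old, new, n=1_000_000):
    # In a lawful domain, "better" is mechanically checkable:
    # the rewrite must give the same answer and run faster.
    if new(n) != old(n):
        return False
    t0 = time.perf_counter(); old(n); t_old = time.perf_counter() - t0
    t0 = time.perf_counter(); new(n); t_new = time.perf_counter() - t0
    return t_new < t_old

print(improves(baseline, candidate))  # True: accept the rewrite.
# There is no comparable mechanical test for "this painting is better."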

The best evidence that greater intelligence leads to greater ability to create intelligence is the basis for Stuart's own belief that "while I cannot say it's a certainty, it appears to be more likely than not that machines will gradually approach human levels of intelligence in the present century." It took billions of years for mindless natural selection to produce human-level intelligence; it may take us only a few thousand. Already today it appears that a thinking thing can look at a program and find ways to make it smarter, or find whole new ways to solve the same problem. More specifically, it also appears that those who score higher on certain metrics are better at this than others.

The trends we see, and projections of improved future AIs, only make sense if intelligence is something that can be figured out and constructed. A predictable explosion of figuring-out is just a natural consequence of this.
