

Potential for computers to be as intelligent as humans

jrohwer
Here is the paper I mentioned in class during our discussion of whether computers will ever be able to achieve behavioral complexity comparable to that of humans: "When Will Computer Hardware Match the Human Brain?" by Hans Moravec (1997). "This paper describes how the performance of AI machines tends to improve at the same pace that AI researchers get access to faster hardware. The processing power and memory capacity necessary to match general intellectual performance of the human brain are estimated. Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s." That soon? It sounds a little crazy, but the general idea seems pretty solid to me: eventually, in the not-too-distant future (maybe not quite as soon as 2022), computers will rival or surpass human intelligence. I would be interested to know whether Moravec can convince Professor Grobstein.

Comments

PaulGrobstein

Intrigued by the article, but no, not convinced, for a variety of reasons, including some we talked about in the last couple of sessions. Moravec's basic unit of measurement is the MIPS, i.e., a million instructions processed per second by a CPU, a unit that computer scientists themselves have in recent years alternatively characterized as "Meaningless Indication of Processor Speed." Which is to say that even in a contemporary serial computer, it is recognized that the speed of the processor itself is frequently not the most significant variable in actual performance.

What's much more important, though, is that the brain is NOT a serial computer. It's a parallel network of something in the vicinity of 10^12 neurons whose individual processing speed is ... hard to be certain of (is it digital or analogue? running on discrete or continuous time?). For fun, one might guess that each neuron is processing 10^6 "instructions" (?) per second, since many can generate a million signals a second. On this estimate, the nervous system as a whole is processing 10^18 instructions per second, or roughly 10^12 MIPS. This is substantially (to put it mildly) greater than Moravec's estimate of 10^8 MIPS. The discrepancy arises because Moravec's estimate is based on the number of instructions a serial computer needs to do a particular task in the retina (one of many), which he then presumes to be characteristic of any equivalent volume of not only the retina but the entire nervous system.
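To make the arithmetic explicit, here is a minimal back-of-envelope sketch in Python; the neuron count and per-neuron rate are the for-fun guesses above, not measurements, and Moravec's figure is the 10^8 MIPS estimate mentioned above:

```python
# Back-of-envelope comparison; the figures are rough guesses, not measurements.
neurons = 1e12                  # ~10^12 neurons in the nervous system
instr_per_neuron_per_s = 1e6    # "for fun" guess: 10^6 "instructions" per neuron per second

brain_instr_per_s = neurons * instr_per_neuron_per_s   # 10^18 instructions per second
brain_mips = brain_instr_per_s / 1e6                   # ~10^12 MIPS

moravec_mips = 1e8              # Moravec's estimate for the whole brain, in MIPS

print(f"Whole-brain guess:  {brain_mips:.0e} MIPS")
print(f"Moravec's estimate: {moravec_mips:.0e} MIPS")
print(f"Ratio: {brain_mips / moravec_mips:.0e}x")
```

On these guesses the gap is about four orders of magnitude, which is the point: the conclusion depends entirely on which back-of-envelope figures one starts from.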

The brain is not only a parallel rather than a serial computer but a parallel computer with a very organized set of interconnections, an architecture. So the problem is not only that of matching MIPS but also that of simulating in a serial device the effects of that architecture. I'm not sure there is any known way of guessing how much of an increase in MIPS would be necessary for this, but I'm sure it's quite substantial. Finally, as argued in class today, it seems pretty clear that the brain is designed as much for novelty generation as it is for particular computations aimed at unambiguously achieving particular tasks. Whether this increases or decreases the MIPS requirement I don't know, but here too I'm disinclined to lay any money on Moravec's prediction that computers will "match general intellectual performance of the human brain ... in the 2020s".

As I hope I made clear in class, my skepticism isn't about whether artificial neural networks could in principle achieve all of the properties of the human brain; that the brain does indeed seem to be a network of relatively simple elements says the answer is almost certainly yes. The tougher problems are getting clear exactly what those properties are, clarifying whether they can be simulated on a serial computer and, if so, writing the software to do so, and/or mimicking them with an artificial parallel device. There is a substantial likelihood that, if nothing else, the sheer size of the needed code for a serial device and/or the architectural demands for a parallel one will not only require a much longer time than Moravec predicts but may well preclude it ever being done other than emergently.

jrohwer

I see what you mean when you say Moravec's estimate for the processing speed of the brain is low, but I still think your rough estimate is probably high by a few orders of magnitude, simply because (from what I've learned in biopsych):

- Rarely (or at least, substantially less than 100% of the time; I'm not sure of any exact stats) do neurons fire constantly at their maximum speed.
- The brain isn't using all its circuits at once: at any given time a lot of the neurons are not firing or are firing only occasionally. (I'm not sure what the numbers are on this; I've heard 10% active volume/total volume on average, just as the word-on-the-street sort of thing.)

However, raising the estimate again is the fact that one instruction doesn't necessarily equal one nerve impulse. In fact, it would take many instructions to simulate a single nerve impulse in a standard computer, wouldn't it? MIPS counts each single cycle through the arithmetic unit of the processor, and a cycle should roughly correspond to half a synaptic interaction, since you have to multiply by the weight and then add that to the running total (see the sketch below). So, depending on how many synapses per neuron, this is pretty bad news for my estimate, I suppose. Still, who knows what technological advances the computer engineers will come up with?

Also, I agree that the program will have to be designed to evolve/emerge. I think the idea of that is a little scary, personally. How much control can you have over it if you don't know exactly how it's doing what it does? So the day they create a hyper-intelligent 80-terahertz machine entity, I'm not gonna lie, I'll be a little concerned for the welfare of the human race. I mean, even if you do an Asimov-style thing and give it some absolute rules, consider the fact that a network occasionally makes mistakes. Imagine a robot with an interest in politics (since machine rights are a political issue, or something) whose computer brain fails to generalize the "Do not kill humans" rule to "Do not assassinate the president," or something crazy like that. And then this robot would say, "I'm so sorry, I didn't realize he was a human, I thought 'president' was something different, I just want to be treated as an equal." Or something.
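To illustrate the multiply-and-add point, here is a minimal sketch of one weighted-sum neuron update on a serial machine; the 10,000-synapse figure and the simple threshold model are assumptions made purely for illustration, not anything taken from Moravec's paper:

```python
import random

# Illustrative only: a crude weighted-sum model of one neuron receiving one
# round of synaptic inputs. On a serial machine each synapse costs roughly one
# multiply (by its weight) and one add (into the running total), i.e. about
# 2 instructions per synapse per update.

num_synapses = 10_000                                           # assumed order-of-magnitude figure
weights = [random.uniform(-1, 1) for _ in range(num_synapses)]  # synaptic weights
inputs = [random.choice([0, 1]) for _ in range(num_synapses)]   # spiking (1) or silent (0)

total = 0.0
for w, x in zip(weights, inputs):
    total += w * x          # one multiply + one add per synapse

fired = total > 1.0         # arbitrary threshold, just for illustration

print(f"~{2 * num_synapses} arithmetic instructions to simulate one update; fired={fired}")
```

On this toy model, simulating one update of one heavily connected neuron already costs on the order of tens of thousands of instructions, which is the sense in which the instructions-per-impulse point pushes the brain estimate up rather than down.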
jrohwer

So actually, I guess that second part about instructions not equating to nerve impulses contradicts my opening statement in the previous comment. I concede. Maybe.
PaulGrobstein

A conversation very much worth having, however either of us (or anyone else) thinks it comes out (for the present). Yes, there are of course issues of how much activity there is in the brain at any given time (see Shoham et al., "How silent is the brain?", J Comp Physiol A, 2006). Most neurons don't, as you say, generate action potentials all the time at maximum frequency. On the flip side, though, there is lots of relevant signalling/processing going on in neurons without any action potential generation at all (of the five cell types in the retina, only ganglion cells actually generate action potentials). My estimate was "for fun" but I still think adequate to show why Moravec's was flawed.

The key issues are, in any case, actually not here but in the architectural realm and, as you say, in the "what if" realm. I'm a little less concerned about the latter than you are. We actually do, already and every day, create large numbers of emergent "intelligences" (we call them "babies"). And they do indeed create a variety of problems and hazards. But we also have several millennia at least of experience in working with such unpredictable machines, and so know a fair amount both about their benefits and about how to guard against the associated risks.

You (and others) might be amused by efforts to estimate the computing power not only of brains but of the universe. See "If the Universe Were a Computer". Interesting to think about whether these estimates do/do not have some of the same problems as Moravec's.

SarahMalayaSniezek

I honestly feel that computers will never be able to be as intelligent as humans. But as usual, I guess it depends on the definition of intelligent. I think that computers can maybe somehow get close enough to humans, but there will always be something that humans can do that computers cannot. It does scare me, though, if computers are able to rival or surpass human intelligence. There have been many different movies about machines surpassing humans and trying to take over the world, and I wonder if that is possible. I also found it interesting that Professor Grobstein mentioned that we create these intelligent machines known as babies. I just think, though, that even though computers are coming close to mimicking human-like behaviors, it is all in how we train the machine and what we want it to do. I know that computers will probably be able to do things on their own and maybe eventually evolve in intelligence as humans have, but it just does not seem possible. I guess anything is possible, but I find it highly unlikely that we will ever have computers as intelligent as humans.