This blog post was originally my reply to a question posted on Kurzweilai forums.
I do not think a car would be an artificial general intelligence.
For something to qualify as an AGI, it must have the capacity to learn to do something it was not designed for. A spider is not demonstrating general intelligence when it weaves a web; it was designed by nature to execute a web-weaving program. But when a human being knits a sweater, that is an example of general intelligence, because humans did not evolve to knit sweaters.
A car is designed to provide a means of transport, so if we are strictly talking about cars there really is not much scope to expand its capabilities into novel areas, like figuring out through learning how to knit a sweater, or how to tell a spider from a fly, or the myriad other things a general intelligence could conceivably do. An autonomous car is probably better thought of as a 'Not So Narrow Artificial Intelligence' rather than a full-blown AGI.

It would be an NSNAI because it would have to cope with a far greater variety of circumstances within its specialised field compared to, say, those robotic arms you see building cars. Where those robots are concerned, their environment is deliberately set up so that the same sequence of events happens again and again. That would be like a robot car programmed to drive around one particular circuit and only in dry conditions. Change the route even slightly and the car would career off course, completely ignorant of the fact that it was doing anything wrong. A car equipped with NSNAI, however, would have the ability to learn how to drive around a variety of circuits and in a variety of conditions.
Now, if we were to be more flexible in our definition of 'car' so that it encompasses any kind of vehicle, including such things as the rovers sent to explore Mars and R2D2, then we may find some 'cars' which would be artificial general intelligences. Imagine something like a fork-lift truck but with much more flexible appendages, making it capable of performing a variety of grips, much as your hands can. Now we have something whose body design provides an opportunity to learn skills pretty far from anything its creators had in mind. This is not just a car being a car in different places. It could go bake a cake, or direct traffic, or write a book… the list goes on and on. It does not *have* to be just a vehicle, unlike a car which, by its very design, is rather useless at being anything else.
An interesting possibility is that the 'car' aspect of our autonomous vehicle need not necessarily be part of its conscious mind. A spider does not need to know how to make a web, just as you do not know how to make a poo. I mean, your body knows, obviously, but you can live in complete ignorance of how your body produces those brown lumps, and a spider is completely ignorant of how its body constructs webs. Well, of course I do not know that a spider is not conscious of what it is doing, but I do know it need not necessarily be conscious of what it is doing.
Similarly, when it comes to driving from A to B with possible diversions by way of C, D, E, etc., a robot need not be consciously aware of all the mental processes involved in navigating its way to its destination. This could mean that, if you were to ask it how it managed to get to where it was meant to be, it would shrug and say something like 'I dunno… it just felt like I was going the right way'.

This entry was posted in forum thoughts, technology and us.


  1. Yes but . . . 🙂 Evolution is. Humans have evolved to make sweaters, because we make sweaters. Consciousness is about doing for sure, but it is also about being. Do you feel yourself to be a conscious being? I am sure you do. Even if a smart car's horizons are limited, if it becomes aware, if it feels itself to be an aware being, then we should be very cautious about denying its awareness. So I guess the question is, what building blocks of consciousness do we include in differing machine functions at the design stage?

    • Bear in mind this blog post is really concerned with artificial general intelligence (AGI), rather than Strong Artificial Intelligence. Strong AI is an AI that is conscious. AGI, on the other hand, is an artificial intelligence with an open-ended ability to learn, but it is not necessarily conscious. That is why I talked about spiders: they are programmed by nature to weave webs, and while they may learn to get better at web-building, they do not have the ability to learn to do the range of things that is possible for generally-intelligent beings. People, though, do have an open-ended learning ability, as evidenced by the huge variety of skills the species displays in comparison to other species. Whereas we can literally say that evolution programmed spiders to weave webs, we can only say evolution programmed people to knit sweaters in an extremely loose sense. We evolved to learn, and we happened to turn that skill to knitting.

      A robot car would be like the spider: designed to do a particular task (in this case, provide transport). It could be conscious of what it is doing, in other words be a strong AI, but it is physically restricted by its design to being a means of transport, so no robot car can ever be an AGI, at least not without extensive modifications to its body.
