ARTIFICIAL INTELLIGENCE.

HOW FAR AWAY IS TRUE AI? HOW DO WE CREATE SUCH A THING?

I SAY:

If you say flying machines are impossible and I show you a working helicopter, you have been proved wrong. Period. There is no point in raising objections like ‘oh, it is not really flying! It is an imitation of flight, a mere simulation’, because that would be silly and everyone who saw the helicopter launch itself into the air and fly off would know your objection is ludicrous.

But the objection can never be dismissed as ridiculous when it comes to something as subjective as intelligence. How do I know that a machine, no matter how intelligent it may seem to be, really understands? It could be all smoke and mirrors.

If someone invents a computer that plays championship-level chess, and this machine is quite capable of defeating anybody who challenges it to a game, what sense does it make to say ‘this machine does not really understand how to play chess’? Granted, we might concede that the human mind can make leaps of association and analogies and metaphors between the game and god knows what, which the computer is unable to do, but I say that a machine which plays a brilliant game of chess understands how to play a brilliant game of chess. It does not understand everything its human rival understands, but I do not think that means we can say it does not understand anything.

Another example would be those automated telescopes that scan the skies, searching for something interesting to observe. If these robotic astronomers habitually focus on phenomena that human astronomers would also deem interesting if they spotted them, why can’t we say that the robotic telescopes understand the difference between an uninteresting and an interesting phenomenon? Again, the robot does not understand everything that the human astronomer understands, but it understands something. It is, in a narrow sense, intelligent.
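To make that concrete, here is a minimal sketch of how crudely such an ‘interestingness’ judgement can be programmed. The feature names and thresholds are invented for illustration and do not describe any real survey telescope’s software.

    # Hypothetical 'interestingness' filter for detected sky events.
    # Features and thresholds are invented for illustration only.
    def interest_score(event):
        score = 0.0
        score += 3.0 * abs(event["brightness_change"])   # sudden flares or dimming
        if not event["in_catalogue"]:                     # previously unknown object
            score += 5.0
        score += 2.0 * event["apparent_motion"]           # fast movers (e.g. asteroids)
        return score

    def worth_following_up(event, threshold=5.0):
        return interest_score(event) > threshold

    # A bright, uncatalogued, slowly moving transient scores as interesting.
    candidate = {"brightness_change": 1.8, "in_catalogue": False, "apparent_motion": 0.1}
    print(worth_following_up(candidate))   # True

Whether a scoring rule this crude deserves the word ‘understanding’ is, of course, exactly what is at issue below.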

PDC SAYS:

PDC0’s objection will be the evergreen objection that AI critics can level at technology.

 

Extropia
Maybe it won’t be evergreen, but as of now, unfortunately, it’s still a healthy shade of green 🙂

 

How do I know that a machine, no matter how intelligent it may seem to be, really understands? It could be all smoke and mirrors.

Isn’t that what the Turing Test is all about? When AI can pass that, it will be good evidence that a human level of understanding is present, unless something about how it’s known to work indicates otherwise. Right now, when it comes to any level of understanding at all, computers are not even at the starting block. 

 

but I say that a machine which plays a brilliant game of chess understands how to play a brilliant game of chess.

No, it’s programmed to play a brilliant game of chess, it doesn’t understand chess or anything else. 

 

why can’t we say that the robotic telescopes understand the difference between an uninteresting and an interesting phenomenon?

Because we know that the computer is programmed to do what it does and this is why it does it, not because it understands what it’s doing or anything else. Does a kettle work because it understands when the water is boiling and switches itself off, or does it work because it’s programmed to switch itself off when the water boils?
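The kettle’s entire repertoire can in fact be written down as a single hard-coded rule, which is a concrete way of seeing the point. A minimal sketch (the temperature value is only illustrative; real kettles sense steam rather than water temperature):

    # A kettle as one hard-coded rule: no learning, no model of the world,
    # just a condition and an action.
    def kettle_controller(water_temp_celsius):
        BOILING_POINT = 100.0   # illustrative value
        if water_temp_celsius >= BOILING_POINT:
            return "switch off"
        return "keep heating"
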
I SAY:

I like the kettle analogy 🙂

I think it raises an interesting question, which is ‘how narrow can “narrow AI” become before it should no longer be considered intelligent in any way?’ I agree that it would be stretching the concept of intelligence too far if we were to call kettles intelligent. But I also think you are setting too high a standard. It seems to me that you would not call something intelligent until it ‘can produce behaviour of a similar range and complexity to that of the human mind’. But if we think about this from the perspective of natural selection, which could never make a great leap from ‘zero intelligence’ to ‘humanlike intelligence’, we see this ‘all or nothing’ attitude is flawed. There has to be a fairly smooth ascent from something that merely responds to something that is unarguably intelligent. You seem to be saying ‘if something is not intelligent in every way, it is not intelligent in any way’. At least, that is the impression I get when I read such things as ‘when it comes to any level of understanding at all, computers are not even at the starting block’.

>Because we know that the computer is programmed to do what it does and this is why it does it<

So how do we ever move past this level? Surely any computer/software, no matter how broadly intelligent it may seem, can be dismissed as ‘merely doing what it is programmed to do’? Maybe one answer is that something shows intelligence when it achieves a goal it was not explicitly programmed to carry out. I think there are robots that can kind of do that sort of thing. There are robots (I think they were called Darwins, but googling does not bring up the robots I remembered, so I could be wrong) that learn how to navigate a maze. They do not navigate the maze because they are running a program that commands them to ‘travel in a straight line 1 meter, turn 90 degrees to the left..’ Instead they have a simple brain, a neural net, that acquires the ability to navigate the maze as they wander around, making mistakes and learning from them. Are these robots ‘at the starting block’ of intelligent behaviour?
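For illustration, here is a minimal sketch of that style of trial-and-error learning. It uses tabular Q-learning on a tiny corridor rather than a neural network on a real maze, and every number in it is invented, but the point is the same: the route is never written into the program, it is acquired from mistakes and rewards.

    import random

    # Tiny corridor "maze": states 0..4, goal at state 4.
    ACTIONS = [-1, +1]          # step left or right
    GOAL, N_STATES = 4, 5
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    alpha, gamma, epsilon = 0.5, 0.9, 0.2
    for episode in range(500):
        s = 0
        while s != GOAL:
            if random.random() < epsilon:
                a = random.choice(ACTIONS)                          # explore
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])       # exploit
            s_next = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s_next == GOAL else -0.01               # penalise wandering
            best_next = max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s_next

    # After training, the greedy policy walks straight to the goal.
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
    # typically prints [1, 1, 1, 1]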

PDC SAYS:

But I also think you are setting too high a standard. It seems to me that you would not call something intelligent until it ‘can produce behaviour of a similar range and complexity to that of the human mind’.

 

Extropia
You might have a point, but I’m only interested in narrow AI if it can be seen as a step on the road to SAI, and nothing that now exists passes that test, I’d guess. I think that intelligence will have to be constructed in a completely different way from current devices.

 

At least, that is the impression I get when I read such things as ‘when it comes to any level of understanding at all, computers are not even at the starting block’.

Maybe it’s important to clarify what it means to understand. I think it comes down to what is being discussed in the thread about computers living in stringland. To understand a pattern is to experience that pattern as a coherent whole, to compare it to other patterns which are remembered and recalled also as coherent wholes, and to thereby discern whether/how it fits in the context of other remembered patterns. The important point, I think, is that the pattern must be experienced as a coherent whole. A computer that matches patterns by means of searching a database or bitmap for regularities is a totally different process, and I’d guess that no process that works like that could ever be truly intelligent, no matter what processing power is applied or how many narrow competences are amalgamated.

 

But if we think about this from the perspective of natural selection, which could never make a great leap from ‘zero intelligence’ to ‘humanlike intelligence’, we see this ‘all or nothing’ attitude is flawed. There has to be a fairly smooth ascent from something that merely responds to something that is unarguably intelligent.

Yes, but unlike with computer intelligence, it’s easy to see how animal intelligence was a precursor to human-level intelligence. Animal behaviour indicates that they experience whole patterns like we do. 

 

Instead they have a simple brain, a neural net, that acquires the ability to navigate the maze as they wander around, making mistakes and learning from them. Are these robots ‘at the starting block’ of intelligent behaviour?

I’m not sure if these might be a precursor to true intelligence. Is there reason to think that they experience whole patterns?

ASIMOV SAID:

It will take about 50 years for machine minds to become mature and fully trusted by society; we (society) will let them off the leash one inch at a time. Eventually we will build machines that will become more trustworthy and reliable than humans. I fully expect that in the future human beings will be bending over backwards for their machine pals. We may even ask them to rule over us.

It is not beyond our comprehension or our technical ability to create a Dalek Emperor to rule the earth. One can imagine heads of state laying wreaths of flowers at its feet in celebration of another year without war or famine under the governance of its near-infinite wisdom. In fact, this is the ultimate goal of the ASIMOV1 project: to create machine minds smart enough to run complex businesses, national infrastructure and eventually the whole world. Humans are not too clever when it comes to running large organizations (see world history for details!), so there is a gap in the market for universal wisdom. Machine minds can fill that gap.

STRUVE SAID:

If you are inside, virtually everything you see and hear was first created in someone’s imagination and then that someone was motivated to bring what was imagined into reality. It may well be that the analytical capacity of humans will be surpassed by machines as in Ray Kurzweil’s “Singularity” prediction. To me, this is not much of a cause for concern since machines have no emotions and therefore no motivation to act independently.

The analytical capacity of machines is already extremely good – the SPICE flavor of programs can analyze electronic circuit behavior faster and more accurately than electrical engineers. When it comes to creating a new circuit to perform a function, however, the electrical engineer beats the machine by a wide margin. The genetic algorithms for electronic circuit generation are certainly not the way humans do it, and leave non-functional parts in the circuit.

Equating the “Singularity” to super-intelligence can be done if “intelligence” is limited to analytical intelligence. To me intelligence includes two things that we don’t have a clue about how to do with machines: creativity and motivation.

Ray, I and others have created programs to compose music and these may be considered machine “creativity”. I consider this type of creativity to be unlike the kind of creativity used to create the man-made things that you see around you. Both music and art are quite subjective and are subject to human interpretation rather than intellectual analysis. Both are non-functional except for human enjoyment (or disgust). Both serve the limited purpose of eliciting human emotions.

Motivation is the result of emotions, not some intellectual exercise. We want to know, want to solve problems, want to imagine and convert our imagined possibilities into reality. Emotions have an ancient biological base in the need to feed, grow, and procreate. Machines don’t “want” anything and have no emotions. Motivation dictates what problems to solve, what things to create, and when to walk the dog.

Machines will always be our slaves, not the other way around.

KNAACK SAID: If you look at the component systems of the human body, each organ is constructed of substrates that exert basic mechanical influence collectively to achieve functionality as a whole. When you break down the human mind into concepts, you find each thought and relationship can be quantified to a specific process, object or metaphor.
Now, look at a large city from a distance. This city appears to be a living entity. It responds to its environment, displays complex ordered and disordered behavior, and it grows and reproduces (mitosis). However, this is an illusion, as we know that the city itself is not alive, but its people are.
Now, apply the same reasoning to the human body and you will find a similar pattern: the illusion of being alive. But when you break down the entity into components, you find a symphony of events, rather than a consolidated being.
Computers are increasingly becoming internal and external components of the entity that is “man”. You can try to break “man” into components, but you will not find a clear delineation between what is “man” component and what is machine. Individual functions and physical processes are not tagged with special designations… they simply exist in the manner and order in which they are arranged.
Creativity and motivation run through the system as a whole, man and machine, function & process… Perhaps it is an epiphenomenal effect of being, I don’t know. But I would disagree with your premise that there is a clear line to be drawn between man and machine, with one side to be declared creative and motivated and the other not.

I SAID:

IBM once held a conference on the future of cognitive computing. One of the speakers (James Albus) was from DARPA, and he talked about a paper, ‘Understanding The Mechanisms of Mind’, which sets out to explain why it is not unreasonable to suppose we can ‘extend the frontiers of human knowledge to include a scientific understanding of the processes in the human brain that give rise to the phenomenon of mind’. He went on to explain that this is a feasible goal, because we have built up a foundation of knowledge and technologies in fields such as neuroscience, cognitive science, computer science, control theory, game theory, robotics and visualization, and all of this could be brought together in a concerted effort to understand how the mind works:

“The science and technology is ready. Certainly, the neurosciences have developed a pretty good idea about computational representation in the brain. There’s a great deal of work in cognitive modelling, use of representation and use of knowledge….Fundamental processes are understood, at least in principle, technology is emerging to conduct definitive experiments”- James Albus.

My point is that, among all the groups and individuals who built up our knowledge and tech to a point where it really is feasible to tackle artificial general intelligence, many (most?) were not explicitly working towards AGI. They probably never even thought about it. They were, instead, working on other problems and other goals, not all of which have obvious connections to artificial intelligence.

JABELAR SAYS: As I always post, the fantastic part of human intelligence is pattern matching. Computers are already better at many brute force tasks, mathematical calculation and strict logical analysis. However, a human can hear a faint song sung out of tune over the sound of traffic and immediately recognize it. No current computer pattern matching comes close to this. This also allows us a superior ability to accommodate errors. For exmple I cn gt rd of mny of the vwls in my sntnce and u can fgre out wht it means. Computers can’t do that unless they are specifically programmed for the type of pattern and the conditions in which it is to be detected.
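Ironically, the vowel example also shows just how narrow such programmed matching is: a computer can be made to match vowel-stripped words, but only because someone wrote that exact trick into it. A toy sketch (the tiny dictionary is invented for the example):

    # Toy vowel-stripped word matcher.  This works only because the exact
    # trick ("drop the vowels and compare") has been programmed in --
    # which is the point about narrow, purpose-built pattern matching.
    VOWELS = set("aeiou")

    def strip_vowels(word):
        return "".join(c for c in word.lower() if c not in VOWELS)

    DICTIONARY = ["example", "can", "get", "rid", "many", "vowels",
                  "sentence", "figure", "what", "means"]
    SKELETONS = {}
    for w in DICTIONARY:
        SKELETONS.setdefault(strip_vowels(w), []).append(w)

    def guess(token):
        return SKELETONS.get(strip_vowels(token), [token])

    print(guess("sntnce"))   # ['sentence']
    print(guess("vwls"))     # ['vowels']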

Anyway, I think that if we could figure out the pattern matching, then we could achieve a powerful top-down AI. Coupling pattern matching with computer memory and logical deduction would be extremely powerful.

But since we still seem stumped on the pattern matching, my bet is that we’ll mostly have to do a bottom-up replication of the human brain for our first major generalized AI.

JABELAR SAYS: I think people do use the term “AI” too loosely, often applying it to narrow AI that really has no intelligence at all. For example, if I had a computer chess “AI” that simply used brute force to calculate all possible games and then chose the path with the highest likelihood of winning, is that really “intelligent” at all? I would say that it is “faked intelligence”, meaning something that can get the same or better results than an intelligence would, but without using intelligence.

I understand the Turing idea (that if the result is indistinguishable, then it is intelligent). But I think you have to keep intelligence coupled with learning ability for a proper use of the word, since without learning you can’t pass the Turing test. In fact, a narrow AI could never pass a Turing test.

Here is an easier example: a computer can do math much better than a human, but would you consider a spreadsheet to even be “narrow AI”?

The term “AI” is used a lot related to video games. But despite many video games’ non-player characters being very effective, it is also hard to call them truly intelligent, and therefore hard to say they are truly AI.

Therefore, I tend to agree that most of what is called “narrow AI” is not really AI at all, but rather something that has results similar to an intelligence for limited situations, and in many cases is “fake AI”. I think the term “narrow AI” should be used for proper subsets of generalized AI, or alternatively for generalized AI that is only being applied to a limited subject.

While trying to make generalized AI, it is certainly possible that you’d make “weak generalized AI” that would appear “narrow”, but that again is different. Maybe that should be called “weak AI” or “dumb AI”.

I guess maybe generalized AI is just a collection of narrow AI (I don’t know one way or the other), but I think the term AI should mainly be used when a real attempt is made at algorithms that could extend to general AI.

I don’t know the exact terms that we should use to be precise, but I suggest that some terms like following are distinct and useful clarifications:
– generalized AI = learning ability coupled with reasoning, planning, problem-solving and similar capabilities that are similar in function to human intelligence (but may vary in performance and implementation)
– narrow AI = a subset of generalized AI (e.g. includes learning, reasoning, planning, problem solving but applied to a limited field of thought)
– weak AI = generalized AI that doesn’t perform as well as a human (instead it might be like a dumb or disabled human, or a smart animal, etc.)
– fake AI = hard-coded algorithms that give results that look intelligent but are really just brute force
– super AI = generalized AI that performs better than a human.

I SAY:

It is not possible to calculate all possible moves in chess and pick out the best one. The reason is that the number of possible moves in chess is staggeringly large: so large that even if your computer could examine a billion moves per second, its opponent would have to wait 40 billion years for it to finish examining them all.
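As a back-of-envelope check on the scale involved, using the commonly cited rough figures of about 35 legal moves per position and a typical game of around 80 plies, the full game tree works out to be vastly larger still:

    # Rough size of the chess game tree, using commonly cited round figures.
    branching_factor = 35        # legal moves per position, roughly
    plies = 80                   # about 40 moves per side
    positions = branching_factor ** plies            # ~3e123 lines of play

    examined_per_second = 1e9                        # a billion per second
    seconds_per_year = 3.15e7
    years = positions / examined_per_second / seconds_per_year
    print(f"{years:.3g} years")                      # on the order of 1e107 years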

So the notion that all Deep Blue had in its favour was brute calculating power does not do it justice. It had to ‘know’ how to prune the move/countermove tree in such a way as to ensure a win within a reasonable amount of time.
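In standard game-playing programs that pruning is done with alpha-beta search, which discards whole branches as soon as it is clear they cannot change the final choice. A generic sketch follows; the children() and evaluate() functions are placeholders, and this is not a description of Deep Blue’s actual implementation.

    # Minimal alpha-beta pruning over an abstract game tree.
    # `children(state)` and `evaluate(state)` are placeholders that a real
    # chess program would supply.
    def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
        kids = children(state)
        if depth == 0 or not kids:
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for child in kids:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:      # remaining siblings cannot matter: prune
                    break
            return value
        else:
            value = float("inf")
            for child in kids:
                value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                             True, children, evaluate))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value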

JABELAR SAYS:

The brute force that current computers can do in reasonable time (maybe seven steps ahead) is still enough to beat most humans. You’re right that to get around the limitation the programmers apply additional analysis to decide the best path, but that is also technically brute force: you have the computer count up the value of the pieces and their positions. Perfect repetition of strict rules = brute force.
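The ‘count up the value of the pieces’ analysis can be made concrete with the crudest possible evaluation function; the piece values below are the textbook ones, and a real engine adds many positional terms on top of this.

    # Crude material-count evaluation, positive when White is ahead.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def material_score(white_pieces, black_pieces):
        return (sum(PIECE_VALUES[p] for p in white_pieces)
                - sum(PIECE_VALUES[p] for p in black_pieces))

    print(material_score("QRRBNPPP", "QRBNPPP"))   # 5: White is a rook up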

Humans also technically apply brute force (i.e. analyze all the possible moves according to their understanding of value) but are already outstripped both in the number of levels and by being error-prone. The only way a human can beat a computer today is by having a different understanding of the value of a position than the computer’s programmers. But if the person who beat the computer properly programmed in their own understanding, then the computer would be better than that person: it could apply the person’s own rules perfectly.

The marvel of humans is that we can play a game like chess and accumulate understanding that can help us compensate for our poor brute force ability.

Here is an interesting article on solving chess: http://rjlipton.wordpress.com/2010/05/12/can-we-solve-chess-one-day/

In that article, he makes a really interesting statement — even though storing all the moves is today practically impossible, in terms of “information” all those moves can be simply represented by the rules of chess. In other words, the rules of chess are sufficient to describe all the possible positions in chess. That is really an important concept. For example, all the “information” in the universe might be possible to be described by a small set of rules. In other words, complexity can arise from simplicity, which is a pretty cool concept and might be related to creation (whatever that might be).

EMPIRICAL FUTURE SAYS: AI applications will have an immensely wide value proposition, much as functional software does today. Pretty much any system running some kind of software today could benefit from the incorporation of intelligence, as well as many types of systems that do not yet exist. How “smart” that intelligence needs to be will be determined by the applications for which it was built, not by an arbitrary comparison to the human brain.
Some applications might need more intelligence than others, but all of them could benefit from a little, and all of them could benefit from a lot. Although the level of processing power for some applications that require it could some day approach that of a human brain, this is by no means de facto required in most cases to significantly enhance the applications for which it was built.
There is a lot of confusion between “narrow AI” and “general AI”. It’s extremely hard to discuss these since there is very little of either at the moment. Most of what is sometimes called AI in current systems is really just traditional functional software, not intelligent software per se. However, in all likelihood true “narrow AI” and true “general AI” will be deeply related, not entirely separate categories of technology. General AI will simply be the accumulation of many different narrow AI applications integrated into wider and wider capabilities and scopes of operation – a process that will likely take many decades at the minimum.
To put a finer point on it: the distinction between functional software (the vast majority of all currently available software) and AI, narrow or otherwise, lies in the adaptability of the software’s behavior in response to changing inputs that cannot be predicted in advance.
In functional software, all responses to inputs are programmed in advance and follow pre-set rules – all of which, down to the minutest detail, are provided by human programmers and/or humans configuring the software.
In AI software, response patterns in the software’s behavior are still based on rules, but those rules are of a fundamentally different nature: more like metric-based objectives. What those objectives and associated metrics are varies from application to application, as does the degree of intelligence necessary to meet them. But the fact that there is very, very little of this kind of software around at the moment clearly demonstrates that we are at the beginning of what is almost certainly a very long road toward software whose intelligence leaves us humans in awe.
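A toy contrast may help make the distinction concrete (the names and numbers below are invented for illustration, not taken from any real system): the first function has every response spelled out in advance, while the second is given only an objective and a metric and adjusts its behaviour to whatever inputs arrive.

    # 1. Functional software: every response is a pre-set rule.
    def functional_price(stock_level):
        if stock_level > 100:
            return 9.99
        return 12.99

    # 2. "Metric-based objective" style: behaviour adapts to keep a
    #    measured quantity near a target, whatever the inputs turn out to be.
    def adaptive_price(current_price, observed_demand, target_demand=50, step=0.05):
        error = observed_demand - target_demand
        direction = 1 if error > 0 else -1
        return max(0.01, current_price * (1 + step * direction))
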
Summing up, the good news is that this problem is quite tractable in terms of defining it and distinguishing it from current types of available software, and it does not in any way, shape, or form depend on “modeling a human brain” or other projects of such Olympian magnitude.

EMPIRICAL FUTURE SAYS:

Kurzweil himself maintains that this is a last-ditch path to AI, if we simply cannot figure out how to engineer AI. But with the right application context, such as venues like the autonomous vehicle challenge, it is evident to even a skeptical mind such as my own that we are certainly intelligent enough to engineer very useful AI. Rather quickly, in fact.

A brain sim would be a black box for an AI application. It would be exactly like ripping your brain out of your head and dropping it into, say, a car, and saying “do nothing but drive this car forever”. An exact brain sim, with all of its biologically motivated plumbing, would be not only vast overkill and hence extremely wasteful (you don’t need 200,000 petaflops of brain power to drive a car), it would also be unbelievably dangerous, which for some reason is often overlooked. You, as an exact brain sim driving a car, might get pissed that somebody cut you off and start driving aggressively and cause a wreck. The first time something like that happened, that would be the end of the era of exact brain sims as AI applications. An engineered AI is a known entity, an exact brain sim is not. This, combined with its vast overkill in terms of necessary resources, and the proven ability of human beings to develop engineered AI given the right framing of the application context, makes exact brain sims as AI applications look extremely unlikely.

SET/AI SAYS: Turing machines are sets of simple processes mined from the computational universe of the simplest axiomatic systems. Stephen Wolfram’s approach is actually to search the computational universe to discover non-human, black-box programs that we may not even be able to figure out, let alone create. Software does not require intelligent engineering.

EMPIRICAL FUTURE SAYS: “we are learning how to organize these increasingly formidable resources by reverse engineering the brain itself. By examining the brain in microscopic detail, we will be able to recreate and then vastly extend these processes”.

This is fundamentally incorrect. Brain research is useful, of course, but the implication of the above statement is that the main holdup in terms of creating AI is that we don’t understand our own brain well enough. We just need to study it harder.

Understanding the brain in “microscopic detail” really has nothing to do with whether we can build AI or not. We already understand the basic model of the human brain – it is a neural network. Neural networks are pattern recognition and response engines. We are very good examples of that, along with most every other animal on the planet with even a teeny tiny brain.

We not only understand neural networks, we have moved from theoretical knowledge to being able to engineer them, actually create them – and that was a quarter of a century ago. The problem is not that we can’t build them – it’s that we can’t program them.

At least, not program them well. What’s missing is the neural network equivalent of Boolean logic theory, which allows certain classes of problems (specifically, those that digital computers are good at) to be framed in an organized, consistent way. That framework is what lets higher orders of organization, such as general-purpose programming languages and software methodologies, be developed and applied.

So what we are talking about is the need for fundamental advances in the theory of programming large neural networks, so that pattern-recognition and -response software can be developed with the ease and consistency of traditional functional software for digital computers. Brain research may provide insights, but cannot be considered the magic bullet for this problem.
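The ‘built but not programmed’ point can be seen in miniature below: a single artificial neuron ends up computing logical OR, yet the weights that make it do so are never written down by anyone; they emerge from examples. (A deliberately tiny illustration, not a claim about how large networks are actually trained.)

    # One artificial neuron trained, not programmed, to compute logical OR.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    for _ in range(20):                      # a few passes over the data suffice
        for (x1, x2), target in examples:
            output = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = target - output
            w[0] += rate * error * x1        # weights adjust themselves from errors
            w[1] += rate * error * x2
            bias += rate * error

    for (x1, x2), target in examples:
        prediction = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        print((x1, x2), prediction == target)   # all True after training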

This is not a matter of a magic cipher revealed by understanding the brain (or a neuron) to some arbitrarily high level of detail. It is extremely important to acknowledge that a neural network and a digital computer are two entirely different types of computing entity. Yes, they both process information, but unfortunately, there are almost no other similarities – how they process that information is almost entirely different in every single way. The same goes for quantum computers as well, btw – all 3 of these are entirely different computing entities.

It’s fascinating that we always imagine that it would be easy for our brains to be simulated by a digital computer, but we can turn this question around to illustrate its inherent contradictions. For example, the following scenarios are every bit as likely as being able to simulate our brains inside a computer:
1. Simulating a digital computer in a neural network
2. Simulating a digital computer in a quantum computer
3. Simulating a quantum computer in a neural network

And so on. Do any of these sound easy? They shouldn’t, if they are seriously pondered even for a moment. It’s not that any of these scenarios are inherently impossible (at least, to my knowledge). It’s that attempting them would be inherently pointless. You would go through a Herculean effort to replicate, in one kind of computer, the type of computing that another kind already does comparatively easily because of its very nature.

It is extremely important to understand that each of these types of computer, because of its fundamental nature, is well suited to different kinds of problems. There may be some overlap, but for the most part what one finds easy, another finds extremely challenging, and vice versa, in any combination we care to imagine.

The real power comes not from completely replacing one with another, but from leveraging their complementarity to solve all kinds of problems that just one kind cannot attempt, or can attempt only very poorly. This is precisely the lesson that the evolution of digital computers has taught us, and it also explains why humans are not being steadily obsolesced in any way, shape, or form.

There is really nothing to suggest that Kurzweil’s “perfect simulation of a human brain in a digital computer”, although not outright impossible, will ever prove feasible or affordable in anything other than a lab environment, and even then only for very specific reasons – such as learning about the brain. Much less as a great way to provide low-cost immortality and utopia for the masses.

Still, this idea fascinates us. Why?

Is it inherent contempt for our current platform? I.e., the quasi-racist, semi-perverted language such as “juicy meatsacks”, “mammalian”, “apes”, etc, all those pejoratives that, if one person called another by them, would either get their ass kicked or mark the name-caller out as a new kind of pervert? Along this thread, you can label them any way you want, but for what they do even a stupid human brain – or really, a stupid cockroach brain, go figure – is better than the smartest robot on the market.

This is not because we just haven’t made a supercomputer big or powerful enough – some of them fill the floors of whole buildings, and if you combined all the computer processing power in the world in one place it would likely fill a city. But a single cockroach brain the size of a pinhead can outthink them all for what it needs to do. A huge part of this titanic mismatch must be attributed to the fact that a cockroach – and us – have a type of computer in our heads that by its very nature, from the ground up, is completely different from a digital computer.

Or maybe it’s because of the immortality thing, a chip in silicon seems so permanent compared to our juicy, flabby meatsacks. But this is an illusion, no computer is as durable as a human body of average health, and throwing that over the fence to the wonders of tomorrow is no help. The higher technology gets, the more fragile and perishable it becomes. This is not a key to immortality – ever. There is absolutely no evidence of any kind that this will ever be the case.

Wherever this fancy is bred, it is not grounded in a realistic assessment of this problem space.

In a human brain there is most definitely software, but it is intimately commingled into the hardware itself, which will make abstracting it out so that flexibly programmed AI is possible all the more difficult. Another reason that brain sims on their own won’t get us to AI.

Even if we could build molecularly accurate neurons, we still could not build AI. We might be able to replicate a human brain, but we don’t need another human brain – we need AI, that can be tuned and optimized to predetermined tasks.

It is not just neurons – there are many different types of neurons, organized into many different subsystems, that communicate in higher and higher levels of intercommunication to produce a sentient being called us. Abstracting out useful AI applications from that mess will be much, much harder than developing a comprehensive theory of neural network programming.

It is extremely important to understand that AI is not a human brain in silicon, that’s not what AI is about.

MORGAINE DINOVA: “We don’t have AI yet, we never have had, of any variety, unless there has been some recent breakthrough I’ve not heard about. What we have are knowledge base query engines, some theorem provers for limited reasoning, and very elementary stimulus-response machinery. It’s hard to see it acting intelligently, ie. forming models dynamically and reacting to them”.

ZOMBIEFOOD: “there is no artificial anything. we are part of nature therefore everything we think we create is a part of nature. there is intelligence at every level of reality. the question is only what level”.

VIRGILIC: I don’t think there are recognizable intermediate points on the way to building artificial general intelligence, except for the required hardware.

If intelligence is buildable then all that is needed is its mathematical description and the hardware to implement it. The Singularity is defined as the rise of greater-than-human intelligence, but let’s leave that aside for a second and talk only about general intelligence, not necessarily greater-than-human.

If there is a blueprint, a basic set of principles, that can function as generic general intelligence, then the problem is that nobody can predict how close or far a project is from building artificial general intelligence before the first model is verified.
What I mean is that even if someone published the entire model today, nobody would know if it really works before it is implemented and tested! We don’t have a theoretical reference! We will only have one after the first AGI is built, or perhaps after we understand which minimal configuration of neurons in the human brain allows an individual to be sentient (which will actually be the first AI model…)

Therefore, nobody will know whether they have completed a model of AGI before actually implementing it and chatting with it, and because of this, talking about being 25%, 85% or 99% of the way to AI is meaningless.

And for these same reasons, whining about the fact that supposedly nothing particularly interesting has happened, now or 30 years ago, is stupid. We actually don’t know where we are, how close or how far, because we don’t even know what it should look like; and if we knew what it should look like, it would be implemented in a matter of days to a few years, depending on the cost.

On the other hand, one can imagine that within a certain number of years the blueprint of a general intelligence will probably be created and proven to work upon implementation.

There is no “half-way” or “just around the corner” in this business; it’s either done or not. There is no “almost AGI”…

It was exactly the same with flying machines. Until the first glider flew, one could have absolutely no idea, just by looking at the blueprints, whether the model would work or what else might be needed to make it fly, because the principles of flight had not yet been formalized. Once they were, one could, in principle, say whether a model would work just by looking at the blueprint. The early builders also had no true reference, because of their lack of understanding of the basic principles.

In exactly the same way, we do not have the principles of AI formalized, and therefore we cannot know how far along the way we are.

The best way to judge this sort of thing is to compare it with how flight was achieved. That was one singularity, and it is the best model for judging pre-singularity processes!

People had the same debates about its feasibility and schedule then as we have now….

Only after building it will we be able to look back and assess which steps brought us how close to the final achievement.


One Response to ARTIFICIAL INTELLIGENCE.

  1. Hehe 🙂 Morgaine is referring to the usual split between “weak” AI and “strong” AI. We’re quite well off on weak AI models, but no breakthroughs. Strong AI is probably just a myth and most researchers are not even attempting that 😉

    I like the quote from “zombiefood” though! S/he has a great insight, one that is very closely aligned to my own thoughts as well. Every creation of our mind is “natural” because our mind is natural, so AI has to be natural. It makes a lot of sense to me!!
