Thursday, February 18, 2010

AI+ The Generally Narrow Approach to Intelligence



If you have not noticed, the field of "artificial" intelligence has recently broken itself in two. The division separates the original field of AI, and its root goals, into two classes: "narrow" and "general" intelligence. Narrow artificial intelligence involves programs that may outperform the human brain at direct, specific problems, while general intelligence seeks to be an all-purpose computer intelligence that can understand any problem it encounters. While the distinction seems necessary in the short term, I believe the concepts of narrow and general will become meaningless for systems that grow more complex than the human mind - such as the modern Internet, considered as a single entity.



While both sub-fields of AI (narrow and general) continue to develop in their own ways, it is important to understand what their core beliefs have in common.

Limitations of Narrow Artificial Intelligence

Year by year this field makes leaps and bounds in making us believe that computers are actually "thinking" about the decisions they make and the actions they take. While a narrow system may solve the task for which it was created in a way that gives the illusion of intelligence, the truth of the matter is that everything it does consists of pre-programmed, deterministic, procedural functionality. What narrow AI lacks is the ability to understand a problem beyond what it was originally programmed to solve. For instance, you can't ask a chess program to write an email, nor can it learn how to prove a math theorem. This limitation suggests that narrow AI will never be able to interact with humans at a level indistinguishable from humans - in other words, to pass the Turing Test.

Self-Modification and Learning

In a narrow-AI system, the concept of self-learning outside of its preprogrammed instructions is currently nonexistent. More precisely, a narrow system may be created to "learn" new ways of doing the task at hand, but it will not learn things outside of that task. For instance, a chess computer can show the ability to learn from the players it plays, but in the end it is still learning how to play Chess rather than Battleship or Go.
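To make this concrete, here is a minimal sketch (in Python, with all names invented for illustration) of why such "learning" stays narrow: the program tunes the weights of a chess-style evaluation function from game outcomes, but its entire input representation is chess material, so nothing it learns can ever be expressed about Battleship or Go.

```python
# Hypothetical sketch: a "learning" narrow AI whose whole world is chess.
# It can sharpen its evaluation weights from finished games, but its input
# representation (chess material counts) cannot describe any other game.

# Material weights for (pawn, knight, bishop, rook, queen) - the only
# parameters this system will ever adjust.
weights = [1.0, 3.0, 3.0, 5.0, 9.0]

def evaluate(material):
    """Score a position from (own minus opponent) piece counts."""
    return sum(w * m for w, m in zip(weights, material))

def learn_from_game(positions, result, rate=0.001):
    """Nudge the weights toward predicting the game result (+1 win, -1 loss).

    This is the program "learning from the players it plays", yet every
    update is still just a better way to score chess material.
    """
    for material in positions:
        error = result - evaluate(material)
        for i, m in enumerate(material):
            weights[i] += rate * error * m

# After a thousand games the weights may be sharper, but the interface never
# changes: it accepts chess material counts and nothing else.
learn_from_game(positions=[[1, 0, 0, 1, 0]], result=+1)
```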

Attempting to teach a narrow AI to play a game outside of what it was programmed to play will seldom yield useful or surprising outcomes, except when the original problem is a superset of other problems - as in the case of evolutionary physics simulators like Critterding, where survival in a 3D physical world is the general problem, and that problem is shared by most videogames.


Now, when it comes to self-modification, narrow AIs aren't generally designed to abstractly or drastically rewrite their own source code. Yet a proper evolutionary technique could be employed to give the illusion of self-modification. Imagine a system able to selectively incorporate new elements into its core code, somewhat like snapping on Lego blocks: a narrow AI that absorbs new elements and features into its core operational system in the form of software plugins. You might say that this functionality allows a system to keep growing in complexity, with an end result similar to the self-modifying ability that artificial general intelligence research seeks.
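As a rough sketch of this Lego-block idea (the plugin convention below - a module exposing a register() hook - is my own assumption for illustration, not an existing framework), the host system imports plugin modules at run time and absorbs whatever capabilities they expose, so its repertoire grows without its core loop ever being rewritten:

```python
import importlib

class Host:
    """Hypothetical narrow-AI core that "absorbs" new capabilities as plugins.

    The plugin protocol (a module exposing register(host)) is an assumption
    made for this sketch, not an existing framework.
    """

    def __init__(self):
        self.capabilities = {}                      # name -> callable

    def absorb(self, module_name):
        """Import a plugin module and let it register new capabilities."""
        module = importlib.import_module(module_name)
        module.register(self)                       # plugin adds entries to capabilities

    def perform(self, name, *args):
        if name not in self.capabilities:
            raise ValueError(f"no capability named {name!r}")
        return self.capabilities[name](*args)

# Usage, assuming hypothetical plugin files such as chess_plugin.py that
# define register(host) and add an entry like host.capabilities["play_chess"]:
#
#   host = Host()
#   host.absorb("chess_plugin")
#   host.absorb("email_plugin")
#   host.perform("play_chess", board_state)
```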

Video Game AI as the Solution to Our Real Life Games

The largest application of so-called narrow AI today can be seen and tested in the increasingly expansive videogame market, as developers seek to make computer-controlled characters and worlds that behave and play similarly to actual human competitors. Unlike its parent, now turned sibling, artificial intelligence research, narrow videogame AI is an extremely profitable industry; in fact, the videogame industry now grosses more than the movie industry. With extremely high budgets and a fiercely competitive environment that encourages ever-greater advances in AI programming, it is difficult to imagine this particular sector ever falling behind the most advanced scientific artificial general intelligence development. Oddly enough, if general intelligence research ever advanced further than narrow AI systems, it would actually amplify the videogame sector, since it would easily augment its profitable applications.


What is Automatic Intellect?

As the name suggests, Artificial General Intelligence is a form of artificial intelligence that seeks to be something like a Swiss Army knife of software, able to handle a variety of tasks thrown at it.

In contrast, Automatic Intellect is a term we have coined to mean:

Any type of automatic system that one considers to exhibit apparent intelligence.
This includes all forms of individual or group, biological, alien, or software AI (narrow or general), and anything in between or in combination.
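To make the breadth of that definition concrete, here is a minimal interface sketch (the names are invented for illustration, not an existing library): anything that can be wrapped to turn observations into actions qualifies, whether the thing behind the wrapper is a chess engine, a person, or a whole committee.

```python
from typing import Protocol

class AutomaticIntellect(Protocol):
    """Anything one considers to exhibit apparent intelligence: all it needs
    is to turn an observation into an action."""
    def act(self, observation: str) -> str: ...

class ChessEngineWrapper:
    """A narrow software system seen through the same interface."""
    def act(self, observation: str) -> str:
        return "e2e4"                               # placeholder move

class CommitteeWrapper:
    """A group of intellects is itself an automatic intellect."""
    def __init__(self, members: list):
        self.members = members                      # each member implements act()
    def act(self, observation: str) -> str:
        votes = [m.act(observation) for m in self.members]
        return max(set(votes), key=votes.count)     # majority answer
```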

The word "artificial" has the definitions:

  • humanly contrived often on a natural model
  • lacking in natural or spontaneous quality

Consider whether an extraterrestrial digital computer system exhibits "artificial intelligence": it is not humanly contrived, and if it is able to out-think a human brain, it will hardly seem to lack spontaneity. Therefore I propose that the distinction between "artificial" and "natural" is a purely human-chauvinistic artifact, and thus not helpful in any definition.

Convergence of Narrow and General (AI+) at the Singularity


It may become increasingly silly to perceive the field of AI as two distinct entities in the near future, as both narrow and general intelligence may find themselves embedded in the creation of future cyber entities. Regardless of the path taken to reach the creation of an intelligence greater than our own, the final outcome is just that: an intelligence greater than our own. And once it comes into being, the distinguishing features of narrow and general intelligence won't be recognizable from the human standpoint.



We can test this assumption with a simple thought experiment. Imagine you are presented with two identical systems that have only one internal difference: one is a superhuman general intelligence while the other is a superhuman narrow intelligence. Now imagine you are given the task of identifying which system is general and sentient and which is narrow and simply a cold, dead shell. At no point are you allowed to see what is under the hood, i.e., the source code; all you are allowed to do is interact with the systems. I propose that this task is not computable from the standpoint of human intelligence, rendering our current terms "narrow" and "general" meaningless.
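Sketched as a procedure (all function names hypothetical), the experiment looks like this: the judge receives nothing but the two interaction transcripts, so any verdict must be computable from the transcripts alone; and if both systems answer equally well, the transcripts carry no information about which one is general and sentient.

```python
import random

def blind_test(system_a, system_b, questions, judge):
    """Interact with two sealed systems and ask a judge to pick the 'general'
    one using only the transcripts - never the source code."""
    systems = [system_a, system_b]
    random.shuffle(systems)                         # hide which is which

    # The judge sees only (question, answer) pairs, labelled "X" and "Y".
    transcripts = {
        label: [(q, system(q)) for q in questions]
        for label, system in zip(("X", "Y"), systems)
    }
    return judge(transcripts)                       # the judge's guess: "X" or "Y"

# If both superhuman systems answer every question equally well, the
# transcripts carry no information about which one is general, and any
# judge - human or otherwise - is reduced to guessing.
```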


This is an important concept to understand as we approach the singularity, with both fields increasing exponentially towards the superhuman benchmark. The reason it matters moving forward is that it is widely believed that only the general intelligence route could lead to an intelligence explosion. I believe other factors still need to be fully understood - for instance, whether a superhuman intelligence can be achieved through the general route or the narrow one. So it may be possible that machine consciousness is not necessary for reaching the same goals that "artificial general intelligence" seeks to achieve.


In conclusion, I believe that the question of machine consciousness, while seemingly important, may not actually be necessary on the path to superhuman intelligence, although I find my current analysis somewhat disturbing. While I'd rather not get into an overly extended conversation on why my instinct tells me that this route may not be the best path for the overall survival of the human race, I will say this:

seeAlso:

Tononi, Complexity and Computational Consciousness - http://www.biolbull.org/cgi/content/abstract/215/3/216
Goertzel, Blocks and Beads World - http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf
dANN - http://wiki.syncleus.com/index.php/DANN
AI Game Developers - http://aigamedev.com/
AGI 2010 Conference - http://agi-conf.org/2010/conference-schedule/
CritterDing - http://critterding.sf.net
CritterGod - http://crittergod.sf.net
Serial Experiments Lain - http://en.wikipedia.org/wiki/Serial_Experiments_Lain
Pei Wang, General Theory of Intelligence - http://sites.google.com/site/narswang/EBook

