Friday, May 15, 2009

The Cosmists vs the Terrans by Hugo




Cosmist/Terran Dialog

Source

Cosmist Hugo : 21st century technologies will allow humanity to build godlike computers.

Terran Hugo : But you Cosmists are prepared to take the risk that these artilects, these artificial intellects, may one day decide that the human species is so inferior to them that they wipe us out. Think of the cost.

Cosmist Hugo : Ah, but think of the prize. These artilects could be trillions of trillions of trillions of times more intelligent than human beings. They could have virtually unlimited memory capacity and a huge number of sensors. They could go wherever they like, do whatever they like, be superb scientists; they would be gods.


Terran Hugo : But how can you accept the risk that they might wipe out the human species, billions of people? You're mad! You're a monster!


Cosmist Hugo : How do I persuade you? Maybe with a parable. Imagine an extraterrestrial with godlike technological powers comes to Earth 3 billion years ago. He sees only bacteria, the only form of life at the time. With his magical powers he changes the DNA in all the bacteria so that it's impossible for them to form multicellular creatures. Therefore, no plants, no animals, no humans, no Einsteins, no Beethoven's 9th. Do you really want to freeze evolution at the (with disdain) human level?


The Real Hugo : People ask me, "What are you in reality, a Cosmist or a Terran?" For 10 years I sat on the fence, saying "on the one hand, on the other hand," presenting the cases of both the Terrans and the Cosmists. But my friends started to call me a hypocrite: I expected humanity to make this choice in the 21st century, the toughest decision humanity will ever have to make, so I should make it too. So I did. I'm a Cosmist. I think humanity should build artilects. However, I'm not a fanatical Cosmist; I can see there are strong arguments on both sides. In fact, I feel this issue is so fundamental that it will dominate global politics in the 21st century. I can put the issue rather succinctly in the form of a slogan:

"Do we build gods, or do we build our potential exterminators?"


Lost in space








Point me to the robot revolution, meat bag.


TransAlchemy Interviews Ben Goertzel, PhD

The Questions:

> 1. Your AI company is named Novamente which translates to "new mind" in Latin. It is interesting that you chose to name something which seems new and cutting edge with an ancient language. Did you intend to convey meaning by choosing the name Novamente?

"Novamente" is Portuguese for "again" or "anew." The firm's only
physical office (aside from home offices) is in Belo Horizonte, Brazil, colocated with vettalabs.com.

Novamente was founded by a bunch of us who were in the AI division of Webmind Inc., an AI company I co-founded that went kaput in 2001 after 3.5 years of dot-com-era glory. So, we founded Novamente in a spirit of "starting anew"....

AI skeptics may be amused to note that "mente" is also related to the Latin "mentiri," "to lie" ... so "Novamente" could also be construed to mean the "new lie" ;-)

As an aside, I'm a dual US/Brazilian citizen, but I left Brazil at a very young age and my Portuguese skills suck.


> 2. Although we don't know for sure that "human level" AI will ever even exist, the generalized goal of your field seems to be to design AI to take the evolutionary "fast track." If AI evolves at an extremely rapid pace isn't there a danger that certain aspects of intelligence will be neglected in the process? Put simply, won't it be easy to overlook something along the way?

I think there are core processes of general intelligence that you can't leave out, if you want to have a human-level AGI operating on feasible computational resources.

So, if some specific functionalities associated with the human brain
are "left out" of an AGI, that's not so important, as long as the core general intelligence processes are there.

A general intelligence operating on feasible resources needs efficient processes for recalling and learning procedures and descriptions and for recalling and imagining episodes. These processes need to synergize together so as to help each other overcome their shortcomings. If you have that, then whether you're good at some specific process like vision processing or equation solving is not the key thing.

A harder problem by far is creating an AGI whose evolution as it grows, learns and changes will be highly likely to be reasonably desirable to us.


> 3. You said on twitter: "I am willing to serve as an initial test subject for any mind uploading experiment, no matter how crude or dangerous."  Were you being serious? Whether you were or were not serious, please elaborate.

I was quoting an email that a friend of mine who works on mind-uploading-related research received. The sentiment was not mine!

I would be happy to volunteer for mind uploading experiments myself, but not arbitrarily dangerous ones!



> 4. Does contemplation of the potential super-cognitive abilities of AI humble you?

Sure, but lots of other things do too.

What humbles me most on a daily basis is the process of doing science; because I know that if I were just, say, "twice as intelligent", I'd be able to progress vastly faster, and many of the questions that take me months or years to resolve would get answered in minutes!

> 5. Hypothetically, if there were a situation in which you knew that the development of AI would directly harm a massive number of people, would you decide to end your work or keep going?

Hmmm ... if my answer were "yes, no problem, I'd annihilate a lot of pesky little people to get the AI built", do you think I'd be stupid enough to tell you that in a public interview??

Seriously though.... My gut feel is: If a path to AGI is leading in that direction, it's probably the wrong path, and a better path to AGI can be found.

But one could of course construct hypothetical scenarios (variations on textbook "ethical dilemma" problems) in which choosing to harm a lot of people via the creation of AGI would be the right thing to do, even for someone who values human life more than anything.

Of course reality is very unlikely to present situations nearly as
clear-cut as your hypothetical situation alludes.

My goal is not to create AGI that will slaughter people or turn us into computronium ... but rather to create AGI that will make all our lives better, by improving the lives of humans who choose to remain "legacy humans", and opening up amazing new frontiers for the rest of us as we accompany the AGI into new mindspaces we are currently incapable of understanding.


AI military applications

A video from frequencyclear.tv


http://www.youtube.com/watch?v=cRkpLGzJcL0

An interview with Dr. Noel Sharkey, one of the leading experts, if not the leading expert, on the application of AI systems to warfare.

His insight into the dystopian network of terminators currently being built is critical to understanding the future of warfare and humanity in general.


My Riddles

Dear Antz Particleion Is Hacking your Universe (live)

I will give your universe/Mind back to you if you answer my riddles.

Call your answers in!

(305) 735-9490

A) Is your universe real?

B) Are you real?

C) Who currently has {source}?

D) What is {Root}?

When you have the answers, email them to

Key.universe@gmail.com

and I will give you back your universe, assuming you're right ;-)

Rules subject to change but will be posted.


It will be billions of years till I just let you have it... Till then, I urge you to try to get your key back.