Wednesday, May 27, 2009

TransAlchemy Interviews Ben Goertzel Round 2





1) Considering it's virtually impossible to know what type of mind will arise from the possible mindspace, should anti-AGI technologies be researched alongside current AGI projects?

The best antidote to bad AGI is going to be good AGI.  All other
possible antidotes seem to be poisons in disguise ;-)

2) With the possibility of AGI being the last "tool" we create, aren't we wasting time and effort researching cures, studying the cosmos, and conducting any other intellectual quest? If humanity as a whole realized that it is counterproductive to put money into any type of research outside of AGI, wouldn't this lead toward fundamental breakthroughs in every field?

"Wasting time and effort" is overly strong, but I'd say that we are
allocating our resources in a badly suboptimal way.  We should be
investing at least, say, $1 trillion per year into AGI research.


If humanity as a whole realized that it is counterproductive to put money into any type of research outside of AGI, wouldn't this lead toward fundamental breakthroughs in every field?

Yes, I think so.


3) Following question 2: You've mentioned in a lecture that if humanity ever found itself on the brink of a catastrophic disaster, it would be in our favor to build a God overnight. Yet in this scenario, aren't we solving one problem by creating another potential problem?

There is some risk involved in building a superhuman AGI (whether it's
a standalone AGI, a cyborg, a global brain, or whatever).

But the alternative is to have increasingly powerful other
technologies in the hands of humans with our unreliable, horribly
flawed "legacy" brains and motivational systems.

In my view the only fairly likely alternatives for our mid-term future are

-- return to the pretechnological age (only likely to happen via some
sort of mass destruction)

-- create beneficial superhuman AGI

-- destruction all around




4) On your blog you toy with the concept of AGI being the pope in what may become a synergy between science and religion. Considering the sheer intelligence level that our computers may one day achieve, would that make them more qualified to rule/govern humans?

Governance will have a rather different flavor once the problem of
material scarcity is solved.

But, in a word: yes.


5) Considering that Professor Hugo de Garis's research is on the hardware side and yours is more on the software side, do you see either of these fields making significant leaps over the other? Basically, how do you see the interactions of both of these approaches evolving, separately and together, over, let's say, the next 5-10 years?

Hugo is working in software too; he's not building novel hardware.

Powerful computing hardware is a necessary enabler for AGI.  But
without the right software, hardware will never be intelligent.

Computing hardware will keep evolving rapidly for reasons other than
AGI, and it will be one of the key enablers of AGI.


My Riddles

Dear Antz Particleion Is Hacking your Universe (live)

I will give your universe/Mind back to you if you answer my riddles.

Call your answers in!

(305) 735-9490

A) Is your universe real?

B) Are you real?

C) Who currently has {source}?

D) What is {Root}?

When you have the answer, email it to

Key.universe@gmail.com

and I will give you back your universe, assuming you're right ;-)

Rules subject to change but will be posted.


It will be billions of years till I let you just have it... Till then, I urge you to try to get your key back.