Saturday, September 19, 2009

Nanotech + AGI + Brain = Singularity (Trinity Path)

Quote of the Day 9-19-09


"The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control". I. J. Good


The singularity and the Methuselarity: Commentary





Will the technological singularity, defined as I define it above, happen at all? Not if we merely proceed according to Moore’s law, because that does not predict infinite rates of progress at any point in the future. But wait – who’s to say that progress will remain “only” exponential? Might not progress exceed this rate, following an inverse polynomial curve (like gravity) or even an inverse exponential curve?...given degree of improvement takes time X, the time taken to repeat that degree of improvement is X/2, then X/4 and so on.
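To make the quoted halving schedule concrete, here is the worked sum behind it (my own elaboration, not part of de Grey's text):

```latex
% Time consumed by the halving schedule: X, then X/2, then X/4, ...
% A geometric series with ratio 1/2, so the total is finite:
\sum_{i=0}^{\infty} \frac{X}{2^{i}} = X \sum_{i=0}^{\infty} \left(\frac{1}{2}\right)^{i} = 2X
```

So infinitely many improvement cycles would complete within a finite window of 2X, which is exactly what separates this curve from merely exponential, Moore's-law-style progress.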

I believe an inverse-exponential singularity is not just very likely but an actual property of the singularity itself. The degree of change versus time would certainly increase from the reference point of a human, or of any intelligent agent sitting behind the cusp of exponential self-improvement. For instance, suppose humans create AGI in X amount of time, and that AGI goes on to make AGI Alpha (a second-generation AGI) in Y amount of time. The time an AGI needs to achieve any given task should be less than the time a human takes to do the same task.

AGI time (Y) < human time (X)

Therefore, from the human perspective, matching what the AGI accomplishes in time Y would take roughly X * n,
where n > 1 is the disproportionate factor by which the AGI outpaces humans at completing the same task.

Basically, what I am saying is that it would take humanity far longer to build AGI Alpha than it would take an AGI to build it. So, from the human perspective, the pace of advancement past the creation of AGI does in fact follow an inverse-exponential curve: each successive AGI-driven advance would take more and more human-equivalent years to achieve. This temporal gap in progress will only widen until the AGI creates something that is beyond the temporal scope of humanity ever achieving within the timescale of the universe.
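To make that temporal argument concrete, here is a minimal sketch in Python. It is only an illustration under my own assumptions: each generation is taken to work some constant factor k faster than the one before it (humans being generation zero), while the wall-clock time each generation spends on its next big advance is held fixed. Every number in it is hypothetical, not something from de Grey's paper.

```python
# Illustrative sketch only: all numbers are hypothetical, not taken from de Grey's paper.
# Assumption: each AGI generation works k times faster than the one before it
# (humans are generation zero), while each generation still spends roughly the
# same wall-clock time on its next flagship advance.

AGE_OF_UNIVERSE_YEARS = 13.8e9   # rough figure, used only for comparison


def human_equivalent_years(k=10.0, wall_clock_per_advance=2.0, n_gens=12):
    """Return (generation, human-equivalent years) pairs for successive advances.

    k: hypothetical speedup of each generation over its predecessor.
    wall_clock_per_advance: years each generation spends on its next advance.
    """
    results = []
    speed_vs_human = 1.0
    for gen in range(1, n_gens + 1):
        speed_vs_human *= k  # the builder of this advance is k**gen times faster than us
        # Human-equivalent effort = wall-clock time * how much faster the builder is than us.
        results.append((gen, wall_clock_per_advance * speed_vs_human))
    return results


if __name__ == "__main__":
    for gen, h in human_equivalent_years():
        note = "  <- beyond the age of the universe" if h > AGE_OF_UNIVERSE_YEARS else ""
        print(f"generation {gen:2d}: ~{h:.3g} human-equivalent years{note}")
```

With these made-up parameters, the human-equivalent cost of a single generation's advance overtakes the age of the universe within about ten generations, which is the sense in which later AGI achievements fall outside humanity's reachable timescale.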

Or, as I. J. Good puts it, it will be our last true invention.

Human intelligence, I believe, will not exhibit a super-exponential rate of growth, because our cognitive hardware is incompatible with that.

I completely agree, and it is important that this point is driven home, as some still associate the word "human" with the beings they wish to become. At the same time, Ray Kurzweil points to this when he says humans will merge with their technology and become less biological, a statement I personally believe is an oxymoron, since a less biological human is not a human at all.


Computers have hardware constraints too, of course, so the formal asymptotic limit of truly infinite rates of improvement (and, thus, truly infinite intelligence of such machines) will not be reached – but that is scant solace for those of us who have been superseded (which could, of course, mean “eliminated”)

A reasonable question to ask is, well, since even a super-exponentially self-improving AI will always have finite intelligence, might it not at some point create an even more rapidly self-improving system that could supersede it? Indeed it might (I think) – but, from our point of view, so what? If we have succeeded in creating a permanently friendly AI, we can be sure that any “next-generation” AI that it created would also be friendly, and thus (by the previous paragraph’s logic) largely invisible. Thus, from our perspective, there will only be one singularity.
I have been speculating on this myself for a while now, as I truly believe the singularity can be seen as happening at different points in time depending on the intelligence observing it. A greater-than-human intelligence would place its singularity much further out on the timescale than we do. Aubrey de Grey's position here is very much a human one, as he perceives only one relevant singularity. To quote Ben Goertzel here:


"We've got to take life step by step. One Singularity at a time ;-D " Ben Goertzel
Having tantalised you for so long, I cannot further delay revealing what the Methuselarity actually is. It is the point in our progress against aging at which our rational expectation of the age to which we can expect to live without age-related physiological and cognitive decline goes from the low three digits to infinite(the expected life cycle of the universe). And my use here of the word “point” is almost accurate: this transition will, in my view, take no longer than a few years. Hence the – superficial – similarity to the singularity.


I have noted earlier in this essay that if we survive it at all (by virtue of having succeeded in making these ultra-powerful computers permanently friendly to us)

To conclude, I believe, based on this paper, that achieving longevity escape velocity (LEV), the primary cause of the Methuselarity if not the Methuselarity itself, is not entirely dependent on the singularity being fully achieved. Although I do believe the singularity would be needed to maintain LEV, it may not be needed to achieve LEV. If this premise is correct, it is very likely the Methuselarity would be reached prior to the singularity. One thing that still perplexes me is whether full biological transcendence would be needed to maintain LEV or not.
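Since LEV is a quantitative condition, a minimal sketch (again with hypothetical numbers, not taken from the paper) may help pin down what achieving versus maintaining it means: you are at escape velocity whenever each calendar year of therapy progress adds more than one year of remaining life expectancy.

```python
# Minimal illustration of the longevity-escape-velocity (LEV) condition.
# All numbers are hypothetical; this is not a model from de Grey's paper.


def survival_time(initial_remaining=30.0, gain_per_year=1.2, horizon=500):
    """Track remaining life expectancy when therapies add `gain_per_year`
    years of expectancy per calendar year lived.

    LEV is the regime gain_per_year > 1: remaining expectancy never reaches zero.
    Returns the year of death, or None if still alive at the horizon.
    """
    remaining = initial_remaining
    for year in range(horizon):
        if remaining <= 0:
            return year  # ran out of expectancy: LEV was not reached
        remaining += gain_per_year - 1.0  # one year passes; therapies add gain_per_year
    return None


print(survival_time(gain_per_year=0.5))  # below escape velocity: finite survival time (60)
print(survival_time(gain_per_year=1.2))  # at or above escape velocity: None (open-ended)
```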

I depart with this quote from ReBoot, my favorite cartoon growing up:

"No one knows for sure but I intend to find out Reboot!"



