Monday, April 19, 2010

TransAlchemy Interviews Dr. Robert Geraci part 2

Dr. Robert Geraci studies the power of religion in contemporary culture, particularly with regard to the interaction between religion and science.  Other interests include the history of science, anthropology of science, contemporary art, literature, Christian history, and economics.  Current research focuses upon the relationship between artificial intelligence (AI), robotics, online gaming, and religion (primarily Jewish and Christian apocalypticism but also Japanese Buddhism and Shinto).

-If superintelligence is created by Man, doesn't this put him in the position of a creator god? Could this be a problem? What would it mean that the creator's intellect is "inferior" to that of the creation?

Well, there have certainly been religions in the past in which human beings could exercise superiority over the gods in limited ways. For example, the oft-cited Daedalus was possibly more clever than any of the Olympian host. I think human beings relish the thought of their own apotheosis (becoming gods), but we don't really take those claims very seriously. In the end, we still lack pretty much all of the powers we traditionally ascribe to gods, so we don't really think that we are divine. Creating an intelligent species (forget superior for the moment) would be a tremendous accomplishment, but it still wouldn't change the fact that we're going to die (for example). Should we develop machines who are vastly more intelligent than we are, we'll be either hoping they can figure out the solutions to our problems or hoping they'll leave us alone. I doubt we'd be relishing our own godhood right around then!

-Oxford's "Oracle AI" concept obviously conjures up visions of the ancient Greek Oracle at Delphi. Do you think that AI is being created to "see" the future? Has omnipresence been the goal (of humans) from the start?

In all meaningful projects, I expect that AI is designed either to improve the human condition or to entertain us and satisfy our curiosity. See the future? No, I don't think we're meaningfully likely to predict the future, but I'm really not qualified to assess that. Of course, I doubt that anyone is. I don't think omnipresence is a first-order concern for humanity. Those concerns are food, water, shelter, reproduction, safety, and social status. After that, we worry about abstract things like love, companionship, self-esteem, curiosity, etc. The desire to build better machines is a subset of our desire to satisfy both first-order and second-order concerns. If omnipresence fits in there somewhere, it is either as a tertiary concern or as a way to address one of the other concerns, such as safety.

-Is religion/our very conception of "god" a self-fulfilling prophecy? Is science a way of giving birth to our ideas?

Through religious faith and practices, we have ways of getting by in the world. Religions provide us with worldviews, with maps for understanding everything that happens around us and directions for acting within the world. Science is often a way of using those worldviews to develop new forms of power, which we call technology. There are meaningful ways in which different religious ideas lead to different ways of practicing science and thus to different kinds of technologies. In a couple of papers, I've considered the differences between U.S. and Japanese approaches to robotics/AI, which are partially grounded in the religious environments in the two countries. So, while science is definitely many things, one of them is that it is a way to empirically produce outcomes derived from our religious heritages.

-Does the lack of scientific "proof" of the existence of god lead us to build a material deity out of some strange sense of necessity?

I think we do have a tendency to materialize our understanding of the sacred and it may be that trying to create transcendently powerful machines is a reflection of that. Mostly, however, I think people who desire to build godlike machines want to do so just to see if they can. It's not as though we stand to gain much by such a creation.

-Do you see the concept of "God" as a purely human invention, or is it possible that machines may develop their own belief structure that builds on our current theosophical ideas?

I think it's entirely possible that machines will develop their own religious ideas. I don't actually think that we human beings (as a whole, not as individuals) will consider machines intelligent or conscious if they totally lack religious thinking. It is such an integral part of human cognition and culture and practice that any form of life lacking religion would seem deficient to us.

-What might a "rite of passage" be in an AI religion?

I presume you mean for the machine. Perhaps engaging in a computation that will absorb all of the machine's resources for a given period of time, and at such a level that the machine risks damage? The very best rites of passage come with risks, so a machine that wants to really accomplish something, to elevate itself to a new condition, would have to find a way to simultaneously endanger itself in order to gain an understanding of what that condition means.

-How would we be able to communicate with computer intelligences that become extraterrestrial to us?

Unless we can create machines who can communicate with us, we will never think much of them. If they have this ability in the beginning, I expect they would retain it. It's possible that we'd have a problem where the newer, smarter machine computation platform would leave behind the necessary protocols for communicating with us but, unless the machines plan to leave us entirely behind or they are owned by Microsoft, I don't see why that would happen.

-Can you please compare and contrast the concept of the Rapture with uploading?

Both rapture thinking and Apocalyptic AI mind uploading are apocalyptic theologies. This means they both include: 1) seeing the world dualistically (rapture: good v. evil/saints v. sinners/god v. devil ... Apocalyptic AI: good v. bad/machine v. body/virtual v. physical); 2) feeling alienated because of the dualism (the wrong side is winning: we're going to die, etc.); 3) a transcendent new world is on its way (heaven ... virtual reality/computation everywhere); and 4) we will occupy the new world in glorified new bodies (angelic ... robotic or virtual). It's important to keep in mind, however, that not everyone who believes in mind uploading is necessarily apocalyptic. My own book is specifically about people I do take to be apocalyptic and the influence that they have in our culture. Regarding the differences, the most glaring is that rapture theologians look to a god as the transcendent guarantor for the apocalyptic future, while Apocalyptic AI advocates look to evolution as a transcendent guarantor (for example, Kurzweil's Law of Accelerating Returns). Because Apocalyptic AI deals in good v. bad rather than good v. evil, it also lacks the sense of sin and the presence of sinners that occupies so much of rapture theology.
