Saturday, April 17, 2010

TransAlchemy Interviews Dr. Robert Geraci Part 1

                               

Dr. Robert Geraci studies the power of religion in contemporary culture, particularly with regard to the interaction between religion and science.  Other interests include the history of science, anthropology of science, contemporary art, literature, Christian history, and economics.  Current research focuses upon the relationship between artificial intelligence (AI), robotics, online gaming, and religion (primarily Jewish and Christian apocalypticism but also Japanese Buddhism and Shinto).



What is the main point that you want the reader to take away from your book, Apocalyptic AI?

There are two basic theses to the text, one of which is generally applicable to our culture while the other is more specific to my discipline, the study of religion, science, and technology. For the first, I hope that readers will be convinced that the ideas I'm calling "Apocalyptic AI" influence a wide array of cultural practices (robotics research, virtual world reception, theological and philosophical discourse). Apocalyptic AI is, in short, the belief that in the near future we will overcome the limitations of mortal life by uploading our minds into robots/virtual reality and living forever. That idea is one with considerable power in various cultural niches. The second thesis is that understanding Apocalyptic AI undermines a traditional way of studying religion and science. Starting in the mid-20th century, scholars of religion and science rejected the late 19th-century conflict thesis (that religion and science are at war) and have favored the idea that religion and science can be integrated into one metaphysical worldview. Rejecting the conflict thesis is important, but when you integrate religion and science (as has happened in Apocalyptic AI), you often get something that the liberal Christian theologians advocating reconciliation of religion and science would reject. Thus, in addition to recognizing the social significance of Apocalyptic AI, I'd also like readers to see how we must think more broadly and more anthropologically about religion and science without simply assuming that our own moral teleologies are necessarily the best method for that study.

The Christian Rapture, New Age spiritual "ascension," and the idea of the technological Singularity all play to the theme of collective "evolution": a dissolution of the individual into heaven, godhead, the next dimension, a hive mind, etc. Do you think these similar ideas all stem from the same root? If so, then what is it?
I definitely think that 20th-century understandings of humanity were rooted in a theology of progress that influenced a wide swath of culture. That was, actually, the subject of my dissertation at UC Santa Barbara. You can see the influence of that theology in studies of religion (I was dealing with liberation theology in the Americas), art, and science. The influence of the Neo-Darwinian synthesis cannot be ignored in intellectual circles, including the ways it was adopted by people like Pierre Teilhard de Chardin, who had a considerable impact in transhumanist and pop scientific circles. Fundamentally, that theology of progress is rooted in Judeo-Christian theology, which, no matter how you shake it down (and there are, of course, many different interpretations and traditions in both Judaism and Christianity), always provides faith in a divine providence that pushes humanity towards its own transcendence.

We have noticed an ongoing meme binding the construction of artificial intelligence with the creation of God. As this develops and AIs begin to emerge, do you see the possibility of an AI-worshipping cult?

That's an idea that Hugo de Garis has repeatedly expressed and, to be honest, I'm pretty skeptical about it. There is a sense in which we already treat machines like gods (consider how much time people sacrifice to their cell phones, Facebook, etc.), but no one is prepared to explicitly relate to any machine as a personal deity, and I don't think anyone would willingly enter such a relationship. Should machines become vastly more powerful and intelligent than humankind (which remains a highly speculative idea), I doubt very much we would think of them as actual gods who deserve worship. After all, it's hard to imagine what such machines would need from humanity, and the reciprocity of a gift exchange is fundamental to religious systems (the believer offers food animals, prayer, study, etc. in return for shelter, food, power, etc.).

Do you feel that there are some Singularitarians, Transhumanists, and Atheists who fall into a dogmatic mindset similar to fundamentalist Christians? Are there Transhuman Fundamentalists?
This is certainly the case, though I don't think anyone has done the legwork in figuring out what percentage of the population would fit into any particular category. Indeed, no one to my knowledge has even meaningfully theorized categories of transhumanism. That would be really valuable scholarly work and I'd love to see someone doing it!

Can transhumanism convert itself into a religion? What would be the potential dangers and/or benefits if such a thing occurred?
Although my position on this is controversial, I believe that transhumanism is already a religion. There are transhumanists who find this very uncomfortable but I've even had one or two go from outright opposition to believing that I might be on to something after reading my book. When I say that transhumanism is a religion, I am not offering a value judgment; I don't believe that to be a good thing or a bad thing...it just is. Religion (borrowing from David Chidester) is the "negotiation of what it means to be human with respect to the superhuman or the subhuman." Transhumanism has a host of beliefs and practices associated with overcoming the human condition and attaining a state of superhumanity. This seems clearly religious to me. 

As a religious system, transhumanism can tap into the basic human desires for transcendence, meaningful purpose, a sense of community, and the hope for life after death. All of these are valuable assets, and they are supplemented by transhumanism's focus upon technoscientific methods of achieving these ends. After all, technological solutions are possible for a wide variety of human problems; perhaps death is one of them. It does come with the risk, however, that ideological commitments can stand in the way of pluralism and civic progress. Freud was famous for, among other things, calling religion an illusion. What he meant is that it is something we believe in because we want it to be true (which says nothing about its actual truth content). For example, a girl who believes a prince will come and marry her suffers from an illusion even though, technically, it might happen. It is just that she believes it because she wants it to be true, not because she has any statistical chance of realizing this dream. Such a dream might interfere with more realistic prospects: commitment to this illusion could prevent her from accepting a proposal from a perfectly reasonable choice. Likewise, faith that we will necessarily use technology to do things like resurrect the dead through computer simulation could lead one away from other, more practical satisfactions of human need.

With technologies such as life extension on the horizon, how would religions that "sell" their interpretation of what happens to someone when they die adapt, given that people may no longer choose to die?

If life extension technologies make tangible progress, then religious groups will probably split between those who feel that their gods desire that human beings use our gifts to improve our lives and those who feel that doing so violates the divinely mandated order. This latter group has a tendency to retreat, however. For example, when in vitro fertilization was introduced in the United States, many protesters labeled it unnatural, monstrous, and a violation of divine rule. After a mere three decades, however, those concerns have largely dissipated.

If some form of technological immortality emerges (through mind uploading or mind file backups downloaded into cloned bodies), then we'll see some serious argument over the nature of souls and the afterlife. Religions tend to be adaptable, however, and I expect that if such technologies were widely available then traditional religious groups would find ways to accommodate those beliefs. They'd have to...otherwise their adherents would all jump ship.

Building a "superintelligent" AGI would be akin to incarnating a "god" on earth. Do you think that those who are working on AGI projects gain a sense of religious bliss/satisfaction through their work?
I don't have sufficient experience here to comment. Among roboticists, I'd say no; but roboticists are not working toward AGI. In his recent book, _You Are Not a Gadget_, Jaron Lanier suggests that Singularity theories are rampant in AI circles but he doesn't offer any quantitative data or even personal anecdote to support this claim. The differences between roboticists and AI researchers should definitely be studied separately. Among those who have published in the area, though, it is definitely the case that someone can experience a religious satisfaction in this kind of work. Hugo de Garis says he gets something like that out of his work and, in a rather different way, Hans Moravec has expressed a kind of religious satisfaction also.

Is the concept of an upcoming apocalypse so deeply rooted in the subconscious of man that we feel it's a necessary step to break free from the earthly human condition?
Apocalypticism is not a necessary precondition for belief in some kind of salvation. After all, apocalyptic theologies did not enter into Western religion until the 2nd century BCE. It is quite possible to believe, for example, that this world is a pretty good one and that, nevertheless, a better one awaits. Apocalypticism implies (among other things) a certain dissatisfaction with the world. In Apocalyptic AI (which does not describe every transhumanist or even necessarily everyone who wants to upload his or her mind into a machine), frustration with the limits of the body drives a considerable amount of the ideological enterprise.


To be continued...

Part 2 will be released 4/19/2010


Part 2


If superintelligence is created by Man, doesn't this put him in the position of a creator god? Could this be a problem? What would it mean that the creator's intellect is "inferior" to that of the creation?

Well, there have certainly been religions in the past in which human beings could exercise superiority over the gods in limited ways. For example, the oft-cited Daedalus was possibly more clever than any of the Olympian host. I think human beings relish the thought of their own apotheosis (becoming gods) but that we don't really take those claims very seriously. In the end, we still lack pretty much all of the powers we traditionally ascribe to gods, so we don't really think that we are divine. Creating an intelligent species (forget superior for the moment) would be a tremendous accomplishment but still wouldn't change the fact that we're going to die (for example). Should we develop machines who are vastly more intelligent than we are, we'll either be hoping they can figure out the solutions to our problems or hoping they'll leave us alone. I doubt we'd be relishing our own godhood right around then!

Oxford's "Oracle AI" concept obviously conjures up visions of the ancient Greek Oracle at Delphi. Do you think that AI is being created to "see" the future? Has omnipresence been the goal (of humans) from the start?

In all meaningful projects, I expect that AI is designed to either improve the human condition or entertain us and satisfy our curiosity. See the future? No, I don't think we're meaningfully likely to predict the future but I'm really not qualified to assess that. Of course, I doubt that anyone is. I don't think omnipresence is a first-order concern for humanity. Those are: food, water, shelter, reproduction, safety, and social status. After that, we worry about abstract things like love, companionship, self-esteem, curiosity, etc. The desire to build better machines is a subset of our desire to satisfy both first-order and second-order concerns. If omnipresence fits in there somewhere, it is either as a tertiary concern or as a way to address one of the other concerns, such as safety.

Is religion/our very conception of "god" a self-fulfilling prophecy? Is science a way of giving birth to our ideas?

Through religious faith and practices, we have ways of getting by in the world. Religions provide us with worldviews, with maps for understanding everything that happens around us and directions for acting within the world. Science is often a way of using those worldviews to develop new forms of power, which we call technology. There are meaningful ways in which different religious ideas lead to different ways of practicing science and thus to different kinds of technologies. In a couple of papers, I've considered the differences between U.S. and Japanese approaches to robotics/AI, which are partially grounded in the religious environments in the two countries. So, while science is definitely many things, one of them is that it is a way to empirically produce outcomes derived from our religious heritages.

Does the lack of scientific "proof" of the existence of god lead us to build a material deity out of some strange sense of necessity?

I think we do have a tendency to materialize our understanding of the sacred and it may be that trying to create transcendently powerful machines is a reflection of that. Mostly, however, I think people who desire to build godlike machines want to do so just to see if they can. It's not as though we stand to gain much by such a creation.

Do you see the concept of "God" as a purely human invention, or is it possible that machines may develop their own belief structure that builds on our current theosophical ideas?

I think it's entirely possible that machines will develop their own religious ideas. I don't actually think that we human beings (as a whole, not as individuals) will consider machines intelligent or conscious if they totally lack religious thinking. It is such an integral part of human cognition and culture and practice that any form of life lacking religion would seem deficient to us.

What might a "rite of passage" be in an AI religion?

I presume you mean for the machine. Perhaps engaging in a computation that will absorb all of the machine's resources for a given period of time, and at such a level that the machine risks damage? The very best rites of passage come with risks, so a machine that wants to really accomplish something, to elevate itself to a new condition, would have to find a way to simultaneously endanger itself in order to gain an understanding of what that condition means.

How would we be able to communicate with computer intelligences that become extraterrestrial to us?

Unless we can create machines who can communicate with us, we will never think much of them. If they have this ability in the beginning, I expect they would retain it. It's possible that we'd have a problem where the newer, smarter machine computation platform would leave behind the necessary protocols for communicating with us but, unless the machines plan to leave us entirely behind or they are owned by Microsoft, I don't see why that would happen.

Can you please compare and contrast the concept of the Rapture with uploading?

Both rapture thinking and Apocalyptic AI mind uploading are apocalyptic theologies. This means they both include: 1) seeing the world dualistically (rapture: good v. evil/saints v. sinners/god v. devil ... Apocalyptic AI: good v. bad/machine v. body/virtual v. physical); 2) feeling alienated because of the dualism (the wrong side is winning: we're going to die, etc.); 3) a transcendent new world is on its way (heaven ... virtual reality/computation everywhere); and 4) we will occupy the new world in glorified new bodies (angelic ... robotic or virtual). It's important to keep in mind, however, that not everyone who believes in mind uploading is necessarily apocalyptic. My own book is specifically about people I do take to be apocalyptic and the influence that they have in our culture. Regarding the differences, the most glaring is that rapture theologians look to a god as the transcendent guarantor for the apocalyptic future while Apocalyptic AI advocates look to evolution as a transcendent guarantor (for example, Kurzweil's Law of Accelerating Returns). Because Apocalyptic AI deals in good v. bad rather than good v. evil, it also lacks the sense of sin and the presence of sinners that occupies so much of rapture theology.

