Existential Risk: The Alpha Post is the beginning of a series of posts I will be making on the topic of existential risk. If, by some means unknown, the threat of human extinction vanishes, there will be an Existential Risk: The Omega Post.
If at any time I'm killed in a global catastrophic event, consider that the omega post. And while I'm being a tad playful with the subject, I am serious about continuing this series of posts until one of those two events comes to pass.
Okay, let's begin, shall we?
Let's start by defining what we will be exploring. Nick Bostrom, the pioneer on this topic, has written extensively on it and has put forth his own definition of what existential risk means, so that is where we shall begin.
In his paper, Nick categorizes existential risk with a diagram in which the top left represents the most extreme case of potential danger we can reach, factoring in both its global impact and its intensity. He is also very clear that derailing the development of posthumanity is itself an existential risk. If you think I'm stretching this, take a look at his taxonomy, in his own words:
Classification of existential risks
We shall use the following four categories to classify existential risks:
Bangs – Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.
Crunches – The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.
Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.
Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.
This is explored in greater detail in the video below @ the 10:30 mark.
I can't help but see a flaw from the start!
If the creation of posthumanity poses an existential risk to humanity greater than zero, while at the same time diverting the creation of posthumanity is considered an existential risk by the transhumanist, then whose existence is more important, and which is the greater risk?
Hold that question in your mind while you watch a video presentation on this topic by Nick himself @ the Singularity Summit. By the way, don't forget we are heading to the Singularity Summit this year and still need monetary help to seek out answers to these questions and more. Grab a chair; this presentation is 20 minutes long, but well worth watching.
Returning to our question, the answer is easy: it depends on whether you're a human or a transhumanist. Transhumanists will see not creating posthumanity as an existential risk, while humans will see the creation of posthumanity as an existential risk. This is where the true problem lies, folks.