I think religions that provide hope in personal resurrection, whether traditional religions based on the “supernatural” or modern Cosmist religions based on science, might be our best protection from the reckless pursuit of superintelligence and other risky technologies.
In “Superintelligence: Paths, Dangers, Strategies” (2014), Nick Bostrom shows that the extermination of humanity as we know it, by future superintelligent entities, is a real existential risk that we should start taking very seriously. A superintelligence would be smarter than you in the way that you are smarter than a beetle, and its interests may be incompatible with the existence of humanity.
I find hope more interesting (and fun) than fear, and so I have never been too worried about existential risks, but I must admit that Nick has valid points. See my review of Superintelligence, where I conclude that “it seems reasonable to follow Nick’s advice and pursue very cautiously – if at all – research that could result in superintelligence.”
Not everyone agrees. On the contrary, many brilliant and imaginative people wish to advance superintelligence research at all costs. Bostrom clearly identifies the problem:
We, as individuals, don’t want to die.
If nothing happens, the default outcome is that we are all dead in a few decades, but superintelligence could change things radically:
“[S]uperintelligence could almost certainly devise means to indefinitely prolong the lives of the then still-existing humans… or helping them shuffle off their mortal coils altogether by uploading their minds to a digital substrate.”
Today many imaginative scientists and science-literate laypersons, who could appreciate Nick’s arguments, believe that death is final. They feel doomed to the irreversible non-existence of certain death, unless the superintelligence explosion happens in their lifetime, and therefore they want to push forward recklessly, as fast as possible.
By contrast, those who hope to be resurrected after death, by either supernatural agencies or future science and technology, do not feel the same urgency to accelerate at all costs (this is my own position). Therefore I think religion, or forms of scientific spirituality that offer hope in personal resurrection and an afterlife, can help.
It’s evident that belief in resurrection has survival value for societies. Those who believe that they will live again are less afraid to die in battle. Similarly, those who believe that they will see their loved ones again after death are less likely to be permanently crippled by despair when a loved one dies, and more likely to keep making a positive contribution to society. In general, faith in an afterlife makes believers a bit less obsessed with immediate survival here and now, and a bit more likely to put the long-term interest of the community above their own immediate interest. I think it’s clear that these are the reasons why all societies developed religion.
Today, we know too much science to believe in traditional, revealed religions that are not based on science, or that are in direct conflict with it. Therefore, if we want to keep the benefits of religion, we must find ways to make it compatible with science. Reviving religion in a transhumanist formulation that doesn’t ask believers to give up science or other desirable aspects of modern thinking is the main focus of my work.
How can future science and technology resurrect the dead? Nobody knows, but there are promising indications of possible paths in today’s science and philosophy. I invite you to browse the turingchurch.com website, which is mainly focused on that question.
Back to Bostrom and future superintelligence:
“[T]he superintelligence might be able to create relatively faithful simulations of some past people – simulations that would be conscious and that would resemble the original sufficiently to count as a form of survival (according to at least some people’s criteria). This would presumably be easier for people who have been placed in cryonic suspension; but perhaps for a superintelligence it would not be impossible to recreate something quite similar to the original person from other preserved records such as correspondence, publications, audiovisual materials and digital records, or the personal memories of other survivors. A superintelligence might also think of some possibilities that do not readily occur to us.”
The first part of the quote mentions cryonics and mindfiles (Bainbridge-Rothblatt “softcopy cryonics”). Bainbridge and Rothblatt make a very persuasive case that future AI technology may be able to patch a person together from bits, snippets and traces stored in the cloud.
But what about those who died before the cloud?
In sober academic language, Bostrom says: “A superintelligence might also think of some possibilities that do not readily occur to us.”
My interpretation: future superintelligences might find ways to tweak and re-engineer space-time, develop time-scanning and quantum-archaeology methods, and bring back the dead from the past, in fulfillment of the Common Task of Fedorov and the Russian Cosmists.
I hope humanity and technology will co-evolve, with humans enhanced by synthetic biology and artificial intelligence, and artificial life powered by mind grafts from human uploads, blending more and more until it is impossible, and pointless, to tell which is which. I hope our superintelligent hybrid mind children will spread to the stars, become masters of space and time, and pursue the Common Task: to bring us back. However, I realize that this is not an inevitable outcome but a best-case scenario.
The best-case scenario requires humanity to survive and to pass on our most cherished values (love and compassion for all sentient life) to our superintelligent hybrid mind children. We may need to pursue potentially dangerous research very, very cautiously, without the reckless urgency that comes from hopeless despair.
The same considerations apply to other advanced technologies that pose potential existential dangers. For example, real Drexlerian molecular nanotechnology could permit radical biological life extension and mind uploading, but also the accidental development of self-replicating “grey goo” that eats the biosphere.
If you run too fast to catch the train of immortality, you may end up crushed on the rails. But if you believe that another train will come, you can find the selfless strength to work carefully and patiently toward a good future for everyone. Therefore I think religions that provide hope in personal resurrection, whether traditional religions based on the “supernatural” or modern Cosmist religions based on science, might be our best protection from the reckless pursuit of superintelligence and other risky technologies.
Image: Flock, by Sven Geier