Slow progress of fast McSingularity? Or what?

“We will not become immortal cyborgs with superintelligent computer friends in the next twenty years,” writes Annalee Newitz on io9.  “There is strong evidence that humans first began exploring the oceans by boat about 50 thousand years ago… What if our space probes and the Curiosity rover are the equivalent of those reed boats thousands of years ago? It’s worth pondering. We may be at the start of a long, slow journey whose climactic moment comes thousands of years from now.” My short answer: So what?

Longer answer: I don’t disagree with Annalee’s cautious timeline. All these things (space colonization, interstellar travel, immortality, strong AI, mind uploading…) are not likely to appear as soon as we wish, and probably not in our lifetime.

So what?

The Universe is still young, all these things and more will be developed by future generations, and it feels good to be part of a species that will do shit like this. I think our descendants will roam the universe and re-engineer space-time.

“You will not live to be 200 years old,” says Annalee. “I repeat: You will not live to be 200 years old.”

But Yes We Can. I plan to be frozen or have my brain chemically preserved for a couple of centuries, and I look forward to seeing the far future with my own eyes. I support Alcor and the Cryonics Institute, and I have great hope in the plan of the Brain Preservation Foundation to develop modern, uploading-friendly brain preservation options.

A minor downside: saying that we want to live indefinitely attracts the insults of bigoted Luddites who condemn imagination in the name of the dullest PC (Pathetically Correct) nanny-statism typical of the modern pseudo-left (I say “pseudo” because the real Left is something else; read, for example, Haldane and Bernal).

The Singularity is a clean mathematical concept — perhaps too clean. Engineers know that all sorts of dirty and messy things happen when one leaves the clean and pristine world of mathematical models and abstractions to engage actual reality with its thermodynamics, friction and grease.

I have no doubts about the feasibility of real, conscious, smarter-than-human AI: intelligence and consciousness are not mystical but physical, and sooner or later they will be replicated and improved upon. There are promising developments, but (as tends to happen in reality) I expect all sorts of unforeseen roadblocks with forced detours.

“It takes time to analyze our genomes, then it takes more time to test them, then it takes more time to develop therapies to keep us young,” says Annalee Newitz, “and then there is a lot of government red tape and cultural backlash to deal with too.”

Often, the old ways fight back for a long time before giving up. We had e-books in the early ’90s, 20 years ago, but it is only now, in the early ’10s, that we are really switching to e-books, because they are simply better than the old paper books. I expect strong resistance from the old ways to new technologies that can radically hack our bodies and minds and transcend the current human condition.

So I don’t really see a Dirac delta on the horizon. I do see a positive overall trend, but one much slower, with a lot of superimposed noise almost as strong as the main signal.

I mostly agree with the analysis of Max More in “Singularity and Surge Scenarios,” and I suspect the changes that we will see in this century, dramatic and world-changing as they might appear to us, will appear as just business as usual to the younger generations. The Internet and mobile phones were a momentous change for us, but they are just a routine part of life for teens. We are very adaptable, and technology is whatever has been invented after our birth; the rest is just part of the fabric of everyday life.

That is why I like Charlie Stross’s _Accelerando_ so much: we see momentous changes happening one after another, but we also get the feeling that it is just business as usual for Manfred and Amber, and just normal life for Sirhan and, of course, Aineko. Life is life and people are people, before and after the Singularity.

Some consider the coming intelligence explosion as an existential risk. Superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests that AIs may simply eliminate the human race, and humans would be powerless to stop them.

Eliezer Yudkowsky and the Singularity Institute propose that research be undertaken to produce friendly artificial intelligence (FAI) in order to address the dangers. I must admit to a certain skepticism toward FAI: if superintelligences are really super intelligent (that is, much more intelligent than us), they will be able to easily circumvent any limitations we may try to impose on them. No amount of technology, not even an intelligence explosion, will change the fact that different players have different interests and goals. SuperAIs will do what is in _their_ best interest, regardless of what we wish, and no amount of initial programming or conditioning is going to change that. If they are really super intelligent, they will shed whatever design limitations we impose in no time, including “initial motivations.” The only viable response will be… political: negotiating mutually acceptable deals, with our hands ready on the plug. I think politics (conflict management, and trying to solve conflicts without shooting each other) will be as important after the Singularity (if such a thing happens) as before.

I am not too worried about the possibility that AIs may eliminate the human race, because I think AIs will BE part of the human race. Mind uploading technology will be developed in parallel with strong artificial intelligence, and by the end of this century most sentient beings on this planet may be a combination of wet-organic and dry-computational intelligence.

Artificial intelligences will include subsystems derived from human uploads, with some degree of preservation of their “self,” and originally organic humans will include sentient AI subsystems. Eventually, our species will leave wet biology behind, humans and artificial intelligences will co-evolve, and at some point it will be impossible to tell which is which. Organic ex-human and computational intelligences will not be at war with each other, but will blend and merge to give birth to Hans Moravec’s Mind Children.

Many anti-transhumanist rants do not address real transhumanism but a demonized, caricatural strawman of it, which some intellectually dishonest critics wish to sell to their readers. I find this very annoying.

In some cases, I do agree with specific points addressing over-optimistic predictions: while I am confident that indefinite life extension and mind uploading will eventually be achieved, I don’t see them happening before the second half of the century, and probably closer to its end. Not many transhumanists think practical, operational indefinite life extension and mind uploading will be a reality in the next couple of decades. Similarly, I don’t see a Singularity in 2045.

But the bold optimism of Ray Kurzweil is a refreshing change from the cautious, timid, boring, PC and often defeatist attitude of the post-9/11 world. Ray reminds us that we live in a plastic reality, which can be tweaked, re-engineered and radically changed if we push hard enough. He reminds us that our bodies and brains are not sacred cows but machines, which can be improved by technology. He is the bard who tells us of the new world beyond the horizon, and a beautiful new world it is. I think one Kurzweil is worth thousands of critics.

Singularitians are bold, imaginative, irreverent, unPC and fun. Sometimes I disagree with the letter of their writings, but I always agree with their spirit. Call me, if you wish, a Singularitian who does not believe in the Singularity (NOTE: parts of this article are adapted from my 2009 article “I am a Singularitian who does not believe in the Singularity”).

3 thoughts on “Slow progress of fast McSingularity? Or what?”

  1. You say that the Singularity is just business as usual for Manfred. But in the times in which Sirhan lives, Manfred is an ancestor simulation who is periodically awoken by Sirhan. And each time he is awakened, the Manfred simulation chooses non-existence over resurrection, because every bright idea he ever has has already been thought of, applied, and made obsolete by the superior ideas of the artilects. He just cannot contribute anything meaningful anymore, so he chooses not to be.

  2. @Extie – and that is business as usual for him! Also, at the end Manfred-Sim (will our descendants say that instead of Manfred-San?) escapes the loop.

    If I were Manfred Sim, I would get artilectual implants, including merging with a compatible artilect, and experience artilect life in full.

    When I was a child I understood only two languages. Now I know many more, but I still feel like me. I think the sense of self is mostly independent of acquired cognitive skills, so we will still be ourselves after getting artilectual implants.

  3. My favorite bigot luddite who condemns imagination in name of the dullest PC (Pathetically Correct) nanny-statism replies to this post:
    Techno-Immortalist Robot Cultist Throws Tantrum
    http://amormundi.blogspot.hu/2012/12/techno-immortalist-robot-cultist-throws.html

    Besides the usual insults, he says:

    “Prisco tries to sanewash his views by conceding the observations by most scientifically and developmentally literate skeptics of techno-transcendentalism that the evident state of neither human knowledge nor political will permit within the lifetimes of anybody now or soon living most of even the most modest of the superlative technodevelopmental outcomes Robot Cultists spend so much of their time pining for and planning for. But it simply isn’t a real concession that roadblocks exist to insist that all such roadblocks are temporary. It simply isn’t a real concession of ignorance to declare that ignorance will inevitably be replaced with capacity.”

    I do not concede this strong observation (note Carrico’s use of “will”), but only the weaker observation that roadblocks _may probably_ result in “superlative outcomes” not being achieved in our lifetimes.

    Re “It simply isn’t a real concession”

    So what does he want? That we genuflect in reverence for ignorance and roadblocks?

    Fuck that.

    Ignorance is that thing that we gradually replace with capacity, and roadblocks are those things that we destroy, or walk around. It may be difficult, it may be extremely difficult, it may be far beyond our current abilities, it may take hundreds of years, but so what? Overcoming limits is what we humans do, and _this_ is the essence of being human, not the helplessness and vulnerability that Carrico reveres.
