Incomplete mindfiles can be completed with information available in the cloud

According to Martine Rothblatt, our minds will be uploadable, in good enough shape to satisfy most everyone, by reconstructing them from information stored in software mindfiles. The reconstruction will be achieved iteratively by AI software designed for this purpose, dubbed mindware. Mindfiles recorded with today’s not-so-advanced technology are necessarily incomplete, but an incomplete mindfile can be completed with information available in the cloud.

This is the Bainbridge-Rothblatt “soft” approach. Instead of (or besides) reading low-level information physically encoded in the brain with “hard” brain readout technologies, we can write a lot of high-level information out of the brain as diaries, blogs, pictures, videos, answers to personality tests, etc., building up over the years a large database of personal information (see the CybeRev and Lifenaut projects). The hope is that some future technology will be able to bring the information in the database to life as a valid continuation (from both objective and subjective points of view) of the original person.

An interesting related concept is the “me-program,” a generic model of a human mind that can act as a lower-level layer of firmware and system software for the higher-level personal information in a mindfile. To use a common and very simplified analogy, if this document that I am writing is a person, the software running on this PC (from low-level firmware to Windows and Word) is the me-program. By analogy with the genome, it seems plausible that most of the information that constitutes a person may be in the me-program, and the actual “self” may constitute only a small part of the total information. The simplest assumption is that the me-program is the same, for all practical purposes, for different persons. It follows that, once we develop a suitable me-program, we can use it as a platform for reconstructing individual minds as higher-level plug-in modules. The recently funded Human Brain Project in Europe, and the much-rumored Brain Activity Map project in the U.S., may develop knowledge and data for the me-program.
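
To make the layering in this analogy concrete, here is a deliberately toy Python sketch. Everything in it (the MeProgram and Mindfile names, the recall method) is hypothetical; it only illustrates the idea that one generic platform could be personalized by plugging in an individual mindfile, nothing more.

```python
# Toy illustration of the layering: one generic "me-program" platform,
# personalized by plugging in an individual mindfile as a higher-level module.
# All names here are hypothetical, not an actual mindware design.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Mindfile:
    """Personal, high-level information: diaries, pictures, test answers, etc."""
    owner: str
    entries: list[str] = field(default_factory=list)


class MeProgram:
    """Generic model of a human mind: the shared 'firmware and system software' layer."""

    def __init__(self) -> None:
        self.mindfile: Mindfile | None = None  # the personal plug-in module

    def load(self, mindfile: Mindfile) -> None:
        """Plug an individual mindfile into the shared platform."""
        self.mindfile = mindfile

    def recall(self, query: str) -> list[str]:
        """Stand-in for whatever retrieval or simulation real mindware would do."""
        if self.mindfile is None:
            return []
        return [e for e in self.mindfile.entries if query.lower() in e.lower()]


if __name__ == "__main__":
    platform = MeProgram()  # the same generic me-program for everyone...
    platform.load(Mindfile("Giulio", ["Lived at Via Orazio 10, Napoli"]))
    print(platform.recall("napoli"))  # ...made individual by its mindfile
```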

“Sideloading,” proposed by Greg Egan in Zendegi, consists of tweaking, fine-tuning and training a me-program, based on the information in a mindfile, until it behaves (and perhaps feels) like a specific person. I am confident that at least this limited form of mind uploading will be available by mid-century.
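
To give a rough feel for what “tweaking, fine-tuning and training” could mean in practice, here is a minimal caricature in Python: a generic model starts from neutral settings and is repeatedly nudged until its answers match what the mindfile records. The traits, scores and update rule are invented for illustration; real mindware would be vastly more complex.

```python
# A minimal caricature of "sideloading": start from a generic model and keep
# nudging its parameters until its answers match what is recorded in the
# mindfile. The traits, the scoring and the update rule are all hypothetical.
import random

# Recorded answers from the mindfile: personality-test items scored 0..1.
mindfile_answers = {"enjoys_crowds": 0.2, "likes_classical_music": 0.9}

# The generic me-program starts with neutral trait values.
model = {trait: 0.5 for trait in mindfile_answers}


def discrepancy(model: dict, target: dict) -> float:
    """How far the tuned model is from the person recorded in the mindfile."""
    return sum((model[t] - target[t]) ** 2 for t in target)


learning_rate = 0.1
for step in range(200):
    trait = random.choice(list(mindfile_answers))
    # Nudge the chosen trait toward the recorded answer (a crude update rule).
    model[trait] += learning_rate * (mindfile_answers[trait] - model[trait])

print(discrepancy(model, mindfile_answers))  # close to 0 after tuning
```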

Even if a generic me-program can be reused to bring all mindfiles back to life, building a sufficiently rich mindfile remains a daunting task. I often have the impression that, with our current mind-to-computer interface technology (basically posting text, pictures, sound and video clips and social network updates, and answering questionnaires), it would take more than a lifetime to post enough of the information that makes me me. My friend Fred Chamberlain, now resting in ice at Alcor waiting for revival, built what is probably the richest mindfile so far, but sometimes I fear that even his mindfile is not rich enough.

Or maybe it is?

Mindfiles recorded with today’s not-so-advanced technology are necessarily incomplete, but an incomplete mindfile can be completed with information available in the cloud. For example, I never write about my own childhood (I remember it fondly, but I guess it would be boring for others). But here is the address where I lived between the ages of 4 and 12: “Via Orazio 10, Napoli, Italy.” This short string of letters and numbers doesn’t seem to contain much information. But go there in Google Street View (embedded below) and you can take a virtual walk through all the places that were familiar to me when I was a child. This is the place where I was picked up by the primary school bus, and the house is right on the other side of the road. I don’t need to post pictures and I don’t need to remember the details, because this walk down memory lane is out there in the cloud, and everyone can experience it with Google Street View.

Future versions of Street View will be even more sophisticated, with the possibility of going inside buildings and seeing a place as it was in the past.

The street address permits guessing, with high probability, which languages I heard when I was a child. I will confirm it here: I heard Neapolitan and Italian. These are both well-known and documented languages, with bazillions of bytes of audio recordings in the cloud. What music did I hear when I was a child? My mother was a classical piano player and teacher, and my uncle was a well-known singer. What news did I hear, and what did I watch on TV? Just combine my date of birth with historical records and Italian TV broadcasting archives. The information in this short paragraph permits assembling a huge amount of information about me. Future AI systems may be able to seamlessly complete incomplete mindfiles with information available in the cloud, with impressive results.
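
Here is a small Python sketch of the kind of cross-referencing such a system might do: a couple of seed facts are expanded into a richer set of derived facts. The “cloud” is a hard-coded stand-in for Street View imagery, language databases and broadcast archives, and every name and value (including the placeholder birth year) is illustrative, not a real query or a real biographical record.

```python
# Sketch of completing an incomplete mindfile with outside information: a few
# short seed facts are expanded into derived facts. The "cloud lookup" below is
# a hard-coded stand-in for real queries to Street View, language databases,
# broadcast archives, etc.; all names and values are illustrative only.

seed_facts = {
    "childhood_address": "Via Orazio 10, Napoli, Italy",
    "birth_year": 1960,  # placeholder value, not the author's actual birth year
}


def cloud_lookup(fact_name: str, value) -> dict:
    """Stand-in for querying public sources about a single seed fact."""
    if fact_name == "childhood_address" and "Napoli" in str(value):
        return {
            "childhood_languages": ["Neapolitan", "Italian"],
            "street_view_walk": f"imagery available for {value}",
        }
    if fact_name == "birth_year":
        return {"childhood_tv": f"Italian broadcast archives, {value}-{value + 12}"}
    return {}


def enrich(seeds: dict) -> dict:
    """Expand a few seed facts into a larger set of derived facts."""
    enriched = dict(seeds)
    for name, value in seeds.items():
        enriched.update(cloud_lookup(name, value))
    return enriched


print(enrich(seed_facts))
```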