I’m an enthusiast for all things ancient as much as I am an enduring optimist for the future (except for the undying wheezing bloodless zombified remake/prequel/sequel/reboot machine that is Hollywood. I mean seriously, they remade The Mechanic. Why, for the love of Thoth???)
Inevitably, my interest in the future leads me to skate close to transhumanist circles. I’m a science-fiction writer and reader, after all, with Bester and Asimov and Clarke and Gibson and Heinlein and Herbert occupying prestigious places on my bookshelves.
Yet I’ll be upfront about it: I find many in the transhumanist community to be counterproductive when it comes to engaging the general population on matters of science. In a nation where roughly half the citizenry still thinks that a planet can form in six days and that being fashioned out of a spare rib is a good explanation for the origins of women, it doesn’t help when transhumanist enthusiasts (poetry!) start waxing lyrical about Jupiter brains and the Singularity. I say this regardless of whether history will confirm or refute such a projection. Certainly one of the chief goals of any skeptic has to be communication.
It is interesting, then, that Time magazine — not ordinarily a bastion of futurism — has featured transhumanism as its cover story, "2045: The Year Man Becomes Immortal":
Raymond Kurzweil takes up to 200 supplements and pills a day to maintain his health. Although he is 62 years old, he estimates that his intense regimen has made him physically 20 years younger. He's not pursuing a good lifestyle to stay fit forever. He just wants to survive until the Singularity, the event where our technology far exceeds the limitations of our human knowledge, rendering us unable to predict our own machine-run future.
… Kurzweil and his fellow Singularitarians **believe** this distant future could come as soon as 2045. With the progress we're making with artificial intelligence, they hope that someday we'll be able to transfer our knowledge into the far sturdier computers and robots that can withstand more damage than our frail human frames.
The bolded word above is mine, to draw attention to the fact that transhumanism does run the risk of becoming a belief system. I have no doubt whatsoever that the computers of thirty years from now will be wildly more powerful than what we have today. Extrapolation is not the same thing as pure invention, if it is done conservatively and with a reasoned methodology. There is no data suggesting that a planet can form in six days — our investigations into the universe point to a rather lengthier process, and one more involved than a being snapping His fingers and presto! You've Got Planets!
Looking at the history of computing and medicine, by contrast, and estimating that certain trends may continue and systems may become more advanced, is certainly different from inventing things wholesale. In fact, after an enjoyable interview with Dr. Christof Koch (professor of biology and engineering at the California Institute of Technology, formerly a post-doctoral fellow at MIT's Artificial Intelligence Laboratory and its Brain and Cognitive Sciences department, and visiting professor at the Institute of Neuroinformatics at the University of Zurich), I published a thought-piece on the possibility of mind-uploading: as Koch and I discussed, if intelligence is truly a mechanistic phenomenon based on our neural configurations (and all evidence certainly points to that), then a sufficiently powerful computer might be capable of storing that data set.
The same might be said of human longevity extension, a subject I've written about on this very site. Aging appears to be a mechanical, not magical, phenomenon. It may well have a mechanical solution, or set of solutions.
For me, these are possibilities which don’t seem to stray from factual data. It isn’t a belief system. It’s an estimation.
Nonetheless, the fervor of transhumanism makes it suspect. There is a balancing act we are tasked with, if only for the sake of intellectual integrity: to weigh reasonable expectations (e.g. we landed humans on the moon, and therefore it is logical that a permanent lunar base may be established) against outliers of supposition (e.g. SETI's conviction that not only is there life elsewhere in the universe — a very reasonable supposition given the size and breadth of the cosmos and the verifiable occurrence of life at least once in cosmic history — but that such theoretical life will be using radio and will be broadcasting in just the right corner of the galaxy at just the right period of history for us to listen in).
Where does transhumanism fit? Thoughts?