Jan 04 2019

Asimov’s Predictions for 2019


In 1984 science fiction writer Isaac Asimov wrote an article for the Toronto Star making predictions for 2019. I thought that was an odd date to pick, but as The Star explains, 1984 was 35 years after the publication of the novel by that name, so they wanted to look an equal 35 years into the future.

I am interested in futurism. Predicting the future is notoriously difficult, but the attempt is an excellent window onto the attitudes, assumptions, and biases of the people making the predictions. Asimov’s predictions are no exception, but they are especially interesting coming from a professional futurist, and one with a reputation for being particularly prescient.

What did he get right, what did he get wrong, and why? He focused on what he considered to be the three biggest issues for the future: “1. Nuclear war. 2. Computerization. 3. Space utilization.” I think this list itself reflects his bias as a science fiction writer. The choices are reasonable, but he could have chosen medicine, agriculture, transportation, or other areas.

In any case, on nuclear war he was pessimistic in a way that was typical for the height of the Cold War, prior to the collapse of the Soviet Union. He said that if we have a nuclear war, civilization is over, so there is not much more to say about that. Instead he just wrote:

“Let us, therefore, assume there will be no nuclear war — not necessarily a safe assumption — and carry on from there.”

He spent most of the article focusing on the impact of computers on society. This was a frequent topic of his fiction. In his earlier visions of the future he famously got the broad brushstrokes right – computers will get more powerful, more intelligent, and more important to civilization. But he also famously got the details wrong, imagining giant centralized computers running things. He missed the trend toward smaller, ubiquitous, and embedded computers.

Likewise, in his 1984 predictions he was pretty good on the broad brushstrokes. He correctly deduced (although I think this was the conventional wisdom, at least among nerds, at the time) that computers would become increasingly necessary for society in every aspect – government, industry, and education. He also correctly saw that computers would be a fantastic tool of education, and that anyone could learn pretty much anything they wanted by accessing their personal computer from home.

He overestimated the impact this would have on education, however – or perhaps just got the timeline wrong. He may still be proven correct in the next few decades. He predicted the role of teachers would be massively reduced, and that computers would be the focus of education. Human teachers are still critical; rather than being replaced, their role is shifting. Asimov predicted teachers would be limited to inspiring curiosity. What I see instead is that teachers are shifting their teaching to more workgroup-discussion-type interactions. There are fewer didactic lectures, which are more efficiently handled by multimedia.

But Asimov also envisioned a more interactive role for computers – individualizing teaching to each student by learning their needs. I found this prediction especially interesting, because I have thought the same thing. I am, in fact, disappointed that we are not there yet. I think we have the technology; it is simply underutilized. We may get there yet, but we are not where we could be.
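To make the idea concrete, here is a minimal sketch (in Python) of the kind of individualized tutoring Asimov described: a program that tracks a per-topic mastery estimate for one student and always serves up the topic the student seems weakest in. The topics, the update rule, and the numbers are all invented for illustration; real adaptive-learning systems use far more sophisticated models.

```python
# Minimal sketch of an adaptive tutor: topics, numbers, and update rule are
# purely illustrative, not any real product's algorithm.
mastery = {"fractions": 0.2, "decimals": 0.5, "percentages": 0.8}

def next_topic():
    # Serve whichever topic currently has the lowest estimated mastery
    return min(mastery, key=mastery.get)

def record_answer(topic, correct):
    # Nudge the estimate up or down based on the student's answer
    step = 0.1 if correct else -0.1
    mastery[topic] = min(1.0, max(0.0, mastery[topic] + step))

topic = next_topic()                 # -> "fractions", the weakest area
record_answer(topic, correct=True)   # mastery estimate for fractions rises to 0.3
```

Even something this crude individualizes the lesson plan; the point is that the basic ingredients have been available for a long time.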

And this brings up one of the main themes of futurism – it is difficult to predict how technology will be used, much harder than predicting the technology itself. I think Asimov was completely correct in terms of the potential of computers, if anything underselling it. But he wrongly assumed optimal utilization of the technology.

I am not sure why this is the case. It probably has something to do with economics, which I think is often where futurists get tripped up. What happens is not always what makes sense on paper, based on technology, efficiency, and need, but rather what is economically feasible or advantageous. Politics is also a huge factor. We may have the ability to do something, but simply lack the political will. Industries may resist change for various reasons.

Regarding computers, robotics, and industry, again he was broadly on target. He correctly said that computers would be a disruptive technology (if you include the web in that prediction), displacing workers and requiring retraining and higher skills. He was extrapolating from the industrial revolution, and I think he was pretty much spot on.

He missed, however, the real impact of globalization. He did say that governments would need to work together more, but missed, I think, the economic focus of globalization. He also imagined pushback against computers (which didn’t really happen) but didn’t mention the pushback against globalization.

Interestingly, Asimov did not write about AI at all in the article – interesting because it featured so prominently in his fiction. Perhaps he thought we would not achieve AI by 2019. But I think he missed the AI revolution (admittedly still relatively new) because of his vision of what it would be. In Asimov’s world, AI was all about general artificial intelligence – self-aware robots. His fiction, I think, also had a huge impact on public perception of AI. (Other writers, like Clarke, shared a similar vision.)

This has created a bias that persists in the public consciousness to this day – AI means self-awareness. The technology has taken a different path entirely, however. AI means smart computer algorithms that are dynamic, reactive, adaptive, and can learn. They can now also teach themselves.

The recent success of AlphaZero highlights the real path of AI. This is a “deep learning” algorithm that was able to teach itself how to play chess (it was programmed only with the basic rules) in four hours – becoming the best chess player on the planet, even better than any previous algorithm. What we are learning is that AI can do what we need it to do, really well, without consciousness. This is already a huge revolution, and it is going to get orders of magnitude more powerful and important (think self-driving cars, for example). Asimov seems to have missed this entirely.
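For a sense of what “teaching itself from only the rules” means, here is a toy sketch in Python. It is emphatically not AlphaZero’s actual method (which combines deep neural networks with Monte Carlo tree search); it is just a crude self-play learner for tic-tac-toe that is given nothing but the rules and improves by playing games against itself, nudging its move values toward wins and away from losses.

```python
import random
from collections import defaultdict

# The eight winning lines of tic-tac-toe - the only "knowledge" given to the program
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)      # learned value of each (board, move) pair
ALPHA, EPSILON = 0.3, 0.1   # learning rate and exploration rate

def choose(board, legal):
    if random.random() < EPSILON:                   # occasionally explore a random move
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, m)])  # otherwise play the best-known move

def self_play_game():
    board, player, history = " " * 9, "X", []
    while True:
        legal = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, legal)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or " " not in board:
            # Monte Carlo-style update: credit the winner's moves, penalize the loser's
            for state, m, p in history:
                reward = 0 if win is None else (1 if p == win else -1)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

# No human games, no opening book - just tens of thousands of games against itself
for _ in range(50_000):
    self_play_game()
```

Scale this basic idea up from a lookup table to deep neural networks and guided tree search, and from tic-tac-toe to chess or Go, and you get systems like AlphaZero – powerful, narrow, and entirely without self-awareness.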

Finally, on space, Asimov was profoundly wrong (but again, probably just premature). He envisioned a colony on the Moon, with factories and solar power stations beaming energy back to Earth. In reality, we haven’t even been back to the Moon (although NASA just announced plans to return within a decade). Again – this is an issue of politics and economics, not technology.

The overall pattern I see in Asimov’s predictions is this: he was a student of history, was very thoughtful, and was able to extrapolate reasonably well into the future based on the lessons of the past. But he was not able to predict important changes to the paths we were already on. He may have been particularly good at standard futurism, but his futurism was not different in kind – he was just as unable to predict game-changers as anyone else.

The other lesson, I think, is that 35 years is a long enough horizon that even skilled futurists cannot see to that point. All we can really reliably do is make short term predictions based on extrapolations of current trends. We are very bad at predicting changes to those trends, and 35 years is long enough that such changes render future predictions pretty hopeless.

The further question is – will this future horizon be coming closer and closer as the pace of advance increases? Some think the answer to this is a profound “yes.” In fact, that is essentially the idea behind the “Singularity” – the time horizon for our ability to predict the future will essentially shrink to zero.

Also, technology is not the only variable that changes current trends. Politics and economics also play significant roles, and they are even harder to predict.

It is also fun to think of all the things Asimov did not predict, or think important enough to mention. There was no mention of genetic manipulation, social media, anything similar to a smart phone, or the challenges of our energy infrastructure.

But even more interesting is this – what are the changes to current trends that will happen but that we cannot predict today? In 35 years, when we look back, what will today’s futurists have gotten wrong, or simply missed entirely?
