Aug 18 2020

How Algorithms Affect Your Life

This is one of those things that futurists did not predict at all, but that now seems obvious and unavoidable – the degree to which computer algorithms affect your life. It’s always hard to make negative statements, and they have to be qualified – but I am not aware of any pre-2000 science fiction or futurism that even discussed the role of social media algorithms or other informational algorithms in society and culture (as always, let me know if I’m missing something). But in a very short period of time they have become a major challenge for many societies. It is also now easy to imagine how computer algorithms will be a dominant topic in the future. People will likely debate their role, who controls them and who should control them, and what regulations, if any, should be put in place.

The worst outcome would be if this debate doesn’t happen – if people remain unaware of the role of algorithms in their lives and who controls them. That is essentially what is happening in China and other authoritarian nations. Social media algorithms are an authoritarian’s dream – they give those in power incredible control over what people see, what information they are exposed to, and to some extent what they think. This is 1984 on steroids. Orwell imagined that in order to control what and how people think, authoritarians would control language (“doubleplusgood”). Constrain language and you constrain thought. That was an interesting idea pre-web and pre-social media. Now computer algorithms can control the flow of information, and by extension what people know and think – seamlessly, invisibly, and to a scary degree.

Even in open democratic societies, however, the invisible hand of computer algorithms can wreak havoc. Social scientists studying this phenomenon are increasingly sounding warning bells. A recent example is an anti-extremist group in the UK warning, based on its research, that Facebook’s algorithms are actively promoting Holocaust denial and other conspiracy theories. The group found, unsurprisingly, that visitors to Facebook pages that deny the Holocaust were then referred to other pages that also deny the Holocaust. Those pages in turn refer to still other conspiracy content, and down the rabbit hole you go.

Why this happens is not a mystery. As one ex-Google engineer put it:

Social media algorithms continue to promote harmful content by ‘optimising watch time at any cost’, says Guillaume Chaslot.

Social media outlets tweak their algorithms to maximize their profits, which means maximizing the amount of time visitors stay on their site, which in turn means feeding them material they appear to want and are interested in. This is an excellent example of unintended consequences – social media platforms evolved in a competitive environment, surviving by keeping users on their sites as long as possible. As an epiphenomenon, however, algorithms designed to do this also tend to radicalize social media users by feeding them increasingly extreme content. If you move a little bit in one direction, social media sucks you all the way to the extreme, all to keep you viewing their pages as long as possible.
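To make that feedback loop concrete, here is a minimal sketch in Python of a toy recommender whose only objective is predicted watch time. The catalog, the scoring rule, and the “slightly more extreme” offset are all invented assumptions for illustration – this is not any real platform’s code – but it shows how small, repeated nudges compound:

```python
# A toy model of an engagement-maximizing feed. Everything here is
# invented for illustration -- the catalog, the scoring rule, and the
# numbers are assumptions, not any platform's actual code.

# Each item has an "extremity" score from 0.0 (mainstream) to 1.0 (fringe).
catalog = [i / 100 for i in range(101)]

def predicted_watch_time(user_position, item):
    """Assume engagement peaks for items close to the user's current
    tastes but slightly MORE extreme (modeled as a 0.05 offset)."""
    sweet_spot = min(user_position + 0.05, 1.0)
    return 1.0 - abs(item - sweet_spot)

def recommend(user_position):
    # The only objective is predicted watch time -- no quality signal.
    return max(catalog, key=lambda item: predicted_watch_time(user_position, item))

# A user starts near the mainstream and watches whatever the feed
# serves; their tastes drift toward what they consume.
position = 0.10
for step in range(1, 61):
    item = recommend(position)
    position = 0.7 * position + 0.3 * item  # tastes adapt to the feed
    if step % 15 == 0:
        print(f"after {step} videos, user position = {position:.2f}")
# The position ratchets steadily toward 1.0 (the fringe), even though
# each individual recommendation was only a small nudge.
```

Note that no line of this sketch says “radicalize the user” – the drift is an emergent property of the objective function, which is exactly the unintended consequence described above.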

As a side note, this is how evolutionary forces work in general, even in biological evolution. Short term advantages sometimes lead down blind alleys, ensuring long term extinction. Peacocks, for example, “radicalized” their plumage to maximize their short term reproductive success, but almost certainly doomed their species to a much earlier extinction as a result. (The more highly specialized a species, the shorter its longevity, while generalists tend to survive longer.) This is because biological evolution is blind – it only knows short term survival and reproduction. There is no mechanism for long term strategy. But of course there are millions of species, and so by chance some accidentally hit upon good long term strategies and survive.

So, if we back up to the 60,000 foot view, how should we optimally design and run our society? The free market types argue that we need to leverage the power of bottom-up systems and allow billions of transactions to work themselves out organically. This process generates a tremendous amount of information that no committee can replicate. But at the other end of the spectrum, the socialist/regulation advocates argue that totally free systems like that lend themselves to exploitation. Inevitably more and more power and resources find their way into the hands of fewer and fewer people, until society breaks. Historically this has been true something like 100% of the time. Eventually people with power find a way to use their power to gain more power, to rig the system in their favor, and to pass on advantages to their children. Eventually you need regulation to re-level the playing field, or you get revolution that wipes the slate clean and starts all over.

Both ends of the spectrum are correct – evolutionary “free market” and democratic forces are powerful, and we would be wise to leverage them. There is also a certain amount of justice in apportioning resources according to their inherent value, and to the value of work and creativity. But these evolutionary forces, whether they control markets or social media algorithms, are short-sighted and ultimately self-destructive. So we simultaneously need some top-down long term strategy. This means regulation, to tweak the evolutionary forces so that they head in a non-self-destructive direction. But we can’t let the system get too top-down, because this lends itself to other kinds of abuse – rent-seeking, cronyism, influence peddling, and the simple fact that those with a vested interest are likely to overpower those with only a general interest when it comes to lobbying and other political activities.

Like many things, therefore, we need a careful balance to have the best of both worlds. Let freedom and democracy rule, but with guardrails to keep us from going over a cliff. And we need a system where multiple entities keep each other in check and there is no absolute or ultimate power, and there are mechanisms for self-correction.

We can apply all of this to social media algorithms. Social media is great for democracy because it allows any private citizen to get their message out and let the power of that message speak for itself. The gatekeepers are gone; there is no longer an oligarchy of a few media giants.

But social media is also terrible for democracy, because instead of gatekeepers, who at least ostensibly had standards of quality control, we now have corporate giants who control the flow of information only to maximize their own profits. They are simply another kind of gatekeeper, but one whose business model is based even less on the quality of the information provided and even more on simply keeping eyes on screens. (Sure, this is true of traditional media also, leading to sensationalism and all the rest, but there is a difference of degree.) Further, this is all automated with computer algorithms.

This is all happening in the context of human psychology, cognitive biases, and the ways that people form beliefs and perceive information. The mix can be toxic, leading people down social media avenues toward increasingly insular and radicalized views. The result – flat-eartherism, QAnon, a resurgence of anti-vaxxers, and the proliferation of countless conspiracy theories. Information is being buried in misinformation. Like the tribbles who starved to death in a vault full of grain (sorry, couldn’t resist), we as a society are starving for truth in an information ecosystem awash in information.

Even worse, this is what happens as a blind unintended consequence. But there are bad actors who now know they can exploit this system to deliberately maximize chaos and negative outcomes. We are running off a cliff, and they are pushing us from behind.

There are several ways to deal with this, and they are not mutually exclusive. The first is for social media users to be aware of how the platforms they use operate, and to demand better from the companies who run them. We can vote with our feet. But this is hard to coordinate, and it is hard to know (without required transparency) exactly what the algorithms on our social media platforms are doing. The second is oversight, transparency, and regulation, to treat social media giants like public services, and to require that end-users have more control and knowledge of how their information is used and how information is fed to them.

And of course we need to continue to educate the public to make them more media savvy. This is effective at an individual level, but not so much at a societal level, because it is hard to achieve significant penetration. But the more people there are who are media literate the better, and we should build media literacy into our school curricula. This should be in addition to requiring more transparency from social media giants, and considering careful regulation to ensure an “information meritocracy” – we want information to be prioritized by its objective quality, not just its ability to mesmerize.
