Jul 01 2019

Privacy in the Age of AI

How strong is the password you use for your e-mail account? When was the last time you changed it? Your e-mail is the gateway to much of the rest of your security – someone who has compromised your e-mail account can use it to reset many of your other passwords. Yet the top 10 passwords that people use are: “123456, 123456789, qwerty, password, 111111, 12345678, abc123, 1234567, password1, and 12345.” There are several reasons for this, but one is that people will generally trade security for convenience.

What’s more, network security experts don’t always appreciate this fact or its implications. I was in a meeting at work about this topic, and the IT guys actually stated their password policy – use a hard alphanumeric password, don’t write it down anywhere, use a different password for every login, and change it every 30 days. Following all of that is practically impossible for the vast majority of the population. People will just shrug at these recommendations and use “123456.” A better recommendation, meeting people halfway, is to use a password consisting of three independent words that are each memorable to you but that someone else would not easily guess.
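A little back-of-the-envelope arithmetic shows why the three-word advice is a reasonable compromise. The sketch below compares the entropy of an 8-character alphanumeric password with that of three words drawn from a personal vocabulary; the 20,000-word vocabulary size is an illustrative assumption, not a measured figure.

```python
import math

def entropy_bits(choices_per_symbol: int, num_symbols: int) -> float:
    """Entropy in bits of a secret built from independent, uniform choices."""
    return num_symbols * math.log2(choices_per_symbol)

# An 8-character password over lowercase letters plus digits (36 symbols).
alnum = entropy_bits(36, 8)           # ~41.4 bits

# Three words chosen independently from a 20,000-word vocabulary
# (the vocabulary size is an assumption for illustration).
three_words = entropy_bits(20000, 3)  # ~42.9 bits

print(f"8-char alphanumeric: {alnum:.1f} bits")
print(f"three random words:  {three_words:.1f} bits")
```

The point is that the passphrase is at least as hard to guess as the password people won’t actually use, while being far easier to remember.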

We are quickly moving beyond the age of passwords, however. Biometrics may save us from our own laziness, at least in part, because they key into something unique about ourselves. They are a biological key that we carry with us. This is partly why hackers are moving toward social engineering, rather than attacking software or hardware, to breach security. People are the weak link – you can still count on people’s inherent laziness.

The same is true of privacy – people will sacrifice a certain amount of privacy for convenience. Sometimes we have no choice. If we want to live and do business in the modern world, we have to put some of ourselves out there. (Unless you want to “live off the grid.”) If you want to get on a plane, you have to let someone rifle through your things, and may even have to submit to a whole-body scan. So while we are individually willing to trade security for convenience, we are collectively willing to trade privacy for security.

I’m curious, though. If people were given the option of boarding a plane without security, would they do it? If you were given the option – go straight to your gate, no security. But then, there is no security on that flight.

In any case, it’s interesting to contemplate all this in light of the rapid advances in artificial intelligence (AI). The joke now is that AI can know more about you than you know about yourself. If you combine AI with big data, and the fact that we are constantly leaving digital traces of ourselves as we do our business online, we have essentially surrendered a huge portion of our privacy. This is only going to get worse. When you enter a website because it has information you desire, or because you are trying to buy something online, and a popup says, “This site uses cookies, click to agree” – do you click it? I admit I have a threshold. If I’m just casually browsing and I don’t really care about the information, I will close the site and not accept the cookies. But if I’m shopping or need access to an article, then I have no choice. I also consider how reliable the website is likely to be – is it a known site, or something sketchy?

Even with a fair degree of caution, you are likely to collect cookies as you live online. Sure, there are ways to browse anonymously. There are ways to block or deactivate cookies. But how many people take the time to do this (again, convenience vs. security)? There is a massive amount of data about ourselves available. Social media sites like Facebook essentially collect information about you and then sell it to advertisers. That is their business model. Stores track your buying habits, which to an AI can say a lot about you. For example:

Target assigns every customer a Guest ID number, tied to their credit card, name, or email address that becomes a bucket that stores a history of everything they’ve bought and any demographic information Target has collected from them or bought from other sources. Using that, Pole looked at historical buying data for all the ladies who had signed up for Target baby registries in the past.

They were able to build a profile of the shopping habits of someone who is pregnant, and then use that to predict which of their customers are pregnant, and then target them with advertising. But this is already old news. We all know that once we go to Amazon to look for grilling equipment, we start getting targeted with ads for grilling equipment (even if we already made our desired purchase). There is every indication that this type of targeted marketing is just going to get more and more intrusive and personal.

But there is an entirely new layer to the use of AI to peer into our private inner lives. AI algorithms are learning how to predict people’s moods and risk for certain behaviors by studying facial expressions, word choices, vocal patterns, and unconscious behavior. At first these are being developed for things like medical diagnosis – diagnosing someone with depression based on speech patterns. There is research into diagnosing neurological disease by analyzing someone’s gait, or predicting your risk of heart disease by looking at your eyes.

This is all good. It seems like we are on the cusp of transforming medical diagnosis through AI (although we will probably be slower to adopt this than we need to be). But once the basic AI technology exists, it will likely not remain confined to the doctor’s office or hospital. How much of an open book will we become to AI armed only with increasingly ubiquitous cameras and our online information? How much of this will we accept if we are told the technology will be used to sniff out terrorists and serial killers?

What if this technology is being used not to change our shopping habits but our voting habits? Then it literally becomes a mechanism to acquire power, or at least to alter the shape of our government or policy. It becomes power itself. This can easily go in very scary directions. I don’t think we can casually assume that it will simply not happen, because of norms, or the free press, or because our fellow citizens will make sufficiently informed choices. If liberal democracies have a dystopian future, in my opinion it is most likely to be at the hands of a technocracy of AI, neural networks, algorithms, deep learning, and big data. These are powerful tools that can eat away at our privacy and liberty bit by bit, and we have demonstrated a tendency to allow this to happen, as long as there is a bit of security or convenience in the bargain.
