Dec 16 2019

Smartphones, AI, and Disease Management

As new technologies come online they often reverberate in other industries in unanticipated ways, offering possibilities that did not previously exist. The smartphone is perhaps the best recent example. It was designed to be primarily a phone, including texting and video capabilities, but with access to the internet, which also makes it a handheld computer. It didn’t take long for app developers to realize that – hey, if people are carrying around an internet-connected computer at all times, that opens up a whole world of new possibilities.

Most smartphones also have several built-in sensors, including a microphone, a camera, and an accelerometer. This allows for the convenient gathering of information from the user. Sure, this can be used for nefarious purposes, but it can also be leveraged for things that benefit the user. There are now, for example, apps that will monitor your sleep or your daily exercise. Even simple things can be really useful. Patients, for example, can take pictures of themselves while having intermittent symptoms, to show their doctors later. The ability to take pictures pre-existed smartphones, but the fact that almost everyone now carries a camera at all times, one that produces digital pictures that are easily shared, is a game-changer.

This is all even without designing specific sensors optimized for medical applications. It does seem likely that the smartphone will evolve, to some degree, into a “tricorder”-like medical sensing device, communicating information to your doctors in real time. Things like pulse monitoring, heart sounds, breath sounds, retinal scans, and skin exams are already possible. Specialized plug-in or Bluetooth devices could greatly expand this capacity, making some medical testing cheaper, more convenient, and in some ways better. The big advantage is the ability to do long-term monitoring during normal life activities. Such applications also have the potential to expand modern medical testing into poor or developing areas that would otherwise lack it.
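
To give a sense of how simple some of this can be, here is a rough sketch (in Python, with simulated numbers standing in for real video frames) of the idea behind camera-based pulse measurement: press a fingertip over the lens, and each heartbeat subtly changes the brightness of the image, so the dominant frequency of that flicker is your heart rate. The specific numbers here are illustrative, not taken from any actual app.

```python
# A rough sketch of camera-based pulse estimation. In a real app the
# "brightness" series would be the average red-channel value of each video
# frame with a fingertip over the lens; here it is simulated.

import numpy as np

FRAME_RATE = 30.0    # frames per second, typical for phone video
DURATION = 20.0      # seconds of recording
TRUE_BPM = 72.0      # simulated heart rate, just for this demo

t = np.arange(0, DURATION, 1.0 / FRAME_RATE)
brightness = (
    100.0
    + 0.5 * np.sin(2 * np.pi * (TRUE_BPM / 60.0) * t)   # heartbeat ripple
    + 2.0 * np.sin(2 * np.pi * 0.05 * t)                 # slow drift (hand movement)
    + 0.2 * np.random.randn(t.size)                      # sensor noise
)

# Remove the slow drift, then look for the strongest frequency in a
# plausible heart-rate range (40-180 beats per minute).
smoothed = np.convolve(brightness, np.ones(31) / 31, mode="same")
detrended = brightness - smoothed
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(detrended.size, d=1.0 / FRAME_RATE)

band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
estimated_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated pulse: {estimated_bpm:.0f} beats per minute")
```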

Another potential smartphone medical application is user-entered symptom data. For example, many of my patients use smartphone migraine diaries that make it easy to record migraine attacks and possible associated factors in real time. This can provide critical information for monitoring response to treatments and identifying potential triggers.
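
Just to illustrate – and this is a hypothetical sketch, not the data model of any real diary app – a single entry only needs a handful of structured fields to become useful for later analysis:

```python
# A hypothetical sketch of a migraine diary entry as structured data.
# The field names are made up for illustration, not taken from any real app;
# the point is that real-time, structured entries are easy to analyze later.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class MigraineEntry:
    started_at: datetime      # when the attack began
    severity: int             # e.g. a 1-10 self-rating
    took_medication: bool
    duration_minutes: Optional[int] = None                      # filled in once the attack ends
    possible_triggers: List[str] = field(default_factory=list)  # sleep loss, skipped meals, etc.

# An entry recorded in real time, during the attack
entry = MigraineEntry(
    started_at=datetime(2019, 12, 16, 7, 30),
    severity=6,
    took_medication=True,
    possible_triggers=["poor sleep", "skipped breakfast"],
)
print(entry)
```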

The ability to conveniently gather lots of information is great, but it can also become overwhelming. We have that covered as well, at least potentially – that is where AI comes in. AI algorithms can sift through massive amounts of data looking for patterns. While people are really good at pattern recognition, we tend to over-detect patterns that are not really there, and we have limits when dealing with massive amounts of data or with purely statistical patterns. Current AI, however, is great at exactly this.
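
As a toy example of what that sifting looks like, here is a short Python sketch using entirely simulated records: thousands of entries, twenty candidate factors, and only two of them actually influence the outcome. A simple statistical learner picks out which ones carry signal, which is exactly the kind of tedious sifting humans are bad at.

```python
# A toy illustration of machine-driven pattern sifting, on entirely
# simulated data: 5,000 records, 20 candidate factors, and only two of them
# actually influence the outcome. A simple model recovers which ones matter.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_records, n_factors = 5000, 20

X = rng.normal(size=(n_records, n_factors))
# Only factors 3 and 7 genuinely drive the (simulated) outcome.
logits = 0.8 * X[:, 3] - 0.6 * X[:, 7]
y = rng.random(n_records) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
ranked = np.argsort(-np.abs(model.coef_[0]))
print("Factors ranked by apparent importance:", ranked[:5])
# The truly influential factors (3 and 7) should appear at the top.
```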

One potential application is early disease detection. Many illnesses may be extraordinarily subtle at first, and difficult to distinguish from normal aging or other benign causes. As they slowly progress, the person with the illness may gradually become aware that something is not quite right. Even then, an exam by an expert may not be able to tell the difference between the normal range and early disease. This is often where laboratory testing of some type comes in, but we don’t have tests for everything, and even testing may not be definitive.

AI deep learning, however, is good at detecting subtle patterns, and at using feedback to optimize its algorithms. Research is finding that there are often subtle statistical differences that can be detected in early Parkinson’s disease or Alzheimer’s disease, for example. These subtle early changes may allow the disease to be detected even before the sufferer is aware of it. Early detection may matter when it comes to certain treatments, and may facilitate diagnostic procedures, saving health care resources.
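
For the curious, here is a minimal sketch of what the deep learning side might look like, using PyTorch and entirely simulated data: a tiny neural network trained on hypothetical movement features to separate “early disease” from “normal aging.” Nothing here reflects a real clinical model or dataset; it just shows the shape of the approach.

```python
# A minimal sketch of the deep learning side, on simulated data: a tiny
# neural network trained on hypothetical movement features (tremor amplitude,
# gait variability, tap-speed asymmetry, and so on) to separate "early
# disease" from "normal aging". Illustrative only; not a real clinical model.

import torch
import torch.nn as nn

torch.manual_seed(0)
n_subjects, n_features = 1000, 12

X = torch.randn(n_subjects, n_features)                    # simulated feature vectors
y = (torch.arange(n_subjects) < n_subjects // 2).float()   # half "early disease"
X[y == 1] += 0.3                                           # a subtle shift, not obvious by eye

model = nn.Sequential(
    nn.Linear(n_features, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):                                       # a short training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    predicted = (model(X).squeeze(1) > 0).float()
    accuracy = (predicted == y).float().mean().item()
print(f"Training accuracy on the simulated cohort: {accuracy:.1%}")
```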

We are just at the beginning of all of these things coming together – the ubiquity of smartphones, apps to monitor signs or symptoms, and deep learning AI to make sense of it all. An example of this is Medopad, which is developing such an app for the diagnosis and monitoring of Parkinson’s disease. This is currently in clinical trials, so we still have to see how well it will work. But I’m glad to see some companies working on this tech.

There does seem to be some low-hanging fruit ripe for picking with this technology. Some neurological diseases change how we move, how we talk, or how we sleep. Existing technology can monitor these things through smartphone apps (one example is sketched below), and we already know that this kind of information can be useful diagnostically. We really just need to close the loop, and then see how useful it is clinically. It’s always difficult to predict the future, but this seems like a high-probability extrapolation of existing technology. I do predict that smartphone apps will play an increasing role in medical diagnosis and monitoring, and that deep learning will become increasingly integrated into the analysis of medical data.
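
As one sketch of that low-hanging fruit – again with simulated data standing in for the phone’s real motion sensors – Parkinsonian rest tremor typically falls around 4-6 Hz, so a crude first-pass screen could simply compare the accelerometer power in that band to a neighboring band. The bands and ratio below are illustrative assumptions, not a validated test.

```python
# A sketch of a first-pass movement screen, with simulated accelerometer
# data standing in for the phone's real motion sensors. Parkinsonian rest
# tremor typically falls around 4-6 Hz, so one crude measure is how much
# power sits in that band compared with a neighboring band.

import numpy as np

SAMPLE_RATE = 100.0                         # accelerometer samples per second (assumed)
t = np.arange(0, 60, 1.0 / SAMPLE_RATE)     # one minute of data

signal = (
    0.5 * np.sin(2 * np.pi * 0.8 * t)       # ordinary arm and hand movement
    + 0.05 * np.sin(2 * np.pi * 5.0 * t)    # a faint simulated 5 Hz tremor
    + 0.05 * np.random.randn(t.size)        # sensor noise
)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0 / SAMPLE_RATE)

tremor_band = (freqs >= 4.0) & (freqs <= 6.0)
reference_band = (freqs >= 8.0) & (freqs <= 12.0)
ratio = power[tremor_band].mean() / power[reference_band].mean()
print(f"4-6 Hz power relative to 8-12 Hz: {ratio:.1f}x")
```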
