It goes without saying that there are not enough doctors in the world to see everyone, every day, for all our health needs. Doctors will only see us if we go to their offices, and will only run complicated tests if they have a reason to do so. The situation is even worse for those living in rural areas and the developing world, as they may not even have a doctor nearby.

We are, and always will be, the first line of defense for our own health. We can figure out when something is wrong, like when a parent checks their child’s temperature using the back of their hand to see if they have a fever.

However, deciding whether an observation warrants raising concern can be difficult. A study in 1984 revealed that mothers checking the temperature of their children with their hand were correct only half of the time. Tools that allow people to make objective measurements of their health could help them get the treatment they need much sooner. This can improve their long-term outlook and reduce the cost of healthcare.

Managing cancer with a selfie

As a computer science PhD student, I aim to make subjective medical observations more precise using a technology many of us already carry: the smartphone. Two of my projects, BiliScreen and PupilScreen, rely on smartphone cameras to characterize visual symptoms that appear in the eye better than an unaided human could.

The first, BiliScreen, measures the extent of a person's jaundice from a picture of their eyes. Jaundice is the yellowing of the sclera, or the whites of the eye, due to the buildup of a compound called bilirubin. Currently, bilirubin levels can only be measured with a blood draw, usually after jaundice has already become obvious. Because jaundice can be an early indicator of diseases like pancreatic or liver cancer, catching it sooner could prompt earlier treatment and even help people monitor their condition after diagnosis. BiliScreen uses a machine learning model that learns how the color of the sclera varies with different bilirubin levels. The more examples the model sees, the smarter it gets and the better it performs.
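To make the idea concrete, here is a minimal sketch in Python of that kind of color-to-bilirubin regression. It is not BiliScreen's actual pipeline: the feature choices, the random-forest model, and the synthetic training data are all illustrative assumptions.

```python
# A minimal sketch of the idea behind BiliScreen's model (not the actual app code):
# learn a mapping from sclera color features to a bilirubin level. The synthetic
# data, feature choices, and model below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def sclera_features(pixels: np.ndarray) -> np.ndarray:
    """Summarize an (N, 3) array of RGB sclera pixels as a small feature vector."""
    mean_rgb = pixels.mean(axis=0)
    std_rgb = pixels.std(axis=0)
    yellowness = mean_rgb[0] + mean_rgb[1] - 2 * mean_rgb[2]  # crude "yellow" score
    return np.concatenate([mean_rgb, std_rgb, [yellowness]])

def synthetic_eye(bilirubin: float) -> np.ndarray:
    """Fake sclera pixels that get yellower (less blue) as bilirubin rises."""
    base = np.array([230.0, 225.0, 220.0 - 8.0 * bilirubin])
    return np.clip(base + rng.normal(0, 5, size=(500, 3)), 0, 255)

# Build a synthetic training set of (color features, bilirubin in mg/dL) pairs.
levels = rng.uniform(0.3, 15.0, size=200)
X = np.array([sclera_features(synthetic_eye(b)) for b in levels])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, levels)

# Estimate bilirubin for a new (synthetic) eye image's segmented sclera region.
test_eye = synthetic_eye(9.0)
print(f"Estimated bilirubin: {model.predict([sclera_features(test_eye)])[0]:.1f} mg/dL")
```

In the real system, the sclera would first have to be segmented out of the photo and corrected for lighting; the point here is simply that the model's input is color information and its output is an estimated bilirubin level.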

Catching concussion early

The goal of PupilScreen is to help people diagnose concussions right after a collision, advising them to seek treatment if needed. The app works by measuring how our pupils change size in response to light. When bright light is directed towards the eye, the pupil shrinks to reduce the amount of light that gets in. When a person has had a concussion, this reflex can be lessened or disappear altogether. While trained professionals can usually check this reflex unaided, PupilScreen takes a precise measurement.

The PupilScreen app uses the smartphone's flash to shine a light into the patient's eyes. The camera records the pupil's response, and a 'neural network' quantifies how quickly the pupil changes size and by how much.
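As a rough illustration of the measurement step, the sketch below computes two such quantities, constriction amplitude and maximum constriction velocity, from a series of per-frame pupil diameters. In the real app a neural network extracts the pupil from the video; here the diameters and the response curve are synthetic placeholders, and the numbers are not clinical values.

```python
# A simplified sketch of the measurement step (the real app uses a neural network
# on video frames; here we assume per-frame pupil diameters are already available).
import numpy as np

def pupil_response_metrics(diameters_mm: np.ndarray, fps: float) -> dict:
    """Quantify how much and how fast the pupil constricts after a light flash.

    diameters_mm: pupil diameter per video frame, starting at flash onset.
    fps: camera frame rate in frames per second.
    """
    baseline = diameters_mm[0]                       # diameter at flash onset
    minimum = diameters_mm.min()                     # smallest diameter reached
    amplitude = baseline - minimum                   # how much the pupil shrank (mm)
    velocity = np.gradient(diameters_mm) * fps       # change in mm per second
    max_constriction_velocity = -velocity.min()      # fastest shrinking speed
    return {
        "constriction_amplitude_mm": amplitude,
        "max_constriction_velocity_mm_s": max_constriction_velocity,
    }

# Example with a synthetic response curve: the pupil shrinks from 6 mm to 3.5 mm.
fps = 30.0
t = np.arange(0, 3.0, 1 / fps)
diameters = 3.5 + 2.5 * np.exp(-3.0 * t)             # fake exponential constriction
print(pupil_response_metrics(diameters, fps))
```

A weakened reflex would show up in such a curve as a smaller amplitude and a slower constriction velocity than expected.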

Both apps are in their early stages. So far, we have conducted modest clinical trials and are now working to test the apps on more people. We hope to finish these studies within a year. The next step will be to work with the United States Food and Drug Administration to make the apps publicly available on app stores.

Smartphone cameras provide limitless potential for making visual observations more precise. Image sensors take the same visual information we see and encode it into a form that can be processed by a machine. In some cases, sensors can even pick up on differences that we cannot notice ourselves. None of this potential would be possible without the contributions of this year’s QEPrize winners.

Alex Mariakakis

Alex Mariakakis is a graduate student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. He is advised by Dr. Shwetak Patel and Dr. Jacob O. Wobbrock. His research interests are in the development and deployment of mobile health applications as well as understanding the abilities of mobile phone users through their devices' built-in sensors.