Enlisting Smartphones
Anthony Law, MD, PhD, an expert in machine learning at the University of Washington (UW) in Seattle, is part of a separate research team that has already succeeded in bringing the technology out of the lab and into the clinic, though more work is needed before it is ready for widespread use.
Explore This Issue: November 2020

“Our focus has been on using machine learning to diagnose a laryngeal mass,” said Dr. Law, who recently completed his laryngology fellowship at UW and is now on the faculty at Emory University, in Atlanta. “But it’s important to note that, oftentimes, vocal changes are the first sign of an abnormal laryngeal growth, whether malignant or benign. It’s all intertwined.”
Clinician, otolaryngologist, and neurologist expertise cannot be replaced, but their clinical knowledge can be augmented by this tool—and frankly it needs to be, based on the delayed time to definitive diagnosis and treatment that’s held true for so long. —Kristina Simonyan, MD, PhD
Dr. Law’s approach, detailed in an ongoing study with Grace Wandell, MD, an R3 resident in otolaryngology, and Tanya Meyer, MD, a surgeon at UW Medicine’s Head and Neck Surgery Center, was to build a machine-learning algorithm that primary care physicians could use with smartphones. The system analyzes a patient’s vocal phonatory signal, as recorded on the smartphone, to classify the patient as high or low risk for a vocal fold mass—and thus for laryngeal cancer—flagging those who need expedited specialty follow-up care.
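The article does not describe the team's actual features or model, but the general shape of such a screening pipeline—extract acoustic features from a phonation recording, then make a binary high/low-risk call—can be sketched as follows. Everything here is illustrative: the feature set (RMS energy, zero-crossing rate, a crude frame-to-frame amplitude variation), the logistic model, and its weights are assumptions, not the published system.

```python
import numpy as np

def extract_features(signal, sr=16000):
    """Compute a few simple acoustic features from a mono waveform.

    These are illustrative stand-ins, not the features used in the
    UW study: overall RMS energy, zero-crossing rate, and a rough
    shimmer-like measure of frame-to-frame amplitude variation.
    """
    rms = np.sqrt(np.mean(signal ** 2))
    # Fraction of adjacent-sample sign changes (proxy for spectral content).
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    # Frame the signal (512-sample frames) and measure amplitude instability.
    frames = signal[: len(signal) // 512 * 512].reshape(-1, 512)
    frame_rms = np.sqrt(np.mean(frames ** 2, axis=1))
    shimmer = np.mean(np.abs(np.diff(frame_rms))) / (np.mean(frame_rms) + 1e-9)
    return np.array([rms, zcr, shimmer])

def risk_category(features, weights, bias, threshold=0.5):
    """Binary high/low-risk call from a logistic model (hypothetical weights)."""
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    return "high" if score >= threshold else "low"

# Example: a synthetic 1-second "phonation" (a 150 Hz tone plus noise)
# standing in for a smartphone recording.
sr = 16000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 150 * t) + 0.05 * rng.standard_normal(sr)

feats = extract_features(signal, sr)
label = risk_category(feats, weights=np.array([0.5, 2.0, 10.0]), bias=-1.0)
print(label)  # "low" or "high", depending on the hypothetical weights
```

In a real system the classifier would be trained on clinician-labeled recordings and would flag only the "high" category for expedited laryngology referral; the lab-versus-iPhone generalization gap Dr. Law describes below is exactly the kind of distribution shift such features are sensitive to.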
Other colleagues on the UW project include Mark Whipple, MD, an associate professor and a bioinformatics researcher at UW Medicine, and Albert Merati, MD, a UW professor with research interests in the diagnostic testing and treatment of vocal fold paralysis.
“The results are preliminary, but we’ve made some real progress that suggests this could be an amazing tool for identifying an individual with a suspicious laryngeal mass sooner, particularly in primary care settings,” Dr. Law said. “We’ve seen in our own practice that there’s often a huge delay in care from the time a patient is seen by a primary care physician to eventually being diagnosed with a mass and evaluated by an otolaryngologist.”
Dr. Law acknowledged that some of the problems that have challenged other machine-learning projects cropped up with this one in practice, including a fall-off in sensitivity. “In the lab, we were great—our accuracy rates approached 90%,” he said. “But when we placed it in Dr. Meyer’s clinic [using smartphone-based telemedicine], it just didn’t generalize as well. We went from audio in the lab that was collected retrospectively on a strobe machine, with all of the usual artifacts in there, to data that were collected on an iPhone. I think something was just lost in the translation.”
Part of the challenge, Dr. Law added, was the dataset. “Like most machine-learning researchers, we’ve had difficulty finding a large enough one to match well to all patients. Still, it’s probably one of the cleanest datasets out there; it’s been independently reviewed by multiple specialists in speech pathology and laryngology. So, we’re confident this will eventually work in practice.”