Medicine has become more and more individualized since the days of leeches and humors, but in the last 15 years, an explosion of patient data in the form of genetic information and electronic health records (EHRs) has sharpened the doctor’s picture of the individual patient—and of treatments tailored to their precise needs. Such targeted care is referred to as precision medicine—drugs or treatments designed for small groups, rather than large populations, based on characteristics such as medical history, genetic makeup, and data recorded by wearable devices. In 2003, the completion of the Human Genome Project was attended by feverish promises about the imminence of these treatments, but results have so far underwhelmed. Today, new technologies are revitalizing the promise.

At organizations ranging from large corporations to university-led and government-funded research collectives, doctors are using artificial intelligence (AI) to develop precision treatments for complex diseases. Their central aim is to glean, from increasingly massive and available data sets, insight into what makes patients healthy at the individual level. Those insights could guide the development of new drugs, uncover new uses for old ones, suggest personalized combinations, and predict disease risk.

Nearly 80% of respondents to a recent Oracle Health Sciences survey say they expect AI and machine learning to improve treatment recommendations, and in a 2017 paper, Dr. Bertalan Meskó, director of the Medical Futurist Institute, suggested that “there is no precision medicine without AI.” His point, albeit forward-looking, acknowledges that without AI to analyze it, patient data will remain largely untapped. Due to its rapid growth, genetic data has been at the center of discussions about individualizing treatments, but it’s only one course in the feast that will satisfy AI’s nutritional requirements.
“The genome is not enough,” says Eric Topol, a geneticist, cardiologist, and director of the Scripps Research Translational Institute. “Much more can be gleaned, much more strides can be made once we start working with this multi-modal data.” By applying AI and machine learning to these multiple data sources—genetic data, EHRs, sensor data, environmental and lifestyle data—researchers are taking first steps toward developing personalized treatments for diseases from cancer to depression. Here’s a look at the biggest opportunities—and challenges.

AI-Based Precision Medicine Is Still In Its Early Stages

A variety of organizations are just starting to explore AI-based approaches to precision medicine. Deep Genomics, a Toronto startup, uses AI to reduce the amount of costly trial and error in drug discovery by analyzing large genomic databases, but its first clinical trial won’t be held until 2020. All of Us, a National Institutes of Health (NIH) research program, aims to collect data on 1 million patients to advance the study of precision medicine. It began enrolling participants in May 2018, and its goal is to create a massive database of patient information that research organizations, through various methods including AI, can analyze to develop precision treatments.

In 2016, one major diagnostic platform using AI was reported to have diagnosed a woman’s cancer by analyzing her genetic data. But the software was later revealed to have grossly over-promised on its capabilities, and to have recommended unsafe treatment options.

In one of Topol’s research projects, he uses deep learning techniques to study the genetic, cardiovascular, and microbiomic data of “special populations,” such as 90-year-olds, to discover patterns that make them healthy. Researchers might use these patterns to develop drugs that disable harmful genes; doctors might use them to predict who’s at risk of disease. “It’s not a clinical project at this juncture,” says Topol.
“There are a lot of things sitting in the data, like treasures, that haven’t been discovered yet, because they haven’t had deep learning applied.” AI-based diagnostic tools, such as the FDA-approved imaging tool for diagnosing diabetic eye disease, have already entered hospitals, but AI-based treatments are still at the foundational stage, Topol says.

Take the work of Dr. Elizabeth Krakow, who is using machine learning to develop precision cancer treatments. At the Fred Hutchinson Cancer Research Center in Seattle, Krakow treats and studies leukemia patients whose cancer has relapsed after a stem cell transplant. The treatments are sequential—for example, they might require another transplant, plus multiple rounds of chemotherapy and immunotherapy, each at a specific time—and because each step depends on the patient’s current condition, the sequence is always changing. “Past treatments, the current complexity of the disease, side effects—all that info needs to be integrated in order to intelligently choose the new treatment,” says Krakow.

The problem was that the medical record, which doctors rely on to make decisions, did not reflect the sequential, individualized nature of treatment. So Krakow and her team assembled the medical data of 350 relapse patients, some 1,000 pages per patient, and put it toward building a machine-learning algorithm that could predict the best treatment sequence for any patient, at any point in time. The algorithm works by simulating treatment options for patients in the study, and predicting outcomes by comparing their profiles with historical patient data. Krakow is now validating the data.
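Krakow’s actual model isn’t public, but the simulate-and-compare idea she describes can be sketched as a nearest-neighbor search over historical records: for each candidate treatment sequence, find the most similar past patients who received it and average their outcomes. Everything below, including the field names, similarity measure, and records, is invented for illustration.

```python
# Illustrative sketch only: ranks candidate treatment sequences for a
# relapse patient by the average outcome of the most similar historical
# patients who received that sequence. All data here is synthetic.

def similarity(a, b):
    """Crude profile similarity: fraction of matching features."""
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

def predict_outcome(patient, sequence, history, k=2):
    """Mean 1-year survival among the k most similar past patients
    who were treated with this sequence."""
    cohort = [h for h in history if h["sequence"] == sequence]
    if not cohort:
        return 0.0
    cohort.sort(key=lambda h: similarity(patient, h["profile"]), reverse=True)
    top = cohort[:k]
    return sum(h["survived_1yr"] for h in top) / len(top)

def best_sequence(patient, sequences, history):
    return max(sequences, key=lambda s: predict_outcome(patient, s, history))

# Synthetic historical records (hypothetical fields).
history = [
    {"profile": {"age_band": "60+", "prior_transplant": True},
     "sequence": ("transplant", "chemo"), "survived_1yr": 0},
    {"profile": {"age_band": "60+", "prior_transplant": True},
     "sequence": ("immunotherapy", "chemo"), "survived_1yr": 1},
    {"profile": {"age_band": "40-59", "prior_transplant": False},
     "sequence": ("transplant", "chemo"), "survived_1yr": 1},
    {"profile": {"age_band": "60+", "prior_transplant": True},
     "sequence": ("immunotherapy", "chemo"), "survived_1yr": 1},
]

patient = {"age_band": "60+", "prior_transplant": True}
candidates = [("transplant", "chemo"), ("immunotherapy", "chemo")]
print(best_sequence(patient, candidates, history))
```

The real problem is far harder: outcomes depend on when each step happens, not just which steps, which is why the sequential record Krakow is building matters so much.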
Though her study’s immediate purpose is to create a tool that can inform which treatment sequence a doctor follows, it will also enable her future work by creating a gold-standard record that accounts for the sequential nature of cancer treatment, which clinical trials have failed to establish. “I hope that it becomes a historical control that shows how effective new [cancer] therapies are,” she says, including ones she and her colleagues at Hutchinson hope to pioneer.

Collaboration Required

AI-based precision medicine combines medicine, biology, statistics, and computing. The most promising research in the field is characterized by sustained collaboration across disciplines and institutions. “You do need to work across disciplines now,” says Dr. Liewei Wang of the Mayo Clinic. “It’s not so simple that any individual could do these studies themselves.”

As detailed in a 2017 paper, Wang and far-flung colleagues developed a machine-learning algorithm to predict whether a psychiatrist should prescribe a certain drug to a depression patient based on that patient’s unique medical record. The model learned from the genetic data and medical records of more than 800 Mayo Clinic patients over a 10-year period, and could predict with 85% to 90% accuracy whether the drug would ease depressive symptoms. That’s compared to the 50% to 55% accuracy of a psychiatrist, whose drug selection for exhausted patients often devolves into a game of pill roulette, says Wang.

Wang’s study started 10 years ago with a grant from the NIH and with a team that includes the Mayo Clinic Center for Individualized Medicine, the Duke Institute for Brain Sciences, and the department of computer science at the University of Illinois Urbana-Champaign. Though the study’s results have been replicated across multiple data sets, Wang is waiting to test them in a clinical setting.
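A model of the kind Wang describes, mapping a patient’s record to a yes-or-no response prediction, can be sketched as a small logistic-regression classifier. The features, labels, and training data below are synthetic stand-ins, not the Mayo team’s method or data.

```python
# Illustrative only: a tiny hand-rolled logistic-regression classifier
# of the sort that could map patient features to a predicted drug
# response. Features and data are synthetic.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, epochs=2000, lr=0.1):
    """Per-sample gradient descent on log-loss."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

# Hypothetical features: [has_marker_variant, prior_ssri_failure, severity]
rows = [[1, 0, 0.2], [1, 0, 0.4], [0, 1, 0.9], [0, 1, 0.7],
        [1, 1, 0.5], [0, 0, 0.3]]
labels = [1, 1, 0, 0, 1, 0]  # 1 = symptoms eased on the drug

w, b = train(rows, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(rows, labels)) / len(rows)
print(f"training accuracy: {acc:.2f}")
```

The real model’s inputs would number in the thousands (genetic variants plus a decade of EHR entries), and its value hinges on validation against held-out patients, not training accuracy as shown here.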
In Seattle, Krakow is part of a newly formed research collective called the Brotman Baty Institute for Precision Medicine, which pools the resources of the University of Washington (UW) School of Medicine, Fred Hutchinson, and the Seattle Children’s Hospital. The institute’s director, Jay Shendure, has described it as “a platform for enabling labs to collaborate between institutions, within institutions, and also access technologies they might not have otherwise had access to.” Founded in 2017, the institute has completed a study that revealed the effects of thousands of variants in a key breast cancer gene, and is one of three research centers recruiting participants for the NIH’s All of Us project.

Also part of the Brotman Baty Institute is Dr. Pamela Becker, a professor of hematology and medicine at UW. Becker has pursued interdisciplinary research for years, working closely with Dr. Su-In Lee to develop machine-learning algorithms for identifying effective cancer treatments. Most recently, they built an algorithm called MERGE that identifies genetic variants responsible for a patient’s reaction to treatments for acute myeloid leukemia (AML), a rare blood cancer. The algorithm learns from lab data about how genes have responded to AML drugs. It then predicts which drug will work best for a patient, based on their genetic makeup.

Becker says that machine learning, and the expertise of collaborators like Lee, have changed the possibilities of this type of cancer research. Until around eight years ago, “we didn’t have a way of figuring out how 17,000 genes would be responsible for drug sensitivity or resistance,” she says, referring to the number of genes in the human genome, estimates for which range between that figure and 21,000. So far, her team has conducted two clinical trials using data from in vitro tests of 160 drugs on the variants, which was another component of the study. A third trial will incorporate the machine-learning algorithm’s predictions.
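The prediction step Becker describes, scoring drugs against a patient’s genetic profile, might look roughly like the sketch below. The gene names, weights, and expression values are invented, and MERGE’s actual scoring of expression markers is considerably more sophisticated; this only shows the shape of the idea.

```python
# Illustrative sketch: rank AML drugs for a patient by a learned
# gene-weight score. Gene names, per-drug weights, and expression
# values are all invented for illustration.

# Hypothetical learned weights on marker-gene expression
# (positive weight = higher expression predicts drug sensitivity).
drug_weights = {
    "drug_A": {"GENE1": 0.8, "GENE2": -0.3},
    "drug_B": {"GENE1": -0.2, "GENE3": 0.9},
    "drug_C": {"GENE2": 0.5, "GENE3": 0.1},
}

def sensitivity_score(expression, weights):
    """Weighted sum over the marker genes present in the profile."""
    return sum(w * expression.get(g, 0.0) for g, w in weights.items())

def rank_drugs(expression, drug_weights):
    scores = {d: sensitivity_score(expression, w)
              for d, w in drug_weights.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical patient expression profile.
patient_expression = {"GENE1": 0.1, "GENE2": 0.9, "GENE3": 1.2}
print(rank_drugs(patient_expression, drug_weights))
```

The hard part, and MERGE’s actual contribution, is learning which of the genome’s roughly 20,000 genes deserve nonzero weight in the first place.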
Data Both Enables And Hinders Research

There are still many barriers to AI in precision medicine, one being that the technology simply isn’t sophisticated enough. Dr. Peter Robinson, a computational biologist at the Jackson Laboratory in Farmington, Connecticut, says that while AI was able to master the structured games of chess and Go by observing moves on a board, it cannot so easily observe the moves of doctors. If it could, it might be able to replicate that work, and using patient data, suggest better, more personalized treatments.

Another barrier has to do with the data, especially electronic health records. “The deficiencies of EHRs are major and a major stumbling block to applying AI or machine learning to medicine,” says Robinson. The average hospital in the U.S. uses 16 different EHR platforms. If an individual patient has data from a non-life-threatening hospital admission on one platform, and data from a cancer admission on another—each with different formats and permissions—an AI system may not have access to all the information it needs to suggest a personalized treatment.

Even a universal EHR system might not solve the problem. Despite calls from researchers, EHR vendors don’t yet integrate genetic data into their records, and EHRs themselves are often incomplete, incorrect, or insufficiently detailed—and researchers are wary of training AI on faulty data. AI also struggles to digest the many formats that contain medical data, from charts to handwritten notes. A few years ago, Krakow was excited about the possibility of using IBM’s Watson to process both structured and unstructured patient data for developing her personalized treatments, but realized “it just wasn’t at that stage” when she learned of Watson’s failed implementation at the Memorial Sloan Kettering Cancer Center in New York.
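The fragmentation problem is concrete: before any learning can happen, records for the same patient held on different platforms have to be reconciled into a single schema. A minimal sketch, with invented vendor formats and field names:

```python
# Illustrative only: two hypothetical EHR exports describing the same
# patient in different schemas, normalized and merged into one record
# an ML pipeline could consume. All field names are invented.
from datetime import date

record_vendor_a = {            # e.g. a routine-admission system
    "pt_id": "12345",
    "dob": "1956-03-01",
    "dx_codes": ["I10"],       # ICD-10: hypertension
}

record_vendor_b = {            # e.g. an oncology system
    "patientIdentifier": "12345",
    "birthDate": {"y": 1956, "m": 3, "d": 1},
    "diagnoses": [{"code": "C92.0", "system": "ICD-10"}],  # AML
}

def normalize_a(r):
    y, m, d = map(int, r["dob"].split("-"))
    return {"id": r["pt_id"], "dob": date(y, m, d),
            "codes": set(r["dx_codes"])}

def normalize_b(r):
    b = r["birthDate"]
    return {"id": r["patientIdentifier"],
            "dob": date(b["y"], b["m"], b["d"]),
            "codes": {dx["code"] for dx in r["diagnoses"]}}

def merge(a, b):
    # Only safe when both systems agree the records describe one person.
    assert a["id"] == b["id"] and a["dob"] == b["dob"]
    return {"id": a["id"], "dob": a["dob"], "codes": a["codes"] | b["codes"]}

unified = merge(normalize_a(record_vendor_a), normalize_b(record_vendor_b))
print(sorted(unified["codes"]))
```

Multiply this by 16 platforms per hospital, each with its own permissions, and the scale of the stumbling block Robinson describes becomes clear.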
Without such a system, UW undergraduates working on Krakow’s post-transplant relapse study were tasked with sifting through the thousands of pages of documents. They broke down PDFs, TIFF files, clinical notes, emails, and some handwritten notes, and entered the information into a database. Each patient’s record took six hours. But such tedium might be a necessary step in the direction of AI that can erase it. “My project would be a wonderful test for some company if they wanted to develop AI for reading thousands of medical records,” says Krakow.

For now, doctors, researchers, and patients must live with AI’s shortcomings, knowing it’s a nascent discipline that needs improvement. As Topol explains, researchers have the data they need to “find out how Mother Nature protects people.” They just need to get to work on building the applications and systems that can achieve that goal.

This article was originally published in Forbes.