Predicting head and neck cancer treatment toxicities with machine learning
BY Meagan Raeke
September 16, 2019
Medically Reviewed | Last reviewed by an MD Anderson Cancer Center medical professional on September 16, 2019
MD Anderson researchers have developed the first machine learning algorithm to predict acute toxicities in patients receiving radiation therapy for head and neck cancers. The results of the study were presented today at the 61st Annual Meeting of the American Society for Radiation Oncology (ASTRO).
“With head and neck radiation, a lot of toxicity occurs; however, it’s not always clear which patients will experience serious side effects,” says study lead Jay Reddy, M.D., Ph.D., assistant professor of Radiation Oncology.
Reddy’s team set out to develop algorithms that could predict significant weight loss (≥ 10% during radiation therapy), feeding tube placement and unplanned hospitalizations within three months of beginning radiation treatment.
“It’s virtually unheard of for these patients to not lose any weight at all, but many patients are able to complete treatment without a feeding tube. Thus, we don’t want to unnecessarily place one on a hunch. Prolonged time with a feeding tube can hamper efforts to rehabilitate swallowing muscles,” Reddy says. “The challenge is to balance this concern with the knowledge that some patients can’t get through treatment without assistance, and their need for nutrition becomes dire. We need a useful tool to better identify patients in need of aggressive help early on.”
The researchers extracted hundreds of data points from 2,121 head and neck cancer patients treated at MD Anderson between May 2016 and August 2018 and worked with Oncora Medical, a precision radiation oncology software company, to develop predictive algorithms. The dataset included demographics, tumor characteristics, treatment and outcomes.
Predictive value for weight loss, feeding tube need
Machine learning model performance was measured using a score called the area under the ROC curve (AUC). AUCs typically range from 0.5, meaning the model can’t distinguish patients who experienced an adverse outcome from those who didn’t, to 1.0, meaning the model perfectly distinguishes between the two groups of patients. Models with an AUC greater than 0.70 were considered clinically valid.
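To make the metric concrete, here is a minimal Python sketch of how an AUC is computed from a model’s predicted probabilities. The outcome labels and scores below are purely illustrative and are not drawn from the study’s patient data; the calculation uses scikit-learn’s `roc_auc_score`.

```python
# Minimal sketch of an AUC calculation, using illustrative data only
# (not the study's actual patient records or model outputs).
from sklearn.metrics import roc_auc_score

# Hypothetical binary outcomes: 1 = patient required a feeding tube, 0 = did not.
y_true = [0, 0, 1, 0, 1, 1, 0, 1, 0, 0]

# Hypothetical predicted probabilities produced by a model.
y_score = [0.12, 0.30, 0.81, 0.22, 0.64, 0.48, 0.15, 0.90, 0.55, 0.08]

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")  # 0.5 ~ chance-level discrimination, 1.0 ~ perfect
```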
Reddy’s team used the data from 1,896 patients to develop the initial algorithm, then validated the model with data from another 225 patients. The algorithms to predict feeding tube placement and significant weight loss had clinically valid AUC values of 0.755 and 0.751, respectively.
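The development-then-validation workflow described above can be sketched in a few lines of Python. The synthetic features, the endpoint labels and the choice of a gradient-boosted classifier are all illustrative assumptions; the article does not specify which algorithm or features Oncora Medical’s models actually use.

```python
# Hedged sketch of the development/validation workflow: fit on a development
# cohort, then measure AUC on a held-out validation cohort. Features, labels
# and classifier are illustrative assumptions, not the study's actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 2121  # total cohort size reported in the study

# Synthetic stand-ins for extracted features (demographics, tumor characteristics, treatment).
X = rng.normal(size=(n_patients, 20))
# Synthetic binary endpoint, e.g. feeding tube placement (1) vs. none (0).
y = rng.integers(0, 2, size=n_patients)

# Roughly mirror the reported split: 1,896 development patients, 225 validation patients.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=225, random_state=0)

model = GradientBoostingClassifier().fit(X_dev, y_dev)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Held-out validation AUC = {val_auc:.3f}")
```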
“Our algorithms are the first useful models for predicting these endpoints,” Reddy says.
With an AUC of 0.676, the unplanned hospitalization algorithm was not considered clinically useful, but the team is hopeful that with additional data, the model will improve. The next step is to perform further testing and validation within the setting of a clinical trial.
“Over time, we’ll get a better idea of what’s driving the model,” Reddy says. “It’s possible that in the near future, we’ll be able to put data in from a patient and get a prediction that there’s an X% chance this person will require a feeding tube, allowing us to identify those who are more likely to need early intervention.”
See the full list of co-authors on the study, “Applying a Machine Learning Approach to Predict Acute Radiation Toxicities for Head and Neck Cancer Patients” (Abstract 141). ASTRO abstracts can be found here after data are presented.