A new systemwide collaborative grant program, spearheaded by the University of Tennessee College of Medicine-Knoxville (UTCOM-K), is fueling groundbreaking research and scholarship across multiple institutions. This initiative, which includes teams from UTCOM-K, the University of Tennessee, Knoxville, and the University of Tennessee Health Science Center (UTHSC), aims to tackle critical health issues at both the state and national levels. The program awarded six grants, collectively exceeding $450,000, to multidisciplinary teams of university experts and specialists dedicated to advancing innovations in healthcare delivery.
One specialized team is focusing on using machine learning-based processes to “improve disease management through personalized, evidence-based recommendations, reducing disparities in treatment and outcomes.” As a member of that group,[1] Wenjun Zhou, Lawson Professor of Business and director of the analytics PhD program in UT’s Haslam College of Business Department of Business Analytics and Statistics, recently shared more about the research.
What excites you about being part of this project?
Healthcare is such an important domain. It’s pertinent to everyone, and we all want to have successful health treatments whenever we need to see doctors. I was happy to be involved in this project, especially for the opportunity to work with healthcare domain experts. We can work on AI algorithms all day, and our field is proud of solutions that can solve a category of problems, but without tying the algorithm to something that’s driven by the domain, the solution may not be useful in practice. This is why I am always interested in understanding the data, learning from the domain experts and then designing an algorithm that’s targeted to a purpose — in this case, to disease management. With the help of healthcare domain experts, we can build and utilize AI as a tool to understand what’s going on and help interpret the outcome from the model.
What data will you draw from?
We are going to use the electronic health records (EHR) data set provided by UTHSC. These are logs of all the encounters of patients going into the healthcare system — hospital visits, diagnoses, prescriptions, lab results, genetics and so on. There is a lot of data, so we can have a complete picture of when the patient gets into the system, what information was collected about their problems, what decisions were made to treat those problems and what the outcome was.
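As a concrete illustration, here is a minimal Python sketch of what encounter-level EHR logs can look like. The column names and codes are hypothetical, chosen for illustration only; they are not the actual UTHSC schema.

```python
import pandas as pd

# Hypothetical encounter log; column names and codes are illustrative,
# not the actual UTHSC EHR schema.
encounters = pd.DataFrame({
    "patient_id": ["P001", "P001", "P002"],
    "encounter_date": pd.to_datetime(["2023-01-05", "2023-02-10", "2023-01-20"]),
    "diagnosis_code": ["E11.9", "E11.9", "I10"],  # ICD-10: type 2 diabetes, hypertension
    "prescription": ["metformin", "metformin", "lisinopril"],
    "hba1c_percent": [8.1, 7.4, None],            # lab result
})

# Sorting by patient and date gives the longitudinal picture described
# here: when each patient entered the system, what was recorded, and
# how their measures evolved.
print(encounters.sort_values(["patient_id", "encounter_date"]))
```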
How will you use this data?
With a large data set collected and standardized this way, we can distill statistical patterns more robustly, understand what’s going on and make inferences to answer questions like, “You got this treatment. What if back then you got a different treatment? What would be an alternative outcome?” We do this by drawing data from similar cases to build a predictive model that allows for modeling the counterfactuals.
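One common way to model counterfactuals from similar cases is matching: estimate what a patient’s outcome would have been under the other treatment by averaging outcomes of the most similar patients who actually received it. The sketch below is a simplified illustration of that idea, not the team’s actual method; all features and values are toy data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy data: [age, HbA1c] per patient, treatment received (0/1),
# and the observed outcome (e.g., a recovery score).
X = np.array([[65, 8.1], [70, 7.9], [62, 8.3], [68, 7.5]])
treatment = np.array([0, 1, 0, 1])
outcome = np.array([0.4, 0.7, 0.5, 0.8])

def counterfactual_outcome(x, received):
    """Estimate the outcome patient x would have had under the other
    treatment, by averaging the outcomes of the most similar patients
    who actually received it (nearest-neighbor matching)."""
    others = treatment != received
    nn = NearestNeighbors(n_neighbors=min(2, others.sum())).fit(X[others])
    _, idx = nn.kneighbors([x])
    return outcome[others][idx[0]].mean()

# "You got treatment 0. What if back then you got treatment 1?"
print(counterfactual_outcome(X[0], received=0))
```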
My goal is to build a foundational model where the data is represented in a more general way. Without getting too technical, the rough idea is to use a graph structure to represent all kinds of entities in the healthcare system. Patients, diseases, doctors and hospitals are entities represented in it. Those entities are linked with edges, which represent their relationships.
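To make the graph idea concrete, here is a minimal sketch using the networkx library. The node and edge types are illustrative assumptions; the project’s actual schema and tooling are not specified in the interview.

```python
import networkx as nx

# Illustrative heterogeneous graph: patients, diseases, doctors and
# hospitals as nodes; their relationships as edges.
G = nx.Graph()
G.add_node("P001", kind="patient")
G.add_node("E11.9", kind="disease")             # ICD-10: type 2 diabetes
G.add_node("Dr. Smith", kind="doctor")          # hypothetical provider
G.add_node("UT Medical Center", kind="hospital")

G.add_edge("P001", "E11.9", relation="diagnosed_with")
G.add_edge("P001", "Dr. Smith", relation="treated_by")
G.add_edge("Dr. Smith", "UT Medical Center", relation="practices_at")

# Relationships become traversable structure, e.g. everything reachable
# from a patient within two hops.
print(nx.single_source_shortest_path_length(G, "P001", cutoff=2))
```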
What will you do with this model?
Using a graph allows us to study the relationships between the entities. My goal is to develop a good representation of all these entities and their relationships, so that we can make predictions in a more granular way. This foundational model can be used for a variety of use cases; we will likely start with predicting readmission risk as a proof of concept.
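As a sketch of what that proof of concept could look like: a classifier scores each patient’s readmission risk from features derived from the model. Here the features are hand-made stand-ins for a learned graph representation, and all names and values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in features per patient: [prior visits, chronic diagnoses,
# toy representation score]; in practice these would come from the
# learned graph representation. Labels mark 30-day readmission.
features = np.array([[3, 1, 0.2], [8, 4, 0.9], [1, 0, 0.1], [6, 3, 0.7]])
readmitted = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(features, readmitted)

# A high predicted probability flags a patient for follow-up care
# before discharge.
new_patient = np.array([[5, 2, 0.6]])
print(model.predict_proba(new_patient)[0, 1])
```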
Is it fair to say that, by using the model to predict readmissions, some readmissions can be avoided and patients can be guided toward the appropriate treatments?
Yes, that is fair. A strong point of using algorithms and AI is that we can make predictions quickly and somewhat objectively. Of course, there is still much to be done on model accuracy and interpretability. Using a model to assess the false positives and false negatives allows us to examine prior data, learn as much as we can and improve future decision making.
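In practice, that assessment often starts from a confusion matrix. The sketch below, with made-up labels, shows how false positives and false negatives would be counted so the underlying cases can then be reviewed.

```python
from sklearn.metrics import confusion_matrix

# Made-up readmission labels vs. model flags for eight patients.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"false positives: {fp} (flagged but not readmitted)")
print(f"false negatives: {fn} (readmitted without a flag)")
# Reviewing the charts behind each error is how prior data feeds
# back into better future decisions.
```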
Are the six projects funded in this initiative connected?
These six projects are all separate at the moment. Each project is going to operate independently, but they’re working jointly towards a common goal: to improve Tennesseans’ healthcare.
We’re hoping to treat this like a startup project, where we first build a proof of concept. Then we will be better positioned to work together on a bigger grant proposal to address specific healthcare problems.
Is there anything else that you would like to share about this project?
I’m thankful to UTHSC for building this database so we are not starting from scratch. We can build on top of data that has been collected and standardized, reusing what happened in the past to learn lessons and guide the future of health services.
CONTACT:
Scott McNutt, business writer/publicist, rmcnutt4@utk.edu
[1] Other members of the team are Qing “Charles” Cao, associate professor, UT Department of Electrical Engineering and Computer Science; Angela Pfammatter, senior methodologist, UT College of Education, Health and Human Sciences and associate professor of public health; Matthew Mihelic, associate professor of family medicine, UT Graduate School of Medicine; Agricola Odoi, professor of epidemiology and assistant dean for research and graduate studies, UT College of Veterinary Medicine; Nabil Alshurafa, associate professor of preventive medicine and computer science, Northwestern University; Jennifer Lord, assistant professor of veterinary public health and epidemiology, UT College of Veterinary Medicine; and Bob Davis, professor and director of the Center for Biomedical Informatics at UT Health Science Center.