Advanced Multi-Modality Learning in Electronic Health Records for Personalized Medical Recommendations

Objective

Electronic Health Records (EHRs) are rich repositories of patient information, containing structured tabular data (e.g., lab results, diagnoses) and unstructured textual data (e.g., discharge summaries, physician notes). Traditional models often focus on a single modality and therefore miss the holistic view that combining these data types provides. Our project, grounded in real clinical data, seeks to bridge this gap through multi-modality learning, an approach that remains underexplored yet highly promising in healthcare. See [1] for an overview of the research domain.

Research Focus

The aim is to develop a personalized and interpretable recommender system that leverages both tabular and textual data from EHRs.
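One common way to combine the two modalities, sketched below purely for illustration (the project itself may take a different architecture), is late fusion: encode the tabular features and a text embedding separately, concatenate the representations, and score candidate recommendations. In practice the text embedding would come from a Huggingface model applied to clinical notes; here it is a random placeholder, and all layer sizes, feature counts, and the `LateFusionRecommender` class itself are illustrative assumptions, not part of the project description.

```python
import torch
import torch.nn as nn


class LateFusionRecommender(nn.Module):
    """Minimal late-fusion sketch (illustrative, not the project's design):
    concatenate encoded tabular features with a precomputed text embedding
    and score candidate recommendation items."""

    def __init__(self, n_tabular: int, text_dim: int, n_items: int, hidden: int = 64):
        super().__init__()
        # Encoder for structured EHR features (labs, diagnoses, ...)
        self.tabular_encoder = nn.Sequential(nn.Linear(n_tabular, hidden), nn.ReLU())
        # Encoder for a note embedding (e.g., from a Huggingface text model)
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Fused representation -> per-item recommendation scores
        self.head = nn.Linear(2 * hidden, n_items)

    def forward(self, tabular: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.tabular_encoder(tabular), self.text_encoder(text_emb)], dim=-1
        )
        return self.head(fused)


# Hypothetical sizes: 16 tabular features, 384-dim note embeddings, 10 candidate items
model = LateFusionRecommender(n_tabular=16, text_dim=384, n_items=10)
scores = model(torch.randn(4, 16), torch.randn(4, 384))
print(scores.shape)  # one score per candidate item, per patient in the batch
```

Interpretability, the second goal of the project, would then be layered on top of such a fusion model, for instance via attention weights or post-hoc attribution over both modalities.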

Ideal Candidate

  • Background in machine learning, data science, or related fields.
  • Experience or interest in working with healthcare data, particularly EHRs.
  • Strong programming skills (Python, PyTorch, Huggingface).
  • Enthusiastic about solving complex problems and contributing to impactful research.

Contact Person

Zhan Qu, zhan.qu@tu-dresden.de

[1] Wornow, M., Xu, Y., Thapa, R., Patel, B., Steinberg, E., Fleming, S., … & Shah, N. H. (2023). The shaky foundations of large language models and foundation models for electronic health records. npj Digital Medicine, 6(1), 135.



