Prior to starting my Ph.D., I obtained my B.S./M.S. in Biomedical Engineering and Computer Science at Johns Hopkins University, where I worked with Nicholas Durr and Alan Yuille. I am fortunate to have also worked at Apple Inc. in the Health Special Project and Applied Machine Learning Groups (with Belle Tseng and Andrew Trister), at Microsoft Research in the BioML Group (with Rahul Gopalkrishnan), and at the National Institutes of Health in NIBIB (with Richard Leapman).
Multimodal Integration: Multimodal learning has emerged as an interdisciplinary field addressing core problems in machine perception, human-computer interaction, and, more recently, biology & medicine, where an enormous wealth of multimodal data is often collected in parallel to study the same underlying disease. Since first starting out in research, I have worked on multimodal learning for integrating: 1) multimodal sensor streams from the Apple Watch and iPhone to predict mild cognitive decline, 2) RGB and depth images for non-polypoid lesion classification and SLAM in surgical robotics, and 3) pathology images and genomics for cancer prognosis.
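One common recipe for the third problem is late fusion: encode each modality separately, then combine the embeddings so that cross-modal interactions are explicit. As a minimal illustration (not my actual implementation — function and variable names here are made up for the sketch), a Kronecker/outer-product fusion of a pathology embedding and a genomics embedding looks like this:

```python
import numpy as np

def kronecker_fusion(h_path, h_omic):
    """Fuse two unimodal embeddings via an outer (Kronecker) product.

    Appending a constant 1 to each vector keeps the original unimodal
    features in the fused representation alongside all pairwise
    (pathology x genomics) interaction terms.
    """
    hp = np.append(h_path, 1.0)           # (d_path + 1,)
    ho = np.append(h_omic, 1.0)           # (d_omic + 1,)
    return np.outer(hp, ho).flatten()     # ((d_path + 1) * (d_omic + 1),)

# toy example: a 3-d "pathology" embedding and a 2-d "genomics" embedding
fused = kronecker_fusion(np.array([0.2, 0.5, 0.1]), np.array([1.0, 0.3]))
print(fused.shape)  # (12,)
```

In practice the fused vector would feed a downstream survival or classification head; the appended 1s are the standard trick that makes the fusion strictly more expressive than concatenation.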
Weakly-Supervised & Set-Based Deep Learning: Though deep learning has revolutionized computer vision in many disciplines, gigapixel whole-slide imaging (WSI) in computational pathology is a complex computer vision domain that renders traditional ConvNet-based supervised learning approaches infeasible. To address this issue, I have been working on interpreting gigapixel images as permutation-invariant sets (or "bags" in the MIL literature), and then developing set-based learning algorithms for weakly-supervised learning on WSIs.
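To make the set-based view concrete: a slide is tiled into patches, each patch is embedded, and a permutation-invariant pooling operator aggregates the bag into one slide-level representation that is trained with only a slide-level label. A minimal numpy sketch of attention-based MIL pooling (illustrative only — the parameter names and shapes are assumptions, and real systems learn `V` and `w` by backpropagation):

```python
import numpy as np

def attention_mil_pool(patch_feats, V, w):
    """Attention pooling over a bag of patch embeddings.

    patch_feats: (n_patches, d) -- the WSI treated as an unordered set
    V: (d, h), w: (h,)          -- attention parameters (learned in practice)
    Returns one d-dimensional slide-level embedding.
    """
    scores = np.tanh(patch_feats @ V) @ w   # one scalar score per patch
    a = np.exp(scores - scores.max())
    a = a / a.sum()                         # softmax: weights sum to 1
    return a @ patch_feats                  # attention-weighted average

rng = np.random.default_rng(0)
bag = rng.normal(size=(100, 8))             # 100 patches, 8-d features
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
slide_embedding = attention_mil_pool(bag, V, w)
print(slide_embedding.shape)  # (8,)
```

Because the pooled output is a weighted sum over the set, shuffling the patches leaves the slide embedding unchanged — exactly the permutation invariance the bag formulation requires — and the attention weights themselves provide a degree of interpretability over which patches drove the prediction.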
Synthetic Data Generation & Domain Adaptation: “What constitutes authenticity, and how would the lack of authenticity shape our perception of reality?” The American science fiction writer Philip K. Dick posited similar questions throughout his literary career and, in particular, in his 1978 essay “How to Build a Universe That Doesn’t Fall Apart Two Days Later”. I am interested in: using synthetic data for domain adaptation / generalization, developing synthetic environments for simulating challenging scenarios for neural networks, as well as the policy challenges in training AI-SaMDs with synthetic data.
Please feel free to contact me by email if you have any questions or are interested in collaborating!
|Aug, 2021||Excited to announce that the preprint for PORPOISE, our Pathology-Omic Research Platform for Integrated Survival Estimation, is out (with its associated demo)!|
|Jul, 2021||Two papers, Patch-GCN and Multimodal Co-Attention Transformers (MCAT), were accepted into MICCAI 2021 and ICCV 2021 respectively.|
|Jun, 2021||Joined Microsoft Research as a PhD Research Intern, working with Rahul Gopalkrishnan in the BioML Group.|
|Jun, 2021||Passed my Qualifying Exam, and am now officially a PhD Candidate!|
- Developing Measures of Cognitive Impairment in the Real World from Consumer-Grade Multimodal Sensor Streams. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019. Oral Presentation & Best Paper Runner-Up.
- Pathomic Fusion: An Integrated Framework for Fusing Histopathology and Genomic Features for Cancer Diagnosis and Prognosis. IEEE Transactions on Medical Imaging, 2020. Top 5 Posters, NVIDIA GTC 2020.
- Synthetic Data in Machine Learning for Medicine and Healthcare. Nature Biomedical Engineering, 2021.
- Pan-Cancer Integrative Histology-Genomic Analysis via Interpretable Multimodal Deep Learning. arXiv preprint arXiv:2108.02278, 2021.