I am a fifth-year Ph.D. Candidate (and NSF GRFP Fellow) advised by Faisal Mahmood at Harvard University, with affiliations at Brigham and Women’s Hospital, the Dana-Farber Cancer Institute, and the Broad Institute.
Prior to starting my Ph.D., I obtained my B.S./M.S. in Biomedical Engineering and Computer Science at Johns Hopkins University, where I worked with Nicholas Durr and Alan Yuille. In industry, I have worked at Apple Inc. in the Health Special Project and Applied Machine Learning groups (with Belle Tseng and Andrew Trister), and at Microsoft Research in the BioML group (with Rahul Gopalkrishnan).
Multimodal Integration: Multimodal learning has emerged as an interdisciplinary field addressing core problems in machine perception, human-computer interaction, and, more recently, biology & medicine, where an enormous wealth of multimodal data is often collected in parallel to study the same underlying disease. I have worked on a range of multimodal learning problems for integrating: 1) multimodal sensor streams from the Apple Watch and iPhone to predict mild cognitive impairment, 2) RGB and depth images for non-polypoid lesion classification and SLAM in surgical robotics, and 3) pathology images and genomics for cancer prognosis.
Representation Learning for Gigapixel Images: Though deep learning has revolutionized computer vision across many disciplines, gigapixel whole-slide imaging (WSI) in computational pathology is a complex domain that renders traditional, ConvNet-based supervised learning approaches infeasible. To address this, I have been working on interpreting large gigapixel images as permutation-invariant sets (or "bags" in the multiple instance learning (MIL) literature), and on developing Transformer-based approaches for weakly-supervised and self-supervised learning on WSIs.
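The bag formulation above can be illustrated with a toy attention-based MIL pooling step (in the spirit of gated-attention MIL; the `attention_mil_pool` function, dimensions, and random weights here are illustrative assumptions, not the actual models described above):

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Pool a bag of K patch embeddings H (K x D) into one slide-level
    embedding. Attention scores are computed per patch and softmax-
    normalized, so the result is invariant to patch ordering."""
    scores = np.tanh(H @ V.T) @ w          # (K,) un-normalized attention
    a = np.exp(scores - scores.max())
    a /= a.sum()                           # softmax over the bag
    return a @ H                           # (D,) attention-weighted average

rng = np.random.default_rng(0)
K, D, L = 6, 8, 4                          # patches, embed dim, attn dim
H = rng.normal(size=(K, D))                # toy patch embeddings
V = rng.normal(size=(L, D))                # toy attention weights
w = rng.normal(size=L)

z = attention_mil_pool(H, V, w)
z_shuffled = attention_mil_pool(H[::-1], V, w)
print(np.allclose(z, z_shuffled))          # permutation-invariant: True
```

Because the attention weights depend only on each patch's own embedding, shuffling the bag permutes the weights and the patches together, leaving the pooled slide-level embedding unchanged.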
Generative AI & Healthcare Policy: “What constitutes authenticity, and how would the lack of authenticity shape our perception of reality?” The American science-fiction writer Philip K. Dick posed such questions throughout his literary career, notably in his 1978 essay “How to Build a Universe That Doesn’t Fall Apart Two Days Later”. I am interested in using synthetic data for domain adaptation and generalization, developing synthetic environments for simulating challenging scenarios for neural networks, and the policy challenges of training AI-SaMDs (AI-based software as a medical device) with synthetic data.
|Excited to share our latest preprint on UNI, a general-purpose self-supervised model for computational pathology. In addition, my Master’s student, Tong Ding, is joining the Computer Science Ph.D. program at Harvard University (SEAS). Congratulations Tong!
|Our perspective on algorithm fairness in AI and medicine/healthcare was published in Nature BME. In addition, excited to share our latest preprint on CONCH (CONtrastive learning from Captions for Histopathology), a visual-language foundation model for computational pathology. Stay tuned!
|Our work on zero-shot slide classification with visual-language pretraining was published in CVPR. Code + pretrained model weights are made available.
|Our work on PORPOISE (Pathology-Omic Research Platform for Integrated Survival Estimation), and our review on multimodal learning for oncology were both published in Cancer Cell. See the associated demo!
|Our work on Hierarchical Image Pyramid Transformer (HIPT) is highlighted as an Oral Presentation in CVPR, and as a Spotlight Talk in the Transformers 4 Vision (T4V) CVPR Workshop. Code + pretrained model weights are made available. Lastly, my visiting student, Yicong Li, is joining the Computer Science Ph.D. program at Harvard University (SEAS). Congratulations Yicong!
|Our work on CRANE was published in Nature Medicine. Also, code + pretrained model weights are made available for our recent Self-Supervised ViT work in NeurIPSW LMRL 2021. Lastly, our work on federated learning for CPATH (HistoFL) was published in Medical Image Analysis.
|Joined Microsoft Research as a Ph.D. Research Intern, working with Rahul Gopalkrishnan in the BioML Group. Our commentary on synthetic data for machine learning and healthcare was published in Nature BME. Lastly, two papers, Patch-GCN and the Multimodal Co-Attention Transformer (MCAT), were accepted to MICCAI and ICCV, respectively.
- A General-Purpose Self-Supervised Model for Computational Pathology. arXiv preprint arXiv:TBD, 2023.
- Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Oral Presentation.
- Developing Measures of Cognitive Impairment in the Real World from Consumer-Grade Multimodal Sensor Streams. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019. Oral Presentation & Best Paper Runner-Up.
- Pan-Cancer Integrative Histology-Genomic Analysis via Multimodal Deep Learning. Cancer Cell, 2022. Best Paper, Case Western Artificial Intelligence in Oncology Symposium, 2020; Cover Art of Cancer Cell (Volume 40, Issue 8).
- Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
- Algorithmic Fairness in Artificial Intelligence for Medicine and Healthcare. Nature Biomedical Engineering, 2023.
- Synthetic Data in Machine Learning for Medicine and Healthcare. Nature Biomedical Engineering, 2021.
- Federated Learning for Computational Pathology on Gigapixel Whole Slide Images. Medical Image Analysis, 2022.
- Pathomic Fusion: An Integrated Framework for Fusing Histopathology and Genomic Features for Cancer Diagnosis and Prognosis. IEEE Transactions on Medical Imaging, 2020. Top 5 Posters, NVIDIA GTC 2020.