CV

I specialize in diffusion model research and have pre-trained medium-to-large-scale models (>2B parameters) on large datasets (>1T tokens / >1B images).

Education

  • Ph.D. in Applied Mathematics
    Yale University
    • Advised by Ronald Coifman and Yuval Kluger.
  • B.S. in Computer Science and Mathematics
    Yale University

Experience

  • 2024
    Research Intern
    ByteDance
    • Built multimodal image/language models on the AI Seed-Vision Team, with a focus on diffusion-based frameworks. Acquired large-scale datasets (100M+ images / text) and trained diffusion models for simultaneous text-to-image generation, image-to-text generation, and visual understanding. Results submitted to CVPR 2025. Mentors: Heng Wang, Peng Wang, and Linjie Yang.
  • 2024
    Research Intern
    Elucid
    • Developed multimodal foundation models to generate and augment arterial CT imagery and segmentations for improved fractional flow reserve (FFR) analysis and cardiologist report generation.
  • 2023
    Research Intern
    Bosch Center for Artificial Intelligence
    • Conducted research on robust, training-free approaches to guided diffusion models using optimal control. Published at NeurIPS 2024. Mentor: Marcus Pereira.
  • 2020
    Research Intern
    Center for Computational Mathematics, Flatiron Institute
    • Explored deep image prior-based techniques for enhancing phase retrieval in low-photon settings.
  • 2016
    Software Engineering Intern
    Amazon Lab126
    • Modified a machine learning module (an n-gram Markov model) in FireOS, reducing memory usage by ~2x with no significant loss in prediction quality.