I am a third-year PhD student at CMU's Robotics Institute, advised by Prof. Kris Kitani. I received my bachelor's degree
in Computer Science from CMU in 2022, where I also worked with Prof. Kris Kitani.
I had the opportunity to conduct research at the MSC Lab at UC Berkeley for two summers, advised by Prof. Masayoshi Tomizuka and Dr. Wei Zhan.
I previously interned at Meta Zurich and Meta Reality Labs working on 3D panoptic reconstruction and parametric human body modeling.
I'm broadly interested in computer vision, joint 2D/3D understanding, human motion modeling, and multi-modal learning. Much
of my research focuses on bridging 2D and 3D representations for a cohesive
understanding of the world.
Aligning predicted depth maps with observed depth points by propagating depth corrections makes
depth completion robust to sparse and varying input point densities.
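To make the idea concrete, here is a minimal sketch, not the published method: residuals between the predicted depth and the sparse observations are computed at the observed pixels and spread to every pixel, here with simple nearest-neighbor interpolation from SciPy. Function and variable names are illustrative.

```python
# A minimal sketch of residual propagation for depth completion (illustrative only).
import numpy as np
from scipy.interpolate import griddata

def correct_depth(pred, points, obs):
    """pred: (H, W) predicted dense depth; points: (N, 2) pixel coords (row, col);
    obs: (N,) observed depths at those pixels."""
    residual = obs - pred[points[:, 0], points[:, 1]]  # per-point correction
    H, W = pred.shape
    grid_r, grid_c = np.mgrid[0:H, 0:W]
    # Spread the sparse corrections to every pixel; the propagation scheme
    # (nearest-neighbor here) is the interesting design choice in practice.
    dense_residual = griddata(points, residual, (grid_r, grid_c), method="nearest")
    return pred + dense_residual

# Toy usage: four sparse points correcting a constant prediction.
pred = np.full((8, 8), 10.0)
pts = np.array([[1, 1], [1, 6], [6, 1], [6, 6]])
obs = np.array([9.5, 10.5, 10.2, 9.8])
corrected = correct_depth(pred, pts, obs)
```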
Combining long-term, low-resolution and short-term, high-resolution matching for temporal stereo
yields efficient and performant camera-only 3D detectors.
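A rough sketch of the fusion idea, under my own simplifying assumptions (two precomputed plane-sweep cost volumes and a plain average rather than the learned fusion a real detector would use):

```python
# A minimal sketch of combining long-term low-resolution and short-term
# high-resolution matching costs (illustrative simplification).
import torch
import torch.nn.functional as F

def fuse_cost_volumes(cost_long_lowres, cost_short_highres):
    """cost_long_lowres:  (B, D, H/2, W/2) built from many past frames at low resolution.
    cost_short_highres:   (B, D, H, W)     built from the latest frame pair at full resolution."""
    # Upsample the long-term volume to full resolution and average the two.
    up = F.interpolate(cost_long_lowres, size=cost_short_highres.shape[-2:],
                       mode="bilinear", align_corners=False)
    fused = 0.5 * (up + cost_short_highres)
    # Softmax over the depth-bin dimension gives per-pixel depth probabilities.
    return torch.softmax(fused, dim=1)

probs = fuse_cost_volumes(torch.randn(1, 64, 32, 48), torch.randn(1, 64, 64, 96))
```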
Enforcing consistency between 2D and 3D pseudo-labels in joint 2D-3D semi-supervised learning curbs
single-modality error propagation and improves performance.
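A minimal illustrative sketch of the agreement idea (not the exact selection rule): keep a pseudo-label only when the 2D and 3D branches predict the same class with high confidence, so an error in one modality is not propagated to the other.

```python
# A minimal sketch of 2D-3D pseudo-label agreement filtering (illustrative only).
import torch

def select_pseudo_labels(logits_2d, logits_3d, thresh=0.9):
    """logits_2d, logits_3d: (N, C) per-proposal class logits from each branch."""
    prob_2d, cls_2d = torch.softmax(logits_2d, dim=1).max(dim=1)
    prob_3d, cls_3d = torch.softmax(logits_3d, dim=1).max(dim=1)
    # Keep proposals where both branches agree and are confident.
    keep = (cls_2d == cls_3d) & (prob_2d > thresh) & (prob_3d > thresh)
    return cls_2d[keep], keep

labels, mask = select_pseudo_labels(torch.randn(5, 4), torch.randn(5, 4))
```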
Multi-modal fusion with prediction consistency between a privileged teacher and a noisy student
alleviates collapse in difficult capture conditions and improves performance in ideal conditions.
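A generic sketch of the teacher-student consistency term, assuming a standard softened-KL formulation rather than the exact loss used in the work; the privileged teacher sees clean multi-modal input while the student sees corrupted input.

```python
# A minimal sketch of privileged-teacher / noisy-student consistency (illustrative only).
import torch
import torch.nn.functional as F

def consistency_loss(student_logits, teacher_logits, T=1.0):
    """KL divergence between softened teacher and student class distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1).detach()  # no gradient into the teacher
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Toy usage: the student's logits stand in for a forward pass on noisy input.
teacher_logits = torch.randn(8, 10)
student_logits = teacher_logits + 0.5 * torch.randn(8, 10)
loss = consistency_loss(student_logits, teacher_logits)
```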