Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations
ConVRT is an efficient INR framework for video-based turbulence mitigation that operates in a test-time optimization manner.
I am a PhD student in the Computer Science Department at the University of Maryland, College Park. I work with Prof. Yiannis Aloimonos and Prof. Cornelia Fermüller in the Perception and Robotics Group, and collaborate closely with Prof. Christopher Metzler. During my master's studies, I worked with Prof. Pratap Tokekar on applying Reinforcement Learning to Multi-agent Systems research.
My research interest lies at the intersection of computer vision, imaging, and robotics, focusing on generative models, neural representations for motion in videos, and 3D vision for robotics.
TimeRewind synthesizes video backward into pre-capture time using image-and-event video diffusion.
We leverage radiance fields to imagine different human viewpoints and find the best drone pose for aerial cinematography.
Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining low reaction time and stable texture.
CodedEvents is a novel method for optimal point-spread-function engineering for 3D tracking with event cameras.
We present ProxMaP, a self-supervised occupancy prediction technique that predicts occupancy in the robot's proximity to enable faster navigation.
We present a Multi-Agent Reinforcement Learning (MARL) algorithm for the Visibility-based Persistent Monitoring (VPM) problem.
Template from Keunhong Park