I am a second-year PhD student at Mila (formerly the Montreal Institute for Learning Algorithms), advised by Dr. Liam Paull. My interests are diverse, lying predominantly at the intersections of deep learning, computer vision, and autonomous robotics.

At present, I devote most of my time to developing techniques that enable embodied agents not just to see, but to understand. Vision sensors such as monocular and RGB-D cameras provide a robot with information only about what is visible to them. At a cognitive level, though, algorithms must be able to leverage prior knowledge about how the world works and build useful representations. This is the hypothesis on which most of my current research hinges. Indoor 3D mapping, scene understanding, and SLAM are keywords under which my research can be classified.

In the past, I worked on tightly integrating deep learning with classical methods for state estimation and prediction. Autonomous driving and 3D localization were the two primary use cases for that research.

When I’m not doing research, I enjoy writing technical blog posts, tutorials, and open-source code.