Pretraining for Medical Imaging

Recent work has found that, for medical image classification tasks, simple models trained from scratch perform competitively with complex models pre-trained on ImageNet, which is counter-intuitive. I investigated why this is the case by forming conjectures and validating them through experiments on the CheXpert dataset, on the task of detecting lung disease. This was my project for NYU's Machine Learning for Healthcare course, taught by Prof. Rajesh Ranganath (NYU).

Autonomous Driving with Decoupled Representation Learning

The CARLA Challenge involves training an agent to drive a car in the CARLA simulator. We attempted to train an agent to solve this task without using any privileged information provided by the simulator, using only unsupervised pre-training techniques to develop an image feature extractor, as opposed to training it end-to-end. This was my project for NYU's Deep Reinforcement Learning course, taught by Prof. Lerrel Pinto (NYU).
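As a minimal sketch of the decoupling idea (not the project's actual pipeline), the toy example below first learns an image encoder without any labels, then trains a small supervised head on the frozen features. The synthetic data, dimensions, and the choice of PCA as the unsupervised stage are all illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for camera frames: 500 flattened "images" that
# actually live on an 8-dimensional latent subspace.
latent = rng.normal(size=(500, 8))
mixing = rng.normal(size=(8, 256))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 256))
# Synthetic binary "action" labels, a function of the latent only.
y = (latent[:, 0] > 0).astype(float)

# ---- Stage 1: unsupervised pretraining (labels never used) ----
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
encoder = Vt[:8].T              # frozen 256 -> 8 projection

def encode(x):
    return (x - mean) @ encoder

# ---- Stage 2: small supervised head on the frozen features ----
Z = encode(X)
Z = Z / Z.std(axis=0)           # standardize the frozen features
w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # logistic "policy" head
    w -= lr * (Z.T @ (p - y)) / len(y)
    b -= lr * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5) == y).mean()
print(f"head accuracy on frozen features: {acc:.2f}")
```

The point of the split is that stage 1 never touches the supervision signal, so the encoder can be reused across downstream tasks while only the small head is retrained.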

Autonomous Driving in Dense Traffic using Offline Model-based Reinforcement Learning

I contributed to the project titled "Prediction and Policy Learning Under Uncertainty", which involves training an autonomous driving agent to navigate dense traffic using offline model-based reinforcement learning. It was published at ICLR 2019 by Mikael Henaff (Facebook AI Research), Prof. Alfredo Canziani (NYU), and Prof. Yann LeCun (Facebook AI Research, NYU).
I improved the performance of the driving agent by modifying the world model, and implemented a program to evaluate the agent.
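As a hedged sketch of the model-based idea (not the paper's actual world model or policy), the toy example below fits a world model to an offline log of transitions and then plans through it by random shooting, executing only the first planned action before replanning. The 1-D dynamics, cost, and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# True (unknown) 1-D dynamics, used only to generate the offline log.
def true_step(s, a):
    return s + 0.1 * a

# ---- Offline dataset of logged transitions ----
S = rng.uniform(-1, 1, size=1000)
A = rng.uniform(-1, 1, size=1000)
S_next = true_step(S, A)

# ---- World model: fit s' ~ w1*s + w2*a by least squares ----
F = np.stack([S, A], axis=1)
w, *_ = np.linalg.lstsq(F, S_next, rcond=None)

def model_step(s, a):
    return w[0] * s + w[1] * a

# ---- Planning inside the model: random shooting over action sequences ----
def plan(s0, horizon=5, n_candidates=256):
    seqs = rng.uniform(-1, 1, size=(n_candidates, horizon))
    best_cost, best_seq = np.inf, None
    for seq in seqs:
        s, cost = s0, 0.0
        for a in seq:
            s = model_step(s, a)
            cost += s ** 2       # running cost: stay near state 0
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

s = 0.8
for _ in range(10):
    a_seq = plan(s)
    s = true_step(s, a_seq[0])   # execute first action, then replan
print(f"final state: {s:.3f}")
```

Because the dynamics are learned entirely from the logged transitions, the planner never queries the real environment during training, which is the defining constraint of the offline setting.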

Bird's Eye View Estimation with Limited Data

I tackled the problem of Bird's Eye View prediction - a crucial task for autonomous vehicles - with very little annotated data and a moderate amount of unannotated data. This was my project for NYU's Deep Learning course, taught by Prof. Yann LeCun (Facebook AI Research, NYU) and Prof. Alfredo Canziani (NYU).

Improving Unsupervised Pretraining with Adversarial Noise

I explored the possibility of improving unsupervised pretraining through self-supervised pretext tasks that use adversarial noise. The goal of this project was to contribute towards data-efficient deep learning for computer vision tasks. This was my project for NYU's Computer Vision course, taught by Prof. Rob Fergus (DeepMind, NYU).
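As an illustration of the adversarial-noise ingredient only (not the project's actual pretext task or model), the sketch below computes a fast-gradient-sign (FGSM-style) perturbation for a toy logistic classifier; the weights, input, and epsilon are arbitrary:

```python
import numpy as np

# A fixed toy linear classifier standing in for a network under attack.
w = np.array([1.0, -2.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    # Binary cross-entropy of the logistic prediction at input x.
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps=0.25):
    # Perturb the input along the sign of the loss gradient w.r.t. x.
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w         # d(loss)/dx for the logistic model
    return x + eps * np.sign(grad_x)

x = np.array([0.3, 0.1])
x_adv = fgsm(x, y=1.0)
print(loss(x, 1.0), loss(x_adv, 1.0))  # the adversarial input has higher loss
```

In an adversarial pretext task, perturbations like this would be generated on the fly so the representation is forced to be stable under worst-case small input noise.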

Google Summer of Code 2018

I participated as a Student Software Developer in Google Summer of Code 2018. I implemented high-performance graph analysis algorithms for Julia's LightGraphs library.
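The specific algorithms aren't listed here; as a generic illustration of the kind of graph analysis primitive involved, the following Python sketch (not the Julia/LightGraphs implementation) computes unweighted shortest-path distances with breadth-first search:

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source.

    adj: dict mapping each vertex to a list of its neighbours.
    """
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# A small undirected 4-cycle with a chord, as adjacency lists.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_distances(graph, 0))  # → {0: 0, 1: 1, 2: 1, 3: 2}
```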