Tech-savvy and highly motivated professional with a strong educational background and Python experience managing software engineering and ML projects, from computer vision to reinforcement learning. I recently graduated with a Master's in Artificial Intelligence from Imperial College London. Able to conduct research and turn state-of-the-art computer vision algorithms and ML models into working solutions with keen attention to detail. I'm proficient in Python, C++, and ML libraries (PyTorch, TensorFlow).
In this project I fuse LiDAR and camera data using two techniques. First, early fusion: I project the LiDAR point cloud onto the image frame and then detect the obstacles, using the camera data to detect the objects and the LiDAR points to estimate their distance from the vehicle. An example of the output is shown below. Second, late fusion: I process each modality separately, running a YOLOv4 algorithm on the image data and a PointNet on the point cloud data, and then fuse the results.
all_early_fusion.mp4
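Below is a minimal sketch of the early-fusion projection step, assuming a known LiDAR-to-camera extrinsic matrix and camera intrinsics; the function names and the median-depth heuristic are illustrative, not the project's exact code.

```python
# Early-fusion sketch (assumed interfaces): project LiDAR points into the image
# and attach a distance estimate to each 2D detection.
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_xyz  : (N, 3) points in the LiDAR frame
    T_cam_lidar : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    K           : (3, 3) camera intrinsic matrix
    Returns pixel coords (M, 2) and depths (M,) for points in front of the camera.
    """
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # homogeneous coords
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]        # points in the camera frame
    in_front = cam[:, 2] > 0                      # keep points ahead of the camera
    cam = cam[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    return uv, cam[:, 2]

def distance_for_box(box, uv, depths):
    """Median depth of the LiDAR points falling inside an [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = box
    mask = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return float(np.median(depths[mask])) if mask.any() else None
```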
In this project I build, in PyTorch, the PointNet architecture presented in this paper. I first preprocess the data, then build the model by developing the T-Net followed by the classification and segmentation heads. I test the model on a segmentation subset of the ShapeNetCore dataset, which contains 16,000 point clouds across 16 shape categories. To download the subset, please follow this link.
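As a rough illustration of the T-Net mentioned above, here is a minimal PyTorch sketch with the layer sizes from the PointNet paper; it is not the repository's exact implementation.

```python
# Minimal T-Net sketch in PyTorch (sizes follow the PointNet paper).
import torch
import torch.nn as nn

class TNet(nn.Module):
    """Predicts a k x k transform that is applied to the input point features."""
    def __init__(self, k=3):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv1d(k, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Linear(1024, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, k * k),
        )

    def forward(self, x):                             # x: (B, k, N) point features
        feat = self.mlp(x)                            # (B, 1024, N)
        feat = torch.max(feat, dim=2).values          # symmetric max pooling over points
        mat = self.fc(feat).view(-1, self.k, self.k)  # (B, k, k)
        identity = torch.eye(self.k, device=x.device).expand_as(mat)
        return mat + identity                         # initialise close to the identity
```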
In this project I build an object tracking algorithm based on DeepSORT with YOLOv4 (a YOLOv5 implementation is in the works). I first detect the objects with YOLOv4, then use DeepSORT to extract deep convolutional appearance features for each detection, and apply the Hungarian algorithm to associate detections with existing tracks. I add two extensions to improve performance: I change the non-max suppression formula and introduce a track age to reduce the number of false positives and false negatives.
out_deepsort_nvidia.mp4
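For reference, a minimal sketch of the association step, assuming a precomputed track-to-detection cost matrix (e.g. combining appearance and IoU costs) and using SciPy's Hungarian solver; the gating threshold is illustrative.

```python
# Sketch of the matching step: associate tracks to detections with the Hungarian
# algorithm over an (assumed) combined appearance + IoU cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks_to_detections(cost_matrix, max_cost=0.7):
    """cost_matrix: (num_tracks, num_detections); lower means a better match.

    Returns matched (track_idx, det_idx) pairs plus unmatched tracks and detections.
    """
    track_idx, det_idx = linear_sum_assignment(cost_matrix)
    matches, unmatched_tracks, unmatched_dets = [], [], []
    for t, d in zip(track_idx, det_idx):
        if cost_matrix[t, d] > max_cost:              # gate implausible assignments
            unmatched_tracks.append(t)
            unmatched_dets.append(d)
        else:
            matches.append((t, d))
    unmatched_tracks += [t for t in range(cost_matrix.shape[0]) if t not in track_idx]
    unmatched_dets += [d for d in range(cost_matrix.shape[1]) if d not in det_idx]
    return matches, unmatched_tracks, unmatched_dets
```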
This is the most challenging project I've done: it was my master's thesis. In this project, I generated safety-critical scenarios in order to improve the safety of autonomous vehicles. The following video is an example of a scenario generated by the RL agents (the AV and the adversarial RL agents). We can see that the generated scenario is dangerous but at the same time realistic; finding this balance was the great challenge of the project. I used MADQN and MADDPG to control the agents.
failmaker_DQN_three_npc_ego_attention.mp4
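As a rough sketch of the per-agent value update behind MADQN, assuming generic Q-network and replay-batch interfaces (this is illustrative, not the thesis code):

```python
# Minimal per-agent DQN update sketch; in a MADQN setup, each adversarial agent
# keeps its own Q-network, target network, and replay buffer and runs this update.
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """batch = (states, actions, rewards, next_states, dones) as tensors."""
    states, actions, rewards, next_states, dones = batch
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s, a)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values               # max_a' Q_target(s', a')
        targets = rewards + gamma * next_q * (1.0 - dones)
    loss = F.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```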
In this project I build a U-Net from scratch using Keras, with the objective of segmenting salt deposits beneath the Earth's surface from seismic images. It was done for the TGS Salt Identification Challenge, where the model was trained on a dataset of 4,000 seismic images and their respective ground-truth masks. I achieved a validation loss (binary cross-entropy) of 0.18702.
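A minimal Keras sketch of the encoder/skip/decoder pattern behind the U-Net; the filter counts and input size are illustrative, not the exact challenge configuration.

```python
# Tiny U-Net sketch in Keras: one encoder stage, a bottleneck, one decoder stage
# with a skip connection, and a per-pixel sigmoid output for the salt mask.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_tiny_unet(input_shape=(128, 128, 1)):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 16)                       # encoder stage
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32)                           # bottleneck
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c2)
    u1 = layers.concatenate([u1, c1])                 # skip connection from the encoder
    c3 = conv_block(u1, 16)                           # decoder stage
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # per-pixel mask
    return Model(inputs, outputs)

model = build_tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```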
- Predicted the speed of a car from dashcam videos.
- Built a VAE and a GAN from scratch.
- Designed a Residual Network to classify animals.
- Implemented convolution, pooling, linear, and batch-norm layers from scratch in Python (see the sketch after this list).
- Built an ML model to predict a ship's crew size.
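As an illustration of the from-scratch layers mentioned above, here is a naive NumPy sketch of a 2D convolution forward pass (no stride or padding options); it is a simplified example rather than the project's code.

```python
# Naive 2D convolution forward pass with explicit loops.
import numpy as np

def conv2d_forward(x, weight, bias):
    """x: (C_in, H, W), weight: (C_out, C_in, kH, kW), bias: (C_out,)."""
    c_out, c_in, kh, kw = weight.shape
    _, h, w = x.shape
    out = np.zeros((c_out, h - kh + 1, w - kw + 1))   # "valid" output size
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = x[:, i:i + kh, j:j + kw]      # receptive field
                out[o, i, j] = np.sum(patch * weight[o]) + bias[o]
    return out
```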
To build a similar GitHub profile page, check out this blog post.