I'm an engineer.
I'm a machine learning engineer at Amazon Web Services (AWS). As part of AWS's Annapurna Labs Neuron SDK team, I develop deep learning models, implement optimization techniques, and improve software pipelines to expand the applications of machine learning and software engineering. Below you'll find a selection of my professional experience and some of my previous projects that showcase the type of work that I do.
Professional Experience
Amazon Web Services (AWS)
I'm currently a member of AWS's Annapurna Labs Neuron SDK team, where I develop and deploy high-performance deep learning models for large-scale computer vision and natural language processing applications.
Marvell Machine Learning Team
As an intern on Marvell Semiconductor's Machine Learning team, I used formal verification to test the functionality of a machine learning accelerator. I also worked with the software team to develop programs for preparing, processing, and analyzing data to evaluate the performance of the neural networks running on the accelerator.
Saab Defense and Security
I interned at Saab Defense and Security for two summers, first as a mechanical engineer and then as a systems engineer. As a mechanical engineer, I assembled and debugged an engineering prototype of a shipboard radar antenna processor unit. As a systems engineer, I wrote qualification tests and analysis reports for the 3DELRR radar for Raytheon and the U.S. Department of Defense.
Non-linear Optics Research
As a research assistant at Skidmore's non-linear optics lab, I developed MATLAB and Python programs that used finite difference methods and nondimensionalization to model the hyperpolarizability and nonlinear response of organic molecules based on their conjugated paths. Finding molecules with high hyperpolarizability is important for many applications, including medical imaging and communications.
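The core numerical tool here, the central finite-difference scheme, can be sketched in a few lines. This is an illustrative example, not the lab's actual code: it approximates a second derivative on a uniform grid and sanity-checks it against the known derivative (sin x)'' = -sin x.

```python
import numpy as np

def second_derivative(f_vals, dx):
    """Central-difference approximation of f'' on a uniform grid
    (O(dx^2) accurate in the interior; endpoints copied from neighbors)."""
    d2 = np.empty_like(f_vals)
    d2[1:-1] = (f_vals[2:] - 2.0 * f_vals[1:-1] + f_vals[:-2]) / dx**2
    d2[0], d2[-1] = d2[1], d2[-2]
    return d2

# sanity check against a known second derivative: (sin x)'' = -sin x
x = np.linspace(0.0, 2.0 * np.pi, 2001)
dx = x[1] - x[0]
approx = second_derivative(np.sin(x), dx)
max_err = np.max(np.abs(approx[1:-1] + np.sin(x[1:-1])))
```

In the actual research, the same stencil is applied to the nondimensionalized equations governing the molecular response rather than to a toy trigonometric function.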
Projects
EEG Classification with RNNs
I implemented an end-to-end deep learning solution using recurrent neural networks (RNNs) to classify the tasks subjects are performing from their time-series electroencephalography (EEG) data. I created a novel architecture combining RNNs, CNNs, and preprocessing to achieve high accuracy on small sample sizes.
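To give a flavor of the general CNN-plus-RNN idea (this is a minimal NumPy sketch, not my actual model, whose layers and hyperparameters differed): a 1D convolutional feature extractor runs over the electrode channels, a recurrent layer consumes the resulting features over time, and the final hidden state is mapped to per-task logits. All shapes and weights below are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """1D 'valid' convolution over time for each filter, then ReLU.
    x: (channels, time); kernels: (n_filters, channels, width)."""
    n_filters, n_channels, width = kernels.shape
    T = x.shape[1] - width + 1
    feats = np.zeros((n_filters, T))
    for f in range(n_filters):
        for c in range(n_channels):
            # reversed kernel turns np.convolve into cross-correlation
            feats[f] += np.convolve(x[c], kernels[f, c][::-1], mode="valid")
    return np.maximum(feats, 0.0)

def rnn_last_state(feats, Wx, Wh, b):
    """Run a vanilla (Elman) RNN over time; return the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for t in range(feats.shape[1]):
        h = np.tanh(Wx @ feats[:, t] + Wh @ h + b)
    return h

# toy EEG trial: 8 electrode channels, 128 time samples, 4 candidate tasks
x = rng.standard_normal((8, 128))
kernels = 0.1 * rng.standard_normal((4, 8, 7))   # 4 conv filters of width 7
Wx = 0.1 * rng.standard_normal((16, 4))
Wh = 0.1 * rng.standard_normal((16, 16))
b = np.zeros(16)
W_out = 0.1 * rng.standard_normal((4, 16))       # hidden state -> task logits

h = rnn_last_state(conv1d_relu(x, kernels), Wx, Wh, b)
logits = W_out @ h
pred = int(np.argmax(logits))
```

The convolution acts as a learned temporal filter bank, which is what lets the recurrent part generalize from small sample sizes.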
Parallelizing Inception Kernels for GPU
I developed a novel parallelization technique to increase the efficiency and scalability of the convolution and pooling layers of Google's Inception image classification neural network architecture. My technique introduced optimized data streaming, thread allocation, and data reuse to achieve a 2.6x speedup over standard GPU implementations.
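My CUDA kernels aren't reproduced here, but one standard data-reuse idea in this space, recasting convolution as a matrix multiply via im2col so every input patch is loaded once and shared across all filters, can be sketched in NumPy (illustrative only; the function names and sizes are made up):

```python
import numpy as np

def im2col(x, k):
    """Gather every k x k patch of x into one column of a matrix."""
    H, W = x.shape
    cols = np.empty((k * k, (H - k + 1) * (W - k + 1)))
    idx = 0
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            cols[:, idx] = x[i:i + k, j:j + k].ravel()
            idx += 1
    return cols

def conv2d_gemm(x, filters):
    """Apply all filters with a single matrix multiply over shared patch columns."""
    n, k, _ = filters.shape
    out = filters.reshape(n, k * k) @ im2col(x, k)
    H, W = x.shape
    return out.reshape(n, H - k + 1, W - k + 1)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
filters = rng.standard_normal((3, 3, 3))
y = conv2d_gemm(x, filters)

# cross-check one output entry against a direct sliding-window sum
direct = np.sum(x[2:5, 4:7] * filters[1])
```

On a GPU, the same reuse principle is pushed further by staging patches in shared memory and assigning threads so that loads are amortized across filters.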
Unrolled Optimization with Deep Learning
I combined convolutional neural networks and proximal gradient methods to perform image reconstruction on images with noise and blur. My technique of incorporating prior knowledge from classical optimization algorithms into deep learning networks achieved state-of-the-art performance across a range of noise and blur kernels, including peak signal-to-noise ratios as high as 33 dB.
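The classical building block being unrolled is the proximal gradient iteration. Below is a plain, non-learned ISTA sketch for a sparse linear inverse problem; the unrolled network replaces the fixed soft-threshold and step size with learned operators. Everything here (problem sizes, the regularization weight, the iteration count) is illustrative, not taken from the project.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, step, n_iters):
    """Proximal gradient (ISTA) for 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                         # gradient of data-fit term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # underdetermined forward model
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]            # sparse ground truth
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L for the smooth term
x_hat = ista(A, y, lam=0.02, step=step, n_iters=1000)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Unrolling fixes a small number of these iterations as network layers and trains their parameters end to end, which is where the speed and accuracy gains over the classical iteration come from.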
Optimized Doubly Regularized SVM with ADMM
To solve the doubly regularized SVM (DrSVM) for large-scale optimization problems with small sample sizes and many features, I developed two methods based on the primal and dual alternating direction method of multipliers (ADMM). These large-scale optimization techniques achieved faster runtimes and fewer iterations than standard DrSVM solvers.
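The hinge loss makes the full DrSVM updates more involved, but the flavor of the ADMM splitting carries over to an elastic-net least-squares problem, shown below as an illustrative analogue (all names and parameters are made up): an x-update via a cached Cholesky solve, a z-update via the elastic-net proximal operator, and a dual update.

```python
import numpy as np

def prox_elastic_net(v, lam1, lam2, rho):
    """argmin_z lam1*||z||_1 + (lam2/2)*||z||^2 + (rho/2)*||z - v||^2."""
    shrunk = np.sign(v) * np.maximum(np.abs(v) - lam1 / rho, 0.0)
    return shrunk * (rho / (rho + lam2))

def admm_elastic_net(A, y, lam1, lam2, rho=1.0, n_iters=300):
    """ADMM for 0.5*||Ax - y||^2 + lam1*||x||_1 + (lam2/2)*||x||^2."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    # factor once; the x-update reuses this Cholesky factor every iteration
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        z = prox_elastic_net(x + u, lam1, lam2, rho)   # proximal z-update
        u = u + x - z                                  # dual ascent
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 30)) / np.sqrt(60)
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = admm_elastic_net(A, y, lam1=0.02, lam2=0.01)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The appeal of this splitting in the small-sample, high-dimensional regime is that the expensive factorization is done once, while each iteration reduces to cheap triangular solves and elementwise shrinkage.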
Autonomous Vehicle Trajectory Planning
I implemented reinforcement learning algorithms and rapidly-exploring random trees to perform autonomous vehicle trajectory planning in complex environments. I demonstrated the robustness and efficiency of these algorithms under statistical uncertainty.
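A minimal rapidly-exploring random tree (RRT) in an empty 2D workspace gives the flavor of the planner. This is an illustrative sketch only: a real planner adds collision checks where noted, and my implementation also layered reinforcement learning on top.

```python
import numpy as np

def rrt(start, goal, step=0.5, goal_bias=0.1, tol=0.5, max_iters=2000, seed=0):
    """Grow a tree from start; return a path to within tol of goal, or None."""
    rng = np.random.default_rng(seed)
    g = np.asarray(goal, dtype=float)
    nodes = [np.asarray(start, dtype=float)]
    parents = [-1]
    for _ in range(max_iters):
        # sample the goal occasionally to bias growth toward it
        sample = g if rng.random() < goal_bias else rng.uniform(0.0, 10.0, size=2)
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        direction = sample - nodes[i]
        dist = np.linalg.norm(direction)
        if dist == 0.0:
            continue
        new = nodes[i] + direction * min(step, dist) / dist
        # a real planner would reject `new` here if the edge hits an obstacle
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - g) <= tol:
            path, j = [new], i
            while j != -1:           # walk parent links back to the start
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]
    return None

path = rrt(start=(0.5, 0.5), goal=(9.0, 9.0))
```

Because each extension is a short, locally feasible step, the same skeleton extends naturally to dynamics constraints and to the stochastic settings evaluated in the project.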