Graphics & Computer Vision Engineer
I built deep neural networks for eye tracking, VR rendering for Google Earth, and 3DOF head tracking for the Google VR SDK. I enjoy taking cutting-edge technology from concept to launch.
- Google 10/2016 — Present
- Post-acquisition, I lead machine learning efforts for the team.
- Eyefluence 1/2015 — 10/2016
- I created Eyefluence's machine-learning-based eye tracking technology, putting the latest deep learning research into production. I contributed to Eyefluence's entire VR stack, including UI/UX in Unity/C#, a Unity plugin exposing eye tracking data in C, GPU-accelerated neural nets in Python, and MIPI CSI Linux kernel drivers setting camera registers over I²C.
- I instituted software development best practices, leading a migration to GitHub for source control and code review and implementing continuous integration with Travis CI and TeamCity.
- The system I built received rave reviews from TechCrunch, CNET, PC World, USA Today, and more. Google acquired Eyefluence in October 2016.
- Google 10/2010 — 12/2014
- I conceived and created the first version of Google Earth VR.
- I implemented the 3DOF head tracker used in Google Cardboard and the Google VR SDK.
- I shipped WebGL rendering for Google Maps, one of the first and widest deployments of WebGL in the world.
- Microsoft 4/2008 — 9/2010
- I shipped Silverlight Deep Zoom in the first release of Windows Phone.
- Northrop Grumman 6/2004 — 3/2008
B.S. in Computer Science, Harvey Mudd College class of 2004