I have built deep neural networks for eye tracking, VR rendering for Google Earth, and 3DOF head tracking for the Google VR SDK. I enjoy taking cutting-edge technology all the way from concept to launch.
- Google Oct. 2016 — Present
- I returned to Google through its acquisition of Eyefluence!
- Eyefluence Jan. 2015 — Oct. 2016
- I created Eyefluence's machine-learning-based eye tracking technology, putting the latest deep learning research into production. I contributed to Eyefluence's entire VR stack, from UI/UX in Unity, to real-time GPU-accelerated neural nets, down to MIPI camera drivers in the Linux kernel.
- In the process, I brought software development best practices to the company, leading a migration to GitHub for source control and code review, and implementing continuous integration with Travis CI and TeamCity.
- The system we built received rave reviews from TechCrunch, CNET, PC World, USA Today, and more. Google acquired us in Oct. 2016.
- Google Oct. 2010 — Dec. 2014
- For the Google Cardboard launch, I implemented improved head tracking using a Kalman filter with latency compensation.
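The idea behind latency-compensated head tracking can be sketched in a few lines: filter noisy orientation measurements with a Kalman filter, then extrapolate the filtered pose forward by the sensor-to-display latency so each frame is rendered for where the head *will* be. Below is a minimal, purely illustrative 1D sketch (scalar state, constant-velocity model, made-up class and parameter names) — not the Cardboard implementation.

```python
class LatencyCompensatedKalman:
    """Toy 1D constant-velocity Kalman filter with latency compensation.

    Illustrative only: real head tracking filters full 3DOF orientation
    (e.g. quaternions) and fuses gyro + accelerometer data.
    """

    def __init__(self, process_var=1e-3, meas_var=1e-2):
        self.x = 0.0            # angle estimate (rad)
        self.v = 0.0            # angular velocity estimate (rad/s)
        self.p = 1.0            # estimate variance (scalar simplification)
        self.q = process_var    # process noise
        self.r = meas_var       # measurement noise

    def update(self, measurement, dt):
        # Predict: advance the state by dt under constant velocity.
        self.x += self.v * dt
        self.p += self.q
        # Correct: blend the new measurement in via the Kalman gain.
        k = self.p / (self.p + self.r)
        innovation = measurement - self.x
        self.x += k * innovation
        self.v += k * innovation / max(dt, 1e-6)
        self.p *= (1.0 - k)
        return self.x

    def predict_ahead(self, latency):
        # Latency compensation: extrapolate the filtered pose forward by
        # the sensor-to-display latency, so rendering targets the pose
        # the head will have when the frame actually reaches the screen.
        return self.x + self.v * latency
```

Rendering would then use something like `kf.predict_ahead(0.016)` rather than the raw filtered angle, trading a little extrapolation error for a large reduction in perceived lag.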
- I created the first demo of what became Google Earth VR.
- I helped ship WebGL rendering for Google Maps, one of the earliest and largest deployments of WebGL in the world.
- Microsoft Apr. 2008 — Sept. 2010
- I shipped Silverlight Deep Zoom in the first release of Windows Phone.
- Northrop Grumman Space Technology June 2004 — Mar. 2008