I worked with a team of researchers under a grant from Facebook Reality Labs to develop a machine learning algorithm for eye tracking. I was chosen for my expertise in virtual 3D environments to build a large dataset of labeled eye images for training the algorithm.
We created a pipeline that took real-life 3D head scans and turned them into usable models with a plica semilunaris, eyelashes, and fully rigged eyelids. We also placed an anatomically correct eyeball with swappable textures for the iris and sclera. The cornea had varying sphericity, and the iris had a contractible pupil. We rendered with ray casting so that the various parts of the eyeball would bend light rays, producing the most realistic eye possible, and we held the whole pipeline to rigorous scientific standards of realism.
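To give a sense of how the refractive cornea might be set up, here is a minimal sketch using Blender's Python API. Blender itself, the object name "Cornea", and the exact refractive index are my assumptions for illustration, not details confirmed by the project.

```python
# Hypothetical sketch: assigning a refractive material to a cornea mesh in
# Blender (bpy), so a ray-traced renderer bends rays the way a real cornea does.
# The object name "Cornea" and the IOR value are illustrative assumptions.
import bpy

CORNEA_IOR = 1.376  # commonly cited refractive index of the human cornea

mat = bpy.data.materials.new(name="CorneaMaterial")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

# A Glass BSDF refracts rays entering the anterior chamber.
glass = nodes.new("ShaderNodeBsdfGlass")
glass.inputs["IOR"].default_value = CORNEA_IOR
glass.inputs["Roughness"].default_value = 0.0  # optically smooth surface

output = nodes.new("ShaderNodeOutputMaterial")
links.new(glass.outputs["BSDF"], output.inputs["Surface"])

# Attach the material to the cornea mesh (assumed object name).
cornea = bpy.data.objects["Cornea"]
cornea.data.materials.append(mat)
```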
Using the automatically processed head models, we built a second pipeline that reads GIW eye-tracking data and recreates a real-life eye video in a virtual environment. This pipeline steps through the recording frame by frame, inserting keyframes for the eyeball's rotation, the eyelid armatures, and the pupil's size so that they match the real-life video exactly. We would then render the frames and assemble a side-by-side video of the real eye and the synthetic eye for comparison. A good portion of our job was ensuring that the entire pipeline ran without any human input or unexplained "magic numbers".
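A rough sketch of what that frame-by-frame keyframing step could look like follows, again assuming a Blender (bpy) scene. The object names, the shape-key name, and the CSV column layout are all hypothetical stand-ins; the real GIW recordings carry more channels than shown here.

```python
# Hypothetical sketch of the keyframing step: for each frame of eye-tracking
# data, key the eyeball rotation, the eyelid bone, and a pupil shape key.
# Object names, bone names, and CSV columns are illustrative assumptions.
import csv
import math
import bpy

eyeball = bpy.data.objects["Eyeball"]
lid_bone = bpy.data.objects["LidArmature"].pose.bones["UpperLid"]
lid_bone.rotation_mode = "XYZ"
pupil_key = eyeball.data.shape_keys.key_blocks["PupilConstrict"]

with open("gaze_track.csv", newline="") as f:
    for row in csv.DictReader(f):
        frame = int(row["frame"])

        # Eye rotation: map gaze azimuth/elevation (degrees) onto the eyeball.
        eyeball.rotation_euler = (
            math.radians(float(row["elevation"])),
            0.0,
            math.radians(float(row["azimuth"])),
        )
        eyeball.keyframe_insert(data_path="rotation_euler", frame=frame)

        # Eyelids: rotate the lid bone by the recorded closure amount (radians).
        lid_bone.rotation_euler.x = float(row["lid_close"])
        lid_bone.keyframe_insert(data_path="rotation_euler", frame=frame)

        # Pupil: drive a shape key between dilated (0.0) and constricted (1.0).
        pupil_key.value = float(row["pupil"])
        pupil_key.keyframe_insert(data_path="value", frame=frame)
```

Driving everything from the recorded data like this, with no hand-tuned offsets, is what keeps the pipeline free of human input and unexplained "magic numbers".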