Your eyes can reveal more than you think, as researchers can now use computer vision technology to reconstruct 3D images of a scene from reflections in a person’s eyeballs.
Jia-Bin Huang and colleagues at the University of Maryland, College Park, developed a computer vision model that takes between five and 15 digital photos of an individual’s face, shot from different angles as they look at a scene, and reconstructs that scene from the reflections in their eyes.
The method adapts a technique called neural radiance fields (NeRF), which uses neural networks to determine the density and color of objects the computer “sees.” NeRF generally operates by looking directly at a scene, rather than seeing one reflected in a person’s eyeballs.
Huang’s version builds the scene by extrapolating from a square of reflections measuring, on average, just 20 by 20 pixels in each eye. The method can produce what the researchers call “reasonable” results, replicating real-life objects, although they are blurry due to the difficulty of representing the shape of the cornea, the transparent outer layer at the front of the eye.
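To give a sense of the underlying idea, here is a minimal, illustrative sketch of the core NeRF mechanism the article describes: a neural network maps a 3D point and a viewing direction to a density and a color, which are then composited along camera rays to render a pixel. This is not the authors’ code, and every function and variable name here is hypothetical; a real system would train the network weights from the input photos rather than use random ones.

```python
# Illustrative sketch only: the core NeRF idea of mapping a 3D point and a
# viewing direction to density and color, then compositing along a ray.
import numpy as np

rng = np.random.default_rng(0)

# Tiny randomly initialised network standing in for a trained radiance field.
W1 = rng.normal(size=(6, 64))   # input: 3D position + 3D view direction
W2 = rng.normal(size=(64, 4))   # output: density + RGB

def radiance_field(position, view_dir):
    """Return (density, rgb) for one 3D point seen from one direction."""
    x = np.concatenate([position, view_dir])
    h = np.tanh(x @ W1)
    out = h @ W2
    density = np.log1p(np.exp(out[0]))      # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))    # sigmoid keeps color in [0, 1]
    return density, rgb

def render_ray(origin, direction, near=0.0, far=2.0, n_samples=32):
    """Volume-render one ray by compositing density and color along it."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        density, rgb = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-density * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# One ray through the scene; a full system would shoot one per pixel of the
# roughly 20-by-20-pixel eye reflection in every input photograph.
print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```

In the eye-reflection setting, the rays would additionally be bounced off an estimated corneal surface before sampling the scene, which is part of what makes the reconstructions blurry.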
When tested on clips from music videos of Miley Cyrus and Lady Gaga, the technique was able to identify the rough shape of objects in the singers’ eyes, but had trouble reconstructing details.
Huang and his colleagues declined to be interviewed for this story, citing the policy of a conference to which the paper has been submitted.
The work builds on research conducted by Ko Nishino and Shree K. Nayar at Columbia University in New York in the mid-2000s. “That work caused a sensation by showing how the surface of the cornea could be used as an approximation of a curved mirror to create panoramic images,” says Serge Belongie at the University of Copenhagen, Denmark.
“The new work extends this concept to the task of 3D reconstruction,” says Belongie. “The results are quite impressive and will make people, once again, think twice about what they are revealing when photographed by ever higher resolution cameras.”