A little-known quirk of the human eye could pave the way for better cameras for self-driving cars and even more effective smartphone photography.
As you read this, your eyes are slowly scanning from left to right. But even when you're not reading, even when you're staring at a stationary object, your eyes are constantly moving. It turns out that this constant motion is key to the quality of human vision, and to how robots, self-driving cars, and maybe even smartphones could see more clearly.
A team of researchers at the University of Maryland has created a camera that mimics human eye movements. It's called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV), and it places a rotating wedge prism, a circular optic whose two faces sit at a slight angle to each other, in front of an event camera (in this case, an Intel RealSense D435 camera) so that the spinning prism continuously shifts the incoming image.
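To get a feel for the geometry, here is a minimal Python sketch (not the researchers' code) of how a thin wedge prism spinning in front of a lens sweeps the image in a small circle on the sensor. The refractive index, wedge angle, focal length, and rotation rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Conceptual sketch: a spinning wedge prism sweeps the image in a small circle.
# All numeric values are illustrative assumptions, not from the AMI-EV paper.
N = 1.5          # refractive index of the prism glass (assumed)
WEDGE_DEG = 2.0  # angle between the prism's two faces, degrees (assumed)
FOCAL_MM = 4.0   # lens focal length, mm (assumed)
ROT_HZ = 50.0    # prism rotation rate, revolutions per second (assumed)

# Thin-prism approximation: the beam is bent by a nearly constant angle.
deflection = (N - 1.0) * np.radians(WEDGE_DEG)

# On the sensor, that constant angular deflection becomes a fixed-radius offset.
radius_mm = FOCAL_MM * np.tan(deflection)

def image_offset(t):
    """Image displacement (x, y) in mm at time t seconds.

    As the prism spins, the direction of the deflection rotates, so every
    point in the image traces a small circle -- an artificial 'microsaccade'.
    """
    phase = 2.0 * np.pi * ROT_HZ * t
    return radius_mm * np.cos(phase), radius_mm * np.sin(phase)

for t in np.linspace(0.0, 0.02, 5):  # one 20 ms revolution, sampled 5 times
    dx, dy = image_offset(t)
    print(f"t = {t * 1000:5.1f} ms   offset = ({dx:+.4f}, {dy:+.4f}) mm")
```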
Although the camera's movements are small, they are designed to mimic the saccadic movements of the human eye. Saccades occur at several scales: large, rapid jumps that redirect our gaze, smaller corrective shifts, and microsaccades, tiny involuntary movements that happen several times per second while we fixate and are too small for us to notice.
That last kind of movement helps us see more clearly, especially when things are moving: by continually nudging the image onto the sharpest part of the retina, microsaccades keep the scene from fading or blurring and preserve fine shape and color.
Drawing on how these micro-movements aid human perception, the team equipped their camera with a rotating prism.
According to the paper's abstract, “Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining a low reaction time and stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events.”
The researchers paired this hardware with software that compensates for the prism's motion, consolidating the shifted data into stable, sharp images.
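As a rough illustration of that compensation step (a hypothetical sketch under assumed values, not the paper's algorithm): because the prism's rotation is known, the shift it induces at any timestamp can be predicted and subtracted from each event's pixel coordinates before the events are accumulated into a frame.

```python
import numpy as np

# Conceptual sketch: undo the known, prism-induced circular shift for each
# event, then accumulate the re-aligned events into a stable frame.
# Sensor resolution, shift radius, and rotation rate are assumed values.
WIDTH, HEIGHT = 346, 260   # assumed event-sensor resolution
RADIUS_PX = 6.0            # prism-induced shift radius in pixels (assumed)
ROT_HZ = 50.0              # prism rotation rate (assumed)

def compensate(events):
    """events: array of (x, y, t) rows, with t in seconds.

    Shifts each event back by the offset the prism had at that event's
    timestamp and accumulates the result into a single image.
    """
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    for x, y, t in events:
        phase = 2.0 * np.pi * ROT_HZ * t
        dx = RADIUS_PX * np.cos(phase)
        dy = RADIUS_PX * np.sin(phase)
        xi = int(round(x - dx))
        yi = int(round(y - dy))
        if 0 <= xi < WIDTH and 0 <= yi < HEIGHT:
            frame[yi, xi] += 1.0
    return frame

# Example: a single static scene point, swept into a ring by the prism over
# one revolution, stacks back into roughly one pixel after compensation.
ts = np.linspace(0.0, 0.02, 200)
events = np.stack([100 + RADIUS_PX * np.cos(2 * np.pi * ROT_HZ * ts),
                   80 + RADIUS_PX * np.sin(2 * np.pi * ROT_HZ * ts),
                   ts], axis=1)
img = compensate(events)
print("pixels hit after compensation:", int((img > 0).sum()))
```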
According to a report by Science Daily, the experiments were so successful that cameras equipped with the AMI-EV detected everything from fast-moving objects to the human pulse. Now that's some accurate vision.
If robotic eyes could see more like ours, we could build not only robots that share our visual abilities but also self-driving cars that reliably distinguish people from other objects. There is already evidence that self-driving cars have difficulty identifying some humans; a self-driving Tesla equipped with an AMI-EV camera might be able to tell the difference between a bag flying by and a child running down the street.
Equipped with AMI-EV cameras, mixed reality headsets, which use cameras to blend the real and virtual worlds, could merge the two more convincingly for a more realistic experience.
“…has many applications that much of the general public already interacts with, such as autonomous driving systems or even smartphone cameras. We believe our novel camera system is paving the way for more advanced and capable systems to emerge,” Yiannis Aloimonos, a professor of computer science at UMD and co-author of the study, told Science Daily.
We're still in the early stages, and the hardware, with its motor-driven spinning prism, looks more like a lab prototype than the ultra-thin, tiny camera module you'd need for the ultimate smartphone.
Still, understanding that something we can't even see happening is responsible for how well we see, and showing that this small but critical ability can be replicated in robotic cameras, is a significant step toward a future where robots match human visual perception.