According to Nitish Padmanaban, the problem in vision is deceptively simple: as a person ages, the lenses in their eyes stiffen and can no longer change shape enough to focus on objects at different distances.
As a result, a person might get reading glasses or bifocals to compensate.
As a fifth-year Ph.D. student at Stanford University, Padmanaban is working on a unique solution to this problem: a tunable autofocusing lens that restores the focusing ability the eyes lose over time.
These autofocal lenses combine two types of eye-tracking software with a depth camera to determine exactly which object a person is looking at and how far away it is, then change shape accordingly so the wearer can focus seamlessly on anything in their line of sight.
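The control idea described above can be sketched in a few lines: sample the depth camera at the gaze point, convert that distance into the optical power the tunable lens should supply, and clamp it to the lens's range. Everything here, including the function names and the simple thin-lens 1/distance model, is an illustrative assumption, not the actual system described in the episode.

```python
# Hypothetical sketch of an autofocal control step: combine a gaze
# estimate with a depth lookup, then set the tunable lens's optical
# power (in diopters) to match the fixated distance.

def focus_power_diopters(gaze_xy, depth_map, min_d=0.0, max_d=3.0):
    """Return the lens power needed to focus at the gazed-at depth."""
    x, y = gaze_xy
    distance_m = depth_map[y][x]          # depth-camera sample at the gaze point
    power = 1.0 / distance_m              # thin lens: diopters = 1 / distance (m)
    return max(min_d, min(max_d, power))  # clamp to the lens's tunable range

# Example: fixating an object 0.5 m away calls for +2.0 D of focus.
depth = [[0.5]]
print(focus_power_diopters((0, 0), depth))  # 2.0
```

In a real device this step would run continuously, with the eye tracker updating `gaze_xy` and the lens driver applying the returned power.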
In today’s episode, Padmanaban discusses the technical details of how the product works, how the lenses build on technology from virtual reality headsets, what the team aims to improve in future versions, and more.
Tune in to hear the full conversation and check out https://www.computationalimaging.org/ to learn more.