
Most people are born with five senses, and sight is arguably the one we depend on most to navigate the world. Unfortunately, around 285 million people worldwide live with some form of visual impairment, and 39 million of them are completely blind. About 82% of all blind people are aged 50 or older.
In 2010, visual impairment was recognized as a major global health issue. As much as 80% of the global burden of visual impairment is considered preventable. The leading causes are uncorrected refractive errors and cataracts, with cataracts alone responsible for 51% of blindness.
All of these visually impaired individuals need and deserve assistance that augments their sight. Until now, the best available solution has been a bulky vision-boosting headset. Recently, however, a team of Harvard students developed a new artificial intelligence-powered navigation aid: a device worn like an ordinary vest, with soft robotic actuators inside, that turns camera input from a smartphone into localized sensations of force across the wearer's torso. The aid uses a custom version of the computer vision system YOLO to interpret the camera feed.
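The team's detection model is a custom YOLO variant that isn't publicly available, but the general idea of running YOLO on a camera frame can be sketched with the open-source ultralytics package. The model file and camera source below are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch of YOLO-style object detection on one camera frame.
# The vest uses a custom YOLO variant; the open-source ultralytics
# package is used here purely for illustration.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # assumed: a small pretrained COCO model
cap = cv2.VideoCapture(0)      # assumed: stands in for the phone's camera stream

ret, frame = cap.read()
if ret:
    results = model(frame)[0]  # run detection on the frame
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{cls_name}: confidence={float(box.conf):.2f}, "
              f"box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
cap.release()
```

Each detection gives a class label, a confidence score, and a bounding box, which is the raw material the system would need before deciding where and how strongly to press on the wearer's torso.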
As stated in an article posted at www.venturebeat.com:
“It detects, classifies, and estimates the movement of objects surrounding the user, then uses the actuators to apply more or less pressure at various points depending on the user’s distance from objects. Even without vision, a user could distinguish between a mostly open path ahead, a wall to the left, and a person approaching from the front.”
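The quoted behavior, more pressure when an obstacle is closer and pressure applied on the side of the torso where the obstacle lies, can be sketched roughly as follows. The zone layout, distance range, and function names are illustrative assumptions, not Foresight's actual control scheme.

```python
# Rough sketch of mapping detected obstacles to actuator pressure levels.
# Zone layout, distance range, and scaling are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Obstacle:
    bearing_deg: float   # angle relative to straight ahead (-90 = left, +90 = right)
    distance_m: float    # estimated distance to the obstacle

# Hypothetical actuator zones across the front of the vest.
ZONES = {"left": (-90, -30), "center": (-30, 30), "right": (30, 90)}
MAX_RANGE_M = 5.0        # assumed: ignore anything farther than 5 m

def pressure_levels(obstacles: list[Obstacle]) -> dict[str, float]:
    """Return a 0..1 pressure level per zone: closer obstacle => more pressure."""
    levels = {zone: 0.0 for zone in ZONES}
    for obs in obstacles:
        if obs.distance_m >= MAX_RANGE_M:
            continue
        strength = 1.0 - obs.distance_m / MAX_RANGE_M
        for zone, (lo, hi) in ZONES.items():
            if lo <= obs.bearing_deg < hi:
                levels[zone] = max(levels[zone], strength)
    return levels

# Example: a wall close on the left, a person farther away straight ahead.
print(pressure_levels([Obstacle(-60, 0.8), Obstacle(0, 3.5)]))
# prints approximately {'left': 0.84, 'center': 0.3, 'right': 0.0}
```

Even in this simplified form, the output matches the scenario described in the quote: a strong push on the left for the nearby wall, a lighter push in the center for the approaching person, and nothing on the right where the path is clear.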
This vision-assisting technology does not merely help with navigation; it also helps the wearer maintain balance. It does, however, rely on a smartphone to provide the core camera and computer vision functionality. The phone is placed in a central position below the wearer's neck. Why this specific location and nowhere else? Because it lets the device evaluate the environment from the wearer's perspective and then issue commands to the vest over a Bluetooth connection. The product supports a wide range of smartphones, provided they meet the baseline requirements for camera quality and on-device AI processing.
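The article only says that commands travel over a Bluetooth connection; it does not describe the protocol. As a hedged sketch, sending the per-zone pressure levels from the phone to the vest might look something like the snippet below, where the device address, characteristic UUID, and payload format are placeholders and the bleak library stands in for whatever Bluetooth stack the team actually uses.

```python
# Illustrative sketch of pushing pressure levels to the vest over Bluetooth LE.
# The device address, characteristic UUID, and one-byte-per-zone payload are
# placeholders; the source only states that commands are sent via Bluetooth.
import asyncio
from bleak import BleakClient

VEST_ADDRESS = "AA:BB:CC:DD:EE:FF"                      # placeholder MAC address
PRESSURE_CHAR = "0000beef-0000-1000-8000-00805f9b34fb"  # placeholder UUID

async def send_pressures(levels: dict[str, float]) -> None:
    # Encode each zone's 0..1 level as a single byte (0..255).
    payload = bytes(int(255 * levels[z]) for z in ("left", "center", "right"))
    async with BleakClient(VEST_ADDRESS) as client:
        await client.write_gatt_char(PRESSURE_CHAR, payload)

asyncio.run(send_pressures({"left": 0.84, "center": 0.3, "right": 0.0}))
```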
The project is still in development. The team is refining the software and sensor technology to ensure that the environmental imaging is genuinely useful to the wearer. According to Foresight's team, "The finished solution will be 'another tool in their arsenal,' rather than fully replacing other assistive navigation technologies. There's no release date or pricing yet, but the team is working with the Harvard Innovation Lab's Venture Incubation Program to commercialize the design."
If this product reaches the commercial market, it could transform the world of visual aid and assistive navigation technology.
