In “Secondhand Spoke,” the 15th episode of Family Guy’s 12th season, teenage son Chris Griffin is being bullied. With Chris unable to come up with answers to his classmates’ verbal jibes, his baby brother Stewie, the smartest member of the family, hops into a backpack so that Chris can carry him around and feed him comebacks. Thanks to Stewie, Chris not only manages to fire back at his bullies, but even winds up being nominated for class president for his troubles.
That Family Guy B-plot is superficially similar to a new project undertaken by Intel and the University of Georgia. This one, however, tackles a far trickier problem: a smart backpack capable of helping its wearer better navigate any environment, all through the power of speech.
What Jagdish Mahendran and his team have built is an AI-powered, voice-activated backpack designed to help its wearer experience the world around them. The backpack, which could be particularly useful as an alternative to guide dogs for visually impaired users, pairs a camera (worn in a vest jacket) with a fanny pack (containing a battery pack) and a computing unit, so that it can respond to voice commands by describing the world around the wearer.
That means being able to detect visual information about traffic signals, traffic conditions, elevation changes, and crosswalks, along with location information, and then converting it all into useful details delivered via Bluetooth earphones.
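To make the idea concrete, here is a minimal sketch of that last step: turning detection results into a spoken-style description. All names, thresholds, and the `Detection` structure are illustrative assumptions for this article, not the team’s actual code.

```python
# Hypothetical sketch: converting raw detections into a sentence that could
# be sent to the wearer's Bluetooth earphones via text-to-speech.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "crosswalk", "bicycle", "stop sign"
    bearing_deg: float  # angle from straight ahead; negative = left
    distance_m: float   # distance to the object in meters

def direction(bearing_deg: float) -> str:
    """Map a bearing to a coarse spoken direction (thresholds are assumptions)."""
    if bearing_deg < -20:
        return "to your left"
    if bearing_deg > 20:
        return "to your right"
    return "ahead"

def describe(detections: list[Detection]) -> str:
    """Describe detections nearest-first, so urgent information is spoken first."""
    parts = [
        f"{d.label} {direction(d.bearing_deg)}, {d.distance_m:.0f} meters"
        for d in sorted(detections, key=lambda d: d.distance_m)
    ]
    return "; ".join(parts) if parts else "path clear"

print(describe([
    Detection("crosswalk", 5.0, 12.0),
    Detection("bicycle", -30.0, 4.0),
]))
# → bicycle to your left, 4 meters; crosswalk ahead, 12 meters
```

Sorting by distance is one plausible design choice: when several objects are detected at once, the wearer hears the most pressing one first.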
A useful accessory
“The idea of developing an AI-based visual-assist system came to me eight years ago, in 2013, during my master’s,” Mahendran told Digital Trends. “But I couldn’t make much progress then for [a few] reasons: I was new to the field, and deep learning was not mainstream in computer vision. The real inspiration came last year when I met my friend, who is blind. As she was explaining her daily challenges, I was struck by the irony: As a perception and AI engineer, I’ve been teaching robots how to see for years, while there are people who can’t see. That motivated me to use my expertise and create a perception system that could help.”
The system incorporates some impressive technology, including a Luxonis OAK-D spatial AI camera that leverages OpenCV’s Artificial Intelligence Kit, powered by Intel. It is capable of running advanced deep learning neural networks while providing high-level computer vision functionality, including a real-time depth map, color information, and more.
“The strength of the project is that we are able to run many complex AI models on a simple, small-form-factor setup, and cost-effectively, thanks to the OAK-D camera kit powered by Intel’s Movidius VPU, an Intel AI chip, along with the OpenVINO software toolkit,” said Mahendran. “In addition to AI, I have used many technologies, such as GPS, point cloud processing, and voice recognition.”
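The depth map Mahendran mentions is dense per-pixel data, far too much to speak aloud directly. A common simplification, sketched below under assumed names and thresholds (this is not the project’s actual code), is to reduce the map to a coarse left/center/right obstacle summary.

```python
# Illustrative sketch: collapsing a depth map (meters per pixel) into a
# nearest-obstacle reading for three horizontal zones, then into warnings.

def zone_minima(depth_map: list[list[float]]) -> dict[str, float]:
    """Split each row into left/center/right thirds and keep the nearest
    (smallest) depth reading per zone, in meters."""
    width = len(depth_map[0])
    third = width // 3
    zones = {"left": float("inf"), "center": float("inf"), "right": float("inf")}
    for row in depth_map:
        zones["left"] = min(zones["left"], min(row[:third]))
        zones["center"] = min(zones["center"], min(row[third:2 * third]))
        zones["right"] = min(zones["right"], min(row[2 * third:]))
    return zones

def warnings(depth_map: list[list[float]], threshold_m: float = 2.0) -> list[str]:
    """Spoken-style warnings for zones with an obstacle closer than the threshold."""
    return [f"obstacle {zone}, {d:.1f} meters"
            for zone, d in zone_minima(depth_map).items() if d < threshold_m]

# Tiny demo map: something is about 1.2 m away in the center of the view.
demo = [
    [5.0, 5.0, 1.2, 4.0, 6.0, 6.0],
    [5.0, 4.8, 1.5, 4.2, 6.0, 6.0],
]
print(warnings(demo))
# → ['obstacle center, 1.2 meters']
```

The 2-meter threshold is an arbitrary assumption here; a real system would tune it to walking speed and would also smooth readings over time to avoid false alarms from sensor noise.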
Currently in the testing phase
As with any wearable device, a major challenge involves making it something that people would actually want to wear. Nobody wants to look like a science-fiction cyborg outside of Comic-Con.
Fortunately, Mahendran’s AI vest does well by these criteria. It conforms to the standard that the late Xerox PARC computer scientist Mark Weiser described as essential for ubiquitous computing: receding into the background without drawing attention to itself. The components are all hidden from view, with even the camera (which, by necessity, must be able to see the world to record the required images) peering out through three small holes in the vest.
“The system is simple, wearable, and unobtrusive, so that the user does not attract unnecessary attention from other pedestrians,” Mahendran said.
Currently, the project is in the testing phase. “I did the initial [tests myself] in the city of Monrovia, California,” said Mahendran. “The system is robust, and can run in real time.”
Mahendran said that, in addition to detecting outdoor obstacles, from bikes to overhanging tree branches, the system can also be useful in indoor settings, flagging hazards such as open kitchen cabinet doors and the like. In the future, he hopes that members of the public who need this kind of equipment will be able to try it out for themselves.
“We have already formed a team called Mira, which is a group of volunteers from different backgrounds, including blind people,” Mahendran said. “We are ramping up the project with the mission of providing an open-source, AI-based visual-assistance system for free. We are currently in the process of raising funds for our initial phase of testing.”