Tiny camera lets blind people read with their fingers

For more technology stories, visit newscientist.com/technology

NO BRAILLE? No problem. A tiny camera worn on the fingertip could help blind people read more easily.

To read printed material, many visually impaired people use apps that translate text to speech. Snap a picture and the app reads the page aloud. But users can find it difficult to photograph texts, and these apps have trouble parsing complex layouts, such as a newspaper or menu.

Jon Froehlich and his colleagues at the University of Maryland have developed a device, nicknamed HandSight, that uses a tiny camera originally developed for endoscopies. Measuring just one millimetre across, the camera sits on the tip of the finger. As the wearer follows a line of text with their finger, a computer reads it out. Audio cues or haptic buzzes help guide them, for example changing pitch or gently vibrating to nudge their finger into the correct position (doi.org/bsvn).

Down the line, the creators of HandSight imagine a smartwatch-like device that blind people could use to discern other visual characteristics, like colours and patterns. "They're already using fingers all of the time to explore the physical world," says Froehlich. His team hopes that HandSight will "augment the fingers with vision to allow visually impaired users to get a sense of the non-tactile world". Aviva Rutkin
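To picture how such a guidance loop might work, here is a minimal Python sketch of the pitch cue the article describes: a tone that rises as the finger drifts above the line of text and falls as it drifts below. The base pitch, thresholds and function names are illustrative assumptions, not HandSight's actual implementation.

    # Illustrative sketch only (not HandSight's code): map the finger's
    # vertical drift from the current line of text to an audio pitch cue.
    # All constants are assumed values chosen for the example.

    BASE_PITCH_HZ = 440.0   # tone heard when the finger is centred on the line
    DEADBAND_PX = 3         # drift this small needs no correction

    def guidance_pitch(drift_px: float) -> float:
        """Return a cue pitch: higher when the finger drifts above the
        line, lower when below, unchanged inside the dead band."""
        if abs(drift_px) <= DEADBAND_PX:
            return BASE_PITCH_HZ
        # Shift pitch in proportion to drift, capped at one octave up or down.
        factor = max(-1.0, min(1.0, drift_px / 30.0))
        return BASE_PITCH_HZ * (2.0 ** factor)

    # Example: finger 15 px above the line gives a noticeably higher tone.
    print(round(guidance_pitch(15.0)))   # ~622 Hz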

–Make a point of reading–

–Learning is child’s play for AI–

Google’s DeepMind AI discovers physics

PUSH it, pull it, break it, maybe even give it a lick. Children experiment this way to learn about the physical world. Now, artificial intelligence trained by researchers at Google's DeepMind and the University of California, Berkeley, is taking its own baby steps in this area.

"Many aspects of the world, like 'Can I sit on this?' or 'Is it squishy?' are best understood through experimentation," says DeepMind's Misha Denil. He and his colleagues have trained an AI to learn about the physical properties of objects by interacting with them in two different virtual environments (arxiv.org/abs/1611.01843v1).

In the first, the AI was faced with five blocks that were the same size but had a random mass that changed each time the experiment was run. The AI was rewarded if it correctly identified the heaviest block and given negative feedback if it was wrong. By repeating the experiment, the AI worked out that the only way to determine the heaviest block was to interact with all of them before making a choice.

The second experiment also featured blocks, but this time they were arranged in a tower. The AI had to work out how many distinct blocks there were, again receiving positive or negative feedback depending on its answer. Over time, the AI learned it had to interact with the tower – essentially pulling it apart – to work out the correct answer.

It's not the first time AI has been given blocks to play with. Earlier this year, Facebook used simulations of blocks to teach neural networks how to predict whether a tower would fall over. The technique of training computers using positive and negative feedback is called deep reinforcement learning, an approach that DeepMind is well known for. In 2014, it used the method to train an AI to play Atari video games better than humans.
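The reward scheme described here, +1 for the right answer and -1 for the wrong one, can be sketched in a few lines of Python. The toy environment and simple value-learning rule below are assumptions for illustration only; Denil's team used deep reinforcement learning in far richer simulated worlds. The point the sketch shows is that, from reward alone, an agent learns that probing every block before answering beats guessing blind.

    # Toy sketch (assumed setup, not the paper's environment): an agent
    # must name the heaviest of five blocks whose masses are re-randomised
    # every episode. Reward is +1 if correct, -1 if wrong.
    import random

    N_BLOCKS = 5
    EPISODES = 5000
    EPSILON = 0.1    # exploration rate (assumed value)
    ALPHA = 0.05     # learning rate (assumed value)

    def run_episode(probe_first: bool) -> int:
        """One trial: return +1 for a correct answer, -1 otherwise."""
        masses = [random.random() for _ in range(N_BLOCKS)]
        heaviest = max(range(N_BLOCKS), key=lambda i: masses[i])
        if probe_first:
            # Probing every block reveals its mass, so the answer is exact.
            guess = heaviest
        else:
            # Answering without interacting is a blind guess.
            guess = random.randrange(N_BLOCKS)
        return 1 if guess == heaviest else -1

    # Value estimates for two strategies: 0 = answer now, 1 = probe all first.
    values = [0.0, 0.0]
    for _ in range(EPISODES):
        if random.random() < EPSILON:
            action = random.randrange(2)                   # explore
        else:
            action = max((0, 1), key=lambda a: values[a])  # exploit
        reward = run_episode(probe_first=bool(action))
        values[action] += ALPHA * (reward - values[action])

    # Probing converges towards +1.0; blind guessing towards its expected
    # reward of 0.2 * 1 + 0.8 * (-1) = -0.6.
    print(values)

Under these assumptions, the agent's value estimate for "probe everything first" climbs towards +1, echoing the behaviour Denil's team reports in its far more complex setting.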

The company was subsequently bought by Google.

"Reinforcement learning allows solving tasks without specific instructions, similar to how animals or humans are able to solve problems," says Eleni Vasilaki at the University of Sheffield, UK. "As such, it can lead to the discovery of ingenious new ways to deal with known problems, or to finding solutions when clear instructions are not available."

The virtual world in the research is only very basic. The AI has a small set of possible interactions and doesn't have to deal with the distractions or imperfections of the real world. But it is still able to solve the problems without any prior knowledge of the objects' physical properties, or the laws of physics.

Ultimately, this work will be useful in robotics, says Jiajun Wu at the Massachusetts Institute of Technology. For example, this method of learning could help a robot figure out how to navigate precarious terrain.

"I think right now concrete applications are still a long way off, but in theory, any application where machines need an understanding of the world that goes beyond passive perception could benefit from this work," says Denil. Timothy Revell