TECHNOLOGY

Robots read the writing on the wall

A new breed of robot is using text-spotting software, dictionaries and internet access to learn to read anything, anywhere

Colin Barras

INGMAR POSNER wants to develop robots that can see through walls. Not by equipping them with X-ray specs or specialised radar technology, but simply by teaching them to read. “By reading a label on a closed door you can sometimes get a good idea of what can be found behind it,” says Posner, a roboticist at the University of Oxford. “Reading can help you detect things you cannot directly see.” Roboticists have spent years teaching their robots a range of skills to help them get by in the real world. Robots have learned to map their surroundings and to pick up and manipulate cumbersome objects. Some have even shown signs of becoming self-aware. But, remarkably, they remain illiterate.



With the written word so prevalent in the human world – from road signs to shop names – a non-reading robot trying to prove its worth is placed at a severe disadvantage, says Paul Newman, who works alongside Posner. Along with Peter Corke at the Queensland University of Technology in Brisbane, Australia, the team are trying to help robots level the playing field.

Teaching robots to read should, in principle, be relatively simple. After all, optical character recognition (OCR) software packages already exist. These automatically turn scanned images of books into text, and many researchers are using them to turn robots’ attention towards posters and signs on city streets. Last year, for example, Google launched Goggles, a smartphone application for just that task.

Since May, Goggles has been able to translate languages, helping tourists work out what to order from a menu, for instance. Good OCR software is only a partial solution, however. Goggles relies on the user to recognise text and point a phone’s camera at the words before the OCR software kicks in. Robots will not have the luxury of human help, and researchers have found that OCR software cannot pick out words embedded in a busy scene by itself. “The OCR software doesn’t cater for the fact that it might not be seeing text,” says Posner. “It tries its level best to force everything into text – brick walls, chimney stacks, everything.” The result is a nonsensical muddle.
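For a sense of what that looks like in practice, here is a minimal illustration of calling an off-the-shelf OCR package on an unsegmented image. It uses the open-source Tesseract engine via its Python wrapper as a stand-in for the kind of software described here; it is not the package Posner's team uses, and the filename is hypothetical.

from PIL import Image
import pytesseract

# Feed the OCR engine a whole street scene and, as Posner describes, it will
# dutifully try to turn brick walls and chimney stacks into text. It works far
# better on a cropped region already known to contain words.
text = pytesseract.image_to_string(Image.open("street_scene.png"))
print(text)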

To get round this problem, the team developed text-spotting software. This relies on the fact that there is often a horizontal area of uniform colour just above and below text on a sign, but lots of two-tone colour variation within the text itself. Once the software has identified text, an image of it is passed on to the OCR software to read.
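The article does not spell out the implementation, but the heuristic can be sketched roughly as follows. This is a minimal version assuming a grayscale image held as a 2D numpy array; the function names, window height and threshold are illustrative choices, not details of Posner's system.

import numpy as np

def text_band_score(img: np.ndarray, top: int, bottom: int) -> float:
    """Score a horizontal band of a grayscale image as likely text.

    High score = near-uniform rows just above and below the band,
    but strong two-tone variation (ink versus background) within it.
    """
    margin = max(1, (bottom - top) // 2)
    above = img[top - margin:top]
    below = img[bottom:bottom + margin]
    band = img[top:bottom]
    outside_var = (above.var() + below.var()) / 2 + 1e-6  # avoid divide-by-zero
    inside_var = band.var()  # high for two-tone text, low for flat regions
    return inside_var / outside_var

def find_text_bands(img: np.ndarray, height: int = 12, threshold: float = 4.0):
    """Slide a fixed-height window down the image; keep high-scoring bands."""
    return [top for top in range(height, img.shape[0] - 2 * height)
            if text_band_score(img, top, top + height) > threshold]

A real system would also need to localise text horizontally and merge overlapping detections, but the variance ratio captures the core idea: text regions stand out against the calm strips that usually frame them.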

Even then, the results returned by the OCR software are often error-strewn. So the team has loaded their test robot, Marge, with a dictionary and spellchecker. This allows it to work out that “roodbond” is most likely a misreading of “broadband”, while “nqkio” should be read as “nokia”.
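This kind of fuzzy dictionary lookup is a one-liner with Python's standard-library difflib; a minimal sketch, with an illustrative word list rather than the team's actual dictionary:

from difflib import get_close_matches

DICTIONARY = ["broadband", "nokia", "restaurant", "bank"]  # illustrative

def correct(word: str, cutoff: float = 0.6) -> str:
    """Return the closest dictionary entry to a noisy OCR reading,
    or the word unchanged if nothing is close enough."""
    matches = get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(correct("roodbond"))  # -> broadband
print(correct("nqkio"))     # -> nokia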

To understand names it reads in its environment, Marge turns to news websites, such as The New York Times and BBC Online. The robot trawls the sites for appearances of the word it has read, and analyses how often keywords like “restaurant” or “bank” appear in the same stories. This allows it to make strong semantic connections between frequent matches. Using this approach, Marge has learned that Strada is a UK restaurant chain, and that Barclays is a UK bank.
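A minimal sketch of that co-occurrence count, assuming the stories mentioning the word have already been fetched, and using illustrative category keywords rather than the team's actual lists:

from collections import Counter

CATEGORY_KEYWORDS = {
    "restaurant": ["restaurant", "menu", "chef", "dining"],
    "bank": ["bank", "account", "lender", "branch"],
}

def guess_category(stories):
    """Label a name by whichever category's keywords co-occur
    with it most often across the fetched stories."""
    counts = Counter()
    for story in stories:
        text = story.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            counts[category] += sum(text.count(k) for k in keywords)
    return counts.most_common(1)[0][0]

stories = ["Strada has opened a new restaurant, with a menu of seasonal dishes."]
print(guess_category(stories))  # -> restaurant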

With those systems in place, Marge is now ready to read and exploit text in the world in the same way a human does – a “seriously exciting” prospect, says Posner. The work was presented at the International Conference on Intelligent Robots and Systems in Taipei, Taiwan, last month.

One potential problem is identifying words that are difficult to read because of the viewing perspective, says Majid Mirmehdi at the University of Bristol, UK, whose team has developed its own software to help robots read. Words printed on a curved surface can appear distorted, making them tricky for a robot to understand. Mirmehdi’s team is working on improving their software to overcome this, so a humanoid robot with dextrous hands can manipulate objects – like cylindrical paint cans – to read them more easily.

Posner hopes his team’s work will allow mobile robots to carry out tasks more easily by following signs the same way a human can. A search-and-rescue robot, for example, wouldn’t need to gradually build its own map of a building – it could read any available signs to find its way around.

While such end results are still a long way off, other roboticists agree that reading projects are worth pursuing. The work is “refreshingly original in the robotics context”, says Gregory Dudek at McGill University in Montreal, Canada. “I personally believe that exploiting OCR methods in a mobile robotics context makes a lot of sense,” he adds. “In fact, once you reflect on it, there is no doubt it will be highly useful.”



Could radar pick out a suicide bomber in a crowd?

THE radar guns police use to spot speeding motorists have inspired a version that aims to identify a would-be suicide bomber in a crowd.

A radar gun fires microwave pulses at a car and measures the Doppler shift of the reflected signal to calculate its velocity. However, the strength of the reflected signal – the “radar cross section” – can provide additional information about the size and shape of the reflecting object and the material it is made from.
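The Doppler half of that relationship is standard physics: a target moving at speed v towards the gun shifts the reflected carrier by roughly 2vf0/c. A worked example with illustrative numbers:

C = 3.0e8  # speed of light, m/s

def speed_from_doppler(delta_f_hz, f0_hz):
    """Invert the two-way Doppler shift, delta_f = 2 * v * f0 / c,
    to recover the target's speed in m/s."""
    return C * delta_f_hz / (2 * f0_hz)

# A 2 kHz shift on a 10 gigahertz carrier corresponds to 30 m/s (about 108 km/h).
print(speed_from_doppler(2e3, 10e9))  # -> 30.0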

William Fox of the Naval Postgraduate School in Monterey, California, and John Vesecky of the University of California, Santa Cruz, wondered whether the wiring in a suicide vest would alter the radar cross section of a suicide bomber enough to allow a radar gun to pick him or her out in a crowd. The pair used software to simulate how radar signals at 1 gigahertz and 10 gigahertz would be reflected by the most common wiring arrangements used by bombers. They found that the clearest reflected signals were in the 10 gigahertz range.

Together with colleague Kenneth Laws, they then fired low-power 10 gigahertz radar pulses at groups of volunteers, some wearing vests wired up like suicide vests. About 85 per cent of the time, the reflected signals allowed them to correctly identify a “bomber” up to 10 metres away (Journal of Defense Modeling and Simulation, DOI: 10.1177/1548512910384604).

The team hopes the US army will fund further development of the system, including refinements to avoid false alarms being triggered by metal in underwired bras, jewellery and earphone leads. Overcoming false alarms is a major challenge, says Sam Pumphrey of the UK-based research and development company Cambridge Consultants, which is developing a radar system to detect explosives that may have been concealed within the walls of buildings as they were constructed. He thinks a bomb detection system that relies on radar guns alone might well be prone to errors.

Fox agrees. He says that radar can be used in combination with other technologies, including smart surveillance cameras that can identify suspicious behaviour, and infrared imaging, which exploits the fact that explosives belts are often cooler than the body. Such a system could help security staff spot bombers from afar and discreetly begin an evacuation.

Paul Marks