Technology
Tom Zahavy, Nir Ben Zrihem and Shie Mannor
The mind of a machine
Deep-learning algorithms are enormously successful, but we don’t always know why. Aviva Rutkin peers inside an artificial brain

A PENNY for ’em? Knowing what someone is thinking is crucial for understanding their behaviour. It’s the same with artificial intelligences. A new technique for taking snapshots of neural networks as they crunch through a problem will help us fathom how they work, leading to AIs that work better – and are more trustworthy.

In the last few years, deep-learning algorithms built on neural networks – multiple layers of interconnected artificial neurons – have driven breakthroughs in many areas of artificial intelligence, including natural language processing, image recognition, medical diagnoses and beating a professional human player at the game Go.

The trouble is that we don’t always know how they do it. A deep-learning system is a black box, says Nir Ben Zrihem at the Israel Institute of Technology in Haifa. “If it works, great. If it doesn’t, you’re screwed.”

Neural networks are more than the sum of their parts. They are built from many very simple components – the artificial neurons. “You can’t point to a specific area in the network and say all of the intelligence resides there,” says Zrihem. But the complexity of the connections means that it can be impossible to retrace the steps a deep-learning algorithm took to reach a given result. In such cases, the machine acts as an oracle and its results are taken on trust.

To address this, Zrihem and his colleagues created images of deep learning in action. The technique, they say, is like an fMRI for computers, capturing an algorithm’s activity as it works through a problem. The images allow the researchers to track different stages of the neural network’s progress, including dead ends.

To get the images, the team set a neural network the task of playing three classic Atari 2600 games: Breakout, Seaquest and Pac-Man. They collected 120,000 snapshots of the deep-learning algorithm as it played each of the games. They then mapped the data using a technique that allowed them to compare the same moment in repeated attempts at a game.

The results look a lot like scans of real brains (pictured below: Seaquest on the left, and Pac-Man). But in this case, each dot is a “game state”, a snapshot of a single game at a moment in time. Different colours show how well the AI was doing at that point in the game. With Breakout, for example – where the player must knock a hole through a wall of brightly coloured blocks with a paddle and a ball – the team was able to identify a clear banana-shaped region in one map showing every time the algorithm tried tunnelling through the blocks to force the ball to the top of the wall, a winning tactic that the neural network had figured out by itself. Mapping the playthroughs let the team trace how the algorithm successfully applied it in successive games.

Seaquest, where the player has to avoid, collect or destroy various items and pick up underwater divers, is harder for AIs to tackle. Using the maps, the team unravelled numerous failed approaches, like waiting too long to rescue errant divers. The details could be useful when retraining the algorithm, says Zrihem.

Building the perfect game strategy is fun, but scans like these could help us hone algorithms designed to solve real problems, says Jeff Clune at the University of Wyoming in Laramie. Clune’s own studies of the inner workings of image-recognition algorithms have led him to create “illusions” that can trick a neural network into thinking something is there when it’s not.
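The mapping step described above – collapsing thousands of high-dimensional network snapshots onto a 2-D map where each dot is one game state – can be sketched in a few lines. The article doesn’t name the technique (the researchers’ paper used t-SNE, a nonlinear method); the sketch below uses PCA as a simpler, dependency-free stand-in, and the array shapes and variable names are illustrative assumptions, not the team’s actual pipeline.

```python
import numpy as np

# Hypothetical stand-in data: one activation vector from the network's
# last hidden layer per game state, plus a score for colouring each dot.
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 512))  # (n_states, n_neurons)
scores = rng.uniform(size=1000)             # how well the AI was doing

def embed_2d(x):
    """Project high-dimensional game states onto a 2-D map with PCA.

    The researchers used t-SNE, which preserves local neighbourhoods
    better; PCA is shown here only as a minimal linear sketch.
    """
    centred = x - x.mean(axis=0)
    # The principal directions are the top right-singular vectors of the
    # centred data matrix, obtained via SVD.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T  # (n_states, 2) map coordinates

coords = embed_2d(activations)
# Each row of `coords` is one dot on the map; `scores` supplies its colour.
```

Clusters in such a map (like the banana-shaped tunnelling region in Breakout) correspond to game states the network treats as similar, even when they come from different playthroughs.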
For example, a security algorithm might have a flaw that means it’s easily fooled in certain situations, or an algorithm designed to decide if someone gets a bank loan might be prejudiced against people of a particular race or gender. “If you’re deploying this technology in the real world, you want to understand how it works and where it might fail,” says Clune. “If we can understand neural networks better, then we can understand their weaknesses – and improve their strengths.”

–Artificial brains on games–

22 | NewScientist | 20 February 2016