Machines see differently

News Astrobiology

A simple experiment could help find alien life on Europa

Jonathan O’Callaghan

FINDING amino acids on other worlds could be a sign of life there, but only if we can be sure they were produced by living things.

The subsurface oceans of icy moons such as Jupiter’s Europa and Saturn’s Enceladus might contain amino acids. But amino acids can also be made non-biologically, so how can we tell the two apart? One thing that would help is to rule out the presence of primordial amino acids, created when the solar system formed and before life was around. These have been found on comets.

“Amino acids are the building blocks of protein, and the chemistry of life as we know it,” says Ngoc Truong at Cornell University in New York. “If amino acids can be found in the ocean of Europa, could they be the relics of primordial synthesis processes?”

Warm oceans lurk under the icy surface of moons like Europa (Image: NASA)

In laboratory experiments, Truong and his colleagues found that some amino acids, aspartic acid and threonine in particular, could only be present in the warm, hydrothermally active oceans of icy moons if they were produced within the past million years (Icarus, doi.org/c5mt). So if any are detected, they would probably have been produced recently, cosmically speaking, and a biological origin is possible.

“This helps us understand which biosignatures we may wish to target for missions to Enceladus or Europa,” says Morgan Cable at NASA’s Jet Propulsion Laboratory in California. ❚

News Artificial intelligence

Machines see differently

We are beginning to grasp why AI is so easily fooled and how to stop it from happening, finds Linda Geddes

WHY did the machine think the turtle was a rifle? No, this isn’t a bad joke, but one of many recent examples of machines being tricked into seeing things that aren’t there. Artificial intelligence can be easily confused by so-called adversarial images, which contain seemingly innocuous changes that don’t affect what people see.

Like many others, Aleksander Mądry at the Massachusetts Institute of Technology thought this was a bug that would vanish with better algorithms or better ways to train these systems. But he and his colleagues have discovered that adversarial images seem to arise from features that we can’t perceive but machines can. Early indications are that, by understanding these features, it may be possible to stop such alterations from causing havoc.

Most adversarial images are baffling to an onlooker: two pictures that look identical to us are interpreted in different ways by an AI. Shown two apparently identical images of a cat, for instance, an AI will insist one of them is a dog. As a research experiment, fooling AIs can be amusing, but if a medical AI misses an obvious tumour in a scan, the results could be tragic.

Now Mądry and his team appear to have confirmed a long-held suspicion: AIs don’t view images the way humans do. Rather than relying on details like ear shape or nose length to classify images of animals, say, they use features that are imperceptible to us. “We don’t actually know what these features are – they may be big, or small – but the human brain doesn’t pick up on them,” says Mądry. The team calls them non-robust features because they seem to leave AIs particularly vulnerable to adversarial images, in which these features have presumably been disrupted to some degree.

To confirm that non-robust features were part of the problem, Mądry’s team took a standard collection of images of cats and dogs and generated a series of adversarial examples, which involves tweaking the pixels in each picture. They then used these altered images to train a new AI. Rather than producing something completely useless, the resulting AI got ordinary, non-adversarial examples right, meaning the non-robust features did help it to correctly identify cats and dogs, just not in adversarial images.
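The pixel-tweaking step is easier to picture with a rough sketch. The fragment below, in Python with PyTorch, is a minimal illustration rather than the team’s code: the toy classifier, the 32x32 image size, the number of gradient steps and the perturbation budget eps are all assumptions. It shows how a few tiny, bounded nudges to the pixels can push an image toward the opposite class while leaving it looking unchanged to a person; in the paper the article cites, images altered this way, relabelled to match the class they were pushed towards, form the training set for the new AI.

```python
# A minimal sketch, not the researchers' code. TinyNet, the image size, the step
# count and the budget `eps` are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in cat/dog classifier (class 0 = cat, class 1 = dog)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, 2)

    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))

def tweak_pixels(model, image, target_label, eps=0.03, steps=10, step_size=0.01):
    """Nudge `image` toward `target_label` with small gradient steps, keeping every
    pixel change within +/- eps so the picture still looks unchanged to a person."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), target_label)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # descend the loss toward the target class
            delta.clamp_(-eps, eps)                 # keep the overall change tiny
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

model = TinyNet()                    # assume this classifier has already been trained
cat = torch.rand(1, 3, 32, 32)       # placeholder for a photo labelled "cat"
dog_label = torch.tensor([1])        # the class the image is pushed towards
still_looks_like_cat = tweak_pixels(model, cat, dog_label)
# In the cited paper's setup, this tweaked image would now be labelled "dog" and
# added to the training set for a fresh model.
```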

Where we see a cat and a dog, an AI may not (Image: ANDREI SPIRACHE/GETTY)

The researchers then used an AI to point out which parts of an image it focuses on, and removed those parts before training another AI on the altered pictures, in effect forcing it to use more human-like cues to make its choice. The result was an AI with improved resistance to adversarial images, reaching a level usually only achieved when painstaking effort is used to correct an AI’s mistakes (arxiv.org/abs/1905.02175).

Understanding this is a good step towards AIs that can be safely deployed in the real world, says Pushmeet Kohli at research firm DeepMind. “The concept of these features is helpful, and it goes some way towards explaining the phenomenon,” says Marta Kwiatkowska at the University of Oxford. ❚
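The masking-and-retraining step described above can also be sketched, with the caveat that this is an illustration only, not the published method: a simple gradient-based saliency map stands in for “an AI pointing out which parts of an image it focuses on”, and the stand-in model, image size and the fraction of pixels removed are assumptions. The altered images it produces would then be used to train a second model.

```python
# Illustrative sketch only, not the published method. A gradient-based saliency map
# stands in for "an AI pointing out which parts of an image it focuses on"; the
# model, image size and drop_fraction are assumptions.
import torch
import torch.nn.functional as F

def strip_model_favoured_pixels(model, image, label, drop_fraction=0.1):
    """Zero out the pixels the model leans on most, returning the altered image."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A pixel's influence = magnitude of the loss gradient, summed over colour channels.
    influence = image.grad.abs().sum(dim=1, keepdim=True)         # shape [1, 1, H, W]
    cutoff = torch.quantile(influence.flatten(), 1.0 - drop_fraction)
    keep = (influence < cutoff).float()                           # 1 = keep, 0 = remove
    return (image * keep).detach()

# Usage with a very simple stand-in classifier; the altered images would then train a
# second model, nudging it toward more human-like cues.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])                                         # "cat"
altered = strip_model_favoured_pixels(model, image, label)
```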