News

Archaeology
Prehistoric puppy
Frozen canine may be an ancestor of dogs and wolves

This 18,000-year-old puppy, found in the Siberian permafrost, is remarkably well preserved – it still has its nose, fur, teeth and whiskers. A team at the Centre for Palaeogenetics in Sweden has been analysing the animal's rib bone, after the puppy was found last year at a site near Yakutsk in eastern Siberia.

So far, the researchers have determined that the animal is male and estimate that he was 2 months old. The puppy is now named Dogor, a Yakutian word for "friend". Yet DNA tests to determine whether it was a dog or a wolf have come up blank. If it was a dog, it may be the oldest found. But one of the team thinks it may be a common ancestor of both dogs and wolves.

"It's normally relatively easy to tell the difference between the two," David Stanton told CNN. "The fact that we can't might suggest that it's from a population that was ancestral to both – to dogs and wolves." ❚
Jessica Hamzelou
Artificial intelligence
Firms must explain AI decision-making

Businesses and other organisations could face multimillion-pound fines if they are unable to explain decisions made by artificial intelligence, under plans put forward by the UK's data watchdog this week.

The Information Commissioner's Office (ICO) said its new guidance was vital because the UK is at a tipping point where many firms are using AI to inform decisions for the first time. This could include human resources departments using machine learning to shortlist job applicants based on analysis of their CVs. The regulator says it is the first in the world to put forward rules on explaining choices taken by AI.
About two-thirds of UK financial services companies are using AI to make decisions, including insurance firms using it to manage claims, and an ICO survey shows that about half of the UK public are concerned about algorithms making decisions that humans would usually explain. AI researchers are already being called on to do more to unpack the "black box" nature of how machine learning arrives at results.

Simon McDougall at the ICO says: "This is purely about explainability. It does touch on the whole issue of black box explainability, but it's really driving at what rights do people have to an explanation. How do you make an explanation about an AI decision transparent, fair, understandable and accountable to the individual?"

The guidance, which is now out for consultation, tells organisations how to communicate explanations to people in a form that they will understand. Failure to do so could, in extreme cases, result in a fine of up to 4 per cent of a company's global turnover, under the European Union's data protection law.

Not having enough money or time to explain AI decisions won't be an acceptable excuse, says McDougall. "They have to be accountable for their actions. If they don't have the resources to properly think through how they are going to use AI to make decisions, then they should be reflecting on whether they should be using it at all."

He also hopes the step will result in firms that buy in AI systems, rather than building their own, asking more questions about how they work. The guidance is expected to take effect in 2020. ❚

Adam Vaughan