Troll hunters

TECHNOLOGY

Bots to watch over us (Image: Artur Debat/Getty)

What can we do to stop online harassment? Bots could help where humans have failed, finds Sally Adee

WHEN Ghostbusters actor Leslie Jones was hounded off Twitter last month, having braved several days of racist and misogynistic abuse, many people decried the social network’s failure to enforce its policies. If a star of a Hollywood blockbuster can be treated like that, what hope is there for the rest of us?

For some, such cases show it’s time for a new approach to dealing with online abuse. “Social media is a shitshow,” says Libby Hemphill, an online communication researcher at Illinois Institute of Technology.

The statistics may feel familiar: a 2014 study by Pew Research in Washington DC showed that 40 per cent of internet users have been harassed and 66 per cent of those said the most recent instance was on social media. Since then, despite many promises made by internet companies, efforts to curb online harassment using human moderation have fallen flat. Last year, Twitter’s then CEO Dick Costolo took “personal responsibility” for the continuing abuse problems on his site. “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” he told employees in an internal memo.

But the problems haven’t stopped. That’s because social networks are too large to police by hand and their approach tends to be reactive rather than proactive. So an increasing number of people are turning to another solution: bots.

The simplest way bots can help is with block lists, which specify the accounts you don’t want to see in your feed. You can block accounts yourself. But reporting them to prevent harassment of others is a hassle. For example, a form must be filled out for each abusive tweet. Apart from being slow, it’s also unpleasant – someone may have to trawl through hundreds of personal slurs, reporting each individually.

It would be better if you didn’t receive abusive messages in the first place – something bots could help with by managing block lists automatically. Subscribe to a blockbot, which continually updates a list of accounts blocked by other users, and you should receive less invective. But that approach only works if someone adds an abusive account to the block list.
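The mechanics are simple enough to sketch. Below is a minimal illustration, in Python, of how a blockbot might pool the block lists of contributing users and filter a subscriber’s feed; the account names and data structures are hypothetical, and a real service would apply the list through the social network’s blocking API for each subscriber rather than filter messages locally.

```python
# Minimal sketch of a shared block list ("blockbot"), with made-up account names.
# A real bot would apply the list via the platform's blocking API for each subscriber.

class BlockBot:
    """Pools the block lists of contributing users into one shared list,
    so no one has to report or block every abusive account individually."""

    def __init__(self):
        self.shared_blocklist = set()

    def contribute(self, blocked_accounts):
        # Any account blocked by a contributor joins the shared list.
        self.shared_blocklist |= set(blocked_accounts)

    def filter_feed(self, feed):
        # Subscribers see only messages from accounts not on the shared list.
        return [msg for msg in feed if msg["author"] not in self.shared_blocklist]


if __name__ == "__main__":
    bot = BlockBot()
    bot.contribute({"troll_1", "troll_2"})   # hypothetical contributors' blocks
    bot.contribute({"troll_2", "troll_3"})

    feed = [
        {"author": "friend_1", "text": "hello"},
        {"author": "troll_3", "text": "invective"},
    ]
    print(bot.filter_feed(feed))  # only friend_1's message remains
```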


Is it possible to automate the detection of harassment? Hemphill and her colleagues tried to do this by first asking people on crowdsourcing platform Mechanical Turk to identify instances of abuse. But they hit a snag: there was less agreement between crowdworkers than they would have liked. “Humans don’t agree on what constitutes harassment,” she says. “So it’s really hard to train computers to detect it.”
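The difficulty can be illustrated in a few lines of code. The sketch below, using invented labels rather than the study’s data, computes mean pairwise agreement between crowdworkers; when that number is low, the labels used to train a classifier are noisy, which is why harassment detection is so hard to automate.

```python
# Illustrative only: made-up crowdworker labels, not the researchers' data.
# 1 = "this message is harassment", 0 = "it is not".

from itertools import combinations
from statistics import mean

labels = {
    "msg_1": [1, 1, 0],
    "msg_2": [0, 0, 0],
    "msg_3": [1, 0, 0],
    "msg_4": [1, 1, 1],
}

def pairwise_agreement(votes):
    """Fraction of annotator pairs that gave the same label to a message."""
    pairs = list(combinations(votes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

overall = mean(pairwise_agreement(v) for v in labels.values())
print(f"mean pairwise agreement: {overall:.2f}")
# A low score means the training labels are inconsistent, so any classifier
# trained on them will struggle to detect harassment reliably.
```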

ONE PER CENT

Spot the famous face
A million celebrities are helping to train facial recognition software. A team at Microsoft Research in Redmond, Washington, has built the largest ever publicly available training set of faces – consisting of 10 million images – by using photos of famous people of all races taken from the web. Celebs are useful because there are many photos of their faces from different angles.

“Democratizing information has never been more vital, and Wikileaks has helped. But their hostility to even modest curation is a mistake.” Whistleblower Edward Snowden clashes with Wikileaks over its release of documents that included people's personal information

This is not a car seat
Will we be fazed by cars with no one behind the wheel? A team led by Dirk Rothenbuecher at Stanford University has tried to answer the question using a car driven by a person disguised as a car seat. The team found that pedestrians weren’t too bothered – until the car drove on to a pedestrian crossing as they were about to cross.



This squares with research on online abuse. Sometimes it is malicious, intentional and directed against specific minorities. But other times it functions more as a way to signal group affiliation and affinity. “Abuse is less of a problem for niche sites than a catch-all platform like Twitter,” says Hemphill. “Group dynamics and norms of behaviour have already been established there.”

Feed the trolls

Enter the argue-bots. These distract trolls from their human victims by drawing their attention and engaging with them, often with entertaining results. One, called @Assbot, recombined tweets from its human creator’s archive into random statements and then used these to respond to tweets coming from Donald Trump. The result was a torrent of angry Trump supporters engaging with a bot spouting nonsense.

@Assbot simply deployed a mishmash of existing tweets. But what if it had been smarter? Kevin Munger at New York University is interested in group identity on the internet. Offline, we signal which social groups we belong to with things like in-jokes, insider knowledge, clothes, mannerisms and so on. When we communicate online, all of that collapses into what we type. “Basically, the only way to affiliate yourself is with the words you use,” says Munger.

So Munger wondered if he could create a bot to manipulate a troll’s sense of group dynamics online. The idea was to create bots that would admonish people who tweeted racist comments – by impersonating a higher-status individual from their in-group.

First he found his racists. He identified Twitter accounts that had recently issued a racist tweet, then combed through their previous 1000 tweets to check that the user met his standards for abuse and racism. “I hand-coded all of them to make sure I didn’t have false positives,” he says.

He then created four bot accounts, each with a different identity: white male with many followers, white male with few followers, black male with many followers and black male with few followers. To make his automated account look legit, he bought dummy followers from a website. “They were $1 for 500,” he says.

At first, people turned on the bots. It was unnerving, he says. “The most common response was ‘kill yourself’.” But something seems to have sunk in. After a short-term increase in racist language, he found that abusers who were admonished by a bot that appeared to be a high-status white male reduced their use of racist slurs. In the month after the intervention, these people tweeted the n-word 186 fewer times on average than those sanctioned by a bot that appeared to be a low-status white male or a black male.

“It’s useful to know that such an impersonator-bot can reduce the use of slurs,” says Hemphill. “I’m surprised the effects didn’t decay faster.”

It doesn’t work on everybody, however. “The committed racists didn’t stop being racist,” says Munger. Another problem, as Hemphill found, is that identifying abuse is hard. Munger wanted to target misogyny as well as racism but gave up when he found words like “bitch” and “whore” were so widespread that it was impossible to distinguish genuine abuse from casual chat.

There is also an inherent weakness in the system. “The more people become aware that these are out there, the less effective they’ll be,” says Munger.
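For readers who want to picture the design, here is a rough sketch of the kind of randomised set-up described above; the account handles, counts and condition names are hypothetical placeholders, not Munger’s code or data.

```python
# Rough sketch of the experimental design described above; handles, counts and
# slur tallies are invented placeholders, not the study's actual code or data.

import random
from statistics import mean

BOT_CONDITIONS = [
    "white_male_many_followers",
    "white_male_few_followers",
    "black_male_many_followers",
    "black_male_few_followers",
]

def assign_conditions(eligible_accounts):
    """Randomly assign each hand-verified abusive account to one bot identity."""
    return {handle: random.choice(BOT_CONDITIONS) for handle in eligible_accounts}

def mean_change(before, after, assignment, condition):
    """Average change in slur count per account for one condition
    (month after the bot's admonishment minus month before)."""
    deltas = [after[h] - before[h] for h, c in assignment.items() if c == condition]
    return mean(deltas) if deltas else 0.0

if __name__ == "__main__":
    accounts = ["@troll_a", "@troll_b", "@troll_c", "@troll_d"]  # hypothetical
    assignment = assign_conditions(accounts)
    before = {h: 10 for h in accounts}                    # toy slur counts
    after = {h: random.randint(0, 10) for h in accounts}
    for cond in BOT_CONDITIONS:
        print(cond, mean_change(before, after, assignment, cond))
```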
For now, Hemphill thinks it’s the best we can do. “Nothing else is working,” she says. “We may as well start using bots.” But Munger doesn’t want bots to be the endgame. “I don’t envision an army of bots telling people to behave themselves,” he says. “I’m looking for what works, so we can figure out what kind of language and moral reasoning works best to stop a racist troll.”

Munger is now looking at politics-based abuse. He has his work cut out.

WHO CONTROLS THE BOTS?

Bots may be set to tackle harassment online (see main story), but methods to deflect abuse and manipulate behaviour could themselves be abused. Who should control them? For Libby Hemphill at Illinois Institute of Technology, the best answer is to put them in the hands of Twitter or Facebook so they can police their own communities. Yet she has misgivings about the ethics of manipulating people’s behaviour in this way, especially when it is done with a bot masquerading as a human.

Bots might also be attractive to authorities that want to change behaviour online in their favour, especially in light of recent crackdowns. Turkey’s government has been accused of monitoring Twitter for thoughtcrimes. According to New York University researcher Zeynep Tufekci, there are now cases against about 2000 people for insulting the president online. And after the Dallas police shootings, four men were arrested in Detroit for making anti-police comments on social media.
