It was February 2012 when Vora, then a 22-year-old student, learned that photos showing her naked or partially clothed were circulating on the Internet. The culprit was an ex-boyfriend she’d dated on and off for four years and had known since childhood.

Photos she’d sent him during their long-distance relationship were soon posted on more than 300 websites, including Tumblr, Flickr and Facebook, and her friends, family and neighbors were invited to view them. Some of the posts gave her name, address and phone number. Strangers were coming by her house.

Online harassment isn’t new. From the earliest message boards to the newest social apps, if there’s a way for people to say something, you can bet someone will say something awful. But it’s gotten even worse. Those operating in the shadows can now connect to billions of users through Facebook, Twitter and Reddit, and disseminate racist and hate-filled messages. Some publish disturbing images of murder, child exploitation and sexual abuse while others resort to so-called revenge porn to humiliate former lovers. Perhaps most distressing: A few threaten rape and other forms of violence, then release their victims’ addresses and phone numbers so strangers can terrorize their targets even further.

“Dangerous people are everywhere, but when they have the power of anonymity behind them and the power of distance, they become more dangerous,” says Karen Riggs, a professor of media arts and studies at Ohio University. “It’s part of human nature: We have people who will be abusive and lurid.”

Policing the wild frontier

Never before have so many people come together as they have on Twitter and Facebook. More than 300 million people use Twitter every month, while more than 1.4 billion sign on to Facebook.

For the past decade, the social networks have been working behind the scenes to police their sites.

Monika Bickert, Facebook’s head of global product policy, and Ellen Silver, who runs the company’s global operations, help the world’s largest social network fight a barrage of abusive, pornographic and racist posts. Bickert’s team sets the rules about the types of comments, photos and videos Facebook won’t allow. Silver’s team removes the offending content. Members of both teams are offered counseling to help them cope with the worst parts of the Internet, which they face each day.

The policing requires human intervention because Facebook’s systems are trained to automatically spot and eliminate only one category of content: images showing child exploitation.
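Facebook hasn’t detailed those systems publicly, but automated screening of this kind is generally built on hash matching: each upload is fingerprinted and compared against a database of fingerprints of known illegal images (Microsoft’s PhotoDNA is the best-known implementation). Here is a minimal sketch of the idea in Python using the open-source imagehash library; the banned-hash list is hypothetical, and production systems match against large, vetted databases with far more robust fingerprints:

```python
# Simplified sketch of hash-based image screening, the general technique
# behind tools like PhotoDNA. The banned-hash list below is hypothetical.
from PIL import Image    # pip install pillow
import imagehash         # pip install imagehash

# Hypothetical 64-bit perceptual hashes of known banned images.
BANNED_HASHES = {imagehash.hex_to_hash("ffd8e0c0a0808080")}

def is_banned(path, max_distance=5):
    """Flag an upload whose fingerprint is near a known banned image's.

    Perceptual hashes change little under resizing or re-encoding, so a
    small Hamming distance catches lightly altered copies that an exact
    cryptographic hash (MD5, SHA-256) would miss.
    """
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - banned <= max_distance for banned in BANNED_HASHES)

if is_banned("upload.jpg"):
    print("Blocked: matches a known banned image.")
```

The point is only that detecting near-duplicates of already-known images is automatable in a way that judging novel posts is not.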

For everything else, Facebook’s teams wait for alerts to come to them. Users can register complaints, flagging spam, harassment, hate speech or sexually explicit content, and because starting a report takes just two clicks, they frequently do. “It’s one of the reasons we make it so easy to report,” says Silver.

Facebook processes about 1 million legitimate complaints every week, a sliver of the site’s total posts. The process isn’t perfect, and the company doesn’t catch everything.

“It’s hard, and at scale, it’s impossible,” says Danielle Citron, a law professor at the University of Maryland and author of “Hate Crimes in Cyberspace.”

Global scale

Silver’s and Bickert’s teams sit next to each other at the company’s headquarters in Menlo Park, Calif., as well as in other offices, including Austin, Texas; Dublin, Ireland; and Hyderabad, India. All are on constant alert for abusive trends. They also prioritize posts that signal problems in the real world, like self-harm and bullying.

When a newsworthy event happens, such as the August 2014 beheading of photojournalist James Foley, the teams alert one another to watch for images of it showing up on the site.

Even so, they’re always a step behind because they’re largely reactive, not proactive.

“We are relying on the community to tell us when things are going wrong,” says Bickert, who spent more than a decade as an assistant US attorney. “I think of it like a Neighborhood Watch program. When you’re in your neighborhood, you are the person who knows if there is something going wrong.”

Twitter also works to stop users from behaving badly, suspending accounts and making it easier for family members to request the removal of images of deceased loved ones. But it faces an extra wrinkle: Unlike Facebook, Twitter lets its users remain anonymous.

And Twitter doesn’t hunt for bad content; users have to report it first. Clamping down on abusers may work for individual cases of bullying, where several users might verbally attack another. It’s not so easy when victims are harangued by a nameless, faceless mob.

Feminist activist Caroline Criado-Perez started getting anonymous tweets threatening rape and murder after kicking off a campaign two years ago to put author Jane Austen’s image on Great Britain’s banknotes. Robin Williams’ daughter Zelda suffered harassment after her father stunned fans by committing suicide. And Anita Sarkeesian, whose YouTube series explores the treatment of women in video games, was driven from her home last year after anonymous tweeters threatened her with violence and told the world where she lived.

Calling in the professionals

What’s someone like Vora, whose private photos were spread around the Internet, to do?

She contacted Facebook and others, asking them to remove the photos, but soon realized she needed professional help. So she turned to DMCA Defender, in Kansas City, Mo. Co-founded by LaMonica Wallace and Regina Moore, the company helps revenge-porn victims remove online photos.

DMCA’s software scans search engines for offending images and records their associated websites. The consultation scan, as Wallace calls it, is free. The team charges about $6 per site, though their fees are negotiable. “You’re not paying for a service that you wanted,” Wallace says. “These are victims.”
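Wallace hasn’t described the software’s internals, but one plausible way to build such a scanner is the perceptual-hash comparison sketched earlier: take the pages a search engine returns, pull down every image they embed, and flag the ones whose fingerprints sit close to a known photo’s. A hypothetical Python sketch, assuming the candidate page URLs have already been collected from search results:

```python
# Hypothetical sketch of scanning candidate pages for copies of a
# victim's photos. Gathering the page list from search results is
# assumed done; fetching and comparing is the part sketched here.
from urllib.parse import urljoin
import io

import requests                   # pip install requests
from bs4 import BeautifulSoup     # pip install beautifulsoup4
from PIL import Image             # pip install pillow
import imagehash                  # pip install imagehash

def find_matches(page_urls, victim_hashes, max_distance=5):
    """Return (page, image URL) pairs for images matching a victim's photos.

    victim_hashes holds perceptual hashes (imagehash.phash) of the
    victim's own copies; a small Hamming distance tolerates the
    re-encoding and resizing that hosting sites apply.
    """
    matches = []
    for page in page_urls:
        html = requests.get(page, timeout=10).text
        for tag in BeautifulSoup(html, "html.parser").find_all("img"):
            src = urljoin(page, tag.get("src", ""))
            try:
                img = Image.open(io.BytesIO(requests.get(src, timeout=10).content))
                h = imagehash.phash(img)
            except Exception:
                continue  # broken link, non-image content, network error
            if any(h - v <= max_distance for v in victim_hashes):
                matches.append((page, src))
    return matches
```

The recorded matches would then feed the takedown requests sent to each hosting site.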

Scrubbing stuff off the Web typically takes a month, though some sites don’t respond immediately, and not all comply. Sometimes, DMCA Defender appeals to Google to remove a site from search results.

Vora paid about $300 for the initial service; her ex-boyfriend was forced to cover the bill for the first year, in addition to serving 180 days in jail. Two years later, Vora’s pictures are still circulating.

“For a while, for every one site they took down, 20 more popped up,” says Vora, who realizes the problem may never go away. “There’s still stuff up there. It’s the Internet.”