Terrorists Are Still Recruiting on Facebook, Despite Zuckerberg’s Reassurances

10.05.2018

Illustration: Félix Decombat

- Searches quickly turned up evidence of extremist activity in plain sight.

 

In the past month, Mark Zuckerberg has boasted to Congress and investors that Facebook Inc.’s artificial intelligence programs are turning the tide against extremism on his site. “One thing that I’m proud of is our AI tools that help us take down ISIS and al-Qaeda-related terror content, with 99 percent of that content being removed before any person flags it to us,” the chief executive said on the company’s April earnings call. Facebook executives repeated that number onstage at the annual developer conference in early May. But the figure applies only to posts by those two groups. Many others seem able to recruit more or less as they please from the site’s audience of 2.2 billion.

At least a dozen U.S.-designated terror groups maintain a presence on Facebook, a review by Bloomberg Businessweek shows. That includes Hamas and Hezbollah in the Middle East, Boko Haram in West Africa, and the Revolutionary Armed Forces of Colombia (FARC). The terror groups are rallying supporters with everything from gruesome photos of death caused by their enemies to quotidian news about social services they offer. Several can be found simply by typing their names into Facebook’s search bar in English or, in some cases, in Arabic or Spanish. Some of the groups proudly link to their Facebook pages on their home websites, too.

“There is no place for terrorists or content that promotes terrorism on Facebook, and we remove it as soon as we become aware of it,” the company said in a statement. “We know we can do more, and we’ve been making major investments.” Facebook appeared to shut down several pages after being asked about them, including those for Al-Aqsa Martyrs Brigade and Hamas’s Al-Qassam Brigades.

Hezbollah’s Al-Manar TV, a “specially designated global terrorist entity” banned in the U.S., has flaunted its Facebook use, regularly reposting stories from Hezbollah pages that Facebook has shut down. In March, Al-Manar bragged that Hezbollah staff quickly set up a new election page after Facebook took one down. “Resistance supporters refollowed it, which stresses that the Resistance voice can never be silenced,” an English-language version of the story said.

“They should be transparent about the laws or regulations that they’re using to underpin their policies, but they’re unfortunately not”

For years, Facebook has tried to take down pages associated with U.S.-designated terrorist groups. In 2014, within hours of Bloomberg Businessweek inquiring about pages for Hezbollah, Facebook removed those for Al-Manar, Hezbollah news site Al-Ahed, and the Islamic Resistance in Lebanon, a charity associated with Hezbollah. All three, however, quickly reappeared with tweaks to make them seem new. At the end of April, Al-Ahed’s website linked to an Arabic Facebook page with more than 33,000 followers. Content on the page included a video of masked snipers targeting Israeli soldiers. Another Al-Ahed Facebook page had more than 47,000 followers, and one in English had 5,000.

Facebook’s policies prohibit material that supports or advances terrorism. The company’s definition of the term, published last month for the first time, includes a ban on nongovernmental organizations that use violence to achieve political, religious, or ideological aims. It specifies that such groups include religious extremists, white supremacists, and militant environmental groups. Facebook also says content that violates its policies is “not allowed” on the site.

The company began scanning more actively for content from Islamic State and al-Qaeda only recently, after pressure from governments, and is training its artificial intelligence systems to get better at flagging bad posts. Meanwhile, journalists and researchers frequently find supposedly banned content just by searching for it. A report in the New York Times in April uncovered hundreds of fake accounts on Facebook and Instagram posing as Zuckerberg and Chief Operating Officer Sheryl Sandberg. A day earlier, science and tech publication Motherboard noted that some pages on Facebook store stolen data, including Social Security numbers.

When asked about that story on a conference call, Sandberg said Facebook takes down such information as soon as employees become aware of it. “Posts containing information like Social Security numbers or credit cards are not allowed on our site,” she said.

To help weed out the worst offenders, Facebook has been adding content reviewers. It now has 7,500, up 40 percent from a year earlier. They work in about 40 languages, and the company plans to add staff fluent in the languages that require the most attention.

Terrorists’ enthusiastic embrace of social media has long caused angst at Facebook and its global competitors. Like Twitter Inc. and Google’s YouTube LLC, Facebook has historically put the onus on users to flag content to moderators.

When pressed by Congress about the failures to respond quickly in those instances, Zuckerberg spoke of how, when starting the company in his Harvard dorm, he simply didn’t have the resources to vet everything. Having users speak up about horrors was the easiest way to get things off Facebook.

That strategy had support from Section 230 of the Communications Decency Act, which limits websites’ liability for what users post. That protection is being gradually weakened; last month, President Trump approved an exception that allows prosecutors to go after online platforms if they’re being used for sex trafficking. Zuckerberg now says he considers Facebook responsible for what’s posted on the site. That doesn’t necessarily mean legal responsibility. Instead, the company has tried to frame its attempts to clean itself up as a public service.

While Facebook has made its guidelines public, it hasn’t been clear how they evolved, and some view them as open to interpretation. “They should be transparent about the laws or regulations that they’re using to underpin their policies, but they’re unfortunately not,” says Jillian York, the Electronic Frontier Foundation’s director for international freedom of expression. York, who’s based in Berlin, says Facebook risks meddling in local politics by picking and choosing which groups are terrorist. One could argue that blocking material from Hezbollah, which is also a party with seats in Lebanon’s Parliament, can hand its political competitors an advantage, she says.

It’s also sometimes difficult to determine who’s behind a Facebook page, even if it sports the logos and content of known terrorist groups. In the case of Boko Haram, the Nigerian group loyal to Islamic State, research published by the Jamestown Foundation in December said the group went by the name “Khairul Huda” on Facebook. A profile under that name exists, featuring plenty of photos of friends holding rifles or wearing balaclavas. Among them: a Facebook member who posted an appeal in December for volunteers to fight in Jerusalem “to raise the banner of God” and liberate the city. “Will you join me?” he wrote. “Inbox us.”

Once Facebook kicks these groups off, it doesn’t appear to use sophisticated means to prevent them from coming back. In April, nine Hezbollah-related Facebook pages disappeared after the nonprofit Counter Extremism Project publicized links to them; among them was a tribute page to martyrs with more than 60,000 followers. Within two weeks, a replacement popped up. Bloomberg Businessweek found it by searching on Facebook for the website that had been listed on the original page. All that had changed was the language of the word “martyr,” from English to Persian.

--

BOTTOM LINE - Facebook has reduced ISIS- and al-Qaeda-related material, but posts from similar groups with thousands of followers don’t seem to have suffered the same crackdowns.
