CEP On Facebook Senate Hearing: Zuckerberg’s Claims About Extremist Content Removal Are False

April 11, 2018

Counter Extremism Project

CEP Finds Extremist Content On Facebook

New York, NY: Counter Extremism Project (CEP) Executive Director David Ibsen today released the following statement regarding Facebook CEO Mark Zuckerberg’s testimony before members of the U.S. Senate Committees on the Judiciary and Commerce:

“Despite Mr. Zuckerberg’s attempt to claim success on the issue of removing dangerous extremist content, the truth is that CEP finds examples of extremist content and hate speech on Facebook on a regular basis. Clearly, Mr. Zuckerberg’s claims that artificial intelligence is successfully removing instances of extremist content and similarly dangerous materials from Facebook are absolutely false. Today’s hearing made clear that Facebook is unprepared to stop the proliferation of fake news, which studies consistently show spreads much faster than actual news content, and poses a potential threat to our national security. The entire tech sector routinely promises to ‘do better,’ but fails to permanently remove the kind of content that leads to radicalization, recruitment, and terrorist violence. Members of the U.S. House Committee on Energy and Commerce must hold Mr. Zuckerberg accountable on these issues when they have the opportunity to question him tomorrow.”

In advance of this week’s hearings with Facebook CEO Mark Zuckerberg, CEP encouraged federal lawmakers, whose top priority is public safety, to hold tech companies accountable for their actions, including their failure to effectively and transparently identify and permanently remove content from their sites that has been implicated in terrorist attacks, as well as their allowing fake news in the form of doctored audio and video content to persist on their platforms. Today, several members of the Senate questioned Mr. Zuckerberg about Facebook’s failure to implement policies to address these issues, as outlined in the transcripts of today’s hearing in the background section below.

BACKGROUND

SENATOR TED CRUZ: Does Facebook consider itself to be a neutral public forum?

MARK ZUCKERBERG: Senator, we consider ourselves to be a platform for all ideas.

CRUZ: Let me ask the question again: Does Facebook consider itself to be a neutral public forum? And representatives of your company have given mixed answers on this. Are you a first amendment speaker expressing your views or are you a neutral public forum allowing everyone to speak?

ZUCKERBERG: Senator, here is how we think about this. There is certain content that we clearly do not allow: hate speech, terrorist content, nudity, anything that makes people feel unsafe in the community. That is why we generally try to refer to what we do as a platform for all ideas. (“Facebook, Social Media Privacy, and the Use and Abuse of Data”, Senate Committee on Commerce, Science and Transportation and Senate Committee on the Judiciary, 4/10/18)

---

SENATOR LINDSEY GRAHAM: Are you familiar with Andrew Bosworth? He said: “So we connect people. Maybe somebody dies in a terrorist attack, coordinated on our tools. The ugly truth is that we believe in connecting people so deeply. So, anything that allows us to connect people more often is de facto good.” Do you agree with that?

ZUCKERBERG: I do not agree with that statement. He wrote that as an internal note. I disagreed with it at the time he wrote it, and if you look at the commentary on it, a lot of people internally did too. We try to run our company in a way that allows people to express their opinions freely.

GRAHAM: What do we tell our constituents as to why we should let you self-regulate?

ZUCKERBERG: My position is not that there shouldn’t be regulation. I think the right question is: “What is the right regulation?”

GRAHAM: Will you work with us to create regulations? Would you provide us with a regulatory proposal?

ZUCKERBERG: Yes, and I will have my team follow up with you on that to have those discussions. (“Facebook, Social Media Privacy, and the Use and Abuse of Data”, Senate Committee on Commerce, Science and Transportation and Senate Committee on the Judiciary, 4/10/18)

--- 

SENATOR PATRICK LEAHY: Recently, UN investigators blamed Facebook for playing a role in inciting a possible genocide in Myanmar, and there has been genocide there. You say you use AI to find this. This is the type of content I am referring to: it calls for the death of a Muslim journalist. Now, that threat went straight through your detection systems and spread very quickly, and it took attempt after attempt, and the involvement of civil society groups, to get you to remove it. Why couldn’t it be removed within 24 hours?

ZUCKERBERG: What’s happening in Myanmar is a terrible tragedy.

LEAHY: We all agree with that, but UN investigators have blamed you, blamed Facebook, for playing a role in that genocide. We all agree that’s terrible. How can you dedicate, and will you dedicate, the resources to make sure such hate speech is taken down within 24 hours?

ZUCKERBERG: Yes, we are working on this, and there are three specific things we are doing. One, we are hiring dozens more Burmese-language content reviewers, because hate speech is very language-specific; it is hard to police without people who speak the local language, and we need to ramp up our effort there dramatically. Second, we are working with civil society in Myanmar to identify specific hate figures, so we can take down their accounts rather than specific pieces of content. Third, we are standing up a product team to make specific product changes in Myanmar, and in other countries that may have similar issues in the future, to prevent this from happening. (“Facebook, Social Media Privacy, and the Use and Abuse of Data”, Senate Committee on Commerce, Science and Transportation and Senate Committee on the Judiciary, 4/10/18)

---

SENATOR JOHN THUNE: Can you discuss what steps Facebook takes, and where you draw the line as to what is hate speech?

ZUCKERBERG: From the beginning of the company in 2004, when I started it in my dorm room, we did not have AI technology that could look at the content people were sharing, so we basically had to enforce our content policies reactively. People could share what they wanted, and if someone found it offensive or against our policies, they would flag it, and we would look at it reactively. Now, increasingly, we are developing AI tools that can identify certain classes of bad activity proactively and flag them for our team. We are going to have more than 20,000 people working on security and content review, so that when content gets flagged, people look at it, and if it violates our policies, we take it down. Some problems lend themselves more easily to AI solutions than others, and hate speech is one of the hardest, because determining what is hate speech is nuanced. Contrast that with an area like terrorist propaganda, where we have already done a good job with AI tools: 99 percent of the al-Qaeda content we take down on Facebook, our AI systems flag before anyone sees it. That is a success in terms of AI tools that can proactively police and protect safety across the community. On hate speech, I am optimistic that over a five- to ten-year period we will have AI tools that can get into the linguistic nuances of different types of content and be more accurate in flagging things for our systems, but a lot of this is still reactive, with people flagging things for us and us looking at them. Until we get better tools, I am not happy with the level we are operating at right now.

---

ZUCKERBERG: Thank you [Chairman Thune]. So, we have made a lot of mistakes in running the company. I think it is pretty much impossible to start a company in your dorm room and then grow it to the scale we are at now without making some mistakes. And because our service is about helping people connect and share information, those mistakes have been different. We try not to make the same mistakes multiple times; in general, the mistakes are around how people connect with each other, given the nature of the service. Overall, I would say we are going through a broader philosophical shift in how we approach our responsibility as a company. For the first 10 to 12 years of the company, I viewed our responsibility as primarily building tools, on the theory that if we could put those tools in people’s hands, that would empower people to do good things. What I think we have learned now, across a number of issues, not just data privacy but also fake news and foreign interference in elections, is that we need to take a more proactive role and a broader view of our responsibility. It is not enough to just build tools; we need to make sure they are used for good. That means we now need to take a more active role in policing the ecosystem, watching out and making sure that all members are using these tools in a way that is good and healthy. So, at the end of the day, people are going to measure us by our results on this. I do not expect anything I say here today to necessarily change people’s views, but I believe that over the coming years, as we fully work all these solutions through, people will see a big difference. (“Facebook, Social Media Privacy, and the Use and Abuse of Data”, Senate Committee on Commerce, Science and Transportation and Senate Committee on the Judiciary, 4/10/18)

Source: CEP (April 10, 2018)