Can Facebook’s CSO be Held Liable for Atrocity Crimes?

An image representing weaponized social media may well be the next addition to The Atlantic’s “Brief Visual History of Weapons”.

New legal research moves us closer to holding social media executives criminally liable for the Rohingya crisis and other global security failures under their watch:

…this paper argues that it may be more productive to conceptualise social media’s role in atrocity crimes through the lens of complicity, drawing inspiration not from the media cases in international criminal law jurisprudence, but rather by evaluating the use of social media as a weapon, which, under certain circumstances, ought to face accountability under international criminal law.

The Guardian published a scathing report on how Facebook was used in the genocide:

Hate speech exploded on Facebook at the start of the Rohingya crisis in Myanmar last year, analysis has revealed, with experts blaming the social network for creating “chaos” in the country. […] Digital researcher and analyst Raymond Serrato examined about 15,000 Facebook posts from supporters of the hardline nationalist Ma Ba Tha group. The earliest posts dated from June 2016 and spiked on 24 and 25 August 2017, when ARSA Rohingya militants attacked government forces, prompting the security forces to launch the “clearance operation” that sent hundreds of thousands of Rohingya pouring over the border. […] The revelations come to light as Facebook is struggling to respond to criticism over the leaking of users’ private data and concern about the spread of fake news and hate speech on the platform.

The New Republic referred to Facebook’s lack of security controls at this time as a boon for dictatorships:

[U.N. Myanmar] Investigator Yanghee Lee went further, describing Facebook as a vital tool for connecting the state with the public. “Everything is done through Facebook in Myanmar,” Lee told reporters…what’s clear in Myanmar is that the government sees social media as an instrument for propaganda and inciting violence—and that non-government actors are also using Facebook to advance a genocide. Seven years after the Arab Spring, Facebook isn’t bringing democracy to the oppressed. In fact…if you want to preserve a dictatorship, give them the internet.

Frontline reported it as well:

[The United Nations investigators’ September report], which called for the Myanmar army top brass to be prosecuted for genocide, labeled Facebook’s response as “slow and ineffective.” As FRONTLINE has reported, Facebook representatives were warned as early as 2015 about the potential for a dangerous situation in the nascent democracy. In November, Facebook executive Alex Warofka admitted in a blog post that the company did not do enough to prevent the platform “from being used to foment division and incite offline violence”…

Around this time, Bloomberg also suggested Facebook was operating as a mass weapon by its own design, serving dictatorship.

And the UK House of Commons in 2018 reported that the UN had found Facebook played “a determining role in stirring up hatred against the Rohingya Muslim minority”, with the UN Myanmar investigator calling it the “‘beast’ that helped to spread vitriol”.

The CTO of Facebook, Mike Schroepfer, described the situation in Burma as “awful”, yet Facebook cannot show us that it has done anything to stop the spread of disinformation against the Rohingya minority. […] Facebook is releasing a product that is dangerous to consumers and deeply unethical.

It seems important, looking back at this time frame, to note that the key Facebook executive at the head of decisions about user safety was in only his second year ever as a “chief” of security.

He had infamously taken his first-ever Chief Security Officer (CSO) job at Yahoo in 2014, only to leave that post abruptly and in chaos in 2015 (without disclosing some of the largest privacy breaches in history) to join Facebook.

August 2017 was the peak period of risk, according to the analysis above. Two months later, in October, the Facebook CSO launched a “hit back” PR campaign to silence the growing criticism:

Stamos was particularly concerned with what he saw as attacks on Facebook for not doing enough to police rampant misinformation spreading on the platform, saying journalists largely underestimate the difficulty of filtering content for the site’s billions of users and deride their employees as out-of-touch tech bros. He added the company should not become a “Ministry of Truth,” a reference to the totalitarian propaganda bureau in George Orwell’s 1984.

His talking points read like a sort of libertarian screed, as if he thought journalists were ignorant and would foolishly push everyone straight into totalitarianism by probing for basic regulation, such as better editorial practices and the protection of vulnerable populations from harm.

Think of it like this: a chief of security claims it is hard to block Internet traffic with a firewall because doing so would lead straight to shutting down the business. That doesn’t sound like a security leader; it sounds like a technologist who puts making money above user safety (e.g. what the Afghanistan Papers call the profitability of war).

Indeed, Facebook hired a PR firm to peddle antisemitic narratives to discredit its critics – dangerous propaganda methods used to undermine the very people reporting on Facebook’s facilitation of dangerous propaganda.

It was so obviously unethical and insecure that people at Facebook who cared…quit and said they would no longer be associated with the security team.

…Binkowski said she tried to raise concerns about misuse of the platform abroad, such as the explosion of hate speech and misinformation during the Rohingya crisis in Myanmar and other violent propaganda. “I was bringing up Myanmar over and over and over,” she said. “They were absolutely resistant.” Binkowski, who previously reported on immigration and refugees, said Facebook largely ignored her: “I strongly believe that they are spreading fake news on behalf of hostile foreign powers and authoritarian governments as part of their business model.”

Facebook’s top leadership was rejecting experienced voices of reason, instead rolling out angry “shame” statements to deflect any criticism of its lack of progress on safety.

Stamos appeared to be arguing that doing anything more than what he deemed sufficient at that crucial time would be so hard that journalists (ironically the most prominent defenders of free speech, the people who drive transparency) could not even understand the difficulty if they saw it.

Take, for example, one of the many “hit back” tweets posted by Facebook’s CSO:

My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

To me that reads like the CSO saying his staff suffer when they have to work hard, while calling journalists ignorant for supposedly not talking to anyone who does.

Such a patronizing and tone-deaf argument is hard to witness. It is truly incredible to read, especially when you consider that nearly 800,000 Rohingya were fleeing for their lives while a Facebook executive lectured journalists about living with consequences.

Compare that to what journalists in the field reported at that exact same time, October 2017, as they talked to the people living right then and there with Facebook’s failure to solve these problems.

Warning: extremely graphic and violent depictions of genocide

Here’s another way to keep Facebook’s “hit back” campaign against journalists in perspective: while the top security executive was dismissing the people closest to the real-world consequences as not expert enough on that exact topic, he himself brought no great experience or track record to the table to earn anyone’s trust.

The outspoken public face of a high-profile risk management disaster was representing Facebook’s dangerously clueless stumbles year after year:

A person with knowledge of Facebook’s [2015] Myanmar operations was decidedly more direct than [Facebook vice president of public policy] Allen, calling the roll out of the [security] initiative “pretty fucking stupid.” […] “When the media spotlight has been on there has been talk of changes, but after it passes are we actually going to see significant action?” [Yangon tech-hub Phandeeyar founder] Madden asks. “That is an open question. The historical record is not encouraging.”

The “safety dial was pegged in the wrong direction”, as some journalists put it back in 2017, under a CSO who apparently thought it a good idea to complain about how hard it was to protect people from harm (while making huge revenues). Perhaps business schools will soon study Facebook’s erosion of global trust under this CSO’s leadership:

We know tragically today that journalists were repeatedly right in their direct criticism of Facebook security practices and in their demands for greater transparency. We also plainly see how an inexperienced CSO’s personal “hit back” at his critics was wrong, with its opaque promises and patronizing tone based on his fears of an Orwellian fiction.

Facebook has been, and continues to be, out of touch with basic social science. Facebook resisted, and continues to resist, safety controls on speech that protect human rights, all while claiming it is committed to safety and arguing against norms of speech regulation.

The question, increasingly, is whether actions like an aggressive “hit back” against people warning of genocide at a critical moment of risk (arguing that it is hard to stop Facebook from being misused as a weapon, while rejecting criticism of that very misuse) make a “security” chief criminally liable.

My sense is that it will be anthropologists, experts in researching baselines of inherited rights within relativist frameworks, who emerge as best qualified to help answer what counts as an acceptable vulnerability in social media technology.

We see this already in articles like “The trolls are teaming up—and tech platforms aren’t doing enough to stop them”.

The personal, social, and material harms our participants experienced have real consequences for who can participate in public life. Current laws and regulations allow digital platforms to avoid responsibility for content…. And if online spaces are truly going to support democracy, justice, and equality, change must happen soon.

Accountability of a CSO for atrocity crimes committed on his watch appears to be the most logical change, and a method of reasoned enforcement, if I am reading these human rights law documents right.


Update January 2020:

1) Police investigation finds Facebook facilitated six months of terror attack planning.

The January 15, 2019 incident shows the attackers opened a Facebook account and used it in their planning up to the last day of the raid. On the account, they exchanged ideas on the best weapons to use and how to use them in executing the mission. …the terrorists had avoided using mobile phones in their communication and shifted to Facebook to coordinate the mission.

2) Ex-Michigan health chief charged with manslaughter in Flint.

Snyder, a Republican, was governor from 2011 through 2018. The former computer executive pitched himself as a problem-solving “nerd” who eschewed partisan politics and favored online dashboards to show performance in government. Flint turned out to be the worst chapter of his two terms due to a series of catastrophic decisions that will affect residents for years. The date of Snyder’s alleged crimes in Flint is listed as April 25, 2014, when a Snyder-appointed emergency manager who was running the struggling, majority Black city carried out a money-saving decision to use the Flint River for water while a pipeline from Lake Huron was under construction. The corrosive water, however, was not treated properly and released lead from old plumbing into homes. Despite desperate pleas from residents holding jugs of discolored, skunky water, the Snyder administration took no significant action until a doctor reported elevated lead levels in children about 18 months later.

3) Opinion piece in the Toronto Star: “Facebook doesn’t need to ‘do better’ … it needs to do time”.

Section 83.19(1) of our Criminal Code says that knowingly facilitating a terrorist offence anywhere in the world, even indirectly, is itself a terrorist offence.

4) It’s time to start prosecuting executives for crimes they commit on behalf of their companies.

…while the US sends hundreds of thousands of poor people to prison every year, high-level corporate executives, with only the rarest of exceptions, have become effectively immune from any meaningful prosecution for crimes committed on behalf of their companies.

Update July 2020:

Facebook fails its own audit over civil rights and hate speech decisions. It must find ways to stop “pushing users toward extremist echo chambers.”
