What the Jan. 6 probe found out about social media, but didn’t report
16:51 JST, January 19, 2023
WASHINGTON – The Jan. 6 committee spent months gathering stunning new details on how social media companies failed to address the online extremism and calls for violence that preceded the Capitol riot.
The evidence they collected was written up in a 122-page memo that was circulated among the committee, according to a draft viewed by The Washington Post. But in the end, committee leaders declined to delve into those topics in detail in their final report, reluctant to dig into the roots of domestic extremism taking hold in the Republican Party beyond former president Donald Trump and concerned about the risks of a public battle with powerful tech companies, according to three people familiar with the matter who spoke on the condition of anonymity to discuss the panel’s sensitive deliberations.
Congressional investigators found evidence that tech platforms – especially Twitter – failed to heed their own employees’ warnings about violent rhetoric on their platforms and bent their rules to avoid penalizing conservatives, particularly then-president Trump, out of fear of reprisals. The draft report details how most platforms did not take “dramatic” steps to rein in extremist content until after the attack on the Capitol, despite clear red flags across the internet.
“The sum of this is that alt-tech, fringe, and mainstream platforms were exploited in tandem by right-wing activists to bring American democracy to the brink of ruin,” the staffers wrote in their memo. “These platforms enabled the mobilization of extremists on smaller sites and whipped up conservative grievance on larger, more mainstream ones.”
But little of the evidence supporting those findings surfaced during the public phase of the committee’s probe, including its 845-page report that focused almost exclusively on Trump’s actions that day and in the weeks just before.
That focus on Trump meant the report missed an opportunity to hold social media companies accountable for their actions, or lack thereof, even though the platforms had been the subject of intense scrutiny since Trump’s first presidential campaign in 2016, the people familiar with the matter said.
Confronting that evidence would have forced the committee to examine how conservative commentators helped amplify the Trump messaging that ultimately contributed to the Capitol attack, the people said – a course that some committee members considered politically risky and likely to invite opposition from some of the world’s most powerful tech companies, two of the people said.
“Given the amount of material they actually ultimately got from the big social media companies, I think it is unfortunate that we didn’t get a better picture of how ‘Stop the Steal’ was organized online, how the materials spread,” said Heidi Beirich, co-founder of the Global Project Against Hate and Extremism nonprofit. “They could have done that for us.”
The Washington Post has previously reported that Rep. Liz Cheney (R-Wyo.), the committee’s co-chair, drove efforts to keep the report focused on Trump. But interviews since the report’s release indicate that Rep. Zoe Lofgren, a Democrat whose Northern California district includes Silicon Valley, also resisted efforts to focus the report more closely on social media companies.
Lofgren denied that she opposed including a social media appendix in the report or more detail about what investigators learned in interviews with tech company employees.
“I spent substantial time editing the proposed report so it was directly cited to our evidence, instead of news articles and opinion pieces,” Lofgren said. “In the end, the social media findings were included into other parts of the report and appendixes, a decision made by the Chairman in consultation with the Committee.”
Committee Chairman Bennie G. Thompson (D-Miss.) did not respond to a request for comment. Thompson had previously said that the committee would examine what steps tech companies took to prevent their platforms from “being breeding grounds to radicalizing people to violence.” Rep. Jamie Raskin (D-Md.), who sat in on some of the depositions of tech employees, did not comment.
Understanding the role social media played in the Jan. 6 attack on the Capitol takes on greater significance as tech platforms undo some of the measures they adopted to prevent political misinformation on their platforms. Under new owner Elon Musk, Twitter has laid off most of the team that reviewed tweets for abusive and inaccurate content and restored several prominent accounts that the company banned in the fallout from the Capitol attack, including Trump’s and that of his first national security adviser, Michael Flynn. Facebook, too, is considering allowing Trump back on its platform, a decision expected as early as next week.
“Recent events demonstrate that nothing about America’s stormy political climate or the role of social media within it has fundamentally changed since January 6th,” the staffers’ draft memo warned.
Social media moderation also has become a flash point in the states. Both Texas and Florida passed laws in the wake of Trump’s suspension to restrict what content social media platforms can remove from their sites, while California has enacted legislation requiring companies to disclose their content moderation policies.
But the Jan. 6 committee report offered only a vague recommendation about social media regulation, writing that congressional committees “should continue to evaluate policies of media companies that have had the effect of radicalizing their consumers.”
***
Some of what investigators uncovered in their interviews with employees of the platforms contradicts Republican claims that tech companies displayed a liberal bias in their moderation decisions – an allegation that has gained new attention recently as Musk has promoted a series of leaked internal communications known as the “Twitter Files.” The transcripts indicate the reverse, with former Twitter employees describing how the company gave Trump special treatment.
Twitter employees, they testified, could not even view the former president’s tweets in one of their key content moderation tools, and they ultimately had to create a Google document to keep track of his tweets as calls grew to suspend his account.
“. . . Twitter was terrified of the backlash they would get if they followed their own rules and applied them to Donald Trump,” said one former employee, who testified to the committee under the pseudonym J. Johnson.
The committee staffers who focused on social media and extremism – known within the committee as “Team Purple” – spent more than a year sifting through tens of thousands of documents from multiple companies, interviewing social media company executives and former staffers, and analyzing thousands of posts. They sent a flurry of subpoenas and requests for information to social media companies ranging from Facebook to fringe social networks including Gab and the chat platform Discord.
Yet as the investigation continued, the role of social media took a back seat, despite Chairman Thompson’s earlier assertion that how misinformation spread and what steps social media companies took to prevent it were “two key questions for the Select Committee.”
Committee staffers drafted more subpoenas for social media executives, including former Twitter executive Del Harvey, who was described in testimony as key to Twitter’s decisions regarding Trump and violent rhetoric. But Cheney never signed the subpoenas, two of the people said, and they were never sent. Harvey did not testify. At one point, committee staffers discussed having a public hearing focused on the role of social media during the election, but none was scheduled, the people said.
***
The role of social media has been a central topic of American politics since the 2016 presidential campaign, when hackers accessed emails from Democratic Party servers and leaked the contents onto the internet, and Russian trolls posing as Americans posted misinformation on both Twitter and Facebook without detection. Concern about the impact of social media grew in the aftermath of the 2020 election, with Facebook and Twitter suspending hundreds of accounts for spreading false information about the result as well as baseless conspiracy theories about balloting irregularities.
In the days before Jan. 6, 2021, media reports documented Trump’s call on Twitter for people to rally in Washington – “will be wild,” he tweeted – and there was growing talk of guns and potential violence on sites such as Telegram, Parler and TheDonald.win.
The Purple Team’s memo detailed how the actions of roughly 15 social networks played a significant role in the attack. It described how major platforms like Facebook and Twitter, prominent video streaming sites like YouTube and Twitch, and smaller fringe networks like Parler, Gab and 4chan served as megaphones for those seeking to stoke division or organize the insurrection. It detailed how some platforms bent their rules to avoid penalizing conservatives out of fear of reprisals, while others were reluctant to curb the “Stop the Steal” movement after the attack.
But as the committee’s probe kicked its public phase into high gear, the social media report was repeatedly pared down, eventually to just a handful of pages. While the memo and the evidence it cited informed other parts of the committee’s work, including its public hearings and depositions, it ultimately was not included as a stand-alone chapter or as one of the four appendixes.
In the weeks since the report was released, however, some of that evidence has trickled out as the committee released hundreds of pages of transcripts of interviews with former tech employees and dozens of documents. The transcripts show the companies used relatively primitive technologies and amateurish techniques to watch for dangers and enforce their platforms’ rules. They also show company officials quibbling among themselves over how to apply the rules to possible incitements to violence, even as the events of Jan. 6 turned violent.
The transcript of testimony by Anika Collier Navaroli, one of the longest-tenured members of Twitter’s safety policy team, describes in detail how the company’s systems were outmatched as the pro-Trump mob stormed the Capitol.
When the #ExecuteMikePence hashtag started trending on Twitter on Jan. 6, 2021, Collier Navaroli was sitting in her New York apartment, scrolling through thousands of death threats and other hateful messages and trying to remove them one by one.
Her main way of finding tweets calling for Vice President Mike Pence’s execution was by pasting the hashtag into the Twitter website’s search box, manually copying each tweet’s details into an internal flagging tool, and then returning to the timeline as more tweets poured in.
“I was doing that for . . . hours,” she testified, saying only a few other people that day were doing the same work. “We didn’t stand a chance.”
Collier Navaroli also faulted top executives, including Twitter’s Harvey, for blocking potential rule changes that would have allowed company moderators to take a more proactive stance to reduce calls for violence. At one point, Collier Navaroli said she pushed the company to enact a policy that would have restricted tweets using hashtags like #LockedandLoaded, which moderators had seen being used by people boasting they were armed and ready to march on the Capitol. Harvey, Collier Navaroli said, had pushed back, arguing that the phrase could be used by people tweeting about self-defense and should be allowed.
Harvey, who is no longer with Twitter and advertises herself as a public speaker, did not respond to requests for comment sent to her email or LinkedIn.
The Purple Team’s draft outlines how extremism and violent rhetoric jumped from platform to platform in the lead-up to Jan. 6. In the hours after Trump’s tweet about how Jan. 6 would be wild, the chat service Discord had to shut down a server because Trump’s supporters were using it to plan how they could bring firearms into Washington, according to the memo.
The investigators also wrote that much of the content that was shared on Twitter, Facebook and other sites came from Google-owned YouTube, which did not ban election fraud claims until Dec. 9 and did not apply its policy retroactively. The investigators found that its lax policies and enforcement made it “a repository for false claims of election fraud.” Even when these videos weren’t recommended by YouTube’s own algorithms, they were shared across other parts of the internet.
“YouTube’s policies relevant to election integrity were inadequate to the moment,” the staffers wrote.
The draft report also says that smaller platforms did not react quickly enough to the threat posed by Trump. It singled out Reddit for being slow to take down a pro-Trump forum called “r/The_Donald.” The moderators of that forum used it to “freely advertise” TheDonald.win, which hosted violent content in the lead-up to Jan. 6.
Facebook parent company Meta declined to comment. Twitter, which has laid off the majority of its communications staff, did not respond to a request for comment. Discord did not immediately respond to requests for comment.
YouTube spokeswoman Ivy Choi said the company has long-established policies against incitement, and that the company began enforcing its election integrity rules once “enough states certified election results.”
“As a direct result of these policies, even before January 6 we terminated thousands of channels, several of which were associated with figures related to the attack, and removed thousands of violative videos, the majority before 100 views,” she said in a statement.
Reddit spokeswoman Cameron Njaa said the company’s policies prohibit content that “glorifies, incites or calls for violence against groups of people or individuals.” She said that the company “found no evidence of coordinated calls for violence” related to Jan. 6 on its platform.
Former Facebook employees who testified to the committee reported their company also resisted imposing restrictions. Brian Fishman, the company’s former head of dangerous organizations, testified that the company had been slow to react to efforts to delegitimize the 2020 election results.
“I thought Facebook should be more aggressive in taking down ‘Stop the Steal’ stuff before January 6th,” Fishman said. He noted, however, that broader action would have resulted in taking down “much of the conservative movement on the platform, far beyond just groups that said ‘Stop the Steal,’ mainstream conservative commentators.”
He said he did not believe such action “would have prevented violence on January 6th.”
The committee also spoke to Facebook whistleblower Frances Haugen, whose leaked documents in 2021 showed that the country’s largest social media platform largely had disbanded its election integrity efforts ahead of the Jan. 6 riot. But little of her account made it into the final document.
“It’s sad that they didn’t include the intentional choices that Facebook made,” she said in an interview. “At the same time, you’re asking them to do a lot of different things in a single report.”
***
Multiple former Twitter employees, including Johnson and Collier Navaroli, told the committee that a large part of Twitter’s failure to act was deference to Trump.
Trump’s account was the only one of Twitter’s hundreds of millions that rank-and-file officials could not review in one of their main internal tools, Profile Viewer, which allowed moderators to establish a history and share notes about an account’s past tweets and behaviors, the employees testified.
The block prevented moderators from reviewing how others had assessed Trump’s tweets, even as his following grew to 88 million and his tweets drove conversations around the world. Trump “was a unique user who sat above and beyond the rules of Twitter,” Collier Navaroli testified.
“There was this underlying understanding we’re not reaching out to the President,” she told the committee. “We’re not reaching out to Donald Trump. There is no point in doing education here because this is how this individual is. So the resolution was to do nothing.”
Collier Navaroli and a few others inside the company had worked to push executives to action long before Jan. 6, she said, citing internal memos and messages. In the week after the November 2020 election, she said, they began warning that tweets calling for civil unrest were multiplying. By Dec. 19, she said, Twitter staff had begun warning that discussions of civil unrest had centralized on Jan. 6 – the day that Trump had called his supporters to mass in Washington, saying it “will be wild!”
By Dec. 29, she and members of other Twitter teams had begun warning that Twitter lacked a coordinated response plan, and on Jan. 5, she said, she warned a supervisor directly that the company would need a much more robust response the following day.
When a committee staffer asked whether Twitter, having seen the warnings, had adopted a “war footing,” Collier Navaroli said her U.S. team had fewer than six people, and that “everybody was acting as if it was a regular day and nothing was going on.”
"News Services" POPULAR ARTICLE
-
New Rules Drive Japanese Trucking Sector to the Brink
-
G-Shock Watchmaker Casio Delays Earnings Release Due to Ransomware Attack
-
North Korea Long-Range Ballistic Missile Test Splashes Down between Japan and Russia (UPDATE 1)
-
Japan’s Nikkei Stock Closes at 2-week Peak as Tech Shares Track Nasdaq Higher (Update 1)
-
Nissan Plans 9,000 Job Cuts, Slashes Annual Profit Outlook
JN ACCESS RANKING
- Streaming Services Boost Anime Popularity Overseas; Former ‘Geeky’ Interest More Beloved Among Gen Z than 3 Major U.S. Sports
- G20 Sees Soft Landing for Global Economy; Leaders Pledge to Resist Protectionism as Trump Calls for Imported Goods Flat Tariff
- 2024 POLLS: Ruling Camp Likely to Win Lower House Majority
- Chinese Rights Lawyer’s Wife Seeks Support in Japan; Sophie Luo Calls for Beijing to Free Ding Jiaxi, Xu Zhiyong
- Chinese Social Media Still Full of Anti-Japanese Posts 1 Month After Boy’s Fatal Stabbing; Malicious Videos Gain Large Number of Views