Elon's Twitter is failing its first major stress test
Sorry not sorry, but I'm not calling it X. But the violence in Israel and Gaza gives us a better idea of how Elon's changes to Twitter have made the information ecosystem more polluted.
It’s been nearly a year since Elon Musk officially closed the deal on Twitter.1 There have been a lot of changes to the technical way the platform operates since then, and there has been an exodus of power users who’ve fled to places such as Threads, Mastodon, and Bluesky (just as a reminder, I’ve left Twitter for Bluesky and you can follow me here).
With changes to how Twitter operates has come a new class of influential users on the platform. This is not particularly new, and in fact, I wrote a good deal earlier this year predicting the looming disaster of ending real verification systems. What we hadn’t had before this month, though, was a real stress test of how those changes would work in a high-conflict, high-disinformation environment.
Now we know, and sometimes I really hate being right.
On a well-functioning social platform, a verification badge like Twitter's blue checkmark or similar features on Instagram or Facebook acts as a signal that A) the person speaking is who they purport to be and B) they are sharing high-quality, verified information as if their digital reputation depends on it. Social platforms built on user-generated content have a volume problem when it comes to consumers making sense of the information firehose, and platforms help users cut through that noise with signals about a poster's credentials or knowledge, cues that let consumers build their own filters and sift quality from junk.
What we have on Twitter regarding Israel is, well, not that. Disinformation researchers at the University of Washington's Center for an Informed Public released a flash study this week examining disinformation on Twitter since the conflict in Israel and Gaza boiled over. What they found is that a new class of paid-checkmark influencers has emerged during this war, and that a handful of these highly followed, influential accounts are primary vectors for much of the misinformation about the Israel/Hamas war swirling around the platform.2 This is not good when Twitter is a primary source of information for its users. It's even worse when we're talking about a complex foreign affairs news story, given how little appetite Americans in particular have historically had for news beyond their own borders.
A major contributor to the problem cited in the study is paid verification checkmarks. Again, in the past, these verification cues were the result of a long process that forced a user to prove their identity and expertise on a subject, but now the badge goes to anyone with a credit card rather than credentials. As I wrote back in March:
“[P]onying up a credit card strips the verification icon of all its signal value, both for the person posting and for the people reading. To charge people, you have to know where the value lies, and so far there is no sense from Twitter HQ that it knows that the value of the badge was always trust and never about status.”
I mostly stand by that analysis, but today I'd say I underestimated the amplification effect on the back end of that paid verification system. Elon did note that people paying for Twitter Blue would not only get a verified checkmark (signal value) but also be amplified in search results, in replies to others' tweets, and in the algorithm for things like trending topics (signal boost). It's the signal boost part I should have paid more attention to, because it was less clear at the time just how amplified "amplified" would be. Regardless, that boost has led to this moment, when people willing to put up $8 a month have outsize influence in search and discovery. In the old system, verified users had a similar boost, but they were all vetted by a process that noted their expertise.
There's another, newer development that has taken the verification mess and made it worse: revenue sharing. This summer, Twitter launched a system in which accounts that get high engagement (replies, retweets, likes) can qualify for a share of the ad revenue Twitter brings in. The theory goes that highly engaging content makes a platform money, so giving creators a cut incentivizes more of it. It's how YouTube influencers make their money, for the most part.
In a bounded system with guardrails against misinformation, that could work. This is a critical point, because conspiracy theory content does quite well when left unchecked, to the point where a platform absorbs the reputational damage even as it's forced to pay the creators who make that content. YouTube has struggled with this problem for years but has invested in trust and safety efforts to make sure conspiracy theorists aren't amplified on its platform, effectively disincentivizing disinformation as a money-making opportunity. That hasn't always gone perfectly, but YouTube has at least tried.
Twitter is, famously, not trying.
For one, Elon has gutted the trust and safety team responsible for filtering out and taking down content or accounts that peddle disinformation or worse (think hate speech and harassment content, for starters).
But then consider what happens to that material within a system that boosts people willing to pay for the privilege of being boosted, and then shares revenue with those boosted creators. You end up with a vicious cycle where disinformation artists can pay to have their fake news boosted, get high engagement as a result, and then get paid through revenue sharing to do so … which only incentivizes making more disinformation.
There are incentives, and then there are incentives. But there also are reverse incentives. Legitimate news organizations unwilling to pay for verification have seen their traffic from the platform drop, and many are quietly pulling back or abandoning it altogether. So where we are now is somehow worse than before: quality sources of information are leaving rather than coexisting alongside the trolls and troublemakers paying for visibility on Twitter, and that has left a void largely filled by misinformation. This is the same old story social platforms have been dealing with for well over a decade; it's just that Twitter has chosen to stop fighting it.
You could argue that news organizations should pay for verification and stay in the fight as a public service, but there's a brand safety problem when your news is featured alongside disinformation carrying the same visual verification cues, with no way for consumers to tell the difference.3
What we're left with is a cesspool of bad information. I finally said goodbye to Twitter over the summer, but this month I've logged in a few times just to see how things are going with the Middle East information war. I spent years building a well-curated list of quality sources there, and I was still shocked at how thoroughly conspiracy theories and disinformation dominate the feed Twitter's algorithm serves me. It immediately served up several posts featuring war photos and videos from dodgy sources profiting on the gamble that you won't think critically about what your eyes see, or that it doesn't matter even if you do, because there's a sucker coming online every minute.
The chaotic misinformation cycle would be funny if it were bad reporting on, say, a low-stakes story such as NFL news. But information affects public sentiment, which affects votes and policy. Public understanding of this war is being shaped by a high-volume platform that lacks any guardrails against bad actors.
Bad actors paid by Twitter precisely because their sketchy content engages users.
Military conflicts have always been times when citizens turn to news organizations that have spent decades building methods of truth-telling and verification, and whose businesses rise and fall on the trust those methods earn. Media Dependency Theory tells us as much4: in times of crisis or high information need, the public flocks to the news because it is built on standards that elevate it above mere "content."
So it's odd (even as it's unsurprising) that at this moment Twitter is unwilling to double down on trust and verified information. This would be a great time to be a source of quality information, but instead it has chosen the "have a lot of things to read" strategy, betting on volume over veracity.
Twitter even took away the New York Times' verified news organization checkmark after Elon spent months trashing the institution, rendering it Just Another Account in a sea of Bluecheck Bros. It would be funny in its irony, considering Elon's big beef with Twitter's old management was that he thought the platform suppressed views it didn't like. But here we are, watching him suppress views he doesn't like by taking away the NYT's amplification tool.
I haven't lost any faith in the value of considering information in well-rounded ways, or even in the notion that the news-consuming public can do this work. What's failing us are the platforms and our inability to demand better. News consumers still need cues about methodology and source reputation. The news need not be everywhere; it needs to be where it matters to people, and users who find the news on Twitter unusable after these changes are going to find it elsewhere as Twitter self-immolates.
I'll say more about where this is heading in another post. For now, well, you already know where I've placed my bet.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Bluesky, which is doing much better than Twitter these days.
Twitter. Twittertwittertwitter. You will never make me call it by the new name.
The study (which you should read!) comes with some caveats, in part because Elon has shut down the usual API data-pulling access that lets researchers do this kind of study at a larger scale. But I'm satisfied with the methodology they used in that it hews closely to what we can know from this limited access (high-volume accounts are sowing a lot of disinformation seeds and being amplified by Elon Musk) without treading into speculation (i.e., assuming large network effects on this information's visibility without significant access to large sets of user data).
The most charitable way to look at Twitter's changes is that they flatten differences between organizations pursuing verified truth and individuals who have ideas. Trolls pay for a boost, so a news organization paying Twitter for verification merely levels the board alongside paying conspiracy theory accounts, rendering both mere content in a flat world. Journalists should be in the arena as a public service, yes. But pick your fights and don't battle on turf that is set up to bury you. When you're pushing links from institutional accounts on platforms you don't control, you're not engaging the public so much as advertising. Hence, brand safety.
It should be noted that Media Dependency Theory has its detractors. In my more haughty grad school days, I once wrote a paper calling it "the Captain Obvious of media theory," though wiser 2023 Jeremy says that doesn't make it wrong. Look at me, progressing as a human.
Not going to disagree with you on Twitter's issues with accurate information. But the question is: why does it matter, if our professional institutions are failing as well?
There have been numerous articles from media critics (Oliver Darcy, Poynter, CJR, mainstream media, and so on) this week on the failures of amateur, democratic media like Twitter and Meta, and barely any notice of the failures by mainstream media on the Gaza hospital incident. It is bizarro world where we apply professional standards of journalism to amateur media and amateur standards to professional media.
In a free society, the only effective way to diminish the impact of bad information is to have professional institutions that are trustworthy and credible. And the only way to do that is to have strong mechanisms of accountability and checks and balances on those institutions. I don't see that happening. Everybody is obsessed with Elon and Twitter. If people have trusted sources to turn to, they have less inclination to buy into the bad information on Twitter.
And yet, on Twitter, I actually see more accountability and checks and balances on mainstream media than I do in the normal media ecosystem. That's silly and harmful.