Meta is ditching fact checkers for X-style community notes. Will they work?

Chris Vallance
Senior Technology Reporter
Meta owner Mark Zuckerberg (Getty Images)

As flames tore through large parts of Los Angeles this month, so did fake news.

Social media posts touted wild conspiracies about the fire, with users sharing misleading videos and misidentifying innocent people as looters.

It brought into sharp focus a question that has plagued the social media age: what is the best way to contain and correct potentially incendiary sparks of misinformation?

It is a debate that Mark Zuckerberg, the chief executive of Meta, has been at the centre of.

Shortly after the January 6th Capitol riots in 2021, which were fuelled by false claims of a rigged US presidential election, Mr Zuckerberg gave testimony to Congress. The billionaire boasted about Meta's "industry-leading fact checking program".

It drew, he pointed out, on 80 "independent third-party fact checkers" to curb misinformation on Facebook and Instagram.

Four years on, that system is no longer something to brag about.

"Fact checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US," Mr Zuckerberg said earlier in January.

Taking their place, he said, would be something totally different: a system inspired by X's "community notes", where users rather than experts adjudicate on accuracy.

Many experts and fact checkers questioned Mr Zuckerberg's motives.

"Mark Zuckerberg was clearly pandering to the incoming administration and to Elon Musk," Alexios Mantzarlis, the director of the Security, Trust and Safety Initiative at Cornell Tech, told the BBC.

Mr Mantzarlis is also deeply critical of the decision to axe fact checkers.

But like many experts, he also makes another point that has perhaps been lost in the firestorm of criticism Meta faces: that, in principle, community-notes-style systems can be part of the solution to misinformation.

Birdwatching

Adopting a fact checking system inspired by a platform owned by Elon Musk was always going to raise hackles. The world's richest man is regularly accused of using his X account to amplify misinformation and conspiracy theories.

But the system predates his ownership.

"Birdwatch", as it was then known, began in 2021 and drew inspiration from Wikipedia, which is written and edited by volunteers.

Mark Zuckerberg announced the changes in an online video (Meta)

Like Wikipedia, community notes rely on unpaid contributors to correct misinformation.

Contributors rate corrective notes under false or misleading posts and, over time, some users earn the ability to write them. According to the platform, this group of contributors is now almost a million strong.

Mr Mantzarlis - who himself once ran a "crowd-sourced" fact checking project - argues this type of system potentially allows platforms to "get more fact checks, more contributions, faster".

One of the key attractions of community-notes-style systems is their ability to scale: as a platform's userbase grows, so does the pool of volunteer contributors (if they can be persuaded to participate).

According to X, community notes produce hundreds of fact checks per day.

By contrast, Facebook's expert fact checkers may manage fewer than 10 per day, suggests an article by Jonathan Stray of the UC Berkeley Center for Human-Compatible AI and journalist Eve Sneider.

And one study suggests community notes can deliver good quality fact checks: an analysis of 205 notes about Covid found 98% were accurate.

A note appended to a misleading post can also organically cut its viral spread by more than half, X maintains, and research suggests notes increase the chance that the original poster will delete the tweet by 80%.

Keith Coleman, who oversees community notes for X, argues Meta is switching to a more capable fact checking programme.

"Community notes are already covering a vastly wider range of content than previous systems," he told me.

"That is rarely mentioned. I see stories that say 'Meta ends fact checking program'," he said.

"But I think the real story is, 'Meta replaces existing fact checking program with approach that can scale to cover more content, respond faster and is trusted across the political spectrum'."

Checking the fact checkers

But of course, Mr Zuckerberg did not simply say community notes were a better system - he actively criticised fact checkers, accusing them of "bias".

In doing so, he was echoing a long-held belief among US conservatives that Big Tech is censoring their views.

Others argue fact checking will inevitably censor controversial views.

Silkie Carlo, director of UK civil liberties group Big Brother Watch - which ran a campaign against alleged censorship of David Davis MP by YouTube - told the BBC allegations of Big Tech bias have come from across the political spectrum.

Centralised fact checking by platforms risks "stifling valuable reporting on controversial content", she told the BBC, and also leads users to wrongly believe that all the posts they are reading are the "vetted truth".

But Baybars Orsek, the managing director of Logically Facts, which supplies fact checking services to Meta in the UK, argues professional fact checkers can target the most dangerous misinformation and identify emerging "harmful narratives".

Community-driven systems alone lack the "consistency, objectivity and expertise" to address the most harmful misinformation, he wrote.

Professional fact checkers, and many experts and researchers, strongly dispute claims of bias. Some argue fact checkers simply lost the trust of many conservatives.

A trust Mr Mantzarlis claims was deliberately undermined.

"Fact checkers started becoming arbiters of truth in a substantial way that upset politically-motivated partisans and people in power and suddenly, weaponised attacks were on them," he said.

Trust in the algorithm

The solution that X uses in an attempt to keep community notes trusted across the political spectrum is to take a key part of the process out of human hands, relying instead on an algorithm.

The algorithm is used to select which notes are shown, and also to ensure they are found helpful by a range of users.

In very simple terms, according to X, this "bridging" algorithm selects proposed notes that are rated helpful by volunteers who would normally disagree with each other.

The result, it argues, is that notes are viewed positively across the political spectrum. This is confirmed, according to X, by regular internal testing. Some independent research also backs up that view.
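X has published the code behind this ranking step, and at its core is a simple matrix factorisation: each rating is modelled as a note's baseline "helpfulness" plus a term capturing how closely the rater's viewpoint aligns with the note's. A note is shown only if its baseline stays high once viewpoint-driven agreement is stripped out. The Python sketch below, on synthetic data, is a loose illustration of that idea; the model shape follows X's published approach, but the learning rate, regularisation and cut-off are illustrative assumptions rather than X's production values.

```python
import numpy as np

# A loose, simplified sketch of "bridging" note selection via matrix
# factorisation, in the spirit of X's published Community Notes ranker.
# All data is synthetic; hyperparameters and the cut-off are assumptions.
rng = np.random.default_rng(0)
n_users, n_notes = 200, 12

# Users sit on one of two "sides"; partisan notes please only one side,
# while bridging notes please most raters regardless of side.
viewpoint = rng.choice([-1.0, 1.0], size=n_users)
note_side = rng.choice([-1.0, 1.0], size=n_notes)
is_bridging = rng.random(n_notes) < 0.3
p_helpful = np.where(
    is_bridging[None, :],
    0.9,
    np.where(viewpoint[:, None] == note_side[None, :], 0.9, 0.1),
)
observed = rng.random((n_users, n_notes)) < 0.5           # who rated what
ratings = (rng.random((n_users, n_notes)) < p_helpful).astype(float)

# Model: rating ~= mu + b_user[u] + b_note[n] + f_user[u] * f_note[n]
# The factor product soaks up agreement explained by shared viewpoint,
# so a high b_note means the note is rated helpful across the divide.
mu = 0.0
b_user, b_note = np.zeros(n_users), np.zeros(n_notes)
f_user = rng.normal(0, 0.1, n_users)
f_note = rng.normal(0, 0.1, n_notes)

lr, reg = 0.05, 0.03
us, ns = np.where(observed)
for _ in range(300):                                      # plain SGD epochs
    for u, n in zip(us, ns):
        err = ratings[u, n] - (mu + b_user[u] + b_note[n]
                               + f_user[u] * f_note[n])
        mu += lr * err
        b_user[u] += lr * (err - reg * b_user[u])
        b_note[n] += lr * (err - reg * b_note[n])
        f_user[u], f_note[n] = (
            f_user[u] + lr * (err * f_note[n] - reg * f_user[u]),
            f_note[n] + lr * (err * f_user[u] - reg * f_note[n]),
        )

THRESHOLD = 0.15  # illustrative; X publishes its own, stricter criteria
for n in range(n_notes):
    status = "show" if b_note[n] > THRESHOLD else "hold"
    print(f"note {n:2d}  bridging={str(is_bridging[n]):5}  "
          f"intercept={b_note[n]:+.2f}  -> {status}")
```

Run on this synthetic data, the partisan notes end up with near-zero intercepts because the factor term explains their one-sided support, while the bridging notes keep high intercepts and are the only ones selected, which is the behaviour X describes.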

Meta says its community notes system will require agreement between people with a range of perspectives to help prevent biased ratings, "just like they do on X".

But this wide acceptance is a high bar to reach.

Research indicates that more than 90% of proposed community notes are never used.

This means accurate notes may go unused.

But according to X, showing more notes would undermine the aim of displaying only those found helpful by the widest range of users, and this would reduce trust in the system.

'More bad stuff'

Even after the fact checkers are gone, Meta will still employ thousands of moderators who remove millions of pieces of rule-breaking content every day, such as graphic violence and child sexual exploitation material.

But Meta is relaxing its rules around some politically divisive topics such as gender and immigration.

Mark Zuckerberg admitted the changes, designed to reduce the risk of censorship, meant it was "going to catch less bad stuff".

This, some experts argue, was the most concerning aspect of Meta's announcement.

The co-chair of Meta's Oversight Board told the BBC there were "huge problems" with what Mr Zuckerberg had done.

So what happens from here?

Details of Meta's new plans for tackling misinformation are scarce. In principle, some experts believe community notes systems could be helpful - but many also feel they should not be a replacement for fact checkers.

Community notes are a "fundamentally legitimate approach", writes Professor Tom Stafford of the University of Sheffield, but platforms still need professional fact checkers too, he believes.

"Crowd-sourcing can be a useful component of [an] information moderation system, but it should not be the only component."