How Meta’s new moderation policy could worsen misinformation and tensions in Nigeria
Meta’s decision to end professional fact-checking and rely on user-generated ‘Community Notes’ may worsen the information landscape in Nigeria
Meta’s recent shift in content moderation has sparked a global debate, particularly in regions like Nigeria, where misinformation and disinformation thrive. By replacing its third-party fact-checking program with a crowdsourced moderation model, Meta risks exacerbating the spread of false information, political propaganda, and inflammatory narratives. This change could deepen existing ethnic, religious, and political tensions — all of which have been amplified through misinformation in the past.
This article explores how Meta’s decision to end professional fact-checking and rely on user-generated ‘Community Notes’ may worsen the information landscape in Nigeria, a country where social media plays a critical role in shaping public opinion and influencing real-world events.
Meta’s shift in moderation policy
On 07 January 2025, Meta CEO Mark Zuckerberg announced the company’s decision to eliminate its third-party fact-checking program, a system that had been in place since 2016 to counter misinformation. Instead, Meta will introduce Community Notes, allowing users to flag misleading or false content.
‘We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms,’ Zuckerberg stated.
Meta justified the change by arguing that fact-checking organisations had become biased, adding that the company would now focus only on illegal and high-severity violations. However, critics warn that community-driven fact-checking cannot be effective, particularly in regions where misinformation is systematically weaponised.
Why Nigeria is especially vulnerable
Nigeria has one of the largest Meta user bases in Africa. According to DataReportal, the country boasts 36.75 million Facebook users and 12.40 million Instagram users.
Misinformation crises have a long history in Nigeria, where social media has played a central role in shaping public perception — often with dangerous consequences. During elections, false claims about voter suppression and rigging have been strategically spread to manipulate public sentiment, erode trust in the electoral process, and influence voter behaviour. Beyond politics, misinformation has also been used to incite violence, particularly by amplifying fake news about ethnic or religious conflicts. Fabricated reports and doctored images have fuelled tensions, leading to retaliatory attacks and deepening existing divisions.

Public health initiatives have not been spared either, as false narratives — especially those surrounding vaccines — have contributed to widespread hesitancy. Campaigns against vaccines, often portraying them as tools of Western control or population reduction, have undermined immunisation efforts and jeopardised public safety. Across these critical areas, misinformation continues to be weaponised, making social media both a powerful tool for discourse and a dangerous vector for disinformation in Nigeria.
Crowdsourced moderation: Lessons from X
Meta’s Community Notes system is modelled after X’s crowdsourced moderation, but mounting evidence suggests that this approach is ineffective in combating misinformation. In 2023, the European Union (EU) Commission declared X the major social media platform with the highest ratio of mis-/disinformation posts. Climate Action Against Disinformation (CAAD), a global coalition of leading climate and disinformation organisations, also ranked the platform worst among social media platforms for its poor approach to curbing climate change misinformation.
Similarly, research by NewsGuard, a leading global information-reliability service, found in November 2023 that viral disinformation about the Ukraine war remained unflagged, even when directly debunked by professional fact-checkers. The study also revealed that X displayed programmatic advertisements for dozens of major brands, educational institutions, governments, and non-profits in feeds directly below viral posts advancing false or misleading claims about the war. Earlier, in April 2023, NewsGuard’s ‘misinformation monitor’ report showed that X’s ‘blue check’ verification badge, previously an indication of an account’s authenticity, had turned into a tool that propagators of false information use to present themselves as credible. X also failed to contain misinformation during the 2024 United States elections, where multiple false claims went unchecked because Community Notes did not provide fact-checks in real time. Analysts have said that many proposed and public Community Notes contain misinformation themselves.
Nigeria’s vulnerability is heightened by the country’s complex socio-political landscape, in which misinformation is frequently weaponised to manipulate public opinion, incite violence, and influence elections. Without a solid mechanism to expose deliberate disinformation, Meta’s new system leaves the platform open to manipulation by coordinated disinformation campaigns, ultimately threatening social cohesion and democratic processes.
Civic groups and fact-checkers raise concerns
More than 100 fact-checking organisations under the International Fact-Checking Network (IFCN) have condemned Meta’s decision to end its third-party fact-checking program, warning that it will undermine online accuracy and have real-world consequences. In an open letter to Zuckerberg published on 09 January 2025, the fact-checkers expressed disappointment, arguing that the move threatens nearly a decade of progress in curbing misinformation. They rejected Meta’s claim that fact-checking had become a tool for censorship, calling the statement ‘false’ and emphasising that the fact-checking program complied with strict nonpartisanship standards. IFCN-affiliated fact-checkers undergo annual independent verification, including assessments and peer reviews, to ensure transparency and objectivity.
Advocacy groups have also raised alarm. The National Online Safety Coalition, leading the #FWDwithFacts campaign, warned that Meta’s decision would accelerate the spread of misinformation and hate speech, particularly in countries like Nigeria, where social media platforms hold significant influence.
‘We noted that social media platforms like Facebook and WhatsApp, with tens of millions of Nigerian users, remain central to the country’s information ecosystem and can be weaponised without adequate fact-checking and content moderation,’ said Shirley Ewang, a senior specialist at Gatefield, a public strategy and advocacy firm. She cautioned that the absence of fact-checking would create a dangerous vacuum, fuelling disinformation, deepening societal divisions, and putting lives at risk.
The coalition urged African governments to demand greater transparency from tech companies regarding their misinformation strategies. It also called for stronger legal frameworks to hold tech platforms accountable for harmful content and encouraged partnerships with civil society to promote media literacy.
‘African governments must act now to protect their citizens and preserve the integrity of democracy,’ Ewang emphasised.
The high stakes of Meta’s moderation shift
Meta’s decision to abandon professional fact-checking could further destabilise Nigeria’s already fragile information ecosystem, leaving the country more vulnerable to mis-/disinformation. With false narratives frequently targeting political campaigns, ethnic tensions, and public health initiatives, the absence of independent oversight creates an open field for bad actors to manipulate public discourse.
Without effective safeguards, disinformation networks could operate unchecked, emboldening those who seek to incite violence, destabilise political processes, and erode trust in democratic institutions. The consequences extend beyond online discourse — misleading narratives have real-world implications, influencing elections, fuelling ethnic and religious conflicts, and undermining public confidence in critical health interventions.
This article was co-written by Nurudeen Akewushola, a freelance journalist working with the Pravda Association, and Jakub Śliż. It was edited by senior editors Eva Vajda and Aleksandra Wrona and iLAB managing editor Janet Heard.