Facebook, Google Need to Flatten the Disinformation Curve of Their Own Making

By Gus Rossi, Principal, Omidyar Network

The COVID-19 pandemic has unleashed an explosion of misinformation, fraud, and fear. And despite their latest set of promises, Google, Facebook, and other tech companies continue to promote, serve, and monetize disinformation about the virus. This is unacceptable when people’s health and the stability of our democratic institutions are at stake. The dominant tech platforms that offer services essential for our resilience should establish a moratorium on the algorithmic amplification of artificial content, stop the monetization of fraudulent COVID-19-related content, and open their data to independent researchers who can help them fight this contagion.

A few days after the World Health Organization declared COVID-19 a pandemic, big tech companies committed to jointly “combat fraud and misinformation about the virus” and elevate credible content from health authorities. These efforts don’t seem to be going well. ProPublica recently identified 10,000 fake Twitter accounts linked to the Chinese government, posting propaganda and disinformation about the coronavirus outbreak. Comment threads on popular, credible YouTube videos about COVID-19 are overrun with conspiracy theories, including links to videos that accuse philanthropists of being behind the virus. It’s easy to find Facebook groups full of conspiracy-theory posts. And on Facebook-owned WhatsApp, audio messages have spread the false claim that ibuprofen worsens the effects of the coronavirus.

If this were the first time big tech struggled to counter disinformation, one could argue that with a bit more time and patience they would be able to solve this very complex problem on their own. Unfortunately, big tech has shown little will to truly stop these infectious and dangerous messages. In 2018, Facebook announced the creation of an online library of all the advertisements on the social network, “creating a new standard of transparency and authenticity for advertising.” Yet that potentially great tool to combat disinformation never really worked, as Mozilla, the French government, and NYU researchers have all documented. Similarly, Google has been promising since 2018 to fight disinformation and radicalizing content on YouTube, but recent research out of UC Berkeley’s School of Information suggests those efforts leave much to be desired.

This is why, for these companies to live up to their promise and responsibility in these critical times, they must:

1. Put safety and accuracy above maximizing engagement, and suspend all algorithmic amplification of non-verified, artificial content, which too often amounts to disinformation, conspiracy theories, or radicalizing content. For Google, this means turning off YouTube’s autoplay function; for Facebook, turning off the “suggested for you” algorithmic feed; for Twitter, turning off recommended content from public profiles and bots you aren’t following; and for WhatsApp, slowing the speed at which non-verified information can travel, including making all groups opt-in by default and further reducing the number of forwards permitted (a sketch of such a forward limit appears after this list). These and other features can unwittingly promote false and misleading information. In normal times, artificial amplification is enough of a problem on its own. Now that the largest platforms have sent their content moderators home to protect them from COVID-19, and no one is left to correct the machines’ already numerous errors, it seems completely imprudent to allow attention-maximizing algorithms to spread content unchecked.

2. Extend their efforts to ensure that neither their platforms nor malicious users can make money on COVID-19 disinformation. It will be impossible to stop the spread of virus-related disinformation if advertisers and advertising platforms are complicit. A recent report from the Global Disinformation Index, a group that aims to disrupt, defund, and down-rank disinformation websites, reveals that fake news sites earn around $75 million a year from advertising, much of it placed by Google. Unscrupulous actors will keep promoting COVID-19 disinformation if they know there’s money to be made in doing so. Advertising platforms such as Google could instead partner with disinformation-tracking organizations to co-develop lists of untrustworthy websites, or criteria for identifying suspicious sites, and screen placements against them (see the ad-screening sketch after this list).

3. Allow independent, verified researchers and institutions to be part of the team fighting disinformation, and grant them affordable access to the platforms’ full-stream, unfiltered content and all related metadata. Problems like COVID-19 need publicly led solutions and oversight. Public oversight in this case means that experts, such as those at Data & Society, should be able to process and analyze this data to understand the extent of the problem and the efficacy of companies’ responses, and to inform the fight against it. While this notion may sound radical, much of this information used to be available; in recent years, the big platforms deliberately shut down the instruments that allowed researchers to check their engines. Until 2015, Twitter offered full access to all of its data to any third-party researcher; now that access is available, under rigid restrictions, only to a handful of very wealthy institutions. Similarly, CrowdTangle, an application that showed researchers and journalists public posts on Facebook and Instagram in an easy-to-read format, was acquired by Facebook in 2016 and rendered useless, presumably to prevent research on Russian disinformation. Health data research guidelines, the auditing industry, and social media researchers’ experience offer important blueprints for sharing this information in a manner that respects privacy (a privacy-preserving aggregation sketch also appears below).
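
To make the forwarding point in item 1 concrete, here is a minimal Python sketch of a WhatsApp-style forward limit. Every name and threshold in it (Message, try_forward, FORWARD_RECIPIENT_CAP, VIRAL_HOP_THRESHOLD) is a hypothetical illustration rather than the platform’s actual implementation; it simply shows how capping recipients and throttling highly forwarded content slows the speed at which unverified information travels.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical thresholds for illustration only; not WhatsApp's actual values.
FORWARD_RECIPIENT_CAP = 5   # max recipients for an ordinary forward
VIRAL_HOP_THRESHOLD = 5     # hop count after which content counts as "highly forwarded"

@dataclass
class Message:
    text: str
    forward_hops: int = 0  # how many times this content has already been forwarded

def try_forward(message: Message, recipients: List[str]) -> Optional[Message]:
    """Return a forwarded copy of the message, or None if the forward is refused."""
    if message.forward_hops >= VIRAL_HOP_THRESHOLD:
        # Highly forwarded content may only travel to one chat at a time.
        if len(recipients) > 1:
            return None
    elif len(recipients) > FORWARD_RECIPIENT_CAP:
        return None
    return Message(text=message.text, forward_hops=message.forward_hops + 1)

# A rumor that has already hopped five times can no longer be mass-forwarded:
viral = Message("unverified ibuprofen rumor", forward_hops=5)
assert try_forward(viral, ["chat-a", "chat-b"]) is None
assert try_forward(viral, ["chat-a"]) is not None
```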
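
The partnership proposed in item 2 could start as something as simple as screening every ad placement against a shared blocklist. The sketch below is an assumption-laden illustration: DISINFO_DOMAINS stands in for a feed that a group like the Global Disinformation Index might publish, and can_serve_ads is a hypothetical helper, not any ad platform’s real API.

```python
from urllib.parse import urlparse

# Stand-in for a blocklist feed a tracking group might publish (made-up domains).
DISINFO_DOMAINS = {"covid-miracle-cures.example", "plandemic-daily.example"}

def can_serve_ads(page_url: str) -> bool:
    """Refuse ad placement on flagged domains and their subdomains."""
    domain = urlparse(page_url).netloc.lower()
    return not any(domain == d or domain.endswith("." + d) for d in DISINFO_DOMAINS)

# The ad server consults the shared list before every placement:
assert can_serve_ads("https://example.org/news/article")
assert not can_serve_ads("https://shop.covid-miracle-cures.example/cure")
```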
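
Finally, the privacy-respecting sharing called for in item 3 can borrow a standard idea from health data research: release only aggregates over groups large enough that no individual can be identified. The sketch below is purely illustrative; the threshold and field names are assumptions, not any platform’s actual research interface.

```python
from collections import Counter
from typing import Dict, List

K_ANONYMITY_THRESHOLD = 50  # assumed minimum group size before a count is released

def aggregate_for_researchers(posts: List[dict]) -> Dict[str, int]:
    """Release per-topic post counts to outside researchers, suppressing any
    group too small to protect the individuals inside it."""
    counts = Counter(post["topic"] for post in posts)
    return {topic: n for topic, n in counts.items() if n >= K_ANONYMITY_THRESHOLD}
```

A real program would layer vetted access, audit logs, and stronger guarantees such as differential privacy on top, but the suppression rule above is the core idea those blueprints point to.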

So much is tied to this fight against disinformation, from public health and the safety of our loved ones all the way to the integrity of our democratic system. This health crisis has inspired a stronger response to a legacy problem, but it is critical that users and policymakers demand these additional mechanisms to ensure big tech also takes its responsibility seriously, especially long after this crisis ends, when it comes to our elections, hate speech, and every other misinformation battle. With the open study of their response, the tactics that work best can be deployed against other forms of false information and truly put an end to tech-enabled disinformation. Our lives and institutions depend on it.