In the war against disinformation, the enemy can be hard to pin down. Journalists, politicians, governments and even grandparents have all been accused of helping lies spread online.
While none of these groups is entirely innocent, the real adversary is more mundane. As Facebook whistleblower Frances Haugen testified late last year, it is social media’s own algorithms that amplify misinformation and put it in front of users.
Since its launch in 2004, Facebook has gone from a social networking site for students to a surveillance monster that is destroying social cohesion and democracy around the world. Facebook collects vast amounts of user data, including intimate details such as body weight and pregnancy status, to map its users’ “social DNA.” It then sells this information to anyone who wants to “micro-target” its 2.9 billion users, from shampoo makers to Russian and Chinese intelligence services. In this way, Facebook allows third parties to manipulate minds and trade in “human futures”: predictive models of the choices people are likely to make.
Around the world, Facebook has been used to sow distrust in democratic institutions. Its algorithms have facilitated real-world violence, from genocide in Myanmar to terrorist recruitment in South America, West Africa and the Middle East. Lies about voter fraud in the United States, promoted by former President Donald Trump, flooded Facebook ahead of the January 6, 2021, riot at the US Capitol. Meanwhile, in Europe, Facebook has abetted Belarusian strongman Alexander Lukashenko’s cynical effort to weaponize migrants against the European Union.
In the Czech Republic, disinformation originating in Russia and shared on the platform has flooded Czech cyberspace, amplified by Facebook’s algorithms. An analysis by my company found that the average Czech is exposed to 25 times more misinformation about the Covid-19 vaccine than the average American. The situation is so dire, and the government’s response so inept, that Czechs rely on civil society, including volunteers known as the Czech Elves, to monitor and counter this influence.
So far, efforts to mitigate Facebook’s threat to democracy have failed miserably. In the Czech Republic, Facebook has partnered with Agence France-Presse (AFP) to identify harmful content. But with a single part-time employee and a monthly quota of just ten dubious posts, these efforts are a drop in the ocean of misinformation. The “Facebook Files”, published by the Wall Street Journal, confirm that Facebook acts on “as little as 3-5% of hate speech”.
Facebook has given users the option to opt out of personalized and political ads, but this is a token gesture. Some organizations, such as Ranking Digital Rights, have asked the platform to disable ad targeting by default. That is not sufficient. Micro-targeting, the core of Facebook’s business model, relies on artificial intelligence to grab users’ attention, maximize engagement and disable critical thinking.
In many ways, micro-targeting is the digital equivalent of the opioid crisis. But the US Congress has taken aggressive action to protect people from opioids, with legislation designed to increase access to treatment, education and alternative medications. To break the world’s addiction to fake news and lies, lawmakers must recognize the misinformation crisis for what it is and take similar action, starting with proper regulation of micro-targeting.
The problem is that no one outside Facebook knows how the company’s complex algorithms work, and decoding them could take months or even years. Regulators will therefore have no choice but to rely on Facebook’s own employees to guide them through the machinery. To encourage this cooperation, Congress should offer such whistleblowers blanket civil and criminal immunity, as well as financial compensation.
Regulating social media algorithms sounds complicated, but it is low-hanging fruit compared with the even greater digital dangers looming on the horizon. “Deepfakes”, AI-generated manipulations of video and images designed to sway opinion at scale, are barely a topic of conversation in Congress. While lawmakers worry about the threats posed by traditional content, deepfakes pose an even greater challenge to privacy, democracy and national security.
Meanwhile, Facebook is becoming more and more dangerous. A recent MIT Technology Review investigation found that Facebook funds misinformation by “paying millions of advertising dollars to fund clickbait actors” through its advertising platform. And CEO Mark Zuckerberg’s plans to build a metaverse, “a convergence of physical, augmented and virtual reality,” should scare regulators around the world. Just imagine the potential damage these unregulated AI algorithms could cause if allowed to create an immersive new reality for billions of people.
In a statement after recent hearings in Washington, DC, Zuckerberg repeated an offer he’s made before: regulate us. “I don’t think private companies should make all the decisions on their own,” he wrote on Facebook. “We are committed to doing the best job we can, but at some level the right body to assess trade-offs between social equities is our democratically elected Congress.”
Zuckerberg is right: Congress has a responsibility to act. But Facebook also has a responsibility to act. It can show Congress which social inequities it continues to create, and how. Until Facebook opens up its algorithms to scrutiny, guided by the know-how of its own experts, the war on misinformation will remain unwinnable, and democracies around the world will continue to be at the mercy of an unscrupulous and renegade industry.
František Vrabel is CEO and Founder of Semantic Visions, a Prague-based analytics company that collects and analyzes 90% of global online news content.