Today I want to discuss Meta's recent announcement that it will moderate content less and allow greater freedom of speech: people will be able to say exactly what they believe. Now, there's the sandwich method, where you say something good, then something bad, then something good again. We're going to do the opposite: start with something bad, move to something not so bad, and end with another negative point.
First, let's talk about freedom of speech. It sounds wonderful, right? How could anyone argue that freedom of speech is bad? Well, it turns out there are many nuances. Sometimes freedom of speech is great, but other times it can cause significant damage, and we need to weigh the benefits when it's good against the harm when it's bad. One major issue people often fail to appreciate is that inaccurate information is really corrosive information. When we use terms like misinformation or disinformation, we might think the problem is just a matter of correcting wrong facts. But it's not that simple.
For several years now, I've been investigating this phenomenon. I wrote a book called *Misbelief* about my encounters with misbelievers—people who even thought I helped bring about COVID-19. In the book, I carefully analyze the psychology of these kind but misguided individuals who were deeply influenced by false information. On a personal level, I've received death threats and had my books burned. But the real tragedy lies in how this information fundamentally changes people.
So, when we talk about freedom of information and speech, we often overlook how much damage incorrect information can cause, and it's a cost we don't take into account sufficiently.
Now for something not so bad: Facebook never did a particularly good job blocking false information anyway. Many accusations against me originated from Facebook, but also from WhatsApp, Telegram, and other platforms. Facebook was never a paragon of truth verification; they were unable or unwilling to fight misinformation seriously. They might put up banners saying something wasn't verified as true, but misbelievers would wear that as a badge of honor.
Back to something bad: corrosive information creates tremendous damage. I recently met a woman who was deeply entrenched in these misbeliefs. She lost trust in healthcare and was diagnosed with cancer but refused treatment because she believed pharmaceutical companies were hiding better treatments. This will likely cost her her life.
Analyzing how information influences people shows that stress and social factors (not just social media but real-life connections, too) can shift people from believing mostly true things to embracing harmful falsehoods. We're in a period of high stress right now, with economic issues, AI concerns, and geopolitical tensions, and this makes people more susceptible to misinformation.
I worry more about Facebook than platforms like X (formerly Twitter) because Facebook is built on social connections. It's where people share personal stories and form long-term relationships. When people explore alternative realities and get rejected by their friends and family, they can find supportive communities on Facebook.
Finally, let's consider the broader implications of freedom of information and expression. While these are important values, they sometimes come with significant costs. We don't allow someone to shout "fire" in a crowded theater under the guise of free speech, because it causes harm. Similarly, spreading false beliefs about COVID-19 or chemotherapy can have devastating effects.
We need to better understand the damage caused by misinformation and to balance what freedom of speech gives us against what it takes away. This means examining its impact on financial decisions, health advice, and relationships before forming an ideology around freedom of speech.
One last thought: while freedom of speech is generally good, should foreign governments have this right within our borders? We don’t grant them freedom of movement or employment here for good reasons—so why allow them to influence our people through speech? The same question applies to companies and bots.
In conclusion, figuring out when information is beneficial or harmful is one of society's most pressing issues. Hopefully, by the end of 2025 we will have made strides in understanding how to regulate it better for everyone's benefit.