Recently, I came across research on Holocaust denial advancing on Wikipedia. It's disturbing how people are trying to rewrite history. I also saw a report noting that the article on Adolf Hitler is one of the most edited pages on Wikipedia. Given how long ago Hitler died, the facts should be settled by now. That this remains a contentious topic highlights how much the platform has become a battleground for political activism rather than simply a repository of facts.
But it's not just Wikipedia. This reminds me of a Coursera class I once taught. Each time I ran it, about 200,000 students enrolled, and inevitably there was one awful person among them. One time, this individual was extremely unkind to women across various discussion groups. Another time, someone accused me of conducting unauthorized experiments on Coursera students, which wasn't true: every experiment I do has ethical approval. Yet every time there was a glitch on the platform, this person interpreted it as evidence of wrongdoing and filed a complaint, prompting an investigation.
I approached Coursera and asked if I could remove these disruptive individuals from the platform. My argument was that they were intruding on my class like unwelcome guests in my home. Both times, Coursera said no; they valued inclusiveness too much. Eventually, I stopped teaching because dealing with such negativity was too painful.
I see a similar issue with Wikipedia. It's an amazing platform built on a noble idea, but it assumes everyone has good intentions. As platforms like Coursera or Wikipedia grow, they create opportunities for bad actors to exploit systems designed around the assumption of good faith.
We need to address this correctly. While inclusiveness and freedom of speech are important, they also make us vulnerable. We must put up guardrails when it is clear that people are acting in bad faith. Allowing Holocaust deniers to shape Wikipedia articles, for example, is unacceptable.
The challenges with Wikipedia run deep and wide: many topics have become ideological battlegrounds. No system is foolproof; bad actors will always find ways to exploit it.
So what's the point? Technology often makes us vulnerable in the name of inclusiveness. Inclusiveness is crucial, but so is protecting against bad actors. We need cost-benefit analysis rather than rigid ideologies like absolute freedom of speech or absolute inclusiveness.
Scale invites bad actors; we must protect against them in every domain, or we risk building terrible systems that fail to deliver their potential benefits.
In conclusion, let's adopt a cost-benefit approach instead of rigid ideologies to create better systems and a better world.