Censorship on the Rise Amid AI Adoption

HodlX Guest Post
AI (artificial intelligence) poses an ethical dilemma around algorithmic moderation: if we neglect to address it, authorities and businesses could come to dictate worldwide discourse.
As a crypto investor, I can’t help but notice the explosive growth of AI and of the broader industry we’re part of. Every day, the control it could exert through censorship seems to grow stronger.
Since 2010, the computational power used to train AI systems has grown roughly tenfold every year or two. That rapid scaling makes the potential for censorship and manipulation of public conversation a growing concern.
Globally, corporations rank privacy and data-management issues as the most significant risks of AI, while censorship has barely registered as a concern for them.
Because AI can sift through vast amounts of data almost instantly, it is a natural tool for filtering and managing content and for regulating the flow of information.
LLMs (large language models) and recommendation systems can filter, suppress, or amplify information at massive scale.
In 2023, Freedom House highlighted that AI is enhancing state-led censorship.
In China, the Cyberspace Administration of China (CAC) has built censorship directly into AI-driven conversational tools such as chatbots. These chatbots must uphold “core socialist values” and filter out any content the Communist Party deems inappropriate.
Chinese AI models, including DeepSeek’s R1, routinely suppress discussion of sensitive subjects such as the Tiananmen Square massacre in order to promote state-sanctioned narratives.
According to Freedom House, democratic governments, working with international experts across sectors, should set robust human rights-based standards for building and deploying AI. Those standards should bind governmental and non-governmental actors alike, with the aim of safeguarding a free and open internet.
Research conducted at UC San Diego in 2021 found that AI models trained on censored datasets such as China’s Baidu Baike, where ‘democracy’ is often linked to ‘disorder,’ learn biased associations.
Models trained on uncensored sources instead associated ‘democracy’ with ‘stability.’
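The mechanism behind this finding can be illustrated with a toy sketch. The corpora below are hypothetical stand-ins (not the UC San Diego data), and real models use learned embeddings rather than raw co-occurrence counts, but the principle is the same: a model can only associate words the way its training text does.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences):
    """Count how often each pair of words appears in the same sentence."""
    pair_counts = Counter()
    for sentence in sentences:
        words = set(sentence.lower().split())
        for a, b in combinations(sorted(words), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# Hypothetical toy corpora, illustrating how training text
# shapes which words a model learns to associate.
censored_corpus = [
    "democracy brings chaos and disorder",
    "democracy leads to disorder",
    "stability comes from order",
]
open_corpus = [
    "democracy supports stability and rights",
    "democracy ensures stability",
    "disorder follows repression",
]

for name, corpus in [("censored", censored_corpus), ("open", open_corpus)]:
    counts = cooccurrence_counts(corpus)
    print(name,
          "democracy~disorder:", counts[("democracy", "disorder")],
          "democracy~stability:", counts[("democracy", "stability")])
```

In the censored toy corpus, ‘democracy’ co-occurs only with ‘disorder’; in the open one, only with ‘stability’ — a model trained on either inherits that slant without any explicit instruction to be biased.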
Freedom House’s 2023 Freedom on the Net report documented a concerning trend: global internet freedom declined for the 13th straight year, and a significant portion of that decline can be traced to advances in AI.
In roughly 22 nations, regulations require social media platforms to use automated content-moderation systems, which can stifle discourse and protest.
In countries such as Myanmar and Iran, ruling authorities have used AI for surveillance, monitoring chat groups on platforms like Telegram, arresting critics, and even issuing death sentences based on online posts. It is a chilling example of AI put to oppressive use.
Authorities in Belarus and Nicaragua have likewise handed down severe prison sentences over online speech.
According to Freedom House, at least 47 governments deployed commentators to manipulate online discussions in favor of their preferred narratives.
In the past year, at least 16 countries used new AI technologies to sow doubt, smear opponents, or sway public discourse.
And in at least 21 countries, digital platforms are required to use machine learning to remove disfavored political, social, and religious speech.
A 2023 Reuters report raised a chilling prospect: AI-generated deepfakes and misinformation could erode public faith in democratic processes, empowering regimes intent on tightening their grip on information.
Ahead of the 2024 US presidential election, AI-generated images falsely suggesting Taylor Swift had endorsed Donald Trump showed how easily AI can sway public sentiment.
China offers the most prominent example of AI-driven censorship.
In 2025, a leaked dataset reported by TechCrunch revealed a sophisticated AI system engineered to suppress discussion of sensitive topics such as pollution scandals, labor disputes, and political issues concerning Taiwan.
Rather than relying on conventional keyword filtering, the system uses LLMs to assess context and even detect political satire.
Researcher Xiao Qiang observed that such systems make state information control both more efficient and more granular.
In 2024, a House Judiciary Committee report alleged that the National Science Foundation (NSF) had been funding AI-based censorship and propaganda tools. According to the report, the NSF awarded millions of dollars in grants to university and non-profit research teams to counter alleged misinformation about Covid-19 and the 2020 election.
These government-funded initiatives aim to build AI systems capable of policing online discourse, tools that governments and tech giants alike could use to shape public opinion by suppressing some viewpoints and amplifying others.
A 2025 Wired report found that DeepSeek’s R1 model has censorship filters built in not only at inference time but also during training, blocking content on sensitive subjects.
A 2025 Pew Research Center study found that 83% of American adults worry about AI-generated misinformation, both its potential spread and its impact on freedom of expression.
Interviews with AI specialists suggest that the data used to train AI systems may unwittingly reinforce existing power dynamics.
Addressing AI-driven censorship
A 2025 analysis in the HKS Misinformation Review argued for better journalism practices to reduce the demand for fear-driven censorship.
One study found that 83.4% of Americans were at least somewhat concerned about AI spreading misinformation during the 2024 US presidential election: 38.8% were somewhat worried and 44.6% very worried. Only 9.5% reported no concern, and 7.1% were entirely unaware of the issue.
I firmly believe that building an open-source AI ecosystem is crucial. That means companies openly sharing details of their training data sources and acknowledging potential biases in their AI systems, ensuring transparency and fairness.
Governments should create AI regulatory frameworks prioritizing free expression.
To secure a human-centered future rather than a dystopian, AI-controlled technocracy, both the AI sector and its users must summon the courage to confront censorship.
Manouk Termaaten is an entrepreneur specializing in AI and the founder and CEO of Vertical Studio AI. With a background in engineering and finance, he aims to democratize AI for all users by offering user-friendly customization tools and affordable computing.
2025-05-24 06:45