AI in a Dictatorship

Artificial intelligence (AI) has opened new frontiers for authoritarian regimes to expand their power and tighten control. Modern autocrats are wielding AI as a tool for mass surveillance, censorship, propaganda, predictive policing, and even direct population control. As Freedom House observes, AI acts as an “amplifier of digital repression,” making it easier, faster, and cheaper for governments to censor information, surveil citizens, and spread disinformation (freedomhouse.org). This article explores real-world examples of these practices today – from China’s social credit scoring and facial recognition in Xinjiang to Iran’s and North Korea’s digital censorship – and looks ahead to how emerging technologies like autonomous weapons, drones, and smart sensors could further entrench authoritarian power. Finally, we consider how technologists, activists, and democratic institutions can push back and protect human rights in an AI-powered future.

AI-Powered Surveillance and Predictive Policing

One of the most pervasive uses of AI by authoritarian governments is surveillance – watching the populace in fine-grained detail. In China, authorities have constructed a tech-enabled police state that is unprecedented in scale. Hundreds of millions of CCTV cameras, many equipped with AI-driven facial recognition, are estimated to blanket public spaces, tracking citizens’ movements in real time (journalofdemocracy.org). Nowhere is this more extreme than in the Xinjiang region, “believed to be one of the most surveilled areas in the world.” There, the largely Muslim Uyghur minority is monitored by an extensive network of cameras and sensors. Reports even indicate that police in Xinjiang have tested AI emotion-recognition software on Uyghur detainees – using facial analysis to purportedly detect stress or dissent – as part of interrogation and surveillance in detention camps (bbc.com). Such facial recognition systems flag individuals automatically, making anonymity nearly impossible. Chinese security forces justify this as counterterrorism, but in practice it enables the profiling and oppression of an entire ethnic group.

A detention facility in Xinjiang, China. The region’s Uyghur population lives under pervasive AI-powered surveillance, with facial recognition cameras at checkpoints and public spaces feeding into policing databases (womblebonddickinson.com).

Beyond passive surveillance, regimes are employing predictive policing – using AI analytics on big data to predict who might violate laws or pose a threat. In Xinjiang, police operate an Integrated Joint Operations Platform (IJOP) that aggregates data from CCTV feeds, Wi-Fi networks, banking records, and even neighbors’ reports. The IJOP’s algorithms generate “predictive warnings” in real time, flagging people deemed suspicious so that officers can “identify targets … for checks and control” before any crime occurs (hrw.org). Human Rights Watch found that individuals in Xinjiang could be flagged for innocuous behaviors (e.g. frequently exiting through a back door, or suddenly abstaining from smartphone use) and then subjected to questioning or detention – essentially algorithmic “pre-crime” profiling. This data-driven approach represents a new form of policing powered by AI, where automated systems decide who merits state scrutiny or punishment.
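To make the mechanism concrete, here is a minimal Python sketch of how rule-based “predictive” flagging of this kind could work. Everything in it is hypothetical – the event names, weights, and threshold are invented, since the real platform’s logic is not public – with the triggers modeled on the kinds of behaviors Human Rights Watch documented:

```python
from dataclasses import dataclass, field

@dataclass
class CitizenRecord:
    citizen_id: str
    events: list[str] = field(default_factory=list)  # behavioral events from surveillance feeds

# Hypothetical rule weights, echoing the "innocuous" triggers HRW documented.
SUSPICIOUS_EVENTS = {
    "exited_via_back_door": 2,
    "stopped_using_smartphone": 3,
    "fueled_someone_elses_car": 1,
}
FLAG_THRESHOLD = 4  # arbitrary cutoff for emitting a "predictive warning"

def predictive_warning(record: CitizenRecord) -> bool:
    """True when the accumulated rule weights cross the flagging threshold."""
    score = sum(SUSPICIOUS_EVENTS.get(event, 0) for event in record.events)
    return score >= FLAG_THRESHOLD

person = CitizenRecord("uid-001", ["stopped_using_smartphone", "exited_via_back_door"])
if predictive_warning(person):  # 3 + 2 = 5, over the threshold
    print(f"{person.citizen_id} flagged for 'checks and control'")
```

The unsettling point of the sketch is how little it takes: a handful of crude rules over aggregated feeds is enough to automate who gets stopped at a checkpoint.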

Authoritarian governments elsewhere are adopting similar tools. In Russia, Moscow’s extensive CCTV network (enhanced with facial recognition software) has been used to identify and track protesters or even would-be protesters. For example, at least 141 people were preemptively detained in Moscow in 2022 after the system recognized them as past demonstrators – they were stopped in the subway on days when authorities anticipated protests, before they could assemble (reuters.com). This tactic of preventive arrest shows how AI surveillance can directly facilitate targeted repression, by letting police intercept dissidents based on algorithmic “alerts.” As one analysis succinctly put it, autocrats now have AI tools to “surveil, target, and crush dissent” on a scale that was not possible before (journalofdemocracy.org).

China’s leadership has openly embraced this vision of high-tech social control. President Xi Jinping has enlisted major AI companies – SenseTime, Hikvision, Megvii, iFlytek, and others – as partners in building what has been described as perhaps “history’s most oppressive authoritarian apparatus” (womblebonddickinson.com). In the words of one investigative report, “With AI, Xi can build [this system] without the manpower Mao needed” in the past (womblebonddickinson.com). Xinjiang’s millions of Uyghurs have effectively become a test population for perfecting these surveillance and policing techniques (womblebonddickinson.com). And once refined, nothing prevents such AI-enhanced repression from being expanded nationwide in China – or exported abroad to empower other dictators (womblebonddickinson.com). Indeed, Chinese tech firms are already selling “smart city” surveillance packages globally, spreading these tools beyond China’s borders (atlanticcouncil.org).

Censorship, Disinformation, and Propaganda

AI is equally revolutionizing how authoritarian states control information. Regimes have long censored the press and broadcast propaganda; today they are augmenting these tactics with machine learning. Internet censorship can now be partly automated: advanced filtering algorithms and AI moderation systems scan online content at scale, blocking or deleting posts that contain banned keywords, images, or ideas (journalofdemocracy.org). “The world’s most technically advanced authoritarian governments,” such as China and Iran, actively shape AI tools (like chatbots and social media algorithms) to ensure they strengthen state censorship systems (freedomhouse.org). For example, China’s censors have trained AI models to recognize and remove politically sensitive speech on domestic platforms, and they require tech companies to do the same. In Iran, authorities use an equally sophisticated “national internet” infrastructure to filter traffic. During the 2022 Woman, Life, Freedom protests after the death of Mahsa Amini, Iran’s “technically advanced censorship apparatus” was pushed to its limits by the volume of dissent – the regime responded by intermittently shutting down the internet entirely and blocking popular apps like WhatsApp and Instagram to quell antigovernment protests (freedomhouse.org). Reports also indicate Iran is turning to facial recognition to enforce social rules offline: in 2023, a UN fact-finding mission found that Iran was using cameras and AI to identify women appearing unveiled in cars or public places, combining digital surveillance with morality policing (cfr.org).

North Korea represents the extreme end of information control – near-total censorship. The Pyongyang regime tightly restricts its citizens to a closed domestic intranet (the “Kwangmyong”), entirely cut off from the global Internet (ts2.tech). Only a tiny elite of trusted officials can access the real Web, under heavy monitoring. Everyone else is fed a diet of state-approved sites and propaganda content on the sealed intranet. Foreign media, social networks, and outside communications are banned outright, enforced by draconian punishments. This digital isolation, while low-tech, is effective: it ensures North Koreans see only the regime’s version of reality (ts2.tech). In short, both AI-driven censorship and blunt network shutdowns are being used to erect “digital iron curtains” that cut populations off from uncensored information.

On the propaganda front, AI is supercharging disinformation campaigns. Sophisticated algorithms can now generate deepfake images, videos, and audio that are difficult to distinguish from real footage. They can also produce floods of automated posts on social media through bot networks. Autocratic regimes are exploiting these capabilities to spread their narratives and attack opponents. Freedom House found that in at least 47 countries over the past year, governments employed “cyber armies” of fake accounts and bots to manipulate online discussions – double the number from a decade ago (freedomhouse.org). And in 16 countries, state-linked groups deployed new generative AI tools to create text, audio, or visuals that sowed doubt, smeared opposition figures, or distorted public debates (freedomhouse.org). For example, Myanmar’s military junta in 2021 unleashed swarms of AI-driven bot accounts to harass pro-democracy activists online and flood social platforms with pro-regime messages (journalofdemocracy.org). The sheer volume of these bot posts aimed to drown out dissenting voices and create a false impression of widespread support for the coup.

Another emerging tactic is the use of AI-generated “news anchors” and fake media outlets for propaganda. In early 2023, observers caught Venezuela’s state media circulating videos featuring a supposed foreign news channel called “House of News” – in reality, the newscasters were AI-generated avatars (created with a software tool called Synthesia) reading scripts with pro-government spin (freedomhouse.org). Similarly, a network nicknamed “Wolf News” that pushed pro-Chinese Communist Party disinformation was found to be using AI-created English-speaking personas (freedomhouse.org). These digital puppets add a patina of credibility to falsehoods. The ease of creating such content means authoritarian propagandists can now mass-produce fake journalists, deepfake videos, and bogus social media personalities to disseminate their messaging at scale (freedomhouse.org). The result is an information space where truth becomes harder to discern – exactly the goal of regimes that want to confuse the public and undermine independent media.

An AI-generated “news anchor” appearing in a fake broadcast. Authoritarian regimes are experimenting with AI-driven propaganda, using deepfake videos and synthetic media to spread pro-government narratives or discredit opponents (freedomhouse.org). In one case, Venezuelan state media outlets shared videos of fabricated newscasters delivering regime talking points (journalofdemocracy.org, freedomhouse.org).

The consequences of AI-powered disinformation are sobering. It allows authoritarian governments to artificially amplify their voice while silencing real critics. Troll farms and bot armies (often state-sponsored) harass journalists and activists with waves of online abuse, creating a climate of fear and self-censorship (journalofdemocracy.org). AI-edited videos and images can ruin reputations or incite hatred – for instance, fabricated intimate videos have been used to attack female journalists and opposition figures, a tactic seen in places like India and Belarus (freedomhouse.org). And by flooding social media with fake stories and conspiracy theories, regimes hope to polarize societies and distract citizens from factual reporting (freedomhouse.org). All of these techniques, turbocharged by AI, strengthen authoritarian control over the narrative.

Social Credit Systems and Population Control

Perhaps the most far-reaching application of AI in an authoritarian context is the attempt to algorithmically engineer social behavior – exemplified by China’s developing Social Credit System. This initiative, still being refined and rolled out, aims to monitor, rate, and regulate the conduct of all citizens by integrating data across many aspects of life. In essence, each person is assigned a “social credit” score based on their behaviors, associations, and even opinions. AI algorithms crunch data from surveillance cameras, financial records, travel logs, social media posts, and more to continually update these scores.

What are the real-world effects of such a system? One observer described a dystopian scenario already taking shape: imagine a society with “unlimited electronic and physical surveillance” – millions of cameras, drones, phone and internet monitoring – all fed into AI engines that can “assign meaning to every act it captures” (womblebonddickinson.com). The AI aggregates each person’s life into a composite score that then determines their opportunities. “Receive a good score from the government,” and you might be rewarded – expedited access to loans, a better apartment, permission to travel or even to have a child. But “a bad score means roadblocks in your life,” from being disqualified for jobs to being barred from certain schools or restricted in movement (womblebonddickinson.com). China is rapidly moving toward this all-encompassing surveillance society, pairing omnipresent data collection with AI-driven evaluations of “trustworthiness.” As the analysis notes, “this is what China is rapidly becoming” (womblebonddickinson.com).

In practice, elements of the Social Credit System are already in use. Authorities maintain blacklists that automatically penalize individuals for offenses like jaywalking, debt default, or criticizing the government. In some cities, facial recognition cameras identify jaywalkers and immediately post their images on public screens to shame them – and deduct points from their social score. Those with low scores can be banned from buying plane or train tickets, denied loans, or restricted from certain jobs. The system’s scope is expanding: officials have discussed using it to regulate moral behavior and even minor infractions (like playing music too loudly). It is a prime example of using big data and AI to enforce conformity and loyalty. As one legal analysis observed, “China is not only instituting a surveillance society, including a social scoring system for every resident, but is investing heavily in AI needed to manage it all and make evaluations of what cameras, biometric readers and internet filters capture” (womblebonddickinson.com). The government reportedly spent tens of billions of dollars on AI development in just a few years to support these ambitions (womblebonddickinson.com).
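Mechanically, such a system reduces to a weighted composite score gating access to services. The sketch below is purely illustrative – in reality China’s scheme is fragmented across local pilots, and no official factors, weights, or cutoffs are published – but it shows how cheaply this kind of gating can be automated:

```python
# Hypothetical factor weights; the real scoring rules are not public.
WEIGHTS = {
    "debt_defaults": -50,
    "jaywalking_incidents": -10,
    "criticized_government_online": -100,
    "volunteer_hours": 5,
}

def composite_score(profile: dict[str, int], base: int = 1000) -> int:
    """Fold a person's recorded behaviors into one 'trustworthiness' number."""
    return base + sum(WEIGHTS[factor] * count for factor, count in profile.items())

def allowed_services(score: int) -> list[str]:
    """Map score bands to privileges, mirroring reported blacklist effects."""
    services = []
    if score >= 900:
        services += ["expedited_loans", "priority_housing"]
    if score >= 700:
        services += ["plane_tickets", "high_speed_rail"]
    return services or ["blacklisted"]

citizen = {"debt_defaults": 1, "jaywalking_incidents": 2, "volunteer_hours": 4}
print(composite_score(citizen), allowed_services(composite_score(citizen)))
# -> 950 ['expedited_loans', 'priority_housing', 'plane_tickets', 'high_speed_rail']

dissident = {"criticized_government_online": 3, "debt_defaults": 1}
print(composite_score(dissident), allowed_services(composite_score(dissident)))
# -> 650 ['blacklisted']
```

Note how a single heavily weighted political factor outweighs years of ordinary good behavior – which is exactly what makes such scoring an instrument of control rather than of credit.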

Xinjiang again provided a blueprint for how such population control might function. In what has been called an “open-air prison,” millions of Uyghur Muslims were not only surveilled but also assigned ratings for their perceived loyalty or dissidence, with those deemed problematic sent to re-education camps (womblebonddickinson.com). Police checkpoints there collect biometric data (faces, fingerprints, DNA) and check each person’s status on a mobile app – some Uyghurs reported their ID cards would “make noise” when scanned, marking them as flagged or blacklisted individuals (womblebonddickinson.com). This testbed demonstrates how AI and data can be used to sort citizens into categories: the “trustworthy,” who get relative freedom, versus the “suspicious,” who are constantly harassed or detained. Human rights experts warn that once such a system is perfected in one region, “no technological limitations will prevent [the government] from extending it across [the country]” (womblebonddickinson.com). Indeed, China’s vision of total population monitoring could become a reality nationwide in the near future, with AI enabling the state to watch every move and grade every citizen, all without requiring an army of human watchers.

Notably, China is exporting elements of this model. Chinese companies have sold “Safe City” surveillance platforms – featuring facial recognition, crowd analytics, and database integration – to dozens of other countries (atlanticcouncil.org). Some of these buyers are authoritarian governments in Africa, the Middle East, and Southeast Asia keen to import China’s approach to managing society. This raises the prospect of social-credit-like systems and AI-enhanced monitoring spreading globally, entrenching new forms of digital authoritarianism beyond China’s borders.

The Next Phase: AI Integration with Weapons, Drones, and Robotics

Looking ahead, emerging AI technologies could hand authoritarian regimes even more direct and formidable means of coercion. Today’s surveillance and scoring systems, while oppressive, primarily monitor and restrict people. The next generation of AI tools may actively strike at regime opponents or entire populations through autonomous force. Consider several worrying scenarios:

  • Automated Border Enforcement: Authoritarian states could combine AI with lethal hardware to create fully automated security perimeters. A glimpse of this future exists today in Gaza, where Israel’s military has deployed an AI-assisted “smart” border fence in a heavily surveilled conflict zone. The fence is equipped with remote-controlled machine-gun towers guided by AI surveillance cameras, capable of tracking and firing on anyone who approaches the buffer zone (stevensaidthis.squarespace.com). This essentially creates a robotic sentry that can enforce borders without on-site human soldiers. A dictatorship could adopt similar technology to secure its borders or sensitive zones, using facial recognition and motion detectors to automatically identify and shoot “intruders.” If such systems are fully implemented, crossing a border without permission might trigger a deadly response from an algorithm – a frightening expansion of state power over life and death.
  • Autonomous Weapons and Riot Control: Advances in AI-controlled weapons, often dubbed “killer robots,” could give regimes a means to suppress unrest with minimal human intervention. Small autonomous drones, for instance, can be programmed to swarm through city streets, identifying protesters via facial recognition and firing tear gas or other munitions at them. In the worst case, drones or ground robots armed with lethal weaponry could be sent to hunt down specific individuals marked as dissidents. Military analysts warn that in the hands of a government with no respect for human rights, swarms of AI-guided drones could be unleashed “to hunt down civilians or perceived threats,” even selecting targets based on predefined characteristics like ethnicity or clothing (newlinesinstitute.org). Unlike human enforcers, machines would not hesitate or question orders – an autocrat’s dream tool for quelling opposition. We have already seen precursors: Turkey and Russia have used semi-autonomous drones in conflict zones to identify and strike targets, and regimes like Iran are rapidly developing drone capabilities (newlinesinstitute.org). The widespread proliferation of cheap, AI-equipped drones means even poorer regimes or non-state militias can acquire a sort of instant air force for repression (newlinesinstitute.org). This technology could be turned inward to patrol streets or assassinate opponents with chilling efficiency.
  • Predictive Threat Assessment & “Pre-crime” Policing: As machine learning gets better at analyzing big data, future authoritarian governments may integrate AI across all surveillance feeds to predict dissent before it happens. Every digital footprint – social media posts, private messages, purchases – and every physical movement captured by cameras or sensors could be fed into an AI system profiling citizens for “anti-state sentiment.” One can imagine an AI system assigning each person a dynamic “threat score” (not unlike a credit score) indicating how likely they are to engage in protest or rebellion; a minimal sketch of such a score appears after this list. If the score crosses a threshold, the system could automatically dispatch police or drones to intervene preemptively. Elements of this are already visible: in Egypt, the government reportedly uses AI to monitor social media for signs of dissent, analyzing keywords and hashtags to predict and preempt protests by arresting organizers in advance (journalofdemocracy.org). China’s IJOP platform in Xinjiang, as noted, flags people for police attention based on subtle behavioral data. Scaling this up, an authoritarian state might link nationwide CCTV (including smart-city sensors, license plate readers, even “smart” home devices) with AI analytics to constantly watch for patterns of “abnormal” activity. If, say, a usually compliant citizen suddenly starts visiting opposition-linked websites, meets with known activists, or deviates from their routine in a way the model finds correlates with past dissent, the system could alert authorities to take action. This vision of total predictive policing – essentially AI-powered thought-policing – would let a regime suppress nascent challenges before they can coalesce, strangling free expression at inception.
  • Autonomous Law Enforcement Robots: We may also see more robots and AI-driven machines replacing human police and soldiers in enforcing regime orders. Prototypes already exist for robotic riot police (wheeled or drone-like machines that can deploy tear gas, stun weapons, or perform surveillance in crowds). In an extreme scenario, a government could deploy legions of AI-powered robots to patrol streets, man checkpoints, or enforce curfews, all following programmed directives. Such robots might use facial recognition to verify IDs, and if an individual is flagged (by the social credit system or predictive model) as untrustworthy, the robot could detain or neutralize them on the spot. Unlike human police, robots wouldn’t be susceptible to empathy or bribery – making them ideal instruments of a tyrant’s will. While this remains largely speculative, the components are rapidly advancing: AI vision for target recognition, autonomous navigation, and weaponization of robots are active areas of research. Without legal and ethical constraints, authoritarian states could readily adopt these technologies to strengthen their monopoly on force.
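As promised above, here is a minimal, entirely hypothetical sketch of the dynamic “threat score” idea from the predictive-policing bullet: each observed behavior adds weight, older observations decay, and crossing a threshold triggers an automated response. No real system’s parameters are public; every number below is invented.

```python
DECAY_HALF_LIFE_DAYS = 30.0  # hypothetical: an event's weight halves every 30 days
ALERT_THRESHOLD = 5.0        # hypothetical cutoff for automated "intervention"

def threat_score(events: list[tuple[float, float]], now: float) -> float:
    """events: (timestamp_in_days, weight) pairs; recent behavior counts more."""
    return sum(w * 0.5 ** ((now - t) / DECAY_HALF_LIFE_DAYS) for t, w in events)

history = [
    (0.0, 2.0),   # visited an opposition-linked website
    (20.0, 1.5),  # met with a known activist
    (28.0, 3.0),  # deviated from routine in a "dissent-correlated" way
]
score = threat_score(history, now=30.0)  # ≈ 5.06
if score >= ALERT_THRESHOLD:
    print(f"score={score:.2f}: dispatch units for preemptive 'intervention'")
```

The danger lies less in any single rule than in the loop being continuous and automatic: no human ever has to decide that this particular person deserves scrutiny.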

It’s important to stress that these are not distant science-fiction scenarios – the prototypes exist today. For instance, lethal autonomous drones have reportedly been used in conflicts (a Turkish drone may have autonomously attacked targets in Libya in 2020), and border-defense turrets with AI targeting are deployed in places like the Korean DMZ and Israel. As AI algorithms improve, as the cost of such systems falls, and as norms remain unsettled, there is a real risk that repressive regimes will integrate AI into weapons and coercive tools to an unprecedented degree. The combination of unaccountable AI decision-making with life-or-death power poses a dire threat to human rights. An autonomous system might make split-second judgments to shoot or arrest based on sensor inputs – without any human in the loop. In the hands of an authoritarian government, that could mean automated repression at scale, from machine-gun-armed robots quelling a protest to AI that decides who lives or dies in a counterinsurgency operation.

Protecting Human Rights: Resistance and Hope in the AI Era

The picture painted above is undeniably grim. Yet, it is not hopeless. Around the world, developers, activists, civil society organizations, and democratic governments are waking up to these dangers and mobilizing to resist the misuse of AI. Just as autocrats are leveraging AI to tighten their grip, those committed to freedom can harness creativity, technology, and policy to uphold human rights. A future in which AI is dominated by authoritarian abuse is not inevitable – but avoiding it will require concerted effort on multiple fronts. Below are some of the key ways different actors can push back and promote an ethical, rights-respecting AI future:

  • Ethical AI Development and Industry Responsibility: The researchers, engineers, and companies building AI have a critical role. They can embed human rights safeguards into AI design (for example, developing algorithms that protect privacy and resist bias) and refuse to build Orwellian tools for oppression. There is growing awareness among developers that “just because we can build it doesn’t mean we should.” In practice, this has meant some companies pulling out of, or reconsidering, projects that could abet repression. Industry leaders can establish ethics boards and strict use policies so that their AI products are not sold to notorious human-rights abusers. Importantly, partnerships are emerging between AI experts and human-rights groups to create “AI for good” – tools that help rather than harm. For example, technologists are working on AI systems to detect deepfake propaganda, to enhance encryption and anonymity for vulnerable users, and to audit government AI systems for abuses. By aligning with civil society, AI developers can help inoculate society against malign uses of their innovations (journalofdemocracy.org). Ultimately, a culture of “human rights by design” in AI will be a strong foundation against authoritarian exploitation.
  • Civil Society and Activist Innovation: Around the globe, activists, NGOs, and citizen movements are finding creative ways to fight back against digital repression. They are often outgunned, but not powerless. Activists have adopted counter-surveillance tactics – for instance, Hong Kong protesters famously used laser pointers, masks, and umbrellas to confound facial recognition cameras (journalofdemocracy.org). Privacy advocates train journalists and dissidents in operational security and promote tools like VPNs, secure messaging, and anti-censorship software to circumvent internet blocks. Some groups are turning AI on the oppressors: using machine learning to document human rights abuses (e.g. scanning satellite imagery for prison camps or analyzing social media evidence of war crimes); a small example of this kind of tooling appears after this list. Others develop chatbots to help citizens access information anonymously in censored environments. As noted in a Journal of Democracy piece, “activists and movements worldwide are beginning to harness AI as a force for good,” from using AI to document abuses to leveraging it for better secure communication (journalofdemocracy.org). Digital literacy and grassroots innovation will be key – by staying a step ahead with new techniques, civil society can keep carving out space for freedom even under high-tech repression.
  • Democratic Governance and International Standards: Liberal democracies and international institutions can use policy and law to set guardrails on AI. A first step is acknowledging that AI-based human rights abuses are a real and urgent issue – not a distant sci-fi risk. Policymakers in democratic nations are starting to act. They can implement export controls to stop the sale of advanced surveillance tech to authoritarian regimes (some Western countries have already banned or restricted exports of facial recognition systems to China, for example). Democratic governments can also sanction companies and officials who facilitate AI-driven repression – much as global Magnitsky sanctions target human rights violators. Importantly, the free world can offer an alternative model: passing laws (like the EU’s AI Act) that ban the most egregious uses of AI (such as social scoring or real-time biometric surveillance in public) and require transparency and accountability for other high-risk AI systems. On the international stage, there is growing momentum for treaties to address technologies like autonomous weapons. Over 70 countries have supported launching negotiations on a binding agreement to ensure “meaningful human control” over the use of force and to prohibit killer robots that operate without human judgment (hrw.org). Human Rights Watch and a coalition of NGOs (the Stop Killer Robots campaign) are leading calls for such a treaty, arguing it is vital to prevent “digital dehumanization” and unlawful killing by AI (hrw.org). Similarly, global bodies can develop human-rights-based standards for AI – as Freedom House urges, democratic nations working with civil society should establish robust norms and regulations so that AI deployment aligns with fundamental rights like privacy, free expression, and due process (freedomhouse.org). By setting this example and exerting diplomatic pressure, democracies can blunt the appeal of the authoritarian “AI model” and encourage tech companies worldwide to adopt pro-human-rights practices.
  • Empowering the Public and Building Resilience: Ultimately, an informed and vigilant public is the last line of defense. Education initiatives can teach citizens about the risks of AI-enhanced surveillance and propaganda, so they are less easily victimized by these tactics. Supporting digital hygiene (like using strong encryption, verifying news sources, and protecting personal data) makes individuals harder targets for authoritarian AI systems. Grassroots networks can share circumvention tools and strategies quickly when a crackdown looms (for example, distributing offline mesh-network kits or safe proxy servers during an internet shutdown). Psychological resilience is also key: regimes try to induce a chilling effect, but widespread awareness that “you are not alone – others are finding ways to speak out safely” can empower more people to resist fear-based control. International solidarity movements, such as those assisting dissidents in Iran and China with technology and lobbying, amplify this effect. The goal is to ensure that technology serves the people, not just the state – a principle that hearkens back to the early ideals of the open Internet. If citizens demand that AI be used to better their lives (improving services, health, education) and push back against its use in repression, governments will feel pressure to moderate their worst impulses.
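As a concrete taste of the investigative tooling mentioned in the civil-society bullet above, here is a short perceptual-hashing sketch – a standard technique, not any particular group’s tool – of the kind researchers use to trace recycled propaganda imagery across bot networks. The filenames are placeholders.

```python
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to grayscale and threshold each pixel against the mean,
    yielding a 64-bit fingerprint that survives re-encoding and resizing."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Near-identical hashes on images from supposedly unrelated accounts are a
# strong signal of a coordinated network sharing the same media assets.
h1 = average_hash("post_from_account_a.jpg")
h2 = average_hash("post_from_account_b.jpg")
print("likely the same image" if hamming(h1, h2) <= 5 else "different images")
```

The same asymmetry that favors autocrats – cheap automation at scale – can be made to favor investigators too.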

In conclusion, the rise of AI in authoritarian regimes presents one of the defining human rights challenges of our time. We have entered an era where dictators can leverage face-scanning cameras, invasive algorithms, and autonomous machines to tighten their grip on power in ways Orwell could only imagine. The case studies – from China’s social credit scores and Uyghur surveillance, to Iran’s digital censorship and deepfake propaganda – underscore that this is not theoretical; it’s happening now. And the coming wave of AI-driven weapons and predictive policing hints at a future that could be even more oppressive if left unchecked.

Yet the future is not written. The same technologies enabling repression can be repurposed for liberation – or at least restrained by ethical norms and laws. The world’s democracies, tech innovators, and human rights defenders are increasingly aware of what’s at stake. By championing transparency, accountability, and human dignity in AI development – and by supporting those on the frontlines of digital repression – we can strive to ensure that AI becomes a tool of empowerment, not tyranny. As one Freedom House report emphasized, we must “apply the lessons of past governance challenges to AI” and insist that even in this brave new digital world, fundamental rights remain non-negotiable (freedomhouse.org). The contest between authoritarian and democratic uses of AI is underway; its outcome will shape the liberty and well-being of billions. The time to act and set the right precedent is now, while the technologies are still evolving. With vigilance, creativity, and global cooperation, humanity can harness AI’s benefits without surrendering to its threats – keeping our future free in the age of intelligent machines.

Sources:

  1. Freedom House (2023). “The Repressive Power of Artificial Intelligence.” Freedom on the Net 2023 report. freedomhouse.org
  2. BBC News (2021). “AI emotion-detection software tested on Uyghurs.” bbc.com
  3. Human Rights Watch (2019). “China’s Algorithms of Repression: Reverse Engineering a Xinjiang Police App.” hrw.org
  4. Reuters (2022). “How facial recognition is helping Putin curb dissent.” reuters.com
  5. Atlantic Council (2020). “The West, China, and AI Surveillance.” atlanticcouncil.org
  6. Womble Bond Dickinson (2020). “Dystopic Population Control System – China’s AI.” womblebonddickinson.com
  7. Journal of Democracy (Cevallos, 2025). “How Autocrats Weaponize AI — And How to Fight Back.” journalofdemocracy.org
  8. Council on Foreign Relations (2025). “Iran Using Electronic Surveillance to Enforce Veiling Laws.” cfr.org
  9. TS2 Space Blog (2025). “Internet Access in North Korea: Kwangmyong.” ts2.tech
  10. New Lines Institute (2021). “As Drones Proliferate, Authoritarian Regimes Profit.” newlinesinstitute.org
  11. Steven Said This blog (2023). “Israel and the Imperial Boomerang.” stevensaidthis.squarespace.com (on Gaza’s “smart wall” and automated border defense)
  12. Human Rights Watch (2025). “A Hazard to Human Rights: Autonomous Weapons Systems.” hrw.org
