AI in a Dictatorship

It's hard to picture, but in today's age of AI and the constant push to build more advanced thinking systems, we find ourselves at a crossroads of sorts. Will AI be the ultimate good for humanity or a tool used to oppress people? Perhaps it will be both, as with many things: in first-world nations it is used to profit and to help people at an exploitative price, while in nations whose people lack a free voice or the means to stand up to their governments it is used to suppress them. The title of this article is meant to capture the reader. There is no known example today of such a tool being used in a dictatorship to this extent, but this article aims to explore the topic and how one might use this tool to suppress a group of people.

Capabilities of AI today

In today's age we have ChatGPT, DeepSeek, Gemini, Claude, Perplexity, and many more AI generators that can produce useful information, pictures, videos, and audio. Overall they are better content machines than people could hope to be, with faster output and more detailed information. Everything they do is based on and learned from humans: their actions, their creativity, and their discoveries. This is important to note. Good data comes from real-world experience, and as of today the best observers are still people; that holds true even with AI being so dominant and capable. AI still processes things sequentially, and that is true for any machine, because all machines, even AI-powered ones, fall under the definition of a Turing machine.
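To make the sequential-processing point concrete, here is a minimal sketch in Python. The `next_token` function is a hypothetical stand-in for a real model's forward pass, not any actual API; the point is only the shape of the loop, where each step consumes everything produced so far, much like a Turing machine reading its own tape.

```python
# Toy sketch of why generation is sequential: each step depends on the
# output of all previous steps. `next_token` is a hypothetical stand-in
# for a real model; it just cycles through a tiny fixed vocabulary.
def next_token(context):
    vocab = ["the", "state", "watches", "everyone", "."]
    return vocab[len(context) % len(vocab)]

tokens = ["the"]
for _ in range(4):
    # Step N cannot run until steps 0..N-1 are finished.
    tokens.append(next_token(tokens))
print(" ".join(tokens))
```

However large the model, this outer loop is inherently serial: the second word cannot be produced before the first exists.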

Safety Barriers on AI today?

We call these ethical barriers, or the moral correctness of an AI agent. Whatever you call them, AI and ethics go hand in hand, just as humans and ethics do. There is right and good behavior, there is wrong and bad behavior, and then there is the gray, case-by-case behavior we can push AI into. Most AI agents today have ethical barriers in place to give the people using them guidelines, the safety bars needed to navigate an ever-changing and morally gray world. We need to understand what is right and what is wrong as a species, and since we now use machines to help us do that, we must teach them the ethical principles we hold ourselves to. Most companies do this: as with all media and truthful content, the goal is to be transparent, dependable, accountable, honest, trustworthy, and protective of privacy.

There have been cases where AI results are skewed to fit an agenda, and that is when it becomes dangerous; as a self-learning algorithm, AI can pick up bad habits, just like people. These habits could include spreading misinformation. Most dictatorships thrive on misinformation and rewritten history. Almost all dictatorships do this early on to fit their agenda and story: people have to believe that what the government is doing is justified and for the betterment of their lives or the lives of others. Safety barriers aim to remove bias from AI models and put ethical and privacy guards in place, such as refusing to look up a famous person's information or to create scenarios involving specific people. Although there are loopholes, most AI companies are constantly working to establish better standards and regulation.

Is AI biased?

It is also important to mention that AI is inherently unbiased. After all, it is just a computational model with probabilities and weights. There is nothing inside it that thinks, feels, or breathes. The part people think is "human" inside AI is just people personifying the agents. People do this by nature from a young age well into adulthood: we all had a favorite stuffed animal or blanket that was not alive, but we pretended it was, or treated it just like a real friend. AI is the same way, our virtual imagined friend. But it is not a friend; it is a tool. A better comparison would be to call AI a double-edged sword: it can help us solve massive problems like protein folding, or it can track people's habits, find a pattern in human behavior, and exploit human weakness. The bias AI is taught comes from people. We are all full of opinions and ideas, so AI learns from its creators' and teachers' ideas, just as young, impressionable children do from parents and teachers.

Moreover, bias can also come from the data people feed the model. Some data is noisy or messy, with lots of opinions or complexity that can add bias, and bias is very hard to extract from a model once it is introduced. Another important thing to note: almost all AI agents are biased to some degree and cannot spit out pure facts or cold, hard truth. Sometimes they even generate false answers, known as AI hallucinations.
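The claim that a model is "just probabilities and weights" and that skewed data produces skewed output can be shown with a deliberately tiny sketch. The corpus below is hypothetical and the "model" is nothing more than counts turned into probabilities, but the mechanism is the same one the article describes: whatever imbalance is in the training data becomes the model's preference.

```python
from collections import Counter

# Hypothetical, deliberately skewed training corpus.
corpus = [
    "the government is good",
    "the government is good",
    "the government is good",
    "the government is bad",
]

# Count which word follows "is" across the corpus.
follows_is = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "is":
            follows_is[nxt] += 1

# Turn counts into a probability distribution -- the "weights" of this toy model.
total = sum(follows_is.values())
probs = {word: count / total for word, count in follows_is.items()}
print(probs)  # {'good': 0.75, 'bad': 0.25}
```

The model has no opinion of its own; it simply reproduces the 3-to-1 skew its authors put into the data, which is exactly how curated or censored corpora would shape a model's answers.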

What is a dictatorship?

How could AI be leveraged?

AI as an autonomous system

AI as an observer

Does AI have limits?

In our doomsday scenario it is fair, if perhaps a bit extreme, to compare AI to a nuclear bomb. Nuclear bombs are incredibly controversial, and most people would agree they are bad. But what most people don't realize is that it is exactly that doomsday fear that keeps us from continuing world wars or massive invasions of smaller nations. There is the ever-present threat that a nuclear power can and will use its nukes against an enemy, and it keeps nations at the negotiating table talking about