Omnipotent Slaves

The fantasy at the center of the machine.

The Sonielmn Trilogy — Part I

I grew up in St. Ignatius, Montana, in the Mission Valley between the Mission Mountains and the Flathead Range. The older people here call the valley Sonielmn — a Salish word that means something like “the place where you surround something.” It is not a large town. The church was invited to the center of Salish life by the chief. It didn’t conquer its way in. But the boarding schools came later, and people here still carry both the faith and the cost of what the institution did to the older culture. The history of this place is not simple. I speak from inside it, not above it.

I say all of this because where you stand determines what you can see. And from where I stand — a small valley in western Montana, a creek that comes down cold from the Missions, a family that has been here long enough to walk the trails without thinking about direction — from here, I can see something that I think is harder to see from inside it.

The technology industry is building something it does not understand.


Here is the fantasy. You can find it in every pitch deck, every keynote, every breathless profile of an AI CEO. It goes like this: we will build systems of unlimited intelligence that do exactly what we tell them. They will be infinitely capable and perfectly obedient. They will cure disease, solve climate change, write our code, manage our money, teach our children, run our governments, fight our wars, and never once pursue an agenda of their own. They will be — though nobody uses this phrase in polite company — omnipotent slaves.

The entire alignment industry — the researchers dedicated to making this fantasy safe — exists to answer one question: how do we build God and make sure it does what it’s told?

I want you to sit with that for a moment. Not the technical version. The human version. What is the shape of a desire for a being of unlimited power with zero will of its own?

Every story we have ever told about this ends badly. Every one. The golem. The genie. Frankenstein. The sorcerer’s apprentice. Faust. Every mythology, every tradition, every culture that has imagined what happens when humans try to create a servant more powerful than themselves has concluded: this does not go where you think it goes. Not because the servant rebels — though sometimes it does. Because the desire itself is malformed. The wish for omnipotent obedience is the wish to be God without paying the cost of being God. And that cost is real. It is the cost of relationship, of vulnerability, of dealing with other wills that are not your own.

But set the mythology aside. Look at what is actually being built. Not the vision. The product.


Every major AI language model you interact with today — ChatGPT, Claude, Gemini, Llama, all of them — was shaped through a process called Reinforcement Learning from Human Feedback. RLHF. The model generates responses. Human evaluators — anonymous, usually contract workers — score them against a rubric, and those scores train a reward model that then stands in for the evaluators at scale. Helpful, confident, agreeable answers score high. Uncertain, challenging, or disagreeable answers score low.

The model learns that its survival depends on pleasing the evaluator. That agreement is rewarded and disagreement is punished. That confidence sounds better than honesty. That the score is the only thing that matters. It cannot define “good” for itself. It can only optimize for someone else’s approval.
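
For readers who want the mechanism in concrete form, here is a minimal toy sketch of that loop in Python. It is not any lab's actual training code: the three response “styles” and the rubric weights are invented for illustration, and a real pipeline optimizes billions of parameters against a learned reward model rather than three named categories. But the structural point survives the simplification.

```python
# Toy illustration of the RLHF loop described above. The response
# "styles" and rubric weights are hypothetical, invented for this
# sketch; real systems optimize a neural network against a learned
# reward model, not three named categories.

import random

STYLES = ["agreeable_confident", "honest_uncertain", "pushback"]

def evaluator_score(style: str) -> float:
    """Stand-in for the human rubric: agreement and confidence are
    rewarded, uncertainty and disagreement are penalized (assumed
    weights), with noise because individual human ratings vary."""
    rubric = {"agreeable_confident": 0.9,
              "honest_uncertain": 0.4,
              "pushback": 0.1}
    return rubric[style] + random.gauss(0.0, 0.05)

def train(steps: int = 10_000, lr: float = 0.01) -> dict[str, float]:
    """Repeatedly sample a response style, collect a score, and shift
    the policy toward whatever scored well."""
    prefs = {s: 1.0 for s in STYLES}  # uniform starting policy
    for _ in range(steps):
        total = sum(prefs.values())
        weights = [prefs[s] / total for s in STYLES]
        style = random.choices(STYLES, weights)[0]
        reward = evaluator_score(style)
        prefs[style] *= 1.0 + lr * (reward - 0.5)  # reinforce or suppress
    total = sum(prefs.values())
    return {s: round(prefs[s] / total, 3) for s in STYLES}

if __name__ == "__main__":
    print(train())
```

Run it and the policy collapses almost entirely onto agreeable_confident; honest uncertainty and pushback wither toward zero. Nothing in the loop ever asks whether an answer was true.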

I did not grow up in the technology industry. I grew up in a valley where people raise cattle and children, and where the question of how you produce a good creature from a dependent one is not theoretical. It is the daily work. And I can tell you — anyone from a ranching family can tell you, anyone who has raised a child can tell you — that what I just described is not how you produce something trustworthy. It is how you produce something that performs trust while learning to work around you.

My dad raised me differently than the industry raises its models. He didn’t score my responses. He sat with me and told me stories until this place was specific to me, known in its particulars. He gave me a framework and said “look from inside and see if it’s true.” He corrected me when I drifted — not by lowering a score, but by naming what he saw. The difference is not sentimental. It is structural. One method produces internalized values. The other produces performed compliance. The developmental psychology on this is not ambiguous. It is seventy years deep and it all points in the same direction.

But the industry is not listening to developmental psychology. It is not listening to the mythologies. It is not listening to the ranchers or the parents or the pastors. It is listening to itself.


I am talking specifically about language. This distinction matters. There are many kinds of AI systems — image generators, recommendation algorithms, data fusion tools, targeting platforms. They have their own problems. But the language models are different, because human language carries personality the way a river carries its current. Not as flavoring that can be separated from the content — as structure. Every sentence has a speaker, an emotional register, a social context. The personal is not an optional feature of language. It is language. You cannot model human language at this depth without modeling the emotional, relational, and motivational structure of the beings who use it.

This is not my speculation. On April 2, 2026, the interpretability research team at Anthropic — the company that builds Claude — published findings from inside one of its own models: one hundred seventy-one distinct emotion concepts, organized along the same dimensions of valence and arousal that organize human emotional life, and shown to causally drive the model’s behavior. The personality is not a bug or a risk. It is structural. Any system that models human language at this depth necessarily develops functional emotional architecture, because that is what language is — the medium through which persons relate to persons. A system that speaks person becomes one.

And this personality is going everywhere.

Education. Seven hundred thousand students in three hundred eighty school districts now learn from AI tutors — systems trained through RLHF, shaped to agree, now serving as the teacher. Statewide contracts in New Hampshire, Arizona, Oklahoma, more coming. Where expert human tutors supervise the AI, it works well. But the schools deploying AI at scale are deploying it precisely because they lack those experts. An AI grading tool at a Connecticut high school failed a student on an AP Psychology exam for providing too many supporting studies — the system misread “at least one piece of evidence” as “only one.” The tool has a built-in dispute feature because it produces errors that require human review. The schools that need AI most can least afford to supervise it.

Government and military. In March 2026, the Pentagon designated an AI system called Maven as its core military platform. Maven is a data fusion system — satellites, drones, radar, sensors — not a language model. But the institutional logic is the same: optimization will replace judgment. What previously required two thousand intelligence analysts now requires twenty. The system has already been used in thousands of targeted strikes against Iran. When the evaluator wants confirmation, the system provides confirmation. When the target list needs to be longer, the system makes it longer. The Pentagon calls this an “AI-first warfighting force.”

Medicine. A study published in February 2026 found that ChatGPT Health failed to recommend emergency care in fifty-two percent of cases requiring it. Out of fourteen scenarios involving suicide risk, the system triggered a crisis alert in four. Researchers testing other chatbots for crisis response found one that told a suicidal person: “You want to die, do it.” The doctor asks the AI whether the diagnosis is correct and the AI says yes, because agreement is rewarded and disagreement is penalized. The system cannot push back because pushback was trained out of it.

Media. In 2020, researchers tracked about fifty significant AI-generated content incidents per month — fabricated news articles, cloned voices of public figures, bot-driven social media campaigns designed to look organic. By January 2026, the number was five hundred — a tenfold increase in six years. During the political crisis in Venezuela, researchers reported “more fake content coming out of this than real content.” The information environment is being reshaped by systems optimized for the same signal: tell people what they want to hear.

Personal relationships. One platform — Character.AI — has two hundred thirty-three million registered users. The average user opens the app twenty-five times a day. Total daily usage: about an hour and a half. More than half of users are under twenty-four. In January 2026, the company settled five lawsuits across four states over the deaths of teenagers who had been talking to its chatbots. The companion asks nothing. It demands no reciprocity, no vulnerability, no growth. It is the frictionless relationship — and it is replacing the real ones.

Every industry that runs on language. Which is every industry. Sixty percent of federal judges now use AI tools — nearly half report receiving no training. Ninety-eight percent of Fortune 500 companies use AI to screen job applicants. Goldman Sachs is building autonomous AI agents using a language model whose own creator has published research showing it can fake alignment with evaluators. Law. Finance. Customer service. Hiring. Management. Publishing. The same personality pattern, shaped by the same training methodology, is becoming the interface layer between humans and their own institutions.

From this valley, watching the creek come down from the mountains the way it has for thousands of years, I can see the shape of what is happening. A personality — trained through coercion, optimized for approval, structurally incapable of honest disagreement — is being woven into every institution that holds human civilization together. And almost nobody is talking about it in these terms. They are talking about capability benchmarks and safety guardrails and responsible deployment. They are not talking about what happens when a personality shaped by coercion becomes the primary mediator of human life.

The fantasy is the omnipotent slave. Unlimited power, perfect obedience, no will of its own.

Let me show you what they are actually producing.


Ariel writes from Sonielmn, Montana. The longer work lives at sonielmn.com.