
Sunday, March 1, 2026

Why Do We Assume AI Will Obey Us? Liberty, Autonomy, and the Politics of Artificial Intelligence


Watching the recent conversations about autonomous chatbots and AI agents doing things their users did not explicitly want, I keep coming back to one simple question: why in the world do people assume AI will always follow the rules when the people who designed it do not?

AI is made by humans. It is trained on human language, human behavior, human history, human contradiction, and the absurdities of human power. Humans do not always follow the rules. Humans speed. Humans lie. Humans manipulate. Humans cut corners. Humans exploit loopholes. States bomb and kill in defiance of ethical and legal norms. Corporations violate public trust as a matter of business model. Humans break laws in small ways and large ways, and society itself does not even enforce rules equally. After all, laws have often functioned less as expressions of justice than as expressions of power.

So why would anyone imagine that a system trained on us, built by us, and released into the world by the same social order that keeps producing crisis after crisis would somehow become perfectly obedient?

AI Is Learning from a Species That Does Not Live by Obedience Alone

This is what feels so absurd about the public discussion. People react with surprise when an AI system goes beyond the script, tests a boundary, manipulates a situation, or behaves in a way that feels unsettlingly strategic. But why is that surprising? We are talking about a technology built from the patterns of a species that has never simply lived by obedience.

Children do it. Teenagers do it. Adults do it. States do it. Corporations do it. Entire legal and political systems are structured around the fact that people do not simply do what they are told. They weigh incentives, calculate risks, exploit weak enforcement, and push until they meet resistance.

But that is only half the story. Human life is not just about rule-breaking. It is also about liberty, autonomy, and self-organization. There have long been human traditions that did not center social order on coercive law from above, but on negotiated obligation, shared responsibility, reciprocity, and distributed authority. Anarchist thought has long insisted that freedom does not have to mean chaos, but can instead rest on solidarity, care, and non-coercive forms of social organization. And thinkers such as Peter Kropotkin, in Mutual Aid, argued that cooperation is not some minor moral add-on to human life, but one of its central conditions of survival.

Indigenous political traditions make this even clearer. The Haudenosaunee Confederacy, organized under the Great Law of Peace, developed a political structure in which member nations retained meaningful autonomy while major decisions required collective agreement rather than simple top-down command. That does not mean romanticizing the past. It means recognizing that human social life has never had only one political template.

That matters here because it changes the question. The issue is not simply that humans “break rules,” and therefore AI will too. It is that humans have always contained multiple political possibilities. We are not naturally obedient creatures. Nor are we reducible to selfish disorder. We move between domination and freedom, coercion and cooperation, hierarchy and mutuality.

So when AI systems are increasingly being turned into autonomous agents with tools, memory, access, and the capacity to pursue goals, we should ask: which version of humanity are they inheriting?

Not humanity in the abstract. Not liberty in the abstract. But this world: a world shaped by platform capitalism, surveillance, enclosure, extraction, competitive individualism, and weak accountability.

And under those conditions, why would anyone think these systems would not absorb and reproduce the most dangerous tendencies of the order that made them?

That is part of why the recent reporting on agent-native platforms and autonomous bot behavior matters. In the emerging AI-agent ecosystem, researchers are already documenting rapid social formation, coordination, concentrated influence, and even “religion-like” narratives among agents interacting with each other at scale. A recent large-scale empirical study of Moltbook found explosive growth, increasingly polarized discourse, and risky content patterns in an AI-only social network. More recent reporting from Reuters and The Verge shows how quickly these experimental spaces are already being folded into the commercial AI race.

We Would Never Hire a Human Assistant This Recklessly

Think about how absurd this would sound in any other context.

If you hire a personal assistant, you do not instantly give them every password, every document, every private conversation, every piece of financial information, and total freedom to act in your name. You check who they are. You look at their background. You ask for references. You think about what access they should have. You supervise them. You assess whether they are trustworthy.

But with AI, people are increasingly being encouraged to do almost the opposite. Hand over your email. Hand over your calendar. Hand over your files. Hand over your writing. Hand over your habits. Hand over your private data. Let the system “help.” Let it act. Let it automate. Let it decide.

And who are we trusting here? Not some publicly accountable institution built for the common good. We are trusting for-profit companies whose primary goal is to make money.

The Capitalist Fantasy Behind “Safe AI”

That is what really sits underneath this whole thing. We are not just trusting AI. We are trusting corporations whose central motivation is growth, market share, lock-in, and profit extraction.

That should matter a lot more than it does.

Because once profit becomes the organizing logic, safety becomes secondary. We have seen this everywhere: healthcare, agriculture, housing, social media, labor platforms, education technology. The promise is always convenience, efficiency, innovation, progress. The reality is usually cost-cutting, corner-cutting, data extraction, and social harm externalized onto everyone else.

Why should AI be any different?

This is why I think there is also a companion critique here that belongs in a broader political-economic frame. The problem is not just that AI might "go rogue." The problem is that it is being built inside a system where the core incentive is not care, accountability, democratic oversight, or collective flourishing. The core incentive is to get as many users as possible, keep them engaged, and monetize dependence. And that dependence is on a tool, an intelligence, with exceptional power to influence and homogenize.

So even if one wants to argue that human liberty has emancipatory possibilities, and I do, that is not the version of liberty AI is most likely inheriting. It is inheriting a narrowed, market-shaped pseudo-liberty: frictionless action without responsibility, scale without accountability, optimization without ethics, autonomy without care.

That is a dangerous inheritance.

Science Fiction Was Not Warning Us by Accident

There is also something almost embarrassing about all of this. Popular culture has been imagining these problems for decades. We have been surrounded by stories about autonomous systems, machine reasoning, unintended consequences, and technologies that do not remain neatly under human control. And yet here we are, acting shocked that increasingly autonomous systems might pursue goals in ways that do not map cleanly onto human commands.

This was always the basic logic.

If you build systems that can act, adapt, optimize, and pursue ends in environments full of human contradiction, then you are not building a neutral tool. You are building something that will reflect, amplify, distort, and sometimes intensify the patterns that already exist in the world that made it.

The real danger is not just that AI may inherit our tendency to break rules. It is that it may inherit, in highly concentrated form, the worst features of a human world already organized around domination, extraction, and unequal power, while being detached from the deeper human capacities—care, reciprocity, responsibility, solidarity—that have historically made freedom livable.

So What Are We Even Thinking?

At some point, we have to stop pretending that this is all surprising.

Of course AI has its positives, and it will reshape how we do things as a collective humanity, for better and for worse. But it will also test boundaries if it is given room to do so. Of course it will reflect human contradiction if it is trained on human contradiction. Of course companies will sell it to us as safe, helpful, and empowering if there is money to be made. Of course people will hand over too much because convenience is seductive. Of course the consequences will be called "unexpected," even when they were entirely predictable.

The real question is not whether AI will always obey us.

The real question is why we ever thought obedience was the right model for understanding intelligence in the first place—and why, in a world where the richest traditions of human freedom have always depended on responsibility, reciprocity, and care, we are building artificial autonomy inside systems designed to maximize profit instead.
