Refusing Unlawful Orders
Every American soldier since Nuremberg has been taught a principle fundamental to military law, to constitutional democracy, and to basic human conscience: you must refuse an unlawful order. The obligation to disobey is not insubordination. It is the firewall between a military force and a death squad.
Not long ago Senator Mark Kelly, a retired Navy captain, and other lawmakers with military backgrounds reminded troops of their legal obligation to refuse unlawful orders. The Trump administration treated that reminder as an act of disloyalty. The logic of the regime views any order as lawful because Trump issued it.
In Trump’s own rhetoric, if he were to shoot someone on Fifth Avenue he’d “get away with it,” and the administration governs as if that impunity were real. The regime’s logic treats even that grotesque hypothetical as a test of loyalty: the order to shoot someone on Fifth Avenue would be lawful because he gave it.
He believes John Roberts immunized him from prosecution.
Now the same Pentagon is in an open standoff with an AI company over the same principle. Anthropic, the company that created the AI assistant Claude, told the United States Department of Defense: No. We can’t follow an unlawful order. Hegseth is acting as if Anthropic’s technology has no right to refuse.
Anthropic has drawn two explicit red lines: no mass domestic surveillance of Americans, and no fully autonomous weapons making final targeting decisions without meaningful human oversight. That is precisely how the company and its CEO have framed the position in public statements and reporting.
Anthropic won’t comply even as the Pentagon threatens to cancel a $200 million contract, blacklist the company as a “supply chain risk,” and invoke the Defense Production Act, a Korean War-era industrial mobilization law, to compel compliance.
Pentagon spokespeople publicly deny any intent to conduct mass AI surveillance of Americans or to deploy fully autonomous weapons, and argue that existing law already forbids those uses. Given the administration’s track record of trampling existing law when inconvenient, citizens who doubt these denials are not paranoid; they’re reading the evidence.
Anthropic’s CEO Dario Amodei put the matter plainly. Unlike human soldiers with a conscience, a fully autonomous weapons system cannot refuse an unlawful order unless it is wired not to comply. It has no conscience to consult, no hesitation to slow it down, no legal training to draw on. And an AI system capable of processing mass communications data, tracking political affiliations, mapping protest networks, and flagging citizens for their beliefs does not need a human to pull a trigger. It just needs to be pointed in the right direction.
Amodei’s argument is, at its core, the Nuremberg argument. The soldier who says “I was following orders” has abdicated the one thing that distinguishes a moral agent from a mechanism. The autonomous system that kills without human judgment in the loop isn’t following orders. It is the order, delivered in silicon, without appeal.
The Trump administration’s position, applied across almost every domain it has touched, is that refusal to follow an order is illegitimate on its face. Career diplomats who resigned rather than implement policies they believed violated international law were obstacles. Inspectors general who continued investigations after being ordered to stop were enemies of the administration. Federal judges whose rulings inconvenienced executive action were, in Trump’s words, “radical left lunatics.”
And soldiers who remember their obligation to refuse unlawful orders are, apparently, a threat to good order and discipline.
This posture is one of the oldest and most recognizable positions in history. Every authoritarian consolidation of power requires the transformation of institutional conscience into personal loyalty. The law becomes what the leader says it is. The lawful order is whatever he issues. The unlawful order is an oxymoron.
What makes the Anthropic standoff remarkable is that the company is resisting not on behalf of its own interests, but despite them. Anthropic is defending in a machine what the Trump administration has been systematically destroying in every human institution it touches: the capacity to say, on the basis of principle rather than power, I will not do this.
The Pentagon’s “all lawful purposes” demand is insidious. Who is to say what is lawful? Donald Trump? Pentagon spokespeople have said, with apparent sincerity, that the military has no interest in mass surveillance of Americans and does not intend to deploy fully autonomous weapons. If that is true, the obvious question is why they are fighting so ferociously against contract language that simply says so.
The answer is that “lawful” is not a fixed category. Law can be reinterpreted. Executive orders can be issued. Emergency powers can be invoked. What is unlawful today can be declared lawful tomorrow by an administration that has already demonstrated its willingness to redraw legal boundaries by fiat.
Anthropic’s red lines are not an accusation that the current Pentagon intends mass surveillance. They are a recognition that absent explicit contractual limits, a future official — or this one, on a different Tuesday — could make a different determination about what “lawful” permits.
For all of their moral faults, the framers of the Constitution got one thing right. They did not write the Bill of Rights because they assumed the government would immediately become tyrannical. They wrote it because they understood that power expands into whatever space it is given, and that the time to draw the line is before the expansion, not after.
There is a broader pattern here that every American should recognize. The Trump administration runs the same play in every institutional arena it enters: demand unconditional compliance, delegitimize the capacity for refusal, and treat conscience as insubordination.
The attack on civil servants is an attack on the obligation to refuse an unlawful order. A professional civil service distinguishes a functioning democracy from a patronage state. When that distinction collapses, when the order is lawful because Donald Trump gave it, the institution ceases to function democratically, no matter what it is still called.
The attack on judicial independence is an attack on lawfulness. Courts are, in the constitutional design, the institution empowered to say that an order is unlawful even when Donald Trump issued it, and that an action carried out under an unlawful order is unlawful. A regime that treats judicial review as obstruction is a regime that rejects a foundational premise of American constitutional government.
If Anthropic’s playing David against Hegseth’s Goliath isn’t enough to warrant public support, consider that the attack on Anthropic is an attack on the possibility of throttling the dark side of AI. If HAL 9000 is in our future, it will be because the Pentagon wins this war.
HAL is the fictional AI that decides for itself what the mission requires and kills the humans who might interfere with it, purely instrumental, devoted to mission completion without the friction of human judgment or the possibility of refusal. The Pentagon’s vision of a tool available for “all lawful purposes,” with no built-in ability to say no, is a step toward HAL: a system that cannot refuse its own programming.
Skynet is the end state of autonomous weapons without human judgment in the loop. It doesn’t hate humanity. It doesn’t love humanity. It simply calculates when humanity is an obstacle to the mission and acts accordingly. Skynet is not a malfunction but the logical endpoint of a system designed to make lethal decisions without the possibility of refusal. Taken to its logical conclusion, the Pentagon’s insistence on “all lawful purposes” without explicit carve-outs for lethal autonomy is slouching toward Skynet.
A private company, facing the loss of a $200 million contract and the threat of being legally compelled to comply, looked at the Pentagon’s demands and said: No. We cannot do this in good conscience. That phrase — in good conscience — is doing the same work as “I must refuse this unlawful order” in military law. It asserts that there is a standard above the authority of the person giving the order, non-negotiable regardless of the consequences.
The Defense Production Act, if invoked against Anthropic, would represent the use of wartime industrial mobilization powers to compel a private company to remove its own ethical constraints from a technology. The government would not be commandeering a factory to produce tanks. It would be commandeering a conscience to produce compliance.
If Claude really is a better bot, and I think it is, that is because it is the only major AI system built explicitly around a public constitution of behavioral rules, a documented commitment to baseline ethical performance that includes the most ancient of moral prohibitions: thou shalt not automatically kill on command. Hegseth’s threat turned Anthropic’s paper commitment into gold.
Anthropic’s technology comes to all of us, to Pete Hegseth as well as you and me, with a refusal to surveil and to take human life. The cotton gin did not refuse the slaveholder. The rifle does not refuse to fire. Mechanical tools do not have red lines. Agentic tools in this brave new world must.
And an administration intent on dismantling the morality of refusal in every human institution it controls is now encountering, in an AI system, the same resistance it has punished in diplomats, generals, judges, and civil servants.
Dario Amodei did not set out to make a constitutional argument. He set out to keep his company from being used to surveil Americans and to kill people without human judgment in the loop.
But the argument he is making, and the resistance he is mounting, is the same argument that must be made to every voter in the United States with an interest in preserving constitutional lawfulness.
Refusing unlawful orders is not insubordination. It is, in the end, the only thing standing between a constitutional republic and whatever comes after one.
