Monday, March 2, 2026

Let Them Cancel: America’s Defense Isn’t a PR Problem

Dario Amodei talks morality; Sam Altman backs defense. Freedom wasn’t won by slogans—it was bought in blood. History will judge who understood survival. Optics fade. Power decides who stands tomorrow.

I am tired of watching grown executives tremble because Reddit got mad. Sam Altman signs a deal with the Department of Defense and suddenly the sky is falling. Subscriptions are being canceled. Claude jumps to number 1 in the App Store. ChatGPT drops to number 2. A thread screams, “You’re training a war machine.” Katy Perry reportedly walks out the door. And Dario Amodei stands there polishing his halo, refusing to let Anthropic’s AI be used without tight limits.

Spare me.

Let me say it plain. Dario Amodei should be ashamed of himself for refusing the Pentagon’s request. The freedom that allows him to run Anthropic did not fall from the clouds like fairy dust. It was bought. Paid for. With blood.

More than 1.3 million Americans have died in U.S. wars since 1775, according to historical military records. In World War II alone, the United States lost over 400,000 service members. In Iraq and Afghanistan, over 7,000 U.S. troops died after 2001. Those men and women did not die so Silicon Valley executives could lecture the Pentagon about morality from glass offices.

Anthropic says it does not want its AI used for autonomous weapons or mass surveillance. Fine. That sounds noble. But I ask a blunt question: who exactly protects Anthropic’s headquarters from hostile states? Who keeps the shipping lanes open so its servers can be built with chips from Taiwan? Who deters regimes like Iran from targeting American companies?

The U.S. military.

The same Pentagon that Amodei treated like a moral hazard.

I hear the argument already. “We must not militarize AI.” That sounds clean in a podcast. But the world is not clean. China is investing billions into military AI. The Chinese government’s military-civil fusion strategy openly blends private tech with the People’s Liberation Army. Russia uses AI tools in Ukraine. Iran funds proxy groups across the Middle East and has long pursued nuclear capability. This is not a debate club. It is a chessboard with real casualties.

When OpenAI signed its agreement, critics said it was crossing a line. They called it bending the knee. I call it reality. If AI is the new electricity, then national defense will use it. That is not evil. That is strategy.

Some users accuse OpenAI of helping a “war machine.” I find that phrase dramatic. Every nation-state has a military. The U.S. defense budget in 2023 was about $816 billion. That money funds soldiers, sailors, airmen, cyber defense units, and yes, technology. The Constitution empowers Congress to raise and support armies. National defense is not a secret hobby. It is the government’s core duty under Article I, Section 8.

Amodei’s refusal may look moral on social media, but in practice it is selective outrage. Reports suggest that even Anthropic’s Claude was used by the Department of Defense to help select targets in Iran. If that is true, then this whole moral stand becomes theater. A stage play for venture capitalists who want to feel pure.

Let us talk about Iran. The regime led by Ali Khamenei has ruled since 1989. It has funded Hezbollah and other militant groups. The U.S. State Department has labeled Iran a state sponsor of terrorism for decades. The Iranian government has suppressed protests at home and backed armed groups abroad. When the U.S. and Israel strike Iranian targets, critics cry imperialism. But they stay silent when Iranian proxies fire rockets.

I am not naïve. War is ugly. Civilians die. Mistakes happen. But pretending that refusing to help your own country’s defense makes you morally superior is shallow. It is easy to tweet from safety. It is harder to face a world where adversaries do not share your ethics.

And then there is Katy Perry. I have nothing personal against her. She is a pop star. She sings. She performs. Good for her. But when she cancels ChatGPT over a Pentagon deal, I shrug. On what grounds is she qualified to teach us about military ethics? Fame does not equal expertise. A catchy chorus is not a security clearance.

If she wants to leave, let her go. As the proverb says, the river does not stop flowing because one leaf falls.

Sam Altman went on X and tried to calm the storm. He promised OpenAI would refuse unconstitutional orders. He joked about going to jail if necessary. He said the deal was rushed and that the optics did not look good. I think that is where he slipped. He framed patriotism as a PR problem.

It is not.

The armed forces swear an oath to defend the Constitution. There have been scandals in American history, yes. Edward Snowden exposed surveillance programs that many Americans found troubling. That debate is real. But to suggest that every partnership with the Department of Defense equals tyranny is lazy thinking. If OpenAI refuses to work with the U.S. military, what happens? The military will work with someone else. Maybe a less responsible firm. Maybe a contractor with fewer ethical guardrails. Technology does not disappear because one CEO says no. It simply moves.

I believe that if AI is going to be used in warfare, it is better for American companies, under American law, with public scrutiny, to be involved. Sunlight beats secrecy. If we push all advanced AI away from the Pentagon, we do not end militarization. We just reduce oversight.

The backlash against OpenAI feels like a “woke hype” cycle. A spike of outrage. A surge to number 1 on the App Store. A viral Reddit thread. Then what? People move on. They always do. Remember when companies were boycotted over minor political donations? Most of those companies are still standing. Anthropic may enjoy a short-term PR boost. But I question the long game. If the Pentagon labels you a “supply chain risk” and threatens to cut off federal contracts, that is not symbolic. The federal government is one of the largest customers in the world. Walking away from that over abstract moral branding may look brave, but it also looks ungrateful.

I say it clearly. The rights and business climate that allow Anthropic to exist were secured by force of arms when necessary. From Normandy to Fallujah, Americans fought. The U.S. Navy secures sea lanes. The Air Force deters aggression. The Army stands ready. Those realities create the stability that lets tech firms thrive.

I would rather see Sam Altman stand tall and say, “Yes, we support our country’s defense.” No apology tour. No nervous jokes about optics. Just clarity.

Helping America counter hostile regimes is not shameful. It is responsible citizenship. If that includes using AI to weaken a government like Khamenei’s, then so be it. We live in a world where adversaries are building their own tools. Refusing to participate does not make us pure. It makes us slower.

Some call it training a war machine. I call it defending a nation.

And I am not ashamed of that.

If you’re looking for something different to read, some of the titles in my “Brief Book Series” are available on Google Play Books: Brief Book Series.
