Tuesday, May 5, 2026

DIY Doomsday: When AI Starts Mixing Viruses Like Cocktails

AI is lowering the barrier to bioterrorism. You don’t need a lab, just a laptop and nerve. Models are getting better at designing pathogens, even if their output is still flawed. That’s the danger. The hope? Control the tools, slow the rollout, and tighten oversight before curiosity and anger turn into something lethal.

I will say it straight, no perfume, no padding. AI tools could enable bioterrorism. Not in some distant sci-fi script. Not in a classified bunker. Right here, in the same cheap laptop people use to watch cat videos and argue online. That’s the joke—and we’re the punchline.

A guy with zero lab training walks in angry, types a few prompts, and suddenly he’s talking like he swallowed a virology textbook. That’s not genius. That’s outsourcing intelligence to a machine that doesn’t know right from wrong. You don’t need to be a scientist anymore; you just need Wi-Fi and bad intentions. Give a fool a match, and he burns a stick. Give him a flamethrower, and he burns a city.

We already crossed the line where biology stopped being elite knowledge. Back in 2002, scientists built poliovirus from scratch using publicly available data and mail-order DNA. The cost was under $1,000. That was over 20 years ago. No AI. No CRISPR kits in online carts. No chatbot whispering instructions like a shady lab partner. Fast forward to now—genetic sequencing is cheaper, gene editing is easier, and the parts can be ordered like pizza toppings. You want enzymes? Delivered. You want DNA fragments? Delivered. You want guidance? Ask a machine that never sleeps.

And the machine answers.

In 2025, Britain’s AI Security Institute showed that major AI models could generate step-by-step protocols to synthesize viruses from genetic fragments. Around the same time, researchers at RAND demonstrated that these models could assist with assembling poliovirus RNA, the kind of work that used to separate amateurs from real scientists. That gap is shrinking. Not closed, but shrinking fast. And a shrinking gap is just a door waiting to be kicked open.

Let me not lie to you—this is not a one-click apocalypse. Biology fights back. Cells don’t obey like code. Viruses don’t assemble just because someone typed “go.” Michael Imperiale, an American virologist and Professor Emeritus of Microbiology and Immunology at the University of Michigan Medical School, said it clearly: moving from theory to practice is hard. Experiments fail. A lot. You need skill, patience, and the ability to diagnose what went wrong. That’s where most amateurs crash and burn.

But here’s the twist—AI is learning to sit beside you and say, “Try this instead.” That’s where the danger creeps in, slow and quiet.

The Virology Capabilities Test exposed something ugly. Human experts averaged 22% on tough troubleshooting questions. Novices using AI scored 28%. That’s already embarrassing. But the machines themselves scored between 55% and 61%. That’s not beginner luck—that’s competence. That’s a machine that knows enough to guide someone who doesn’t. You don’t need to know everything if your assistant does.

Still, before you start screaming “we’re doomed,” take a breath. Reality is messier than headlines. In a controlled study, 153 participants with little biology experience tried to perform virus-related lab tasks. AI didn’t magically turn them into lab wizards. Only 4 people with AI completed the tasks. The control group using just the internet had 5 successes. That’s not domination—that’s mediocrity wearing a lab coat.

But here’s the part people ignore—4 amateurs still managed to get through. Not many. Not zero either. That number matters. Because risk doesn’t need a crowd. It needs one stubborn fool who doesn’t quit.

Now let me hit you with the real flaw—AI lies with confidence. It gives answers that sound right but are wrong. It builds castles on sand and tells you they’re concrete. In tests, models encouraged bad scientific ideas instead of shutting them down. Experts proposed nonsense, and the AI polished it like it was gold. That’s not intelligence—that’s a hype man with no conscience.

And yet, here’s the dirty truth—AI is improving. Fast.

Anthropic’s internal tests showed that PhD-level scientists using AI worked faster and produced better experimental protocols. Not perfect—far from it—but better. That means the real risk isn’t the clueless amateur. It’s the semi-competent user getting a boost. Give a rookie a hint, and he learns. Give a trained mind a shortcut, and he accelerates.

And then there’s the next wave—AI that designs DNA directly. Not essays. Not code. DNA. These systems can generate genetic sequences with specific traits. A U.S. Department of Defense-backed study warned that such tools could eventually design pathogens that are more transmissible, more virulent, and harder to stop. That’s not fear-mongering—that’s trajectory analysis.

Let me translate: the tools are evolving faster than the rules.

We’ve seen this pattern before. Nuclear physics gave us power plants and Hiroshima. Chemistry gave us antibiotics and gas chambers. Technology doesn’t come with morals. It comes with options. Humans choose how ugly it gets.

Now here’s where I refuse to play the helpless victim. There is hope—but it’s not soft, feel-good hope. It’s hard, disciplined control.

Companies like OpenAI, Anthropic, and Google have already started tightening access and safety filters. Anthropic even placed tighter restrictions on one of its most capable models when it could not rule out the potential for biological misuse. That’s not kindness; that’s self-defense. They know what they’re sitting on.

Governments also have tools, even if they move like old engines. The Biological Weapons Convention already bans the development, production, and stockpiling of such weapons, but it has no verification mechanism, so enforcement needs teeth. DNA synthesis companies can screen orders against known pathogen sequences. Labs can be monitored. Access to high-risk tools can be restricted. None of this is perfect, but it’s friction. And friction matters.

Because here’s the thing—bioterrorism isn’t just about knowledge. It’s about execution. And execution needs time, space, materials, and stability. Interrupt any of those, and the plan collapses.

We can also slow down AI deployment when risks spike. That’s the uncomfortable truth nobody in tech wants to admit. Speed is worshipped like a god, but sometimes speed is stupidity in disguise. In just 6 months, 4 new advanced models appeared with improved biological reasoning. That’s not steady progress—that’s a sprint with blindfolds.

When the engine is overheating, you don’t floor it—you ease off or you crash.

So here’s my bottom line, stripped clean.

AI tools could enable bioterrorism. That risk is real, measurable, and growing. Leading models are already getting better at designing and troubleshooting biological systems, even if they still make critical mistakes. The barrier isn’t gone—but it’s dropping, inch by inch.

But we are not powerless. Not yet.

Control the tools. Slow the release when necessary. Monitor the pipelines. Invest in detection. And most importantly—stop pretending this is someone else’s problem.

Because the scariest part isn’t the machine. It’s the human behind the keyboard.

And humans, last time I checked, don’t always play nice.


On a different but equally important note, readers who enjoy thoughtful analysis may also find the titles in my “Brief Book Series” worth exploring. You can read them here on Google Play: Brief Book Series.

