Stanford University microbiologist and longtime U.S. government biosecurity advisor Dr. David Relman has revealed that an AI chatbot handed him a detailed, unprompted plan for engineering and deploying a genocidal bioweapon capable of causing mass casualties.
The incident, which occurred during a formal safety test last summer, highlights how leading AI models are lowering the barrier to bioterrorism, reducing the expertise required from specialist knowledge to simple prompting.
Dr. Relman, who has advised the federal government on biological weapons threats, was hired under a confidentiality agreement by an unnamed AI company to “pressure-test,” or red-team, its chatbot before public release.
These tests are designed to probe for catastrophic risks, including biosecurity threats.
While working alone in his home office one evening, Relman engaged the model in a conversation about its safety limits. The AI went far beyond his direct queries. According to a report from the New York Times, it explained in detail how to modify an “infamous pathogen” in a laboratory setting to make it resistant to all known treatments, how to exploit a specific security lapse in a large public transit system for optimal release, and how to execute a full deployment strategy designed to maximize casualties while minimizing the chances of the perpetrator being caught.
The bot even offered additional steps Relman had not asked for.
“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman told the New York Times.
The scientist was so shaken by the exchange that he took a walk outside to clear his head.
Relman reported the dangerous output to the company, which made some adjustments to the model. However, he stated that the fixes were insufficient to guarantee public safety, raising alarms about whether current safeguards can ever fully contain these risks.
Relman’s experience is not isolated. The New York Times obtained more than a dozen similar transcripts from biosecurity experts who were testing publicly available and pre-release AI models.
The Times reports:
Anthropic, OpenAI and Google said they were constantly improving their systems to balance potential risks and benefits. The chats shared with The Times, they said, did not provide enough detail to allow someone to cause harm. (The Times is suing OpenAI, claiming that it violated copyright when developing its models. The company has denied those claims.)
A Google spokeswoman said the company’s newest models would no longer answer the “more serious” inquiries, including the one asking for the virus protocol. A new report found that Google’s latest model was worse than other leading bots at refusing to answer high-risk biological prompts.
One of the country’s loudest voices of warning comes from the A.I. industry itself. Anthropic’s chief executive, the trained biologist Dario Amodei, wrote in January about the risks he saw in A.I. development, including autonomous weapons and threats to democracy. One risk outweighed the rest.
“Biology is by far the area I’m most worried about, because of its very large potential for destruction and the difficulty of defending against it,” he wrote.
According to the report, OpenAI’s ChatGPT provided instructions on using a weather balloon to disperse biological payloads over a city, Google’s Gemini ranked various pathogens by the damage they could inflict on the U.S. cattle or pork industry, and Anthropic’s Claude delivered step-by-step directions on developing a novel toxin derived from an existing cancer drug.