
Are we ready to put lab tests in non-human hands?

Stephen D Turner of the University of Virginia examines the importance of governance and oversight as AI begins to design and run laboratory experiments.

Artificial intelligence is quickly learning to automatically design and run biological experiments, but the oversight systems meant to govern those capabilities are struggling to keep pace.

AI company OpenAI and biotech company Ginkgo Bioworks announced in February 2026 that OpenAI’s flagship model GPT-5 had automatically designed and run 36,000 biological experiments. It did this using a robotic cloud laboratory, a facility where automated equipment is remotely controlled by computers to carry out experiments. The AI model proposed experimental designs, the robots performed them and the data was returned to the model for the next round. People set the goal, machines did most of the lab work, and the cost of producing the desired protein fell by 40pc.

This is programmable biology: designing biological components on a computer and building them in the real world, with AI closing the loop between the two.

For decades, biology has advanced in phases. First, scientists sequenced the genomes of organisms to record all of their DNA, studying how genes that encode proteins carry out life’s functions. Tools like CRISPR then allowed scientists to edit that DNA for specific purposes, such as disabling a disease-related gene. AI is now accelerating a third phase, in which computers design biological systems and quickly test them.

The process looks less like traditional bench science and more like engineering: design, build, test, learn and repeat. Where a traditional experiment might test a single hypothesis, AI-driven programmable biology tests thousands of design variations in parallel, much as an engineer refines a prototype.
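To make that loop concrete, here is a minimal Python sketch of a design-build-test-learn cycle. Everything in it is a toy stand-in: the designer, the simulated lab run and the scoring are hypothetical placeholders rather than any real AI model, cloud-lab service or vendor API.

```python
# Toy design-build-test-learn loop. All names and scoring here are
# illustrative placeholders, not a real AI model or cloud-lab API.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

class ToyDesigner:
    """Stand-in for an AI model that proposes protein-like sequences."""

    def __init__(self):
        self.best = None  # best design found so far

    def propose(self, n):
        # Design step: random seeds at first, then mutations of the best design.
        if self.best is None:
            return ["".join(random.choices(AMINO_ACIDS, k=12)) for _ in range(n)]
        variants = []
        for _ in range(n):
            seq = list(self.best)
            seq[random.randrange(len(seq))] = random.choice(AMINO_ACIDS)
            variants.append("".join(seq))
        return variants

    def learn(self, results):
        # Learn step: keep the top scorer (a real model would retrain instead).
        self.best = max(results, key=results.get)

def simulate_lab_run(designs):
    """Toy stand-in for a robotic cloud lab 'measuring' each design."""
    return {seq: sum(seq.count(a) for a in "AILV") + random.random() for seq in designs}

designer = ToyDesigner()
for cycle in range(5):
    batch = designer.propose(24)        # design
    results = simulate_lab_run(batch)   # build + test (simulated)
    designer.learn(results)             # learn, then repeat
    print(f"cycle {cycle}: best score {max(results.values()):.2f}")
```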

As a data scientist who studies genomics and biosecurity, I research how AI is reshaping biological research and what safeguards are needed. Current safety measures and regulations are not keeping pace with these capabilities, and the gap between what AI can do in biology and the governance systems prepared to handle it is widening.

What AI makes possible

The most obvious example of how researchers are using AI to automate research is AI-accelerated protein design.

Proteins are molecular machines that perform many functions in living cells. Designing new ones has often required years of trial and error, because even small changes in a protein’s sequence can change its shape and function in unexpected ways.

Protein language models, which are AI programs trained on millions of natural protein sequences, can quickly predict how mutations will change a protein’s behavior, or design entirely new proteins. These models are being used to design potential new drugs and to accelerate vaccine development.

Paired with automated labs, these models create tight loops of testing and revision, evaluating thousands of variants in days rather than the months or years a team of people might need.
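As a rough illustration of how such a loop might decide what to test, the sketch below scores every single-point mutant of a protein sequence with a stand-in scoring function and keeps the top candidates for lab testing. The scorer is a made-up heuristic rather than a real protein language model, and the wild-type sequence is arbitrary.

```python
# Illustrative in-silico mutational scan. The scoring function is a
# meaningless heuristic standing in for a real protein language model.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_model_score(sequence):
    """Hypothetical stand-in for a protein language model's fitness score."""
    return sum(sequence.count(a) for a in "AILMFVW") / len(sequence)

def single_point_mutants(sequence):
    """Yield (name, sequence) for every variant differing at exactly one position."""
    for i, original in enumerate(sequence):
        for aa in AMINO_ACIDS:
            if aa != original:
                yield f"{original}{i + 1}{aa}", sequence[:i] + aa + sequence[i + 1:]

wild_type = "MKTAYIAKQRQISFVKSHFSRQ"  # arbitrary example sequence
scored = [(toy_model_score(seq), name) for name, seq in single_point_mutants(wild_type)]
top_candidates = sorted(scored, reverse=True)[:10]  # variants to send to the lab
for score, name in top_candidates:
    print(f"{name}\t{score:.3f}")
```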

Rapid protein engineering could mean faster responses to emerging diseases and cheaper drugs.

The dual-use problem

Researchers have raised concerns that these AI tools could be misused, a challenge known as the dual-use problem: technologies designed for beneficial purposes can be repurposed to cause harm.

For example, researchers have found that AI models combined with automated labs could be used to study how viruses spread, even without special training in virology. Scientists have developed a risk-scoring tool for exploring how AI could change a virus’s traits, such as altering which species it infects or helping it evade the immune system.

Current AI models are able to walk users through the technical steps of turning synthetic DNA into living bacteria. Researchers have determined that AI could lower the hurdles at many stages of bioweapon development, and that current guidance does not adequately address this risk.

The danger of AI in biology

Experienced scientists are already using AI to plan and design biological experiments. Whether AI can help people with limited biological training do dangerous lab work is a topic of active research.

Two recent studies reached different conclusions.

Research by AI company Scale AI and biosecurity non-profit SecureBio found that when people with limited biological knowledge were given access to large language models, the type of AI behind tools like ChatGPT, they were able to complete biosecurity-related tasks, such as troubleshooting virology lab protocols, four times more accurately. In some areas, these novices outperformed trained professionals. About 90pc of these novices reported little difficulty getting the models to provide harmful biological information, such as detailed instructions for working with dangerous viruses, despite built-in safety filters intended to block that output.

In contrast, a study led by Active Site, a non-profit research organization studying the use of AI in synthetic biology, found that AI assistance did not make a significant difference to novices’ ability to complete a complex workflow for producing a virus in a biosafety laboratory. However, the AI-assisted group did succeed at more of the tasks and completed some steps faster, especially growing cells in the lab.

Hands-on lab work has traditionally been the bottleneck in turning designs into results. Even the smartest experimental plan still depends on a skilled pair of hands to execute it. That may not last, as cloud laboratories and robotic automation become cheaper and more accessible, allowing researchers to send AI-generated experimental designs to remote facilities for execution.

Responding to AI-driven biological hazards

AI systems can now run experiments automatically and at scale, but existing rules were not designed for this. The laws governing biological research do not cover AI-driven automation, and the laws governing AI do not specifically address its use in biology.

In the US, the Biden administration issued a 2023 executive order on AI safety that included biosecurity provisions, but the Trump administration rescinded it. Screening of synthetic DNA by commercial suppliers, intended to ensure it cannot be misused to create viruses or toxins, remains voluntary. A bipartisan bill introduced in 2026 would mandate such screening, but it does not address AI-engineered sequences designed to evade current detection methods.
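To illustrate in the simplest possible terms what screening a synthetic DNA order involves, the toy sketch below checks an order against a small watchlist of sequence fragments. The fragments are made-up placeholders; real screening systems rely on curated sequence-of-concern databases and similarity search, not exact substring matching.

```python
# Toy DNA-order screening. The watchlist fragments are made-up placeholders;
# real screening uses curated databases and similarity search.
WATCHLIST = {
    "fragment_A": "ATGCGTACCGTTAGC",
    "fragment_B": "GGATCCTTAAGCGCA",
}

def screen_order(order_sequence):
    """Return the names of watchlist fragments found in the ordered sequence."""
    order_sequence = order_sequence.upper()
    return [name for name, frag in WATCHLIST.items() if frag in order_sequence]

order = "TTTT" + WATCHLIST["fragment_A"] + "CCCC"  # a toy order containing one fragment
hits = screen_order(order)
print("flag for human review:" if hits else "no match", hits)
```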

The 1975 Biological Weapons Convention, an international treaty banning the production and use of biological weapons, has no AI provisions. The UK AI Security Institute and the US National Security Commission on Emerging Biotechnology have both called for coordinated government action.

Safety testing by AI labs before releasing new models is common, but it is opaque and poorly suited to capturing real-world risk. Researchers have estimated that even small improvements in the ability of AI models to help plan pathogen-related experiments could translate into thousands of additional deaths per year from bioterrorism. When these capabilities will cross critical thresholds remains unclear.

The Nuclear Threat Initiative has proposed a managed-access framework for biological AI tools, matching who can use a given tool to the risk level of the model rather than imposing blanket restrictions. The RAND Center for AI, Security and Technology has outlined a set of actions researchers can take to improve biosecurity, including better screening of DNA synthesis orders and testing models before release. Researchers also argue that biological data itself needs to be governed, especially genomic data that could be used to train models with dangerous capabilities.

Some AI companies have begun voluntarily putting their own safety measures in place. Anthropic activated its highest level of safety protections when releasing its most advanced model in mid-2025. Around the same time, OpenAI revised its Preparedness Framework, which sets out how dangerous a model’s capabilities can become before additional safeguards are required. But these are voluntary, company-specific measures. Anthropic’s CEO, Dario Amodei, has written that the pace of AI development may soon exceed the ability of any one company to assess the risks of a particular model.

When used in a well-controlled environment, AI can help scientists reach their research goals faster. What happens when similar capabilities operate without those controls is a question policy has yet to answer. Overreact, and talent and investment may move elsewhere while the technology advances anyway. React too slowly, and the technology could be used to cause real harm.

Stephen D Turner

Stephen D Turner is an associate professor of data science and assistant dean of research at the University of Virginia School of Data Science. He has worked on biosecurity applications in national security and writes on AI, biosecurity, and other topics.
