
posted by janrinok on Thursday October 02, @09:16AM   Printer-friendly

Experts Alarmed That AI Is Now Producing Functional Viruses:

In real world experiments, a team of Stanford researchers demonstrated that a virus with AI-written DNA could target and kill specific bacteria, they announced in a study last week. It opened up a world of possibilities where artificial viruses could be used to cure diseases and fight infections.

But experts say it also opened a Pandora's box. Bad actors could just as easily use AI to crank out novel bioweapons, keeping doctors and governments on the back foot with the outrageous pace at which these viruses can be designed, warn Tal Feldman, a Yale Law School student who formerly built AI models for the federal government, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech (no word on whether the two are related).

"There is no sugarcoating the risks," the pair warned in a piece for the Washington Post. "We're nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that's the world we're now living in."

In the study, the Stanford researchers used an AI model called Evo to invent DNA for a bacteriophage, a virus that infects bacteria. Unlike a general purpose large language model like ChatGPT, which is trained on written language, Evo was exclusively trained on millions of bacteriophage genomes.

They focused on an extensively studied phage called phiX174, which is known to infect strains of the bacterium E. coli. Using the Evo model, the team came up with 302 candidate genomes based on phiX174 and put them to the test by using the designs to chemically assemble new viruses.

Sixteen of them worked, infecting and killing the E. coli strains. Some of them were even deadlier than the natural form of the virus.

But "while the Stanford team played it safe, what's to stop others from using open data on human pathogens to build their own models?" the two Feldmans warned. "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."

That means using the same AI tech to design antibodies, antivirals, and vaccines. This work is already being done to some extent, but the vast amounts of data needed to accelerate such pioneering research "is siloed in private labs, locked up in proprietary datasets or missing entirely."

"The federal government should make building these high-quality datasets a priority," the duo opined.

From there, the federal government would need to build the necessary infrastructure to manufacture these AI-designed medicines, since the "private sector cannot justify the expense of building that capacity for emergencies that may never arrive," they argue.

Finally, the Food and Drug Administration's sluggish and creaking regulatory framework would need an overhaul. (Perhaps in a monkey's paw of such an overhaul, the FDA said it's using AI to speed-run the approval of medications.)

"Needed are new fast-tracking authorities that allow provisional deployment of AI-generated countermeasures and clinical trials, coupled with rigorous monitoring and safety measures," they said.

The serious risks posed by AI virus generation shouldn't be taken lightly. Yet, it's worth noting that the study in question hasn't made it out of peer review yet and we still don't have a full picture of how readily someone could replicate the work the scientists did.

But with agencies like the Centers for Disease Control and Prevention being gutted, and vaccines and other medical interventions being attacked by a health-crank riddled administration, there's no denying that the country's medical policy and infrastructure is in a bad place. That said, when you consider that the administration is finding any excuse to rapidly deploy AI in every corner of the government, it's worth treading lightly when we ask for more.

More on synthetic biology: Scientists Debate Whether to Halt Type of Research That Could Destroy All Life on Earth

AI Creates Bacteria-Killing Viruses: 'Extreme Caution' Warns Genome Pioneer:

A California team has used artificial intelligence to design viral genomes, which were then built and tested in a laboratory. A number of these AI-designed viruses went on to successfully infect bacteria, proving that generative models can create functional genomes.

"The first generative design of complete genomes."

That's what researchers at Stanford University and the Arc Institute in Palo Alto called the results of these experiments. A biologist at NYU Langone Health, Jef Boeke, celebrated the experiment as a substantial step towards AI-designed lifeforms, according to MIT Technology Review.

"They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements," Boeke said.

The team created 302 full genomes designed by their AI, Evo, an LLM analogous to ChatGPT but trained on genomes rather than text, and introduced them to E. coli test systems. Sixteen of these designs produced working bacteriophages that were able to replicate and kill the bacteria.

Brian Hie, who leads the Arc Institute lab, reflected on the moment the plates revealed clearings where bacteria had died. "That was pretty striking, just actually seeing, like, this AI-generated sphere," said Hie.

The team targeted bacteriophage phiX174, a minimal DNA phage with approximately 5,000 bases across 11 genes. Around 2 million bacteriophage genomes were used to train the AI model, allowing it to learn the patterns in their makeup and gene order. It then proposed new, complete genomes.
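The pipeline described above — train a generative model on a corpus of phage genomes, then sample new candidate sequences from it — can be illustrated with a deliberately tiny sketch. Evo is a large genomic language model, so the order-k Markov chain below is only a toy stand-in for the same next-nucleotide idea; the function names and the miniature training sequences are invented for illustration, not taken from the study.

```python
import random
from collections import defaultdict

def train_markov(genomes, k=3):
    """Count k-mer -> next-nucleotide transitions across training genomes."""
    counts = defaultdict(lambda: defaultdict(int))
    for g in genomes:
        for i in range(len(g) - k):
            counts[g[i:i + k]][g[i + k]] += 1
    return counts

def sample_genome(counts, length, k=3, seed=0):
    """Sample a new sequence from the learned transition statistics."""
    rng = random.Random(seed)
    seq = rng.choice(list(counts.keys()))  # start from a seen k-mer
    while len(seq) < length:
        nxt = counts.get(seq[-k:])
        if not nxt:  # unseen context: fall back to a uniform random base
            seq += rng.choice("ACGT")
            continue
        bases, weights = zip(*nxt.items())
        seq += rng.choices(bases, weights=weights)[0]
    return seq

# Toy "training set" standing in for the ~2 million real phage genomes.
genomes = ["ACGTACGGTACCGTAACGT", "ACGGTACGTACCGGTAACG"]
model = train_markov(genomes, k=3)
candidate = sample_genome(model, length=30, k=3)
print(len(candidate), set(candidate) <= set("ACGT"))  # prints: 30 True
```

In the real workflow the analogue of `sample_genome` produced the 302 candidate genomes, which then had to be chemically synthesized and tested in the lab — the model proposes, but only the wet-lab screen reveals which designs actually function.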

J. Craig Venter helped create the cells with these synthetic genomes. He saw the approach as being "just a faster version of trial-and-error experiments."

"We did the manual AI version - combing through the literature, taking what was known," he explained.

Speed is the appeal here. AI prediction of protein structure could certainly accelerate drug and biotechnology development. The results could then be used to fight bacterial infections in, for example, farming, or in gene therapy.

Samuel King, a student who led the project, said: "There is definitely a lot of potential for this technology."

The team excluded human-infecting viruses from the AI's training, but testing in this area could still be dangerous, warns Venter.

"One area where I urge extreme caution is any viral enhancement research, especially when it's random so you don't know what you are getting.

"If someone did this with smallpox or anthrax, I would have grave concerns."

There are other hurdles with this idea. Moving from a 'simple' phage to something more complex, such as bacteria, is something that AI simply won't be able to do at this point.

"The complexity would rocket from staggering to ... way way more than the number of subatomic particles in the universe," Boeke said.

Despite the challenges surrounding this test, it is an extremely impressive result - and something that could influence the future of genetic engineering.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Informative) by PiMuNu on Thursday October 02, @09:43AM (5 children)

    by PiMuNu (3823) on Thursday October 02, @09:43AM (#1419278)

    Worth having a look at "The Windup Girl" by Paolo Bacigalupi. Set in a dystopian future where bioweapons have wiped out food production.

    • (Score: 4, Interesting) by khallow on Thursday October 02, @12:14PM (4 children)

      by khallow (3766) Subscriber Badge on Thursday October 02, @12:14PM (#1419287) Journal
      Or "A War of Shadows" where a series of designer bioweapon attacks have resulted in the imposition of fascism in the US.
      Turns out a secret US faction developed the bioweapon, organized the various terrorist attacks, and used the crisis they created to take control of the US.
      • (Score: 3, Funny) by namefags_are_jerks on Thursday October 02, @01:15PM (2 children)

        by namefags_are_jerks (17638) on Thursday October 02, @01:15PM (#1419292)

        Personally I'd like the prompt to produce a custom virus that targets a specific racial group who are quite rudely occupying land that belongs to my race according to an old collection of books of dubious origin..

        • (Score: 2) by stormwyrm on Friday October 03, @03:17AM

          by stormwyrm (717) on Friday October 03, @03:17AM (#1419370) Journal
          The Eclipse / A Song Called Youth trilogy of books by John Shirley. It's chillingly prophetic in its vision of the future, with the rise of racist right-wing politics and fundamentalist religion around the world, Russia pushing west into Europe. There's a plot thread there where a racially targeted virus is being developed by a powerful right wing private military contractor (which eerily resembles Blackwater IRL). However, by the end of the trilogy when they've been stopped, they only succeeded in making a virus that kills indiscriminately since humans of different races are not so dissimilar at the genetic level.
          --
          Numquam ponenda est pluralitas sine necessitate.
        • (Score: 2) by cereal_burpist on Tuesday October 07, @02:59AM

          by cereal_burpist (35552) on Tuesday October 07, @02:59AM (#1419779)

          Bumper sticker that I've seen (in USA): "You are on Indian land"

      • (Score: 1) by khallow on Thursday October 02, @04:27PM

        by khallow (3766) Subscriber Badge on Thursday October 02, @04:27PM (#1419318) Journal
        Oops, author of "A War of Shadows" is Jack Chalker.
  • (Score: 4, Interesting) by looorg on Thursday October 02, @12:12PM (9 children)

    by looorg (578) on Thursday October 02, @12:12PM (#1419286)

    Scientists Debate Whether to Halt Type of Research That Could Destroy All Life on Earth

I can't help but think that sounds like a good idea. Unless we want to do it so we know what to do in case it is about to happen ... But only in theory. No practical testing!

    • (Score: 5, Interesting) by shrewdsheep on Thursday October 02, @12:35PM (5 children)

      by shrewdsheep (5215) Subscriber Badge on Thursday October 02, @12:35PM (#1419289)

I would argue that Pandora's box has already been opened. There is no stopping or turning back now, as with AI research in general.

It is certainly an exaggeration that all life could be destroyed, but immense harm could be inflicted. At this moment, research is urgently needed to limit LLMs in specific areas. This cannot be done by additional training, as prompt engineering is likely to get around such measures, but rather by "neurosurgery", i.e. the identification of critical layers/features for certain knowledge and the randomization of the corresponding weights.

      • (Score: 5, Interesting) by Thexalon on Thursday October 02, @01:04PM (3 children)

        by Thexalon (636) on Thursday October 02, @01:04PM (#1419291)

        It is certainly an exaggeration that all life could be destroyed, but immense harm could be inflicted.

        Depending on what these viruses are targeting, it could make the P-Tr event look preferable, though. Like, "maybe something survives at the bottom of the ocean near the thermal vents" kind of event.

        --
        "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
        • (Score: 2, Interesting) by khallow on Thursday October 02, @06:23PM (2 children)

          by khallow (3766) Subscriber Badge on Thursday October 02, @06:23PM (#1419332) Journal
          What can viruses target that has that kind of reach? The enormous variation of life means there is no common weakness that a virus can latch onto. If I were looking for a destructive organism, then something significantly more efficient than photosynthesis would be a start. And if it were also inedible to organic life (say a silicon-carbon based organism), that could wipe out most plant life and then starve most animal life too.
          • (Score: 2) by Thexalon on Friday October 03, @12:01AM (1 child)

            by Thexalon (636) on Friday October 03, @12:01AM (#1419353)

            I was thinking of a virus that targets something like the ATP cycle which handles key energy processes in all cells. Stuff like that which is basic building blocks of life.

            --
            "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
            • (Score: 3, Informative) by khallow on Friday October 03, @12:46AM

              by khallow (3766) Subscriber Badge on Friday October 03, @12:46AM (#1419359) Journal

              I was thinking of a virus that targets something like the ATP cycle which handles key energy processes in all cells.

The problem is that the ATP cycle is not something a virus can directly target. The typical infectious virus starts outside the cell. It needs to target something on the surface of the cell in order to get inside. And once inside, it needs to target something that will make more viruses. Targeting ATP directly (probably by nailing the mitochondria, which have most of the ATP machinery in eukaryotes) destroys the energy needed to make more viruses.

That's why I was thinking artificial competition for plants that can operate at really low levels of CO2. If you can bring CO2 below around 150 ppm permanently, it will shut down most plants. They might not die immediately, but they can't grow.

      • (Score: 4, Informative) by VLM on Thursday October 02, @04:25PM

        by VLM (445) Subscriber Badge on Thursday October 02, @04:25PM (#1419317)

        limit LLMs in specific areas.

        You can run and train local, or at least at nation state cloud size.

        It's parallel to the idea of trying to license and regulate C compilers to prevent the spread of any crypto better than rot-13 back in the 80s and early 90s.

    • (Score: 2) by Username on Thursday October 02, @02:34PM (2 children)

      by Username (4557) on Thursday October 02, @02:34PM (#1419297)

How will halting this research stop Israel from engineering a virus that kills only Palestinians? What will stop them? Sure, stopping research here will slow down their progress, since they can't just steal it, but it does nothing to prevent them from continuing the research themselves. Then they have on-demand bioweapons, and we don't.

      • (Score: 5, Insightful) by shrewdsheep on Thursday October 02, @06:05PM

        by shrewdsheep (5215) Subscriber Badge on Thursday October 02, @06:05PM (#1419331)

Genetically, Israelis and Palestinians are nigh identical https://doi.org/10.13140/RG.2.2.22216.94721 [doi.org]

Engineering a virus against specific populations would be very challenging in general, as viruses interact with only a very few host proteins, and those would have to be polymorphic across populations. Most likely any engineered virus would be universal.

      • (Score: 1) by khallow on Saturday October 04, @03:42AM

        by khallow (3766) Subscriber Badge on Saturday October 04, @03:42AM (#1419448) Journal

        How will this research halt stop israel from engineering a virus that kills only Palestinians?

        It'd be easier to engineer a virus that has a reliable vaccine. Then let it go for a few months before you "discover" the vaccine.

  • (Score: 4, Informative) by pdfernhout on Thursday October 02, @01:51PM (6 children)

    by pdfernhout (5984) on Thursday October 02, @01:51PM (#1419295) Homepage

    "Charting the Future of Biotechnology and AI"
    https://soylentnews.org/article.pl?sid=25/05/13/0133228 [soylentnews.org]
"He says here on biotech that he used to think AI could be used to defend against biological attacks to mitigate the risks of AI being used to create biological attacks -- but he no longer thinks that. He now expects bioweapons that AI could be used to create may not be significantly defensible against on a short time-scale by AI-developed cures because bioweapons are "offense dominant". He says that is "why we are so worried about it". So, essentially, do people think he is saying between the lines that AI used to make bioweapons will doom us all and there is nothing we can do about it except race forward to it? ...
            I feel the AI and biotech risk issues can only be transcended by a change from a scarcity perspective to an abundance perspective (as with my sig). If AI is used cooperatively from an abundance perspective, there is some hope that AI might be a net benefit to humanity. Otherwise, if AI is used from a competitive scarcity perspective, we are probably doomed as Eric Schmidt seems to me to imply (reading between the lines, perhaps incorrectly)."

    --
    The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
    • (Score: 1, Insightful) by Anonymous Coward on Thursday October 02, @05:09PM (5 children)

      by Anonymous Coward on Thursday October 02, @05:09PM (#1419324)

      Yep, we're all looking at a future of gray goo...
      https://en.wikipedia.org/wiki/Gray_goo [wikipedia.org]

      ...Unless we somehow all (where all means "humanity with bio design & synthesis capability") realize that we're all in this together, only one Earth & one biosphere that makes our lives possible.

      • (Score: 2, Funny) by khallow on Thursday October 02, @05:39PM (2 children)

        by khallow (3766) Subscriber Badge on Thursday October 02, @05:39PM (#1419327) Journal
        Would the gray goo agree that we're all in this together?
        • (Score: 0) by Anonymous Coward on Friday October 03, @01:51AM (1 child)

          by Anonymous Coward on Friday October 03, @01:51AM (#1419364)

          Gray goo is the result, not the actor.

          • (Score: 2, Funny) by khallow on Friday October 03, @02:03AM

            by khallow (3766) Subscriber Badge on Friday October 03, @02:03AM (#1419365) Journal

            Gray goo is the result, not the actor.

            So you assume. Will gray goo agree with that?

      • (Score: 0) by Anonymous Coward on Friday October 03, @05:17AM

        by Anonymous Coward on Friday October 03, @05:17AM (#1419376)

        I'm not sold on this Grey Goo thing. Can I have Gay Goo instead?

      • (Score: 2) by pdfernhout on Friday October 03, @04:21PM

        by pdfernhout (5984) on Friday October 03, @04:21PM (#1419406) Homepage

        "The Wombat (All is One)"
        https://www.youtube.com/watch?v=IHyH3MPgZDo [youtube.com]
                "The wombat speaks, and he's smarter than you, so listen up! In less than a minute, this rapid-fire animation tells you everything you need to know about how to get along on earth for the next million years. By Jason Ables. (For more information, visit www.global-mindshift.org)"

        --
        The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
  • (Score: 3, Insightful) by Anonymous Coward on Friday October 03, @02:23AM (1 child)

    by Anonymous Coward on Friday October 03, @02:23AM (#1419366)
    I don't see how the AI stuff makes determining whether the synthetic virus is actually effective at doing what you want it to do any easier. The article doesn't say that it was able to determine ahead of time whether the virus genomes it cooked up were any more effective at killing the bacteria. It seems they still had to synthesise the viruses, and then put them through their paces by infecting the target organism to see how well they did. What's to stop others from using open data on human pathogens to build their own models? The fact that you still have to actually test the virus on humans to see if it does what you want it to do. How different is this from what nature has been doing for the past couple billion years? We got SARS-CoV-2 this way.
    • (Score: 0) by Anonymous Coward on Saturday October 04, @12:41AM

      by Anonymous Coward on Saturday October 04, @12:41AM (#1419440)

      > What's to stop others from using open data on human pathogens to build their own models?

      The late great Firesign Theater --
          https://www.youtube.com/watch?v=D3zZ_ih0Jpc [youtube.com]

  • (Score: 1, Insightful) by Anonymous Coward on Sunday October 05, @08:31AM

    by Anonymous Coward on Sunday October 05, @08:31AM (#1419552)

    "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."

    In practice the US will just wait for China to notice the problem, build new hospitals in days, lock down provinces; then mostly ignore the problem while millions of US people get sick and die. Then blame China for covering it up.

    I mean seriously look what the US has learned from Covid-19. You think the US will handle it better the next time round? 🤣
