
posted by janrinok on Wednesday June 06 2018, @10:48AM   Printer-friendly
from the own-worst-enemy dept.

"Alexander Berezin, a theoretical physicist at the National Research University of Electronic Technology in Russia, has proposed a new answer to Fermi's paradox — but he doesn't think you're going to like it. Because, if Berezin's hypothesis is correct, it could mean a future for humanity that's 'even worse than extinction.'

'What if,' Berezin wrote in a new paper posted March 27 to the preprint journal arxiv.org, 'the first life that reaches interstellar travel capability necessarily eradicates all competition to fuel its own expansion?'" foxnews.com/science/2018/06/04/aliens-are-real-but-humans-will-probably-kill-them-all-new-paper-says.html

In other words, could humanity's quest to discover intelligent life be directly responsible for obliterating that life outright? What if we are, unwittingly, the universe's bad guys?

If you are not sure what the Fermi paradox is, the link should help; there is also a longer explanation of it in the article.


Original Submission

 
  • (Score: 2) by HiThere on Wednesday June 06 2018, @06:32PM (3 children)

    by HiThere (866) Subscriber Badge on Wednesday June 06 2018, @06:32PM (#689458) Journal

    That's not a new answer; it's one of the classic ones.

    FWIW, I don't believe it, as I feel that any lifeform that aggressive would get into fights with itself, and if it were interstellar-capable, those fights would generate visible signs. (Asimov even used that as a hidden sub-theme to justify having only human civilizations, but he blamed it on the actions of the Three Laws, which makes things a bit more plausible.)

    OTOH, if you have interstellar capability, wiping out any non-spacefaring life would be easy: just hit their planet with a high-speed (automated?) ship. At, say, 0.1c, even a small ship would carry enough kinetic energy to wipe out any conceivable planet-based civilization (a rough back-of-envelope calculation is sketched below). Steering it to the target might be a bit tricky, though.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
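
    For scale, here is a minimal back-of-envelope sketch of that kinetic-energy claim (editor's illustration; the 1,000-tonne ship mass is an assumed placeholder, not a figure from the comment):

        # Kinetic energy of an impactor at 0.1c, non-relativistic approximation
        # (at 0.1c the relativistic correction is under 1%).
        c = 3.0e8               # speed of light, m/s
        v = 0.1 * c             # impact speed, m/s
        m = 1.0e6               # assumed ship mass: 1,000 tonnes, in kg (hypothetical)

        ke_joules = 0.5 * m * v ** 2          # ~4.5e20 J
        tnt_megatons = ke_joules / 4.184e15   # 1 megaton TNT = 4.184e15 J

        print(f"Kinetic energy: {ke_joules:.2e} J (~{tnt_megatons:,.0f} Mt TNT)")
        # ~4.5e20 J, roughly 1e5 megatons of TNT: thousands of times the combined
        # yield of every nuclear weapon ever built, though still well below the
        # Chicxulub impact (of order 1e23 J).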
  • (Score: 2) by DutchUncle on Wednesday June 06 2018, @07:40PM

    by DutchUncle (5370) on Wednesday June 06 2018, @07:40PM (#689504)

    Classic SF also included the concept of races so xenophobic and/or expansionist that they wiped out anything intelligent to take over its livable planet. Cordwainer Smith had one, and Doc Smith's "Lensman" series had another. There have also been ideas about more subtle attacks, such as an advanced race offering "medical assistance" that turns out to be sterilization (I can't remember the classic SF story; the idea was also used in the Stargate SG-1 episode "2010").

  • (Score: 2) by cubancigar11 on Thursday June 07 2018, @08:30AM (1 child)

    by cubancigar11 (330) on Thursday June 07 2018, @08:30AM (#689771) Homepage Journal

    any lifeform that aggressive would get into fights with itself

    But the paper is clear that the lifeform doesn't have to be aggressive. The whole hypothesis is that an altruistic, friendly life form that also "grows", and needs to grow, sits at an unstable equilibrium, and that equilibrium can be tipped by a single mistake. As the life form grows, the number of mistakes needed doesn't grow; it remains one, and so it becomes more and more probable that the mistake will eventually be made (see the sketch below).

    The hypothesis presented in the paper is very clear that its definition of what constitutes an alien encounter is very specific, and that the number of variables needed to determine it is very low.
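
    A minimal numerical sketch of that "only one mistake is needed" point (editor's illustration; the per-opportunity probability p is an assumed placeholder, not a figure from the paper):

        # Probability that at least one fatal "mistake" happens, given a tiny
        # per-opportunity probability p and n independent opportunities
        # (colonies, generations, decisions: illustrative numbers only).
        p = 1e-6   # assumed chance of a fatal mistake per opportunity (hypothetical)
        for n in (1e3, 1e6, 1e7, 1e8):
            p_at_least_one = 1 - (1 - p) ** n
            print(f"n = {n:>12,.0f}   P(at least one mistake) = {p_at_least_one:.5f}")
        # As n grows, the probability approaches 1: the mistake threshold stays at
        # one, but the number of chances to make it keeps growing with expansion.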

    • (Score: 2) by HiThere on Thursday June 07 2018, @06:36PM

      by HiThere (866) Subscriber Badge on Thursday June 07 2018, @06:36PM (#690001) Journal

      Well, there is the argument for "paper clip maximizers", but I don't really believe such a thing is possible, except locally. If you get multiple maximizing entities, they will start trying to convert each other into paper clips.

      Note that "paper clip" here stands for any simple goal, and the light-speed limit will force divergent evolution among the maximizers. Even Saberhagen's "Berserker" machines weren't really believable, and he worked at it and allowed such things as FTL to increase plausibility. (Fewer generations of separation give less time for divergent evolution.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.