"Alexander Berezin, a theoretical physicist at the National Research University of Electronic Technology in Russia, has proposed a new answer to Fermi's paradox — but he doesn't think you're going to like it. Because, if Berezin's hypothesis is correct, it could mean a future for humanity that's 'even worse than extinction.'
'What if,' Berezin wrote in a new paper posted March 27 to the preprint server arxiv.org, 'the first life that reaches interstellar travel capability necessarily eradicates all competition to fuel its own expansion?'" foxnews.com/science/2018/06/04/aliens-are-real-but-humans-will-probably-kill-them-all-new-paper-says.html
In other words, could humanity's quest to discover intelligent life be directly responsible for obliterating that life outright? What if we are, unwittingly, the universe's bad guys?
If you are not sure what the Fermi paradox is, the link should help; there is also a longer explanation of it in the article.
(Score: 2) by HiThere on Wednesday June 06 2018, @06:32PM (3 children)
That's not a new answer, it's one of the classic ones.
FWIW, I don't believe it, as I feel that any lifeform that aggressive would get into fights with itself, and if it were interstellar-capable, those fights would generate visible signs. (Asimov even used that as a hidden sub-theme to justify having only human civilizations, but he blamed it on the actions of the Three Laws, which makes it a bit more plausible.)
OTOH, if you have interstellar capability, wiping out any non-spacefaring life would be easy: just hit their planet with a high-speed (automated?) ship. At, say, 0.1c, even a small ship would carry enough kinetic energy to wipe out any conceivable planet-based civilization. Steering it to the target might be a bit tricky, though.
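A rough back-of-the-envelope check of that claim (the one-tonne ship mass is an assumption; the comment doesn't specify one) using the relativistic kinetic energy formula KE = (gamma - 1) m c^2:

```python
import math

C = 2.99792458e8          # speed of light, m/s
TNT_MEGATON_J = 4.184e15  # joules per megaton of TNT

def relativistic_ke(mass_kg, v_fraction_of_c):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# Assumed "small ship": 1000 kg (one metric tonne) at 0.1c
ke = relativistic_ke(1000, 0.1)
print(f"{ke:.2e} J, ~{ke / TNT_MEGATON_J:.0f} megatons of TNT")
```

That works out to roughly 4.5e17 J, on the order of a hundred megatons of TNT from a single one-tonne impactor, which is in the range of the largest nuclear device ever tested, so the "enough energy" claim holds up.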
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by DutchUncle on Wednesday June 06 2018, @07:40PM
Classic SF also included the concept of races so xenophobic and/or expansionist that they wiped out anything intelligent to take over their livable planet. Cordwainer Smith had one, and Doc Smith's "Lensman" series had another. There have also been ideas about more subtle attacks, like an advanced race offering "medical assistance" that turns out to be sterilization (can't remember the classic SF story; the idea was also used in the Stargate SG-1 episode "2010").
(Score: 2) by cubancigar11 on Thursday June 07 2018, @08:30AM (1 child)
But the paper is clear that the lifeform doesn't have to be aggressive. The hypothesis is that an altruistic, friendly life form that also needs to grow sits at an unstable equilibrium, and that equilibrium can be tipped by a single mistake. As the life form grows, the number of mistakes needed doesn't grow with it; it remains one, so it becomes ever more probable that the mistake will eventually be made.
The paper is also very specific about its definition of what constitutes an alien encounter, and only a small number of variables are needed to determine one.
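That growth argument can be sketched numerically: if each encounter independently carries some small probability p of the one fatal mistake, the chance of at least one mistake over n encounters is 1 - (1 - p)^n, which approaches certainty as n grows (the per-encounter probability of 0.01 below is purely illustrative):

```python
def p_at_least_one_mistake(p_per_encounter, n_encounters):
    """Probability of at least one mistake in n independent encounters."""
    return 1.0 - (1.0 - p_per_encounter) ** n_encounters

# With p = 0.01 per encounter, the cumulative risk climbs toward 1
# as the number of encounters grows.
for n in (1, 10, 100, 1000):
    print(n, p_at_least_one_mistake(0.01, n))
```

So even a tiny, constant per-encounter risk becomes a near-certainty for a civilization whose expansion keeps generating new encounters.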
(Score: 2) by HiThere on Thursday June 07 2018, @06:36PM
Well, there is the argument for "paper clip maximizers", but I don't really believe such a thing is possible, except locally. If you get multiple maximizing entities, they will start trying to convert each other into paper clips.
Note that "paper clip" here stands for any simple goal, and the light-speed limit will force divergent evolution among the maximizers. Even Saberhagen's "Berserker" machines weren't really believable, and he worked at it, allowing things like FTL to increase plausibility. (Fewer generations of separation means less time for divergent evolution.)