
posted by mrpg on Saturday February 23 2019, @01:44PM   Printer-friendly
from the Erlenmeyer-Flask-2 dept.

NASA-Funded Research Creates DNA-like Molecule to Aid Search for Alien Life

In a research breakthrough funded by NASA, scientists have synthesized a molecular system that, like DNA, can store and transmit information. This unprecedented feat suggests there could be an alternative to DNA-based life, as we know it on Earth – a genetic system for life that may be possible on other worlds.

This new molecular system, which is not a new life form, suggests scientists looking for life beyond Earth may need to rethink what they are looking for. The research appears in Thursday's edition of Science Magazine.

[...] The synthetic DNA includes the four nucleotides present in Earth life – adenine, cytosine, guanine, and thymine – but also four others that mimic the structures of the informational ingredients in regular DNA. The result is a double-helix structure that can store and transfer information.

[Steven] Benner's team, which collaborated with laboratories at the University of Texas at Austin, Indiana University School of Medicine in Indianapolis, and DNA Software in Ann Arbor, Michigan, dubbed their creation "hachimoji" DNA (from the Japanese "hachi," meaning "eight," and "moji," meaning "letter"). Hachimoji DNA meets all the structural requirements that allow our DNA to store, transmit and evolve information in living systems.
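Doubling the genetic alphabet from four letters to eight raises the information capacity of each position from two bits to three. A quick back-of-the-envelope check (an editorial illustration, not code from the paper):

```python
import math

# Information capacity per position of a genetic alphabet
# grows with the logarithm of the number of letters.
def bits_per_base(alphabet_size: int) -> float:
    return math.log2(alphabet_size)

standard = bits_per_base(4)   # A, C, G, T
hachimoji = bits_per_base(8)  # four natural + four synthetic letters

print(standard, hachimoji)    # 2.0 vs 3.0 bits per position
```

So a 1,000-base hachimoji strand could encode 3,000 bits where standard DNA encodes 2,000, a 50% density gain at any length.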

Also at NYT, Discover Magazine, and ScienceAlert.

Hachimoji DNA and RNA: A genetic system with eight building blocks (DOI: 10.1126/science.aat0971) (DX)

Related: Scientists Add Letters X and Y to DNA Alphabet
Scientists Engineer First Semisynthetic Organism With Three-base-pair DNA
How Scientists Are Altering DNA to Genetically Engineer New Forms of Life
Synthetic X and Y Bases Direct the Production of a Protein With "Unnatural" Amino Acids


Original Submission

 
  • (Score: 2) by EvilSS on Saturday February 23 2019, @05:05PM (6 children)

    by EvilSS (1456) Subscriber Badge on Saturday February 23 2019, @05:05PM (#805634)
    When it comes to movies/TV there is also a practical reason to consider: Money. Even today CGI isn't exactly cheap. I recall the SG1 producers once talking about how expensive it was every time they had the Asgardians on screen, and that's why they limited the use of the species.
  • (Score: 0) by Anonymous Coward on Saturday February 23 2019, @05:25PM

    by Anonymous Coward on Saturday February 23 2019, @05:25PM (#805654)

    No need to worry. Thor's sister took care of that.

  • (Score: 2) by takyon on Saturday February 23 2019, @06:04PM (4 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday February 23 2019, @06:04PM (#805675) Journal

    SG-1 had some pretty decent effects. Some of it looks goofy, other bits stand up well. Obviously they were cost-conscious. Many alien planets looked like the alien forests of Vancouver. Ship sequences were reused. Etc. And it still ended up being a pretty expensive show to make ($1-2 million per episode).

    It's obviously much easier to have a full-scale humanoid alien since you can put a guy in a costume and makeup. The Asgard in SG-1 used a mix of puppets and CGI for some later sequences. But the trend of humanoid aliens in sci-fi goes back way earlier to 50s movies, Doctor Who, Star Trek, etc.

    Now you have shows like Altered Carbon with absurd budgets and incredible looking special effects. Probably in the neighborhood of $5-7 million per episode [hollywoodreporter.com], reflecting heightened TV spending and an intense battle for eyeballs (if Netflix stops making original content, it gets steamrolled by established license holders like Disney). I would also point at shows like Sanctuary (starring Amanda Tapping) which mostly replaced IRL sets with CGI and green screens.

    Going forward, we have stuff like real-time raytracing [soylentnews.org] that can be used to render a virtual environment around your green screen actors in real time. We also have technological paths [soylentnews.org] that could increase CPU and GPU performance by orders of magnitude. What costs Netflix $5 million per episode today could be done on a kid's computer in 20 years (machine learning and procedural generation can do the heavy lifting of creating a populated virtual world, you just adjust the parameters).
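    The "just adjust the parameters" idea can be sketched in a few lines: a toy procedural generator where changing the seed, octave count, or roughness yields a different landscape on every run (a minimal illustration of the concept, nothing like production tooling):

    ```python
    import random

    # Toy procedural terrain: a sum of octaves of interpolated random noise.
    # Tweaking the parameters (seed, octaves, roughness) produces a different
    # world each time -- the core idea behind procedurally generated
    # environments, at a vastly smaller scale.
    def terrain(width: int, octaves: int = 4, roughness: float = 0.5,
                seed: int = 0) -> list[float]:
        rng = random.Random(seed)
        heights = [0.0] * width
        amplitude = 1.0
        for octave in range(octaves):
            step = max(1, width >> octave)   # coarse detail first, then finer
            points = [rng.uniform(-1, 1) for _ in range(width // step + 2)]
            for x in range(width):
                i, t = divmod(x, step)
                t /= step
                # linear interpolation between neighboring control points
                heights[x] += amplitude * (points[i] * (1 - t) + points[i + 1] * t)
            amplitude *= roughness
        return heights

    hills = terrain(64, seed=42)
    print(len(hills))  # 64 height samples
    ```

    Re-running with a new seed gives a new world for free; real engines do the same thing in three dimensions with far better noise functions.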

    But there's one thing that won't change that much. The human imagination. It is easier for us to come up with humanoid alien designs, and it's easier for humans to relate to these designs. We like WALL-E because "he" is created in our image with human-like traits.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 1, Interesting) by Anonymous Coward on Saturday February 23 2019, @07:49PM (3 children)

      by Anonymous Coward on Saturday February 23 2019, @07:49PM (#805724)

      Actually, "bad puppets" or mechatronics are in a way still more realistic than bad CGI.
      The computer has to start from nothing, so every CGI iteration needs to improve on itself to become more realistic.
      Puppets, by contrast, come from reality; we can see that, and because of that and our limited understanding of nature we kinda know that somewhere in the universe the "puppets" might be real.
      Also, making puppets and mechatronics teaches real-life skills, whilst a computer program 20 years in the future does not.
      So hurray for puppets and robots!
      P.S. Any typos are due to this lengthy text eating into a non-growing input text box (cannot see my typing) and the utterly shitty typing on Android.

      • (Score: 0) by Anonymous Coward on Saturday February 23 2019, @07:57PM (1 child)

        by Anonymous Coward on Saturday February 23 2019, @07:57PM (#805727)

        Ah, also young Obi-Wan didn't like acting with imaginary co-actors that would only show up after he had done all his acting, and maybe that's why some people didn't like the Star Wars prequel episodes?
        Acting toward a puppet is/was easier for some actors vs. thin air... or maybe Obi-Wan was still high from that previous movie that involved an imaginary train?

        • (Score: 2) by takyon on Saturday February 23 2019, @08:20PM

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday February 23 2019, @08:20PM (#805740) Journal

          Some productions, including parts of the Star Wars prequel trilogy IIRC, use stand-in actors who are completely replaced by CGI later. Depending on how you do it, you can have your human Obi-Wan reacting to a real guy in a body suit who is off camera and then render a CGI model where necessary, or use the guy for motion capture purposes.

          In the near future, we will see greater use of entirely virtual actors. This can lead to copyright/personality-rights issues if you are trying to resurrect Marilyn Monroe or somebody, but you can always create a virtual actor from scratch, or perhaps blend characteristics of multiple living or once-living people. Or you can simply ignore legal issues and bypass legally sanctioned distribution mechanisms. If you can animate and provide text-to-speech voice acting for completely virtual photorealistic actors, you can eventually give lone wolves working on their 2035 desktop computers the ability to make a professional-looking movie on a $0 to $10,000 budget (essentially the price of the computer and their free time, maybe a little contract work farmed out to people who are better at using the software, making music, etc.).

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by takyon on Saturday February 23 2019, @08:09PM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday February 23 2019, @08:09PM (#805734) Journal

        Well, we are now entering the machine learning era, where you can copy someone's movements (motion capture) without fancy suits, create photorealistic people who never existed, and other absurdities. Just the last 1-2 years have made a huge difference in the field.

        This video came out 3 hours ago: AI-Based 3D Pose Estimation: Almost Real Time! [youtube.com]

        This is the one I was looking for: This AI Learned How To Generate Human Appearance [youtube.com]

        (you can see how the new one can complement the old one)

        So yeah, animatronics/puppets/mo-cap definitely helped SG-1, Jurassic Park, Lord of the Rings (Gollum/Smeagol/Andy Serkis) and other productions. But CGI is continuing to forge ahead and will be combining with machine learning, ray-tracing, etc. For now I am predicting 1,000x and possibly 1,000,000x improvement in CPU/GPU/tensor computational power using new transistor designs and/or 3D architectures. We have not reached the end of the line yet.

        There is still a demand for robotics-related skills. After all, where are the robots that will cook, clean, AND fuck? Or the robots that will replace factory and fast food workers using a common base model? Or the robotic boots on the ground for the police and military as well as wearable mecha suits? The answer is that all of what is commercially available is embarrassingly bad. There is still a long way to go. Sure, battery technology may be a limiting factor, but we should ultimately be able to make a robot that is indistinguishable in its appearance and movement capabilities from a human (not necessarily a design goal, but an aspirational one) even if it can only run for 5 minutes. Then work on improving battery energy density separately.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]