
SoylentNews is people

posted by on Friday April 10 2015, @01:21AM
from the stay-on-my-lawn-for-a-long-long-time dept.

From the phys.org article:

As modern software systems continue inexorably to increase in complexity and capability, users have become accustomed to periodic cycles of updating and upgrading to avoid obsolescence—if at some cost in terms of frustration. In the case of the U.S. military, having access to well-functioning software systems and underlying content is critical to national security, but updates are no less problematic than among civilian users and often demand considerable time and expense. That is why today DARPA announced it will launch an ambitious four-year research project to investigate the fundamental computational and algorithmic requirements necessary for software systems and data to remain robust and functional in excess of 100 years.

The Building Resource Adaptive Software Systems, or BRASS, program seeks to realize foundational advances in the design and implementation of long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate. Such advances will necessitate the development of new linguistic abstractions, formal methods, and resource-aware program analyses to discover and specify program transformations, as well as systems designed to monitor changes in the surrounding digital ecosystem. The program is expected to lead to significant improvements in software resilience, reliability and maintainability.

DARPA's press release and call for research proposals.

  • (Score: 4, Funny) by c0lo on Friday April 10 2015, @01:22AM

    by c0lo (156) on Friday April 10 2015, @01:22AM (#168577) Journal
    All immortal software is written in COBOL.
    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0
    • (Score: 5, Touché) by sigma on Friday April 10 2015, @01:38AM

      by sigma (1225) on Friday April 10 2015, @01:38AM (#168581)

      Undead and immortal are not the same thing.

      • (Score: 1) by Wierd0n3 on Friday April 10 2015, @05:06AM

        by Wierd0n3 (1033) on Friday April 10 2015, @05:06AM (#168631)

        vampires? (not the sparkly ones lol)

        • (Score: 3, Insightful) by Jeremiah Cornelius on Friday April 10 2015, @03:58PM

          by Jeremiah Cornelius (2785) on Friday April 10 2015, @03:58PM (#168767) Journal

          Is this in the service of maintaining the USA as a "glorious, thousand-year reich"?

          There are so many kinds of wrong in technology, politics and general mental-health. It's hard to know from which angle this deserves the greatest criticism.

          --
          You're betting on the pantomime horse...
    • (Score: 2) by mtrycz on Friday April 10 2015, @10:14AM

      by mtrycz (60) on Friday April 10 2015, @10:14AM (#168687)

      No, JavaScript is the best candidate for the job.

      --
      In capitalist America, ads view YOU!
  • (Score: 1, Interesting) by Anonymous Coward on Friday April 10 2015, @01:31AM

    by Anonymous Coward on Friday April 10 2015, @01:31AM (#168580)

    Linus and crew are allowing the systemd and GNOME crews to delete all the hard work of the past.

    • (Score: 0) by Anonymous Coward on Friday April 10 2015, @08:43PM

      by Anonymous Coward on Friday April 10 2015, @08:43PM (#168831)

      You're talking about completely different layers.
      Torvalds and associates build the kernel.
      Other folks build other layers consisting of many distros [without-systemd.org] and desktop environments[1] [wikipedia.org]--many of those having routed around the damage that is systemd.

      Development of alternatives has proceeded in parallel with systemd.
      Nothing has been "deleted".
      It's still up to you if you want to wrestle with the tar baby. [wikipedia.org]

      [1] Note how GNOME has been forked to avoid lock-in to its "improvements".

      -- gewg_

  • (Score: 5, Funny) by meisterister on Friday April 10 2015, @01:41AM

    by meisterister (949) on Friday April 10 2015, @01:41AM (#168583) Journal

    Given that the BSD developers care about functionality and stability more than pandering to the lowest common denominator, I would fully expect a BSD install to last for several decades if not a century (barring component failures).

    They should also use a KISS approach, since I don't expect that anyone 100 years from now would want to maintain this clusterf*ck http://en.wikipedia.org/wiki/Systemd [wikipedia.org]

    --
    (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
    • (Score: 5, Interesting) by sigma on Friday April 10 2015, @01:59AM

      by sigma (1225) on Friday April 10 2015, @01:59AM (#168586)

      Given that the BSD developers care about functionality and stability more than pandering to the lowest common denominator, I would fully expect a BSD install to last for several decades if not a century (barring component failures).

      Then you're completely missing the point of BRASS. Their goal is to have software that is ADAPTIVE - software that can modify itself to cope with hardware and other resource changes and developments. BSD's stability (stagnation?) is the opposite of the dynamic system DARPA are envisioning, and like it or not, systemd looks much more like a step down that adaptive path than any other init system.

      The Building Resource Adaptive Software Systems, or BRASS, program seeks to realize foundational advances in the design and implementation of long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate.

      • (Score: 1, Insightful) by Anonymous Coward on Friday April 10 2015, @02:01AM

        by Anonymous Coward on Friday April 10 2015, @02:01AM (#168588)

        fuck you and systemd

        • (Score: 3, Funny) by sigma on Friday April 10 2015, @02:05AM

          by sigma (1225) on Friday April 10 2015, @02:05AM (#168591)

          Fuck me?

          Sorry, AC, but I don't go in for these backdoor shenanigans. Sure, I'm flattered, maybe even a little curious, but the answer is no!

          • (Score: 4, Interesting) by tynin on Friday April 10 2015, @02:23AM

            by tynin (2013) on Friday April 10 2015, @02:23AM (#168600) Journal

            I'm pretty sure you don't give a toot about systemd, because that isn't what this is about. It is about truly adaptive software that can integrate in the face of changing hardware. One of the places these systems will make sense is in infrastructure that just needs to do one thing well, and for a long, long time. These systems will not be as modern as the new tech of that day yet to come, but they don't need to be; they just need to work. Some things shouldn't need a staff of admins constantly relearning the latest init systems of the day to keep the machine working after the next patch. Having a solid high tech infrastructure that can be repaired and perhaps scaled with the hardware tech of the day would be a boon across the board for the entire baseline of civilization.

            • (Score: 1, Insightful) by Anonymous Coward on Friday April 10 2015, @07:49AM

              by Anonymous Coward on Friday April 10 2015, @07:49AM (#168664)

              You mean like TCP/IP along with the associated alphabet soup of protocols? Packetheads figured that stuff out decades ago. It would be nice to apply that methodology to other things. The track record for networking robustness is amazing.

              • (Score: 0) by Anonymous Coward on Friday April 10 2015, @08:24AM

                by Anonymous Coward on Friday April 10 2015, @08:24AM (#168671)

                Apparently TCP/IP software was not able to automatically adapt to a growing number of connected computers, so a manual update (IPv6) was needed.

            • (Score: 4, Funny) by Gaaark on Friday April 10 2015, @04:48PM

              by Gaaark (41) Subscriber Badge on Friday April 10 2015, @04:48PM (#168773) Journal

              Having a solid high tech infrastructure that can be repaired and perhaps scaled with the hardware tech of the day would be a boon across the board for the entire baseline of civilization.

              And call the software "Hari Seldon"

              --
              --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 3, Funny) by lentilla on Friday April 10 2015, @02:12AM

        by lentilla (1770) on Friday April 10 2015, @02:12AM (#168594)

        systemd looks much more like a step down that adaptive path

        Well put. Slightly further down that road and we'll be calling it "SkyNet".

      • (Score: 2) by c0lo on Friday April 10 2015, @02:35AM

        by c0lo (156) on Friday April 10 2015, @02:35AM (#168605) Journal

        Their goal is to have software that is ADAPTIVE - software that can modify itself to cope with hardware and other resource changes and developments.

        Like what? Write a controller for a caterpillar-track robotic tank and have it adapt with no difficulty to Star Wars walkers?

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0
        • (Score: 3, Interesting) by sigma on Friday April 10 2015, @03:54AM

          by sigma (1225) on Friday April 10 2015, @03:54AM (#168617)

          See tibman's comment below. http://soylentnews.org/comments.pl?sid=6948&cid=168614 [soylentnews.org]

          It's about software that's tolerant to large disruptions to its hardware, potentially including, as you say, different robotics platforms.

          Frankly, it's not that hard to imagine - older platforms like Multics and even commodity Amiga computers had some very good automatic configuration systems. A redesign that included the ability to search and integrate something like OSRF projects [osrfoundation.org] on demand should be able to handle robotic hardware variants.

          Better hardware design standards that included a modern plug-and-play version of the Amiga's Autoconfig would go a long way toward making component changes seamless, as would open hardware with ROM-based self-documenting properties.
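          A rough Python sketch of what ROM-based self-description could look like; the header layout, field names, and magic value here are all invented for illustration, not any real Autoconfig format:

```python
import struct

# Hypothetical self-describing device ROM header, loosely inspired by
# Amiga Autoconfig: vendor, device, class, revision, a magic marker,
# and memory size in a fixed layout the OS can read before any driver
# for the device exists.
ROM_HEADER = struct.Struct(">HHBB2sI")  # big-endian, 12 bytes total

def describe(rom_bytes):
    vendor, device, dev_class, rev, magic, mem = ROM_HEADER.unpack_from(rom_bytes)
    if magic != b"SD":
        raise ValueError("not a self-describing ROM")
    return {"vendor": vendor, "device": device, "class": dev_class,
            "revision": rev, "mem_bytes": mem}

# A fake ROM image for an invented vendor 0x1234, device 0x0001.
rom = ROM_HEADER.pack(0x1234, 0x0001, 0x03, 1, b"SD", 64 * 1024)
print(describe(rom))
```

          An OS on any future hardware revision could enumerate devices this way before loading a single driver.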

          • (Score: 0) by Anonymous Coward on Friday April 10 2015, @07:46AM

            by Anonymous Coward on Friday April 10 2015, @07:46AM (#168663)

            Then it is no longer the software that is adaptable but the hardware that is fixed enough through time that the software does not need to change itself. Might as well call Windows infinitely adaptable because a USB stick can be plugged in with a patching script.

            • (Score: 2) by tibman on Friday April 10 2015, @01:26PM

              by tibman (134) Subscriber Badge on Friday April 10 2015, @01:26PM (#168734)

              A USB stick isn't a piece of hardware the OS is running on. The hardware shouldn't be fixed in time; that is the point. The software should be adaptable enough to recognize RAM, processors, and storage being added to and removed from the system. You should be able to bisect the bus and have the system still function (end users won't even notice).
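              As a toy illustration (not any real BRASS interface; the class name and inventory keys are invented), adaptation starts with noticing that the machine changed between polls:

```python
# A monitor that diffs a hardware inventory between polls and reports
# changes, so software can adapt instead of assuming a fixed machine.
class ResourceMonitor:
    def __init__(self, probe):
        self.probe = probe          # callable returning {resource: quantity}
        self.known = probe()

    def poll(self):
        """Return a list of (resource, old, new) change events."""
        current = self.probe()
        events = []
        for name in current.keys() | self.known.keys():
            old, new = self.known.get(name, 0), current.get(name, 0)
            if new != old:
                events.append((name, old, new))
        self.known = current
        return events

inventory = {"cpus": 8, "ram_gb": 32}
mon = ResourceMonitor(lambda: dict(inventory))
inventory["cpus"] = 4               # half the CPUs are pulled while running
print(mon.poll())                   # -> [('cpus', 8, 4)]
```

              The Multics split-and-rejoin trick mentioned elsewhere in this thread is essentially this loop, done by hand in the 1970s.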

              --
              SN won't survive on lurkers alone. Write comments.
      • (Score: 3, Informative) by q.kontinuum on Friday April 10 2015, @08:00AM

        by q.kontinuum (532) on Friday April 10 2015, @08:00AM (#168665) Journal

        systemd looks much more like a step down that adaptive path than any other init system

        If only there was a "flamebait +1"... Some baits are just too entertaining to down-mod them ;-)

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 0) by Anonymous Coward on Friday April 10 2015, @08:21AM

        by Anonymous Coward on Friday April 10 2015, @08:21AM (#168670)

        Their goal is to have software that is ADAPTIVE - software that can modify itself

        Ah, self-modifying code. I thought that was identified as bad practice a long time ago. ;-)

    • (Score: 5, Interesting) by bzipitidoo on Friday April 10 2015, @05:07AM

      by bzipitidoo (4388) Subscriber Badge on Friday April 10 2015, @05:07AM (#168632) Journal

      Not a chance. In the past 30 years, we've moved from 8-bit to 16-, 32-, and 64-bit systems. Every one of those moves required a lot of reworking. You might think after moving from 16 to 32, we'd have it down, and the shift to 64 bit would be easy, but no. Many programs have an implicit limit on the amount of data they can handle, often restricted to what 32-bit addresses allow, and must be extensively rewritten, not just recompiled, to expand their capacity. Systems have changed so much in so many other ways. Hard drives took a big jump from 40M to 500M in the mid '90s, and that killed much of the interest in compressed file systems. The 80486 introduced some new operations that are key to running a multitasking OS. Graphics computations have shifted hugely from CPUs driving primitive VGA graphics without any GPU at all, to dedicated massively parallel GPUs. It took a massive rewrite of software to properly utilize that change, and we're still working on it. That's the reason the code for something like the original Doom game engine is no longer practical or particularly interesting -- it just isn't relevant to current graphics. It's also why X Windows so badly needs a redesign, and why projects like Wayland have sprung up. The xlib part of X Windows is full of 1980s cruft for having the CPU draw lines and other such primitive operations that GPUs do now.
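      The implicit 32-bit limit described above can be sketched in a few lines of Python; `size_as_int32` (an illustrative helper, not real code from any program) mimics what happens when a C program stores a file size in a signed 32-bit `int`:

```python
# Keep only the low 32 bits and reinterpret as signed, exactly as a
# C `int32_t` assignment would.
def size_as_int32(nbytes):
    n = nbytes & 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

ok = 1 * 1024**3           # 1 GiB fits in a signed 32-bit field
big = 3 * 1024**3          # 3 GiB does not
print(size_as_int32(ok))   # 1073741824
print(size_as_int32(big))  # -1073741824 (wrapped negative: the classic 2 GiB bug)
```

      Fixing every field like this one is why "just recompile for 64-bit" was never enough.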

      OSes have also changed massively. In the days of DOS, everyone provided their own graphics drivers, and programs were quite free to just take over the system and ignore DOS. Protected mode was another huge advance that empowered a massive shift in OS technology, which then drove a big rewrite of a great deal of software to make apps more aware of system facilities. For instance, no DOS program had code to handle the "clipboard", and, without help, none can participate in the copying and pasting between apps that is easy and routine now. Also, socket programming used to be a niche; now, with the Internet everywhere, networking libraries are much more important. Early Linux used the "a.out" executable file format and libc5. Changing to ELF and libc6 was another big move that required much reworking; a simple recompile was often not enough. Relatively new in hardware support is the No Execute bit for virtual pages. There could still be programs that deliberately modify their own machine code, and none of those will work on a system that uses a No Execute bit; they must be modified. Who knows what the future will bring in the way of advances? Virtual machine support is still new, and still difficult to do cleanly on a PC.

      I don't think computing is settled enough yet to think of 100-year lifetimes. Programming languages are more numerous and divergent than ever, with only a broad consensus that structured programming, OOP, and functional programming are all good, but no agreement on the details.

      We're still stuck with a lot of legacy PC design. Shifting away from the antiquated PC platform to finally get rid of that will require much work.

      • (Score: 3, Interesting) by tftp on Friday April 10 2015, @06:38AM

        by tftp (806) on Friday April 10 2015, @06:38AM (#168654) Homepage

        You are describing existing execution environments. They all are unsuitable, of course, that's why DARPA is asking for a solution.

        I would think that the desired solution will come with its own, sufficiently abstract language and I/O, and all that can run on any hardware that can execute the language (interpreted, or compiled into IL, or whatever.) This might work for tasks that are simple and abstract, like calculation of digits of Math.PI. However any software that operates hardware probably cannot be portable enough to do the job with an acceptable efficiency. Sure, you can render a modern FPS with merely setPixel() API, but that would not be such a great idea - especially if future monitors have not only (X,Y) but Z as well.

        To rephrase a classical joke, you can write software that will remain usable for 100 years. But nobody will want to use it, except for a few very special applications, like control circuits. You can run Windows 3.1 today, in a VM if you must; but why would you want to do that if the only external connections in that OS are a CD-ROM and a floppy? It's pretty hard to design software that is not only functional so far in the future, but is also useful. Most of the software today is made for a specific purpose, be it to control a TV set or to decode a compressed audio file and play samples via some audio hardware. It has no value outside of that compression format and that audio wave API.

        This DARPA contract probably will end up taking several years, several million dollars, and will deliver a souped-up VM that will be capable of running a well defined execution environment. Perhaps it will have some abstraction capabilities in the hardware. For example, if it has video cameras, you can enumerate them, you can find out their orientation, resolution, day/night settings... you can poll for LIDARs, propulsion, energy sources - all the stuff that you could find in, say, a robot. You can expand this introspection to batteries, RAM, thermal management. You would be able then to write software that can run in that environment, inspect available functions and make use of those that are relevant. Does it appear to be practical? Hard to say. But it surely will be immediately profitable. It will also be very hard to be certain that the product works correctly in every combination of peripherals that come online and offline as they please.
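        A minimal Python sketch of that introspection idea; the registry, device records, and capability names are all invented for illustration:

```python
# Peripherals register self-descriptions, and programs query for
# capabilities they can use rather than hard-coding a device list.
REGISTRY = []

def register(device):
    REGISTRY.append(device)

def find(capability):
    """Return every registered device advertising the capability."""
    return [d for d in REGISTRY if capability in d["capabilities"]]

register({"id": "cam0", "capabilities": {"video"}, "resolution": (1920, 1080)})
register({"id": "lidar0", "capabilities": {"ranging"}, "max_range_m": 120})
register({"id": "cam1", "capabilities": {"video", "ranging"}})  # depth camera

print([d["id"] for d in find("ranging")])   # ['lidar0', 'cam1']
```

        A program written against capabilities rather than device models keeps working when peripherals come online and offline as they please.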

  • (Score: 5, Interesting) by jmorris on Friday April 10 2015, @01:59AM

    by jmorris (4844) on Friday April 10 2015, @01:59AM (#168587)

    If a software based system is expected to remain in service more than a few years, making it even more complex than current systems is exactly the wrong direction. A modern system is already a clusterfsck of bogosity held together by a constant stream of patches.

    TCP/IP itself isn't even really securable in any sort of real sense.

    What is needed is a total rethink, from the ground up. Silicon designed to facilitate secure code, provably secure software designs, languages and tools to produce provably secure code, securable data interchange methods developed, tested, hardened and finally standardized, and interchangeable hardware modules with documentation so precise and complete that the very silicon could be clean-room reconstituted from only the published docs... and all this demonstrated by doing it. New traditions will be required too, such as one mandating that this hyper-complete documentation, along with all source code (even 'closed source' code), always be loaded onto each machine so that maintenance, decades after the original manufacturer and programmer are not just gone but perhaps even unknown, always remains possible. UI conventions will need to be thought out to a point where they could be standardized and frozen, so that fifty-year-old interfaces would still be comprehensible without a rewrite.

    And all these things and more will be done. Just as soon as a hack of the existing insecure 'roach motel' software infrastructure causes a large enough loss of human life. Imagine a near future when millions of Google Cars get a bogus update by the 'Cyber Jihad' and on a preset date they all pair off and head end each other at max speed. Just one incident like that will force the end to current programming practice. Agile my ass. Ship something now and patch em in the field is more like it.

    • (Score: 0) by Anonymous Coward on Friday April 10 2015, @05:29AM

      by Anonymous Coward on Friday April 10 2015, @05:29AM (#168636)

      This is similar in justification, if not implementation, to Urbit [github.com] and its idea of Martian Programming [blogspot.com].

      Even if it leads nowhere, there are people attempting this today.

  • (Score: 2) by Dunbal on Friday April 10 2015, @02:01AM

    by Dunbal (3515) on Friday April 10 2015, @02:01AM (#168589)

    I can provide this software, it will cost 5 trillion dollars and will never work. Kind of like all the other defense projects in the US. I'll take the first tranche now please.

  • (Score: 4, Funny) by Anonymous Coward on Friday April 10 2015, @02:24AM

    by Anonymous Coward on Friday April 10 2015, @02:24AM (#168602)

    is already 14% there.

  • (Score: 3, Insightful) by arslan on Friday April 10 2015, @02:38AM

    by arslan (3462) on Friday April 10 2015, @02:38AM (#168606)

    Was there a concept of software systems 100 years ago? Who knows what the future will look like 100 years from now; software systems could be the rotary phone of yesteryear.

  • (Score: 5, Informative) by tibman on Friday April 10 2015, @03:12AM

    by tibman (134) Subscriber Badge on Friday April 10 2015, @03:12AM (#168614)

    Sounds like Multics is excellent at live hardware changes.

    At the MIT system, where most early software development was done, it was common practice to split the multiprocessor system into two separate systems during off-hours by incrementally removing enough components to form a second working system, leaving the rest still running the original logged-in users. System software development testing could be done on the second machine, then the components of the second system were added back onto the main user system, without ever having shut it down.

    It also had dynamic linking that is still superior to today's implementations (so WP says).
    https://en.wikipedia.org/wiki/Multics [wikipedia.org]
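    For comparison, today's dynamic linking boils down to resolving symbols at runtime; on a POSIX system this can be demonstrated from Python via ctypes (POSIX-specific: `CDLL(None)` exposes the running process's global symbols):

```python
import ctypes

# Resolve the C library's strlen at runtime, the way a dynamic linker
# binds an unresolved symbol on first use.
libc = ctypes.CDLL(None)
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

print(libc.strlen(b"Multics"))  # 7
```

    Multics bound symbols lazily per segment, system-wide; the modern equivalent above is per-process and considerably cruder.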

    --
    SN won't survive on lurkers alone. Write comments.
  • (Score: 1, Insightful) by Anonymous Coward on Friday April 10 2015, @03:35AM

    by Anonymous Coward on Friday April 10 2015, @03:35AM (#168616)

    It's not the software. It's the hardware that doesn't last. Who runs DARPA these days? Fucking morons.

    • (Score: 0) by Anonymous Coward on Friday April 10 2015, @02:30PM

      by Anonymous Coward on Friday April 10 2015, @02:30PM (#168750)

      I think you're more on the spot than the others. The problem is that there isn't enough financial incentive for a company to make a product that will last for a hundred years. What can you buy today that will last that long? A house (maybe), a gun; I can't think of anything else.

      Example: XP could last another 10 years, but MS needs to make money!

  • (Score: 4, Insightful) by anubi on Friday April 10 2015, @03:57AM

    by anubi (2828) on Friday April 10 2015, @03:57AM (#168619) Journal

    I have been using C++ for quite some time now. Across several platforms.

    I believe damned near anything can be done with C++ with proper libraries.

    I still see old DOS systems cranking along on old Borland C++. A good 25 years old.

    Even the latest Arduino incarnations still speak C++. So does NetBurner's Micrium uCOSII.

    As far as the computer architectures go, we really need a good stable open source platform. YMMV, but I believe the Motorola 68000 series had about the most elegant architecture I had ever seen in a machine.

    I used to work for Chevron, and if any company took long-term views on things, they did. We had quite a bit of stuff 100 years old still running.

    Edison is another company which places value on the long term.

    It's a concept you consider during design... how long do you expect it to last? If you are designing a toilet, you may design it to last a hundred years ( *especially* if YOU are the one who is going to have to fix the thing if it leaks! ); if you are designing an information retrieval system for a bean-counter, designing one to last just long enough for your paycheck to clear is probably good enough these days. I have worked for these bean counters and know a lot of them are far more concerned with the numbers for the quarter than the numbers ten years from now... it's a "tragedy of the commons" thing. Get while the getting is good. I have seen enough of this short-sightedness in the industry to make me nauseous. Push the latest boutique language-of-the-day on these guys. They will pay for it. It will go right back out of style and you can rinse, lather, and repeat ad nauseam. The investor class will freely pay for the damndest things.

    Talk about old software still being useful: I still use my old copy of Futurenet Dash-2 occasionally. Same with Borland Turbo C and Eureka. Mathcad. However, my Futurenet and PADS stuff is giving way to EAGLE, now that I know I can keep an offline system up and running thanks to a friend who showed me how to keep XP running without it having to be "activated". I was far too afraid to trust anything that had to phone home to get permission to run... I had already been burned with Circuit City Divx discs, and already knew adopting such ephemeral technology was mostly about appeasing companies with far more cash than common sense with eye candy.

    I have already seen a lot of very successful companies still using ancient technology, because it did the job. Just as one would still use an operable Pelton wheel in a hydroelectric plant - even if it was a hundred years old. The trouble with a lot of the old technology is that it's like the old Maytag ad - the support people are unemployed, 'cuz the thing just keeps working.

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 3, Insightful) by lentilla on Friday April 10 2015, @12:28PM

      by lentilla (1770) on Friday April 10 2015, @12:28PM (#168717)

      If you are designing a toilet, you may design it to last a hundred years ( *especially* if YOU are the one who is going to have to fix the thing if it leaks! )

      There is a fundamental difference between those who build toilets and those who count beans. The "builders" create things that are used by other people - they build one toilet today, and it gets used tomorrow, and the next day, and so on. The "counters", on the other hand, do some counting today, they do some more counting tomorrow, and so on. The fundamental difference is that a "builder" creates something to perform a function whereas a "counter" only performs a function. That's why toilets last a while whereas the quarterly report only lasts to the next quarter.

      I'm probably a bit biased and fit in the "builder" category - any time I repeat a function I'm accumulating notes on how to automate it.

      its a "tragedy of the commons" thing

      Perhaps not. Perhaps it's just a different mindset. There is a large proportion of the population to whom "good enough" is indistinguishable from "properly done" - as long as the answer or result is what is required, the path taken to arrive there is not important.

  • (Score: 4, Informative) by NotSanguine on Friday April 10 2015, @05:04AM

    The UCSD P-System [wikipedia.org] FTW!

    Just port the execution environment [wikipedia.org] to new hardware platforms with performance tweaks and you're good to go!

    I think I just saved the world! Thank you, 20th century!

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 2) by HiThere on Friday April 10 2015, @06:06PM

      by HiThere (866) on Friday April 10 2015, @06:06PM (#168801) Journal

      That's not a bad answer, but it needs a bit of generalization. ANY virtualized environment can last as long as people are willing to keep it running. The P-System may have the longest history, but that's also true of the Java environment. If you pick a version and freeze it, it can easily be ported. And it's also true of the Python virtual machine. And...

      The critical abstraction here is that there's a software implementation layer between the specified code and the hardware. And also you need to pick one particular version, and allow no changes except bug fixes. Personally I'm tempted to say the best choice is a VM, say the machine behind qemu. But that's just because it's more flexible.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 3, Interesting) by VortexCortex on Friday April 10 2015, @07:35AM

    by VortexCortex (4067) on Friday April 10 2015, @07:35AM (#168659)

    Every problem in Computer Science can be solved by adding another layer of indirection.

    So you just place a shim between the software and the operating environment/hardware: A virtual machine.

    You write code to the VM's opcodes, and every system it runs on needs a VM implementation (like how you need a compiler on new platforms for native languages). This solves the need to rewrite software (including the compiler, if it's self-hosting). The problem with this approach is that it requires additional computation. Interpreted bytecode uses additional CPU at runtime for each operation translated. JIT (just-in-time) compilers use additional CPU to translate parts or whole programs into native code at runtime. AoT (ahead-of-time) compilers translate bytecode once before runtime.
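    A minimal sketch of that shim: programs target a tiny invented bytecode, and each new platform only needs this small interpreter (the opcodes and encoding are made up for illustration):

```python
# A four-opcode stack machine: the program below survives any hardware
# change that can still run this interpreter.
PUSH, ADD, MUL, HALT = range(4)

def run(bytecode):
    stack, pc = [], 0
    while True:
        op = bytecode[pc]
        if op == PUSH:
            stack.append(bytecode[pc + 1]); pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b); pc += 1
        elif op == HALT:
            return stack.pop()

# (2 + 3) * 4
prog = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]
print(run(prog))  # 20
```

    A JIT or AoT backend would translate `prog` to native code instead of dispatching opcode by opcode, trading translation time for runtime speed.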

    My compiled language's VM uses install-time linking with cached AoT translation, so it only uses additional CPU once, at install time. Like LLVM, my VM can target multiple hardware types. Unlike LLVM, my VM's "intermediary opcode" (bytecode) can also run as-is, interpreted. This allows sandboxing even when the hardware lacks the feature (e.g. 80x86 [un]real mode). I can also translate code into another language, like C, C++, Delphi, Perl, PHP, Lua, Java, JavaScript (ASM.js), (and more) and run it, given that the small core runtime for the target lang is implemented. I can build and deploy my entire product line on a new platform in two to four weeks while the fools who are still NOT writing code in a meta-language are still trying to port "Hello Database Connection" to $NEW_LANG. A self-hosting VM is a much different beast than a self-hosting compiler. It took over a decade to build, and still gives me the edge over teams of young whippersnappers. When I retire I'll release the code in damn near every lang including $ANY: a bootstrap from a simple subset of opcodes which, once implemented (in "native" code), can emulate a full VM stack that's been translated to said subset (thus allowing one to evade the Ken Thompson Compiler Hack [bell-labs.com]).

    I also have a VM based on Babbage's Analytical Engine. In the early 1840s Ada Lovelace wrote the first algorithm meant to run on a machine, and it still runs today, even though the machine she wrote it for was never completed, thanks to VMs. There's one requirement met. It's time to stop reinventing wheels. VM langs like Java and Perl 6 exist, along with their API libraries. What more could they want? To take it to the next level? Then a self-hosting Turing-complete machine is the only fundamental computational algorithm required. Just look in any living cell... On computers the parallel would be a self-hosting VM -- a compiler is only a subset of this mechanism and is tied to the hardware unless it's for a VM (which is tied to hardware unless the VM is self-hosting, hence the dual interpreted and compilable bytecode). All programs (even DNA) need an environment to run in. With C, GNU+Linux, and an emulator written in C you can kludge together a self-hosting VM. I've tightened and optimized this loop to more efficiently harness the power of the same fundamental & essential cybernetic system that life itself uses.

    IMO, code should never be distributed as native machine opcodes (hence mRNA vs DNA). Bytecode as an intermediary format can be translated & linked into OPTIMAL native code for the specific machine once, upon install. Source-based distros like Gentoo can be said to use C as an intermediary opcode. If your OS is not a compiler, you're doing it wrong, since you let hardware specifics (machine code specs) leak into user-space (you'll make a virus-like ecosystem instead of a single self-procreating life). The main problem with today's OSes is that the last step from compiled intermediary code should be the OS's job; otherwise the hardware/software barrier has been breached, and the OS cannot ensure its programs are valid (like DNA's per-duplication error correction does). The OS/VM-level code between the two I dub neither hard nor soft but Wetware.

    Nature taught me everything I need to know about what works for sustainable cybernetic systems.

    • (Score: 1) by lizardloop on Friday April 10 2015, @12:38PM

      by lizardloop (4716) on Friday April 10 2015, @12:38PM (#168720) Journal

      I like your post, although I feel like I didn't fully understand it.

      You're essentially saying DARPA needs to start writing everything in Perl and Java?

  • (Score: 2) by darkfeline on Friday April 10 2015, @06:06PM

    by darkfeline (1030) on Friday April 10 2015, @06:06PM (#168802) Homepage

    Open source (open specification systems and programming languages) and plaintext, what more do you need? Architecture-wise, we aren't moving beyond 64-bit either, which IIRC has more than enough address space to address every single particle in the universe.

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 2) by maxwell demon on Friday April 10 2015, @06:27PM

      by maxwell demon (1608) Subscriber Badge on Friday April 10 2015, @06:27PM (#168808) Journal

      The number of particles in the universe is about 10^82 [answers.com] while the number of addresses in a 64-bit machine is 2^64 ≈ 1.8×10^19, which is quite a bit smaller. Indeed, it wouldn't even be enough to address each star individually.

      Even the IPv6 address space (128 bits) isn't sufficient to address each particle in the universe; however it would at least provide 1000 addresses per star.
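      The comparison checks out with exact integer arithmetic (the star count below is an assumed order-of-magnitude estimate, not a figure from the comment):

```python
# Exact integers, no floating point needed.
particles = 10**82      # rough particle count cited above
stars = 10**22          # common order-of-magnitude estimate (assumption)

print(2**64)                # 18446744073709551616, i.e. about 1.8e19
print(2**64 < stars)        # True: not even one 64-bit address per star
print(2**128 < particles)   # True: 128 bits still can't cover every particle
```

      So any "address everything" scheme needs well over 128 bits, or a hierarchy.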

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by maxwell demon on Friday April 10 2015, @06:35PM

        by maxwell demon (1608) Subscriber Badge on Friday April 10 2015, @06:35PM (#168809) Journal

        Err ... I forgot the prefactor; it's indeed almost 8000 addresses per star.

        --
        The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 0) by Anonymous Coward on Friday April 10 2015, @07:58PM

    by Anonymous Coward on Friday April 10 2015, @07:58PM (#168823)

    This will never happen. Money has to be involved somehow.

  • (Score: 0) by Anonymous Coward on Saturday April 11 2015, @06:07AM

    by Anonymous Coward on Saturday April 11 2015, @06:07AM (#168907)

    Don't they use Forth on probes? What's on Pioneer, Voyager, etc.? Those things have already run for decades.