
posted by on Friday April 10 2015, @01:21AM   Printer-friendly
from the stay-on-my-lawn-for-a-long-long-time dept.

From the phys.org article:

As modern software systems continue inexorably to increase in complexity and capability, users have become accustomed to periodic cycles of updating and upgrading to avoid obsolescence—if at some cost in terms of frustration. In the case of the U.S. military, having access to well-functioning software systems and underlying content is critical to national security, but updates are no less problematic than among civilian users and often demand considerable time and expense. That is why today DARPA announced it will launch an ambitious four-year research project to investigate the fundamental computational and algorithmic requirements necessary for software systems and data to remain robust and functional in excess of 100 years.

The Building Resource Adaptive Software Systems, or BRASS, program seeks to realize foundational advances in the design and implementation of long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate. Such advances will necessitate the development of new linguistic abstractions, formal methods, and resource-aware program analyses to discover and specify program transformations, as well as systems designed to monitor changes in the surrounding digital ecosystem. The program is expected to lead to significant improvements in software resilience, reliability and maintainability.

DARPA's press release and call for research proposals.

 
  • (Score: 5, Funny) by meisterister on Friday April 10 2015, @01:41AM

    by meisterister (949) on Friday April 10 2015, @01:41AM (#168583) Journal

    Given that the BSD developers care about functionality and stability more than pandering to the lowest common denominator, I would fully expect a BSD install to last for several decades if not a century (barring component failures).

    They should also use a KISS approach, since I don't expect that anyone 100 years from now would want to maintain this clusterf*ck http://en.wikipedia.org/wiki/Systemd [wikipedia.org]

    --
    (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
  • (Score: 5, Interesting) by sigma on Friday April 10 2015, @01:59AM

    by sigma (1225) on Friday April 10 2015, @01:59AM (#168586)

    Given that the BSD developers care about functionality and stability more than pandering to the lowest common denominator, I would fully expect a BSD install to last for several decades if not a century (barring component failures).

    Then you're completely missing the point of BRASS. Their goal is to have software that is ADAPTIVE - software that can modify itself to cope with hardware and other resource changes and developments. BSD's stability (stagnation?) is the opposite of the dynamic system DARPA are envisioning, and like it or not, systemd looks much more like a step down that adaptive path than any other init system.

    The Building Resource Adaptive Software Systems, or BRASS, program seeks to realize foundational advances in the design and implementation of long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate.

    • (Score: 1, Insightful) by Anonymous Coward on Friday April 10 2015, @02:01AM

      by Anonymous Coward on Friday April 10 2015, @02:01AM (#168588)

      fuck you and systemd

      • (Score: 3, Funny) by sigma on Friday April 10 2015, @02:05AM

        by sigma (1225) on Friday April 10 2015, @02:05AM (#168591)

        Fuck me?

        Sorry, AC, but I don't go in for these backdoor shenanigans. Sure, I'm flattered, maybe even a little curious, but the answer is no!

        • (Score: 4, Interesting) by tynin on Friday April 10 2015, @02:23AM

          by tynin (2013) on Friday April 10 2015, @02:23AM (#168600) Journal

          I'm pretty sure you don't give a toot about systemd, because that isn't what this is about. It is about truly adaptive software that can integrate in the face of changing hardware. One of the places these systems will make sense is in infrastructure that just needs to do one thing well, and for a long, long time. These systems will not be as modern as the new tech of that day yet to come, but they don't need to be; they just need to work. Some things shouldn't need a staff of admins constantly relearning the latest init system of the day to keep the machine working after the next patch. Having a solid high-tech infrastructure that can be repaired, and perhaps scaled with the hardware tech of the day, would be a boon across the board for the entire baseline of civilization.

          • (Score: 1, Insightful) by Anonymous Coward on Friday April 10 2015, @07:49AM

            by Anonymous Coward on Friday April 10 2015, @07:49AM (#168664)

            You mean like TCP/IP along with the associated alphabet soup of protocols? Packetheads figured that stuff out decades ago. It would be nice to apply that methodology to other things. The track record for networking robustness is amazing.

            • (Score: 0) by Anonymous Coward on Friday April 10 2015, @08:24AM

              by Anonymous Coward on Friday April 10 2015, @08:24AM (#168671)

              Apparently TCP/IP software was not able to automatically adapt to a growing number of connected computers, so a manual update (IPv6) was needed.
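The scale gap behind the IPv6 transition the AC mentions is easy to quantify with a back-of-the-envelope calculation (a quick illustrative sketch in Python; the numbers follow directly from the 32-bit and 128-bit address widths):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32    # about 4.3 billion addresses
ipv6_total = 2 ** 128   # about 3.4e38 addresses

print(ipv4_total)                  # 4294967296
print(ipv6_total // ipv4_total)    # 2**96 IPv6 addresses for every IPv4 address
```

The protocol itself could not grow its address field in place: the 32-bit width was baked into every packet header, router, and socket API, so expanding it required a deliberate, manual redesign rather than automatic adaptation.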

          • (Score: 4, Funny) by Gaaark on Friday April 10 2015, @04:48PM

            by Gaaark (41) Subscriber Badge on Friday April 10 2015, @04:48PM (#168773) Journal

            Having a solid high tech infrastructure that can be repaired and perhaps scaled with the hardware tech of the day would be a boon across the board for the entire baseline of civilization.

            And call the software "Hari Seldon"

            --
            --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 3, Funny) by lentilla on Friday April 10 2015, @02:12AM

      by lentilla (1770) on Friday April 10 2015, @02:12AM (#168594)

      systemd looks much more like a step down that adaptive path

      Well put. Slightly further down that road and we'll be calling it "SkyNet".

    • (Score: 2) by c0lo on Friday April 10 2015, @02:35AM

      by c0lo (156) on Friday April 10 2015, @02:35AM (#168605) Journal

      Their goal is to have software that is ADAPTIVE - software that can modify itself to cope with hardware and other resource changes and developments.

      Like what? Write a controller for a caterpillar-track robotic tank and have it adapt without difficulty to Star Wars walkers?

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0
      • (Score: 3, Interesting) by sigma on Friday April 10 2015, @03:54AM

        by sigma (1225) on Friday April 10 2015, @03:54AM (#168617)

        See tibman's comment below. http://soylentnews.org/comments.pl?sid=6948&cid=168614 [soylentnews.org]

        It's about software that's tolerant to large disruptions to its hardware, potentially including, as you say, different robotics platforms.

        Frankly, it's not that hard to imagine - older platforms like Multics and even commodity Amiga computers had some very good automatic configuration systems. A redesign that included the ability to search and integrate something like OSRS projects [osrfoundation.org] on demand should be able to handle robotic hardware variants.

        Better hardware design standards that included a modern plug-and-play version of the Amiga's Autoconfig would go a long way toward making component changes seamless, as would open hardware with ROM-based self-documenting properties.
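What "ROM-based self-documenting properties" might look like can be sketched as devices exposing a machine-readable descriptor that the OS enumerates without a preinstalled driver. This is a hypothetical illustration in Python; none of the names (`DeviceDescriptor`, `enumerate_bus`, the capability strings) come from any real standard:

```python
from dataclasses import dataclass

# Hypothetical Autoconfig-style descriptor a device's ROM might expose.
# All field names are illustrative, not taken from any real specification.
@dataclass
class DeviceDescriptor:
    vendor: str
    device_class: str   # e.g. "storage", "display", "camera"
    registers: dict     # symbolic register name -> offset
    capabilities: set   # feature flags the OS can query

def enumerate_bus(bus):
    """Choose a driver strategy from each device's own descriptor."""
    plan = {}
    for dev in bus:
        # The OS never hardcodes a device model; it reads capabilities.
        plan[dev.vendor] = "dma-driver" if "dma" in dev.capabilities else "pio-fallback"
    return plan

bus = [
    DeviceDescriptor("AcmeDisk", "storage", {"status": 0x00}, {"dma"}),
    DeviceDescriptor("OldCard", "display", {"fb": 0x10}, set()),
]
print(enumerate_bus(bus))  # {'AcmeDisk': 'dma-driver', 'OldCard': 'pio-fallback'}
```

The point of the sketch is that the binding between software and hardware happens at enumeration time, from data the hardware itself supplies, so swapping a component does not require rewriting the software above it.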

        • (Score: 0) by Anonymous Coward on Friday April 10 2015, @07:46AM

          by Anonymous Coward on Friday April 10 2015, @07:46AM (#168663)

          Then it is no longer the software that is adaptable but the hardware that is fixed enough through time that the software does not need to change itself. Might as well call Windows infinitely adaptable because a USB stick can be plugged in with a patching script.

          • (Score: 2) by tibman on Friday April 10 2015, @01:26PM

            by tibman (134) Subscriber Badge on Friday April 10 2015, @01:26PM (#168734)

            A USB stick isn't a piece of hardware the OS is running on. The hardware shouldn't be fixed in time; that is the point. The software should be adaptable enough to recognize RAM, processors, and storage being added to and removed from the system. You should be able to bisect the bus and have the system still function (end users won't even notice).
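The kind of adaptation tibman describes, re-deriving capacity from whatever hardware is present rather than assuming a fixed machine, can be sketched minimally. This is a hypothetical illustration (the class and method names are invented for this example, not any real API):

```python
# Illustrative sketch: a component that resizes itself as resources
# appear and disappear, instead of assuming a fixed machine.
class AdaptivePool:
    def __init__(self, cpus):
        self.workers = max(1, cpus)

    def on_resource_change(self, cpus):
        # Re-derive capacity from the hardware present right now,
        # degrading gracefully but never dropping below one worker.
        self.workers = max(1, cpus)
        return self.workers

pool = AdaptivePool(cpus=8)
pool.on_resource_change(cpus=4)   # half the CPUs removed -> 4 workers
pool.on_resource_change(cpus=0)   # everything gone -> degrade, don't crash
print(pool.workers)  # 1
```

The real research problem is doing this for state-carrying resources (memory contents, in-flight I/O), not just stateless worker counts, but the shape of the contract is the same: capacity is a function of the current environment, queried continuously, never a compile-time constant.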

            --
            SN won't survive on lurkers alone. Write comments.
    • (Score: 3, Informative) by q.kontinuum on Friday April 10 2015, @08:00AM

      by q.kontinuum (532) on Friday April 10 2015, @08:00AM (#168665) Journal

      systemd looks much more like a step down that adaptive path than any other init system

      If only there was a "flamebait +1"... Some baits are just too entertaining to down-mod them ;-)

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 0) by Anonymous Coward on Friday April 10 2015, @08:21AM

      by Anonymous Coward on Friday April 10 2015, @08:21AM (#168670)

      Their goal is to have software that is ADAPTIVE - software that can modify itself

      Ah, self-modifying code. I thought that was identified as bad practice a long time ago. ;-)

  • (Score: 5, Interesting) by bzipitidoo on Friday April 10 2015, @05:07AM

    by bzipitidoo (4388) Subscriber Badge on Friday April 10 2015, @05:07AM (#168632) Journal

    Not a chance. In the past 30 years, we've moved from 8-bit to 16-, 32-, and 64-bit systems. Every one of those moves required a lot of reworking. You might think that after moving from 16 to 32 we'd have it down, and the shift to 64-bit would be easy, but no. Many programs have an implicit limit on the amount of data they can handle, often restricted to what 32-bit addresses allow, and must be extensively rewritten, not just recompiled, to expand their capacity. Systems have changed so much in so many other ways. Hard drives took a big jump from 40M to 500M in the mid-'90s, and that killed much of the interest in compressed file systems. The 80486 introduced some new operations that are key to running a multitasking OS. Graphics computation has shifted hugely, from CPUs driving primitive VGA graphics without any GPU at all to dedicated massively parallel GPUs. It took a massive rewrite of software to properly utilize that change, and we're still working on it. That's the reason the code for something like the original Doom game engine is no longer practical or particularly interesting -- it just isn't relevant to current graphics. It's also why the X Window System so badly needs a redesign, and why projects like Wayland have sprung up. The xlib part of X is full of 1980s cruft for having the CPU draw lines and other such primitive operations that GPUs do now.
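The "rewritten, not just recompiled" point is worth making concrete: when a width limit is baked into a program's data formats, no amount of rebuilding fixes it. A small illustration in Python, using a hypothetical file format that stores offsets as signed 32-bit integers:

```python
import struct

def pack_offset_32(offset):
    # A program that baked signed 32-bit offsets into its on-disk format
    # cannot address anything past 2 GiB, no matter how it is recompiled;
    # the format itself, and every file written in it, must change.
    return struct.pack("<i", offset)

# The last representable offset, one byte short of 2 GiB, is fine:
print(struct.unpack("<i", pack_offset_32(2**31 - 1))[0])  # 2147483647

# One byte further and the format simply cannot express it:
try:
    pack_offset_32(2**31)
except struct.error as e:
    print("overflow:", e)
```

This is exactly why 64-bit migrations forced rewrites: the limit lives in serialized structures, wire protocols, and pointer-sized assumptions scattered through the code, not in any single compiler flag.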

    OSes have also changed massively. In the days of DOS, everyone provided their own graphics drivers, and programs were quite free to just take over the system and ignore DOS. Protected mode was another huge advance that empowered a massive shift in OS technology, which then drove a big rewrite of a great deal of software to make apps more aware of system facilities. For instance, no DOS program had code to handle the "clipboard", and, without help, such programs can't participate in the copying and pasting between apps that is easy and routine now. Also, socket programming used to be a niche; now, with the Internet everywhere, networking libraries are much more important. Early Linux used the "a.out" executable file format and libc5. Changing to ELF and libc6 was another big move that required much reworking; a simple recompile was often not enough. Relatively new in hardware support is the No Execute bit for virtual pages. There could still be programs that deliberately modify their own machine code, and all of those will no longer work on a system that uses a No Execute bit; they must be modified. Who knows what the future will bring in the way of advances? Virtual machine support is still new, and still difficult to do cleanly on a PC.

    I don't think computing is settled enough yet to think of 100-year lifetimes. Programming languages are more numerous and divergent than ever, with only a broad consensus that structured programming, OOP, and functional programming are all good, but no agreement on the details.

    We're still stuck with a lot of legacy PC design. Shifting away from the antiquated PC platform to finally get rid of that will require much work.

    • (Score: 3, Interesting) by tftp on Friday April 10 2015, @06:38AM

      by tftp (806) on Friday April 10 2015, @06:38AM (#168654) Homepage

      You are describing existing execution environments. They are all unsuitable, of course; that's why DARPA is asking for a solution.

      I would think that the desired solution will come with its own, sufficiently abstract language and I/O, and all of that can run on any hardware that can execute the language (interpreted, or compiled into an IL, or whatever). This might work for tasks that are simple and abstract, like calculating digits of Math.PI. However, any software that operates hardware probably cannot be portable enough to do the job with acceptable efficiency. Sure, you could render a modern FPS with merely a setPixel() API, but that would not be such a great idea - especially if future monitors have not only (X,Y) but Z as well.

      To rephrase a classical joke: you can write software that will remain usable for 100 years, but nobody will want to use it, except for a few very special applications, like control circuits. You can run Windows 3.1 today, in a VM if you must; but why would you want to, when the only external connections in that OS are a CD-ROM and a floppy? It's pretty hard to design software that is not only functional so far in the future, but also useful. Most software today is made for a specific purpose, be it to control a TV set or to decode a compressed audio file and play the samples via some audio hardware. Such programs have no value outside of that compression format and that audio API.

      This DARPA contract will probably end up taking several years and several million dollars, and will deliver a souped-up VM capable of running a well-defined execution environment. Perhaps it will have some abstraction capabilities in the hardware. For example, if it has video cameras, you can enumerate them and find out their orientation, resolution, and day/night settings... you can poll for LIDARs, propulsion, energy sources - all the stuff that you could find in, say, a robot. You could expand this introspection to batteries, RAM, and thermal management. You would then be able to write software that can run in that environment, inspect the available functions, and make use of those that are relevant. Does it appear to be practical? Hard to say. But it surely will be immediately profitable. It will also be very hard to be certain that the product works correctly in every combination of peripherals that come online and offline as they please.
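tftp's introspection idea, a program that never assumes a fixed peripheral set but re-queries the environment and binds only to what exists right now, can be sketched in a few lines. This is a hypothetical illustration; the `Environment` class, its methods, and the capability strings are all invented for the example:

```python
# Hedged sketch of an introspectable execution environment: software
# queries for capabilities ("video", "range", ...) instead of binding
# to specific devices, and must cope with devices leaving at any time.
class Environment:
    def __init__(self):
        self.devices = {}  # device name -> set of capability strings

    def attach(self, name, caps):
        self.devices[name] = set(caps)

    def detach(self, name):
        self.devices.pop(name, None)

    def find(self, capability):
        # Return whichever devices currently offer the capability.
        return sorted(n for n, c in self.devices.items() if capability in c)

env = Environment()
env.attach("cam0", {"video", "night-mode"})
env.attach("lidar0", {"range"})
print(env.find("video"))   # ['cam0']
env.detach("cam0")
print(env.find("video"))   # [] -- the program must handle the loss
```

The last line is where tftp's closing worry lives: it is easy to enumerate what is present, but very hard to verify that the software behaves correctly under every sequence of attach and detach events.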