posted by cmn32480 on Wednesday September 16 2015, @02:47AM   Printer-friendly
from the how-long-is-long-term dept.

I don't like change for the sake of change, but it seems that all major OSes suffer from the same disease.

So I'm curious if you guys share these feelings.

I'm thinking that it would be really great to have a long-term support, backward-compatible OS having a well designed, stable, intuitive and accessible UI. This OS would target PCs (desktop, laptop, netbooks) and tablets.

To define my terms:

By long-term I mean at least 30 years.

Backward-compatible means all applications are able to be run for the whole lifetime of the OS on the same or compatible hardware, as long as they don't have bugs that are revealed by changes in HW or updates of the OS and don't depend on bugs in the OS implementation itself.

The UI will be largely shared by the PC and tablet versions of the OS. Some differences will be unavoidable to accommodate the small-form screen of the latter devices. The user interface will be composed of a collection of all good designs available today and some new ideas that help the users to better perform most common tasks.


Original Submission

  • (Score: 3, Insightful) by Anonymous Coward on Wednesday September 16 2015, @02:58AM

    by Anonymous Coward on Wednesday September 16 2015, @02:58AM (#236828)

    Mainframes

    • (Score: 5, Insightful) by khchung on Wednesday September 16 2015, @04:54AM

      by khchung (457) on Wednesday September 16 2015, @04:54AM (#236856)

      I came in to say exactly this. If you want a platform that can run your programs 30 years without any change, throughout all the hardware upgrades, buy a mainframe.

    • (Score: 3, Insightful) by zocalo on Wednesday September 16 2015, @08:01AM

      by zocalo (302) on Wednesday September 16 2015, @08:01AM (#236893)
      Not really applicable to the OP's end-user requirements though, except as a back-end to a set of portable thin-client based solutions... That said, I've worked in a manufacturing environment where we had Sun SPARCstations running SunOS that had been kept in service for over 20 years, which *does* meet the desktop requirement at least. But even with hardware built to Sun's very high standards of construction, spares were still required and got to be a bitch to source, even from eBay.

      So, with that in mind and very little information on what the user's application requirements are, perhaps something like ChromeOS with most of the data uploaded to the cloud and accessed via Web Apps might be the way to go? Web browsers have been around for ~25 years with essentially the same core UI and are still capable of rendering the original CERN HTML content, so that's in the ballpark - especially since we'd now be starting with the capabilities of HTML5. If you can abstract the hardware away entirely, or maybe just reduce it to a functional set of requirements for a hypervisor that could be reimplemented on any number of hardware platforms, and similarly abstract the applications away from the OS, then you can minimise the requirements between OS -> hardware and Apps/UI -> OS. Once you've done that you no longer care about the hardware or the OS so much, just that the apps can be made to run - and if that's trivial enough to do then you should have a chance of making it to 30 years.
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @03:52PM

        by Anonymous Coward on Wednesday September 16 2015, @03:52PM (#237007)

        Hiding all the updates in the "cloud" (somebody else's computer) does not make for a reliable computing experience.

        I have had Google Docs go from mostly working to unusable with an old browser I was using, due to a software update on Google's side. Google gave no advance notice, and I had no opportunity to roll back the changes. I was locked out of my documents until I could find a "modern enough" browser.

  • (Score: 4, Insightful) by Anonymous Coward on Wednesday September 16 2015, @03:01AM

    by Anonymous Coward on Wednesday September 16 2015, @03:01AM (#236829)

    The UI will be largely shared by the PC and tablet versions of the OS. Some differences will be unavoidable to accommodate the small-form screen of the latter devices. The user interface will be composed of a collection of all good designs available today and some new ideas that help the users to better perform most common tasks.

    Good luck and get the fuck off my lawn.

    • (Score: 2) by Hyperturtle on Wednesday September 16 2015, @11:48PM

      by Hyperturtle (2824) on Wednesday September 16 2015, @11:48PM (#237191)

      indeed.

      and the commentary is disappointing. I am a network guy. I am not running a router OS on a tablet, nor making the tablet a server, although I have to admit I have done both. But it sucked--it was to prove it was stupid to do so.

      Perhaps I was mistaken in thinking there would be a discussion here regarding things beyond what OS standards people use to entertain themselves.

      Of course, I am all for play... I'm a big gamer even if I have lost that edge over time (I can now afford hardware to make up for what youth I've lost!) -- but there is no way in heck that I would accept a drag and drop interface for the designs that I do. Maybe the "UI will be largely shared" is a suggestion from someone who doesn't do any significant amount of productive work on a computer?

      I would use a Surface tablet if Visio would integrate strongly with a stylus and let me draw on it easily, but paper is still easier to use (even if it means later converting that free-form diagram into something nice, in Visio or otherwise). But there is no way that a touch screen is going to work for my line of business, and people like this have no standing telling me how I should be using the UI just because it works better on their tablet for something unrelated.

      If we only could keep the classic and the new... but the classic of anything is only for old people, and always has been.

  • (Score: 2, Insightful) by Anonymous Coward on Wednesday September 16 2015, @03:01AM

    by Anonymous Coward on Wednesday September 16 2015, @03:01AM (#236831)

    The difficulty is that you cannot use current OSes for 30+ years because of the security model.

    Currently, ad-hoc debugging is used for OS (and everything else) development.

    IMO, if you want a secure system, you are going to have to prove the software correct. That also implies proving the API correct as well if you want it to be forward compatible for 30 years.

    Certain versions of the L4 Microkernel have been proven correct. Sub-sets of some common libraries have been proven correct.

    I feel that to prove hardware correct, we would have to go back to fuse-wire-based ROM, at least for development machines.

    • (Score: 3, Informative) by TheRaven on Wednesday September 16 2015, @09:34AM

      by TheRaven (270) on Wednesday September 16 2015, @09:34AM (#236908) Journal

      seL4 was 'proven' correct, for given values of 'proven'. As in, there was a security hole in their system call layer that was found the first day it was released as open source, because that was not part of the set of things that they were proving. They also assume certain things about the behaviour of the MMU that have not been verified.

      Hardware generally gets a lot more formal verification effort than software, because with a CPU you're going to be shipping a few million identical copies with little or no ability to patch in the field. Warren Hunt's group at UT Austin gets a lot of industrial funding for this reason.

      --
      sudo mod me up
      • (Score: 2) by theluggage on Wednesday September 16 2015, @10:09AM

        by theluggage (1797) on Wednesday September 16 2015, @10:09AM (#236913)

        seL4 was 'proven' correct, for given values of 'proven'. As in, there was a security hole in their system call layer that was found the first day it was released as open source, because that was not part of the set of things that they were proving.

        Ah yes, the great "formal methods" fallacy. You can't prove that a program that does a real-world job is "correct", as if it were a mathematical theorem, just that it matches the formal specification. (...any more than you can mathematically prove that the application of a theorem is correct). The main effect is to move the burden of "correctness" out of the code and into the formulation of the specification.
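
        To make that concrete with a toy example of my own (not taken from the seL4 work): a verified sort gives you a theorem shaped like the Hoare triple

        \[ \{\,\mathrm{True}\,\}\ \mathit{sort}(a)\ \{\,\mathrm{sorted}(a') \wedge \mathrm{perm}(a, a')\,\} \]

        which guarantees the output is a sorted permutation of the input - and says nothing about whether that comparison order was the one the user wanted, or whether the machine has the memory to finish. Those questions live in the specification, not the proof.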

        Also, it's only as "proven" as the compiler, the run-time libraries and the hardware (most CPUs have an errata list as long as your arm...)

        It may well be a useful exercise in some cases (ISTR that seL4 is a microkernel, i.e. basically a system to securely pass well-formed messages and reject badly-formed ones) so it sounds like a good application, but trying to formally prove an entire, working operating system would be like solving the wave equation for an elephant...

        • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @04:07PM

          by Anonymous Coward on Wednesday September 16 2015, @04:07PM (#237017)

          My point is that proving something matches a formal specification is still useful on any remotely complex computer system.

          Nobody knows the system from top-to-bottom. Abstraction is used over and over again. If you can prove each abstraction layer correct, you can eliminate whole classes of bugs where third-party code or hardware does not act as specified. If it is impossible for the underlying hardware to act as specified, the specification should be changed.

          My belief is that if software is routinely formally specified, we can also start imposing liability for deliberate back-doors. Currently, if a researcher finds a master password in a router, the manufacturer inevitably says they "forgot" to disable testing code. If the testing code makes the software operate outside the formally proven specification: you can now attribute their actions to malice, rather than stupidity.

  • (Score: 3, Interesting) by Anonymous Coward on Wednesday September 16 2015, @03:10AM

    by Anonymous Coward on Wednesday September 16 2015, @03:10AM (#236832)

    Doesn't any OS fit your specifications reasonably well?

    Pick any OS, any version, any age. Make sure everything works. Then, don't change anything for three decades.

    Backward-compatible means all applications are able to be run for the whole lifetime of the OS on the same or compatible hardware, as long as they don't have bugs that are revealed by changes in HW or updates of the OS and don't depend on bugs in the OS implementation itself.

    Check. That one was easy.

    The UI will be largely shared by the PC and tablet versions of the OS.

    If it is the same version, it will be the same. You could always go Windows if you want something both fancy and identical across devices without being limited to touch-screen controls.

    Some differences will be unavoidable to accommodate the small-form screen of the latter devices.

    Nah, see above. It would be quite nice to have a Linux WM that fit the bill though.

    The user interface will be composed of a collection of all good designs available today

    That is very subjective. I still like command line/win95 interfaces the best.

    and some new ideas that help the users to better perform most common tasks.

    I want that too, but that is the billion-dollar holy grail for every UI/UX professional.

    I get what you are saying and what you want. The best way to go about it is to start up your own distro, make it incredibly solid (hint: use BSD), give it that cross-platform WM (this is the hard part), then develop a community to keep it as static as possible (this is the harder part. hint: use BSD). That would just about do it. The new latest and greatest probably won't work with it, as change necessitates change. Seriously though, if you really want something that will likely be supported, work the same, and be backwards compatible decades from now, your best bet is FreeBSD or NetBSD. Everything else I don't expect to be recognizable after another decade of tweaking, let alone three.

    • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @09:10AM

      by Anonymous Coward on Wednesday September 16 2015, @09:10AM (#236905)

      Pick any OS, any version, any age. Make sure everything works. Then, don't change anything for three decades.

      Good luck trying to find replacement hardware that your OS supports when the parts inevitably fail 10, or 20 years down the road.

      • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @12:38PM

        by Anonymous Coward on Wednesday September 16 2015, @12:38PM (#236937)

        >> Good luck trying to find replacement hardware that your OS supports when the parts inevitably fail 10, or 20 years down the road.

        I would recommend stockpiling a lot of replacement parts.

      • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @12:54PM

        by Anonymous Coward on Wednesday September 16 2015, @12:54PM (#236947)

        Because there are no working Commodore 64s or Apple ][s left. There are no PowerPCs, no mainframes, no big iron still in production. Son, I really don't care how old you are, there are production batch computers still working in production that are older than you. As the saying goes, "Hardware is cheap."

      • (Score: 2, Interesting) by Eunuchswear on Wednesday September 16 2015, @03:28PM

        by Eunuchswear (525) on Wednesday September 16 2015, @03:28PM (#236993) Journal

        Good luck trying to find replacement hardware that your OS supports when the parts inevitably fail 10, or 20 years down the road.

        Emulation.

        I run ICL George 3 on my laptop, an OS that was written in the 1960s and had its last release in 1985.

        --
        Watch this Heartland Institute video [youtube.com]
        • (Score: 2) by Immerman on Wednesday September 16 2015, @05:07PM

          by Immerman (3985) on Wednesday September 16 2015, @05:07PM (#237035)

          This would actually be my preferred solution for a lot of things. I've had the devil's own time of it with any remotely modern OS though. Build a Windows 95/98/XP virtual machine on host OS 1 and everything works fine. Try to transfer that virtual machine to a new PC, though, and as often as not it absolutely refuses to boot. Same VM software (different version obviously, for a different host OS), same settings and virtual hardware, but no dice.

          If anyone can suggest a cross-platform VM that will consistently "just work" when migrating guest machines to a new host, please let me know.

          • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @07:38PM

            by Anonymous Coward on Wednesday September 16 2015, @07:38PM (#237086)

            Personally, qemu has worked fine for me since the 0.10 or 0.11 releases, maybe earlier.

            Just make sure you use 'raw' image files, not qcow, as I have inevitably lost whole images due to power failures or kernel faults in the host operating system.

            Beyond that, it still supports 0.11-style hardware profiles in the latest 2.x versions, supports KVM for acceleration, has plenty of hardware options, and tends to just work once you get a configuration you like set up.
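
            As a sketch of what that can look like (versioned machine types of this form exist in the 2.x series; the disk image name is invented for the example):

                # raw image: more robust against host crashes than qcow, per above
                qemu-img create -f raw legacy.img 10G
                # pin an explicit machine type so the guest keeps seeing identical
                # virtual hardware across future QEMU upgrades; use KVM acceleration
                qemu-system-i386 -M pc-i440fx-2.4 -m 512 -enable-kvm \
                    -drive file=legacy.img,format=raw

            Pinning -M to a versioned machine profile, rather than the default 'pc' alias, is what keeps a guest image portable across hosts and QEMU releases.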

    • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @10:14AM

      by Anonymous Coward on Wednesday September 16 2015, @10:14AM (#236915)

      That is very subjective. I still like command line/win95 interfaces the best.

      Wait, you honestly like the Win95 command line best?

      • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @12:57PM

        by Anonymous Coward on Wednesday September 16 2015, @12:57PM (#236950)

        You honestly don't know what a forward slash means or why someone would choose that over a backslash. This site might not be the one for you.

        • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @03:09PM

          by Anonymous Coward on Wednesday September 16 2015, @03:09PM (#236987)

          Obviously it's a path separator.

  • (Score: 5, Insightful) by frojack on Wednesday September 16 2015, @03:23AM

    by frojack (1554) on Wednesday September 16 2015, @03:23AM (#236836) Journal

    30 years?
    We'd still be stuck in DOS prompts for Christ's sake!! This sounds like a befuddled Windows XP user suddenly saddled with Windows 10.

    I fail to see the point in this. Everyone manages to learn a new system periodically, whether it is a computer or a car or a household appliance. It doesn't take that much effort to get used to a new system unless you have some learning disability. Further, there is not that much software from 30 years ago that is running unchanged today.

    All that is needed to keep old stuff around is a good emulation capability, and no, I don't mean Wine. I mean just an executable or a system module that would run that old software from the Pleistocene without installing a mess of finicky software.

    Other than that, there really isn't much point in a 30 year rule. We aren't doing the same things with computers now as we were 30 years ago.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 5, Interesting) by TheGratefulNet on Wednesday September 16 2015, @06:00AM

      by TheGratefulNet (659) on Wednesday September 16 2015, @06:00AM (#236866)

      "its the apps, stupid!"

      the os just supports the apps.

      for 'apps' I am using: emacs, sometimes vi, gcc, fvwm (in twm mode), whatever xterm-alike is still working and a few other things here and there. yeah, some browsers, but those are almost whole os/app combos of their own and they will change over time, no matter what.

      I started using unix in the mid 80's and almost all those things I listed, other than X11, are pretty much the same look, same feel, same keys, same mouse UI. the same fucking colors I started with at DEC with DECterms and DECwindows, I continue to use today on my fvwm1 in twm mode. is that sick, or what? ;) but I don't see the point of change unless it buys me something. and I've been wired to use keyboard/mouse combos to move windows, iconify them, cycle them, raise/lower them, resize them - and I've kept the same set of ui styles for about 30 yrs! maybe 25 if you want to round down.

      I don't use the same os, but it feels mostly the same when I'm in term mode, which is most of the time.

      I have no desktop. my desk is black and my term windows are black bg with green, amber, some other colors in fg. border of active window is clear and obvious, all others are grey. no icon box. no trash icons. no drive icons. no folders. no desktop 'things'. just terms and apps in their window frames. and when I do a 'ps' I see much less shit on my system than a gnome or kde or whatever the hell else 'desktop of the day' is running.

      I've kept the same system for decades. its the power of source ;) because I have source and things have not system-dee'd me (so far), I've been able to run the same look/feel/ui and I'm really speedy and efficient at it. not having to re-re-relearn a UI really is a god-send! so few people have any idea what its like to be able to use a tool that you learned 30 yrs ago and still use it the same way, with full effectiveness, today.

      its the very opposite of the throw-away generation. the 'we dont fix things' generation. sigh - the system-d generation. (sorry, yeah, I 'went there').

      --
      "It is now safe to switch off your computer."
      • (Score: 2) by LoRdTAW on Wednesday September 16 2015, @12:57PM

        by LoRdTAW (3755) on Wednesday September 16 2015, @12:57PM (#236949) Journal

        Exactly. If you want forward and backward compatibility, Unix/POSIX/X windows is the way to go. But I wouldn't bet the farm on it just yet.

        • (Score: 2) by FatPhil on Wednesday September 16 2015, @01:39PM

          by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Wednesday September 16 2015, @01:39PM (#236960) Homepage
          Systemd's shat on linux compatibility.
          Wayland's shat on X compatibility.
          The only real compatibility that is preserved is with command-line tools that need little more than argv, stdin, and stdout.

          Fortunately most of the stuff I do needs little more than a terminal, so I'm good. My desktop looks pretty much identical to how it did 22 years ago, apart from $COLUMNS being much larger.
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 1) by Z-A,z-a,01234 on Wednesday September 16 2015, @07:21AM

      by Z-A,z-a,01234 (5873) on Wednesday September 16 2015, @07:21AM (#236877)

      I didn't say that the GUI will remain the same, or the OS itself. Just that it could run old applications if necessary.
      This can be achieved on Windows with some 3rd party software for some classes of programs (like games).

      • (Score: 2) by mr_mischief on Wednesday September 16 2015, @04:10PM

        by mr_mischief (4884) on Wednesday September 16 2015, @04:10PM (#237018)

        XWayland exists. It allows people to run X apps on top of Wayland.

        • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @04:27PM

          by Anonymous Coward on Wednesday September 16 2015, @04:27PM (#237023)

          Is there also a way to run Wayland applications on top of X?

    • (Score: 3, Insightful) by Hairyfeet on Wednesday September 16 2015, @08:01AM

      by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Wednesday September 16 2015, @08:01AM (#236894) Journal

      Not to mention the hardware has changed enough that it would be pointless to keep it. Let's take XP for example: my first PC with WinXP was a 1.3GHz Pentium III with 256MB of RAM... would you REALLY want to be trying to run your programs on hardware that old? The IPC sucks, performance per watt sucks, and frankly there wouldn't be any upside over just picking up a $50 five-year-old PC off of Craigslist.

      This is why I thought the MSFT Windows model (before Win 10) was frankly perfect: you get a decade of support, and then if your hardware is still capable you could always buy a cheap upgrade copy right before release. Of course, we have had quad cores since '07, so that model may no longer work (and of course MSFT shat all over it with Win "just give us all ur data, we're Google now" 10), as most tasks general users have will work just fine on a first-gen quad. I've been using my late father's Phenom I 9600 desktop at the shop the past few months and for daily work? It's perfectly fine, and I see no reason why it couldn't continue to be used for a netbox in 2020. But a 30 year OS? Yeah... no, hardware changes too often to make this a good idea. Because while I'm using that 9600 because it works (as well as for sentimental reasons), I have zero doubt an Athlon 5350 would be able to do every task it does while using less than a dozen watts to the 50W-60W the 9600 averages, and I don't even want to know how little power a PC in 2037 will use compared to this 2007 PC. Hell, I still throw every Pentium 4 that crosses my desk in the garbage simply because of its power wasting; can you imagine how much power you'd be wasting using a PC of that age 30 years from now?

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
    • (Score: 2) by HiThere on Wednesday September 16 2015, @07:07PM

      by HiThere (866) Subscriber Badge on Wednesday September 16 2015, @07:07PM (#237073) Journal

      It's not that simple. What if a needed app requires the ability to "call home"? No way will that keep working for 30 years. Also, if you want the same system to keep working, you won't be able to adapt to new hardware. The summary explicitly mentioned tablets, and they haven't even been around 15 years, much less 30. No pre-tablet OS could work on them, because it couldn't support multi-touch. And this is true even under emulation.

      My best answer is pick something like a really basic Linux version, or a BSD, and disconnect it from the web. Make backups, and get backup hardware that you can stockpile in case of need. And DON'T EVER connect it to the web. Perhaps, since security wasn't mentioned, you could connect it indirectly via a relay system that had tight filters. And definitely don't ever connect it over WIFI. And accept that you aren't going to be updating any of your software, ever.

      Remember, 20 years ago was MSWin95. 30 years ago puts you back at the first Apple Macs and S100 computers.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 5, Interesting) by gman003 on Wednesday September 16 2015, @03:26AM

    by gman003 (4155) on Wednesday September 16 2015, @03:26AM (#236837)

    You know what interfaces haven't changed in decades? Hardware interfaces. ATA still works the same. PCI still works the same. VGA still works the same. USB is backwards-compatible all the way back to 1.1. I think modern OSes are even ISA-aware, due to LPC being software-compatible with ISA.

    What you want, then, is a virtual machine that can be run on whatever future hardware you have. Presumably x86 will still dominate in 30 years, but by then emulation should be easy anyways. Set the machines up with strictly-defined parameters - this CPU capability, this video processor, this sound chip, these storage devices on this bus, etc.

    If you really want to get fancy, put each application on a VM with an absolutely minimal OS, with interaction between VMs handled either via the host machine's OS, or virtual networking. Maybe make each VM render to a "window" with a fixed resolution, optionally pixel-doubled on HDPI screens. You wouldn't have to keep the host machine's OS compatible, since you can change the VM software without actually changing the contents of the VM images.
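
    One existing way to get the "interaction via virtual networking" part is QEMU's socket netdev, sketched here assuming two minimal per-app guest images (the image names are invented for the example):

        # VM 1 listens on a private virtual wire...
        qemu-system-i386 -m 256 -hda app1.img \
            -netdev socket,id=net0,listen=:1234 -device e1000,netdev=net0
        # ...VM 2 connects to it; traffic never touches the host's real network
        qemu-system-i386 -m 256 -hda app2.img \
            -netdev socket,id=net0,connect=127.0.0.1:1234 -device e1000,netdev=net0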

    • (Score: 2) by shortscreen on Wednesday September 16 2015, @05:45AM

      by shortscreen (2252) on Wednesday September 16 2015, @05:45AM (#236862) Journal

      ATA compatibility is no longer a given, since it has been replaced by AHCI.

      I heard something about recent video cards too but I can't remember what (dropped support for some low resolutions or scanrates??)

      • (Score: 2, Funny) by Anonymous Coward on Wednesday September 16 2015, @07:48AM

        by Anonymous Coward on Wednesday September 16 2015, @07:48AM (#236888)

        I heard something about recent video cards too but I can't remember what

        Yeah! I heard that too! Amazing, huh?

        • (Score: 1, Funny) by Anonymous Coward on Wednesday September 16 2015, @10:18AM

          by Anonymous Coward on Wednesday September 16 2015, @10:18AM (#236916)

          I heard something about recent video cards too but I can't remember what

          Yeah! I heard that too! Amazing, huh?

          That's interesting. I heard exactly the opposite.

    • (Score: 1) by Z-A,z-a,01234 on Wednesday September 16 2015, @07:27AM

      by Z-A,z-a,01234 (5873) on Wednesday September 16 2015, @07:27AM (#236878)

      There are two problems with VMs:
      1. they do not support old software (OSes) well: you cannot find drivers in the old OS for the virtual hardware of the latest version of your VM software
      2. performance is highly degraded, and it is difficult, if not impossible, to get 3D acceleration and so on.

      Nah, I want my native solution.

      • (Score: 2) by VLM on Wednesday September 16 2015, @12:14PM

        by VLM (445) on Wednesday September 16 2015, @12:14PM (#236932)

        Pick something to emulate that's popular in the computer crowd. So a PDP-8, or a PDP-11, or a VAX, or a 60s IBM mainframe. People find them artistically appealing, which is why emulators have been around continuously, with continuous support, since they were physically possible. The PDP-8 emu I compiled in '93 was "old" back then and worked fine last week, and as long as people study architecture and "software archaeology" or WTF it's called, people will STILL have compilable, working PDP-8 emus.

        It's 2015, performance doesn't matter. Nothing new has happened in many years (decades?) and only insiders can tell the difference.

        • (Score: 1, Funny) by Anonymous Coward on Wednesday September 16 2015, @01:58PM

          by Anonymous Coward on Wednesday September 16 2015, @01:58PM (#236966)

          In the worst case, you'll run the PDP-8 emulator on a Commodore 64 emulator on a DOS emulator on your Linux system. ;-)

      • (Score: 2) by turgid on Thursday September 17 2015, @05:41PM

        by turgid (4318) Subscriber Badge on Thursday September 17 2015, @05:41PM (#237583) Journal

        Don't use a hardware VM, use a software emulator like qemu or Bochs. That way you get to specify precisely the hardware you want to present to your legacy software (OS and applications).

    • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @01:01PM

      by Anonymous Coward on Wednesday September 16 2015, @01:01PM (#236951)

      Presumably x86 will still dominate in 30 years

      Surprisingly, 32-bit x86 has been quickly shoved out the door over the last five years in favor of AMD64. In just a few short years it will likely be difficult to buy a new 32-bit desktop CPU at all.

      • (Score: 2) by gman003 on Wednesday September 16 2015, @02:36PM

        by gman003 (4155) on Wednesday September 16 2015, @02:36PM (#236980)

        The currently favored nomenclature is for "x86" to refer to the entire family of instruction sets, with x86-16, x86-32 and x86-64 being used to differentiate between the different major operating modes.

        Seeing as even the newest i7 will power on in x86-16 real mode (or so I'm told), I don't foresee x86-32 software becoming incompatible anytime soon.

    • (Score: 2) by Immerman on Wednesday September 16 2015, @05:15PM

      by Immerman (3985) on Wednesday September 16 2015, @05:15PM (#237039)

      I quite agree. Now if only I could find a VM that would reliably support the same guest OS image across multiple VM versions and host OSes I'd be happy.

  • (Score: 5, Insightful) by Appalbarry on Wednesday September 16 2015, @03:28AM

    by Appalbarry (66) on Wednesday September 16 2015, @03:28AM (#236838) Journal

    The single biggest problem is that an OS does not exist in a vacuum. For instance:

    - hardware changes, evolves, and wears out. Most of the hardware that would have been used thirty years ago is pretty much extinct, and if you can find a working item you can't necessarily repair it.

    - Even if your primary computer can be maintained unchanged, your peripherals probably can't. Or, in the case of printers for instance, your consumables will stop being manufactured.

    - This all also assumes that you'll find one set of applications that will remain static for thirty years. If they change significantly your hardware and your OS will eventually also need to change.

    - As noted, security needs may force changes to software and the OS. I suppose if your machine is not connected to any network, and doesn't ever see USB devices, floppy disks, or other virus carriers you could ignore this.

    - UI choices are fluid, and not entirely predictable. People who twenty years ago typed c>wp to start their word processor now expect to double click an icon. Or, these days, tap it on screen. You can't assume that the lovely UI that you create today will be at all user friendly to people in ten, twenty, or thirty years.

    I started with a Commodore 64, and eventually moved through multiple DOS and Windows machines, with one side trip to Apple, then to Linux. I've had all sorts of hardware, big and small.

    Yet today probably 60% of my computer use happens on a tiny little device that I carry in my pocket. Twenty years ago that could have been imagined, but the technology didn't exist to make it happen.

    Which is a long way to say: if you plan to lock down your computing system for thirty years, you're by definition rejecting the next thirty years of evolution and improvement - a significant part of which you really can't imagine right now.

    • (Score: 2) by Common Joe on Wednesday September 16 2015, @04:01AM

      by Common Joe (33) <common.joe.0101NO@SPAMgmail.com> on Wednesday September 16 2015, @04:01AM (#236845) Journal

      And what happens when the 30 years are up? Upgrading to a modern OS along with its apps will be extremely difficult after a 30 year jump. More likely impossible.

      Interestingly enough, we have an experiment going on right now with a very similar scenario: our banking software. It is running COBOL code that is decades old. The only reason this is happening is that the banking industry would collapse without it. They are afraid to update the code because of the risk of introducing bugs and how much money it would cost. Of course, in the process, a lot of money is spent on it too. And interestingly enough, I'm pretty sure they keep their hardware and operating systems up to date throughout all of this. (Someone else can correct me if I'm wrong. I'm not a banking nor hardware guy.)

      In 200 years (wild guess), when hardware and software technology is more mature and more stable, we might be able to squeeze out the 30 year O.S.

    • (Score: 1) by Z-A,z-a,01234 on Wednesday September 16 2015, @07:35AM

      by Z-A,z-a,01234 (5873) on Wednesday September 16 2015, @07:35AM (#236880)

      That is simply wrong. There is z/OS from IBM that can still run applications from the 60s. Do you want to say that they didn't upgrade or change the OS?

      I'm thinking more in terms of two separate cases:
      - being able to run on old HW, but getting updates
      - being able to run on new HW, but capable of running old applications.

      Both premises are possible and to some large extent done in practice (Windows 7 32bit could still run some DOS applications)

      The problem is that Microsoft and Apple are more interested in making money, while Linux developers are more interested in building new stuff than supporting the existing software (and I totally understand that).

    • (Score: 4, Interesting) by bzipitidoo on Wednesday September 16 2015, @07:40AM

      by bzipitidoo (4388) on Wednesday September 16 2015, @07:40AM (#236885) Journal

      30 years ago was 1985. The most advanced x86 processor available was the 286. The 386 didn't hit the markets until summer of '86.

      The 8086 was never meant to support multitasking. The 286 was the first of the line with hardware support for multitasking, but it was so bad at it that running a multitasking OS on a 286 simply wasn't practical. The 286 was really clunky and slow at task switching. As I understand it, the 286 CPU has to be reset in order to switch tasks, which means that all the registers need to be saved, and a jump instruction set up to resume processing where it left off. The 386 handles task switching much better, but it still can't do semaphores easily. That's why the Linux kernel developers dropped support for the 386, and never even tried to support the 286. It wasn't until the 486 that the x86 line had all the pieces needed to run a multitasking OS without awkward workarounds.

      CPUs are still evolving. We moved from 16 bits to 32, and now 64. It's only in the last decade that the x86 line has included capabilities for better virtualization, something mainframes have done for decades. There's the whole pack of SSE, SSE2, SSE3, and SSE4 instructions to enhance graphics performance.

      Would anyone today really prefer to use a 10MHz 286 with 1M of RAM, rather than modern hardware? None of the major web browsers will run on such limited hardware. Last time I tried anything like that was on a 133MHz Pentium MMX with 96M of RAM, a vastly more powerful computer than the above mentioned 286, and Firefox 3.5 was glacial on it. Took 30 seconds just to come up. I also tried to run Stellarium on it, and it was so slow it was unusable. 5 minutes just to load. Makes the old Commodore 64 floppy drive seem speedy by comparison.

      Alternatively, does anyone want to run 16 bit DOS on modern hardware? You won't be able to use much of the capabilities. Might have 8G of RAM, and be able to use only a fraction of it, maybe only 16M. Imagine trying to partition a 1T hard drive into 33M chunks. That's 30,000 partitions to use all the space. You'd run out of drive letters long before.

      I agree that asking for 30 years of "stability" in OS functionality and UI design is, at this point, impractical.

      • (Score: 2) by TheRaven on Wednesday September 16 2015, @09:58AM

        by TheRaven (270) on Wednesday September 16 2015, @09:58AM (#236910) Journal

        The 286 was the first of the line with hardware support for multitasking, but it was so bad at it that running a multitasking OS on a 286 simply wasn't practical

        Yes it was. iRMX ran very well on the 286. The problem was that running a multitasking OS that could run legacy DOS programs was not really feasible, because Intel didn't really think legacy software was a thing back then. iRMX also ran well on 8086 and 8088 machines: you're conflating [preemptive] multitasking (which requires being able to save and restore contexts quickly and receiving timer interrupts - something even the 8088 had) with memory protection (which requires an MMU: the 286 had one, though the segmented memory system was not very much fun for operating systems).

        As I understand it, the 286 CPU has to be reset in order to switch tasks, which means that all the registers need to be saved, and a jump instruction set up to resume processing where it left off.

        This was only for switching back from protected mode to real mode. Task switching within protected mode programs (or within real mode programs, as DOS TSRs did) worked fine. The 286 was designed with the intention that you would either run a legacy OS or a modern one. If you ran a modern OS, you'd switch to protected mode and never switch back. Real mode programs could not run in protected mode though, so Windows had to be able to switch back to real mode, which it did via a processor reset.

        This was solved in the 80386 by adding VM86 mode, which provided a virtualised real mode environment (including a linear 20-bit address space) that was still backed by paged memory. DOS programs on Windows 3.1 used this mode, so memory errors in a DOS program would not affect the rest of the system (but memory errors in Windows programs would, because Windows did not use different page tables for each application).

        --
        sudo mod me up
      • (Score: 3, Informative) by tibman on Wednesday September 16 2015, @01:48PM

        by tibman (134) Subscriber Badge on Wednesday September 16 2015, @01:48PM (#236964)

        I'd argue that the Intel 80286 was not the best processor in 1985. The Motorola 68020 [wikipedia.org] could address 4 GB of RAM compared to the 286's 16 MB. The 68020 could also be clocked higher, at 33 MHz compared to the 286's 12 MHz. Even by today's standards, 4 GB is plenty of RAM. Though at max speed, 33 MHz is stupid slow. I'd like to blame the infinite levels of abstraction we are attempting to achieve in software as the reason for slowness : )

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 2) by bzipitidoo on Wednesday September 16 2015, @05:53PM

          by bzipitidoo (4388) on Wednesday September 16 2015, @05:53PM (#237055) Journal

          You're right, the Motorola architecture was better. Intel won anyway, same as VHS beat out BetaMax. The textbook for the computer architecture class I took had an appendix in which they ripped apart the x86 architecture for being terrible.

          x86 needed more general purpose registers, and what it had was too special purpose (only AX can do a multiply, but it can't do indirect addressing; you have to use BX or one of the few index registers for that, etc.). A bunch of registers were underused as specially devoted "segment" registers for its widely criticized segmented memory model. Not only is it CISC rather than RISC, it's full of cruft: the sets of pretty much useless instructions for working with unpacked and packed decimal numbers, such as AAA and DAA; the specialized looping and string search and comparison instructions such as LOOP and REPNE CMPSB; the whole business of POP and PUSH and CALL and RET; and the IN and OUT instructions. Naive character by character comparisons to search for strings are only acceptable for very small search spaces. Much better algorithms are known for that, and in fact Boyer-Moore was developed in 1977, a few years before the 8088.

          Stacks are a useful data structure, but often not the best for particular uses, and storing the registers one by one is inefficient. The x86 does have PUSHA, to store all the registers, but it's of limited help. It's often better to just store register contents with a bunch of MOV instructions and not even use PUSH. That way, if the stored contents are no longer needed, nothing need be done other than freeing the memory, whereas if the stack was used, it all has to be POPped off, or the stack pointer directly manipulated to clean up. CALL and RET are simply special cases of jumps combined with stack manipulation, and likewise it can often be better to use jumps. If compiler writers decide to eschew the stack altogether, it frees them to use an instruction like PUSHA to store register contents anywhere they like, trampling upon the stack pointer without worrying about messing up the call stack. As for IN and OUT, why even have such instructions? Why not just use memory mapping and do the I/O with the MOV instruction?

          I thought perhaps the x86 wasn't so bad for the 1980s, but no, they should have known better even then.

          • (Score: 2) by tibman on Wednesday September 16 2015, @07:57PM

            by tibman (134) Subscriber Badge on Wednesday September 16 2015, @07:57PM (#237091)

            Sounds like you really know your stuff. So glad you didn't want to argue : ) I've been studying the original 68k and attempting to build a minimal computer. That's the only reason i was able to chime in at all. Assembly seems like a lot of fun but probably more-so on resource constrained systems. Do you still work with systems at that low-level?

            --
            SN won't survive on lurkers alone. Write comments.
            • (Score: 2) by bzipitidoo on Thursday September 17 2015, @01:33AM

              by bzipitidoo (4388) on Thursday September 17 2015, @01:33AM (#237229) Journal

              Thank you. Yes, but as a hobby. Some time ago, the notion of taking RISC to the ultimate extreme gained some popularity. Build a CPU that has just one instruction. SUBtraction was a popular choice. NAND or NOR were of course known as sufficient to achieve universality, while XOR was not. I thought of having the one instruction be MOV, making a dataflow computer. The registers, of which there'd be lots, at least 256, would be connected in various ways, like having R3 always be the sum of R1 and R2. To perform an add, the computer moves a value into R1 and another value into R2, then R3 would automatically update, and the sum could be copied out of R3. A subtraction could be performed by moving values into R3 and R2, then R1 would automatically update to contain the difference.
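
              As a quick C toy of that MOV-only dataflow idea (my own minimal reading of it: this models just the forward wiring R3 = R1 + R2; the machine described above also runs the constraint backwards, so a write to R3 would expose the difference in R1):

                  #include <stdio.h>

                  /* Hypothetical MOV-machine: 256 registers, with R3 hard-wired
                     to the sum of R1 and R2. The only instruction is mov. */
                  static int R[256];

                  static void mov(int dst, int value)
                  {
                      R[dst] = value;
                      if (dst == 1 || dst == 2)   /* the "wiring" recomputes the sum */
                          R[3] = R[1] + R[2];
                  }

                  int main(void)
                  {
                      mov(1, 40);                 /* move in the two addends... */
                      mov(2, 2);
                      printf("%d\n", R[3]);       /* ...and read out 42 */
                      return 0;
                  }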

              I know a fair bit about the x86 architecture, but many of the finest details I do not know. If I needed to know, I could pick it up easily enough. But the first CPU I learned was the 6502, used in the Apple II and Commodore 64. It also had a stack. Call stacks were very popular back in the day.

              I forget just when Intel made the move to RISC for the x86; maybe that was the Pentium. I think the 486 is still straight CISC. Those Pentiums are actually RISC under the hood. They take each x86 instruction and translate it to the underlying RISC instructions they now use to do the work. For opcodes that are now rarely used because they're obsolete, such as the aforementioned AAA, they've set up a vectoring process to load the several RISC instructions necessary to carry out those complicated and rare CISC instructions. Consequently, such instructions are relatively slow on these modern CPUs, but if backwards compatibility was to be kept, it had to be done somehow. It's a mess. The x87 math coprocessor is an even worse mess. The x86 is basically operations combined with load and store, with some stack manipulation cobbled on, but the whole x87 portion was stack based. Why they opted for that, rather than sticking with the same style as the x86, is a mystery. The architecture should be scrapped, really. If someone wants to run an old x86 binary, just use an emulator.

              And all this time, ARM quietly built their own simpler RISC architecture. I don't know ARM much, but from what I hear it seems to have an inherent advantage in power usage.

              • (Score: 2) by tibman on Thursday September 17 2015, @02:57AM

                by tibman (134) Subscriber Badge on Thursday September 17 2015, @02:57AM (#237260)

                You may enjoy TIS-100, an assembly programming game on a unique architecture: http://www.zachtronics.com/tis-100/ [zachtronics.com]

                --
                SN won't survive on lurkers alone. Write comments.
                • (Score: 2) by bzipitidoo on Thursday September 17 2015, @12:53PM

                  by bzipitidoo (4388) on Thursday September 17 2015, @12:53PM (#237415) Journal

                  Funny you mention that, as I am currently enjoying an older Zachtronics game, SpaceChem.

  • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @03:41AM

    by Anonymous Coward on Wednesday September 16 2015, @03:41AM (#236841)

    Until recently, Enlightenment 16 + Debian fit the bill ;-)

    • (Score: 2) by janrinok on Wednesday September 16 2015, @07:16AM

      by janrinok (52) Subscriber Badge on Wednesday September 16 2015, @07:16AM (#236876) Journal

      I agree with you to a point - but Debian was released in the early 1990s, with the 1.0 release being around [debian.org] 1995, so it has only notched up 20 years. I will further contend that that particular release bore little similarity to what we know today, so it is unlikely to have been suitable for doing what the submitter is requesting. The E16 release was a year or so after Debian 1.0 [wikipedia.org].

      However, I do agree that the combination is probably the best example that you or I can think of without conducting hours of research - no doubt someone will chip in with an alternative and 'better' match. The pace of change is accelerating so I think that a 30 year old OS will very quickly become unsupportable or unable to interface with many of today's devices.

      • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @11:14AM

        by Anonymous Coward on Wednesday September 16 2015, @11:14AM (#236923)

        FreeBSD + fvwm or WindowMaker.

      • (Score: 2) by HiThere on Wednesday September 16 2015, @07:17PM

        by HiThere (866) Subscriber Badge on Wednesday September 16 2015, @07:17PM (#237080) Journal

        Actually, the pace of real change in computer hardware has been slowing for the last 5 years or so as attention has shifted to smart phones. You could, of course, argue that that's the new personal computer, but it isn't really. Not yet. It needs a better method of data entry and a better way to display lots of data. Currently it's more like a super-AppleNewton crossed with a Blackberry. I'm sure this will be solved within 5-10 years, probably with a more advanced voice recognition coupled with a more advanced Google Glass-style screen+camera (one that can adjust, e.g., to people who wear glasses). And it will probably use retina prints rather than passwords.

        Once that thing becomes usable, development will probably speed up again.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 4, Interesting) by archfeld on Wednesday September 16 2015, @03:55AM

    by archfeld (4650) <treboreel@live.com> on Wednesday September 16 2015, @03:55AM (#236844) Journal
    --
    For the NSA : Explosives, guns, assassination, conspiracy, primers, detonators, initiators, main charge, nuclear charge
    • (Score: 2) by mechanicjay on Wednesday September 16 2015, @01:25PM

      by mechanicjay (7) <reversethis-{gro ... a} {yajcinahcem}> on Wednesday September 16 2015, @01:25PM (#236957) Homepage Journal

      Came to say this. When I boot my DEC 3000, an Alpha-based workstation from the early '90s, I'm greeted with a message that says HP OpenVMS 8.4, copyright 1976-2015. 8.4 is the latest version. The 7.x series is still supported on VAX.

      The system calls are so stable, I have a number of binaries that were compiled under VMS 6 or 7 and run without issue on the newer OS.

      That said, there's a lot of goofy stuff that betrays its 40 years in the marketplace, though the security model isn't terrible, just a bit limiting. I'd sing the virtues more, but I have to run.

      --
      My VMS box beat up your Windows box.
  • (Score: 2) by aristarchus on Wednesday September 16 2015, @04:24AM

    by aristarchus (2645) on Wednesday September 16 2015, @04:24AM (#236848) Journal

    Back on Samos, in 300 BC, we used the Astrolabe of Antikythera. Not sure what software it was running, but it seemed to be "young slave boy educated in mathematics". Good enough to calculate the dimensions of the universe, roughly. But the point is, for all our fanbois of the modern age (who, unfortunately, are not as good as ancient slave boys trained in math), that the GUI is not the OS. Just because Microsoft moved the "Start" button to someplace you cannot find does not mean they altered the operating system. And to frojack: we would be back at the DOS prompt? Well, we never left it, except that Microsoft replaced it with NT, and they were worried about violating intellectual property rights with that. No, we do have continuity, of much more than 30 years. If someone could produce a working Antikythera mechanism today, it would still run. (Sadly, that is not one of my many areas of expertise, or I would make one!) So, basics, people! Cross compiling! Emulation! The only reason that operating systems are different is to screw you over and lock you into an app garden of your wildest desires, because you are too stupid to understand what an OS is. Aristarchus, out.

  • (Score: 5, Funny) by Anonymous Coward on Wednesday September 16 2015, @04:50AM

    by Anonymous Coward on Wednesday September 16 2015, @04:50AM (#236853)

    It still runs the same programs it did when Stallman announced it 30 years ago.

  • (Score: 3, Insightful) by ramloss on Wednesday September 16 2015, @04:50AM

    by ramloss (1150) on Wednesday September 16 2015, @04:50AM (#236854)

    I totally get what you are trying to say. I envision it as the Windows 7 interface, or OS X Leopard, or... replace with your favorite UI, on top of the latest hardware technologies. It's not about freezing all hardware support. I see it as keeping the user-facing interface intact and changing the underlying hardware support to adapt to the current hardware.

    Think about it: a UI that has been the same for 30-ish years (I think that's a lot of time), unless a vocal Star-Trek-like interface becomes widespread and we all can say in our native language "computer: search for 'microsoft' in the 'ancient os' section of the internet".

    The key being "change for the sake of change". Of course, a new hard drive technology would need new drivers, or an entire IO subsystem, but it does not need to change the way we navigate a filesystem. For that matter, if a new, more efficient filesystem is invented, we don't need to change the way we navigate it. If a new display technology displaces LCD, we don't need to change the way the contents of the storage system are displayed, on the other hand, if it is to be displayed in a 3D room we have to do some changes to the interface; if it is still displayed in 2D, we don't need to change it.

    The key part is, how do we tell necessary change from superfluous change? It has some leeway, but I think that focussing on this 'avoiding change for the sake of change' concept would produce an interesting OS.

    • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @05:00AM

      by Anonymous Coward on Wednesday September 16 2015, @05:00AM (#236858)

      The key being "change for the sake of change".

      Ahem ... systemd ....

  • (Score: 2) by c0lo on Wednesday September 16 2015, @04:52AM

    by c0lo (156) Subscriber Badge on Wednesday September 16 2015, @04:52AM (#236855) Journal

    I'm thinking that it would be really great to have a long-term support, backward-compatible OS having a well designed, stable, intuitive and accessible UI.

    Long term support implies the OS is in (constant?) evolution. There needs to be a reason for that evolution; maybe because, at any point in time, it is not well designed/stable/intuitive/accessible enough?

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 1, Interesting) by Anonymous Coward on Wednesday September 16 2015, @02:07PM

      by Anonymous Coward on Wednesday September 16 2015, @02:07PM (#236968)

      Long term support implies the OS is in (constant?) evolution. There need to be a reason for that evolution

      Sure:

      • New hardware is created that needs to be supported.
      • Bugs that have been found need to be fixed.
      • New programming interfaces get added, for stuff that wasn't possible before.
      • New optional features are provided.

      Note that none of those things necessitates breaking old stuff, or forcing UI changes onto the user.

  • (Score: 3, Insightful) by Anonymous Coward on Wednesday September 16 2015, @06:18AM

    by Anonymous Coward on Wednesday September 16 2015, @06:18AM (#236867)

    Just run Emacs.

  • (Score: 3, Insightful) by shortscreen on Wednesday September 16 2015, @06:22AM

    by shortscreen (2252) on Wednesday September 16 2015, @06:22AM (#236868) Journal

    One can always run an old OS on old hardware.

    Sometimes one can run an old OS on new hardware, although drivers can be a problem. At least we can fall back on VESA. And it seems there's an RTL8139 driver for nearly every OS in existence.

    The biggest problem with running an old OS is that the latest programs don't work. Sometimes there's a good reason for this; other times it's just because developers play fast and loose with dependencies. You will have to download 250MB of .NET crap to play their TicTacToe game, and they see no problem with this.

    • (Score: 2) by vux984 on Wednesday September 16 2015, @05:55PM

      by vux984 (5045) on Wednesday September 16 2015, @05:55PM (#237058)

      You will have to download 250MB of .NET crap to play their TicTacToe game and they see no problem with this.

      It means the TicTacToe game is developed in 4 hours and is by itself 25KB.

      What do you suggest? That they write it in C, directly against the win32 API defined in windows.h, and statically link it to the C runtime (otherwise you need to download and install the Microsoft Visual C runtimes 20XX...)? And then, as a developer, spend more time on just the Windows message dispatch loop than the C# developer spent on the entire project? All so you can avoid downloading a framework provided and updated by the OS maker, bundled with releases of Windows newer than the framework, and preinstalled on most OEM systems?

      WTF?
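
      (For scale, the message dispatch loop being argued over is roughly this much C. A minimal sketch against the stock windows.h API; the window class name is invented for the example, and a real program adds painting and error handling:)

          #include <windows.h>

          /* Minimal window procedure: quit on close, defer everything else. */
          static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
          {
              if (msg == WM_DESTROY) {
                  PostQuitMessage(0);
                  return 0;
              }
              return DefWindowProcA(hwnd, msg, wp, lp);
          }

          int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int nShow)
          {
              WNDCLASSA wc = {0};
              wc.lpfnWndProc   = WndProc;
              wc.hInstance     = hInst;
              wc.lpszClassName = "TicTacToe";      /* hypothetical class name */
              RegisterClassA(&wc);

              HWND hwnd = CreateWindowA("TicTacToe", "Tic Tac Toe",
                                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT,
                                        CW_USEDEFAULT, 300, 300,
                                        NULL, NULL, hInst, NULL);
              ShowWindow(hwnd, nShow);

              /* The dispatch loop itself: essentially unchanged since NT 3.1. */
              MSG msg;
              while (GetMessageA(&msg, NULL, 0, 0) > 0) {
                  TranslateMessage(&msg);
                  DispatchMessageA(&msg);
              }
              return (int)msg.wParam;
          }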

      • (Score: 2) by shortscreen on Thursday September 17 2015, @05:09AM

        by shortscreen (2252) on Thursday September 17 2015, @05:09AM (#237301) Journal

        What would I suggest? I'm glad you asked. My first suggestion would be to reuse the message handling code that you already wrote 20 years ago, since the win32 api is older than dirt and you've had plenty of time to learn it by now if you are creating win32 programs. In the context of this SN submission, about a long-lived OS, you would have the added bonus of your game running under any 32-bit Windows all the way back to NT 3.1 or whatever.
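
        For reference, the boilerplate in question really is modest; here is a minimal sketch of the classic win32 message loop (ANSI-suffixed calls, window class name invented), built entirely on APIs that have been there since NT 3.1:

          -------------------------code
          #include <windows.h>

          static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
          {
              if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
              return DefWindowProcA(h, m, w, l);   /* default handling */
          }

          int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
          {
              WNDCLASSA wc = {0};
              wc.lpfnWndProc   = WndProc;
              wc.hInstance     = inst;
              wc.lpszClassName = "TicTacToeClass";  /* invented name */
              RegisterClassA(&wc);

              HWND hwnd = CreateWindowA("TicTacToeClass", "TicTacToe",
                                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT,
                                        CW_USEDEFAULT, 300, 300,
                                        NULL, NULL, inst, NULL);
              ShowWindow(hwnd, show);

              MSG msg;                              /* the dispatch loop */
              while (GetMessageA(&msg, NULL, 0, 0) > 0) {
                  TranslateMessage(&msg);
                  DispatchMessageA(&msg);
              }
              return (int)msg.wParam;
          }
          ---------------------------/code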

        Alternatively, if you must use a third-party framework to make a TicTacToe game, then include the part of it that you actually used with your game (not the entire 250MB). If you can't redistribute part of it, find a better one.

  • (Score: 1) by loic on Wednesday September 16 2015, @08:02AM

    by loic (5844) on Wednesday September 16 2015, @08:02AM (#236896)

    I guess it is now all about emulation. Pick a platform that has well-rounded and well-supported emulation, preferably open source for its long-term continuation, then enjoy it running for decades on a variety of physical hardware. You can have a look at Bochs, or MAME, which now includes the MESS emulator with hundreds of emulated computers. Even old mainframes are faithfully emulated now (the Hercules emulator, for instance, here: http://www.hercules-390.eu/ [hercules-390.eu])

  • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @10:33AM

    by Anonymous Coward on Wednesday September 16 2015, @10:33AM (#236918)

    You can't create a stable user interface for general-purpose computing if the users keep changing the way they interface with software, all the while viewing distances, monitor sizes and resolutions keep changing.

    Nowadays, you have keyboards, mice, touch, digitizers, laser pointers and voice, all used at resolutions scaling from centimeters to meters. The only way to accommodate all of these is to settle on the lowest common denominator: the CLI over keyboard/voice.

    Just imagine if there's a big hologram breakthrough and Apple's a-holes start selling iHolopounds 3D smartwatches with a display size and motion-capture interface that can actually replace your smartphone and tablet...

  • (Score: 3, Interesting) by Gravis on Wednesday September 16 2015, @01:05PM

    by Gravis (4596) on Wednesday September 16 2015, @01:05PM (#236952)

    The only way to get what you want and have it be a native solution is to make a computer in an HDL and then put it on an FPGA. This will require that you update the computer yourself for all the latest features, but it will do what you insist on having.

    Perhaps it might be better to have lower expectations.

    • (Score: 0) by Anonymous Coward on Wednesday September 16 2015, @04:38PM

      by Anonymous Coward on Wednesday September 16 2015, @04:38PM (#237029)

      Actually, Windows for a long time matched the description quite well: no matter whether your software was for DOS, Win 3.1, Win95 or WinNT, it would likely work on Windows XP. The DOS command line was also still available, and while I don't know whether XP still had the Program Manager, I'm pretty sure Win9x did, although it was not started by default.

  • (Score: 2) by number6 on Wednesday September 16 2015, @05:54PM

    by number6 (1831) on Wednesday September 16 2015, @05:54PM (#237056) Journal

    Here are some useful resources:

     
    Windows 2000 blackwingcat - Google Search [google.com]

     
    Unofficial Windows 2000 Update by blackwingcat - English Homepage
    http://win2k.dyniform.net/ [dyniform.net]
    Microsoft stopped providing updates for Windows 2000. We didn't. .......Here you can find KB and MS bulletin updates in English for Windows 2000 SP4. These updates are from internet user "blackwingcat" and are not made by us. However, we have update packs organized from blackwingcat's Windows 2000 security updates, and we will post about updated security bulletins in English here. .......Why make this site? Not only does it organize all English Windows 2000 downloads in one place, it will also be 100% in English. It is difficult to use blackwingcat's site because it is in Japanese. Because we can translate some basic Japanese, posting about the security updates should be fairly easy.

     
    blackwingcat's blog/homepage (lang=Japanese) [livedoor.jp]

     
    site:msfn.org/ blackwingcat - Google Search [google.com]

     
    blackwingcat torrent - Google Search [google.com]

     
    KDW - Known DLLs API Wrapper by blackwingcat
    FROM THE README: The Win2k wrapper pack is a collection of DLLs (dynamically loaded libraries) that 'wrap' the Windows API. These wrapper DLLs 'target' original copies of the system DLLs. Most of these fixes were chosen to get newer games working on 2K, but I've added a lot of application fixes too. .......... KDW is an abbreviation of Known DLLs Wrapper (alias: XP API Support Tool for Win2K); installing it on Windows 2000 provides the following: ...1. Windows XP APIs are provided, so applications run more stably. ...2. The Windows version can be emulated freely, supporting the running and installation of applications meant only for 9x/Me/XP/Vista. ...3. The server mode of Windows 2000 Server can be turned off, so applications that refuse to install on Server editions become installable (this makes NTSwitch unnecessary). ...4. Bilingual Japanese/English. ...5. The companion support tool fcwin can also be used on Windows XP/2003/Vista/7. .......... For more info on DLL search paths see this awesome page: DLL Loading Rules in Win32-[http://home.att.net/~raffles1/older/dll_loading_rules_in_win32.htm]

     
    BlackWingCat's KDW API Wrapper & Tools - Windows 2000 Gaming Forum - Page 1 [prophpbb.com]

     
    kdw097a.zip - Win2000@wiki - KDW[English] [atwiki.jp]

     
    dll_loading_rules_in_win32.htm - Google Search [google.ru]

     
    Checking Windows OS Version in Code - What are 'Shims'
    http://blogs.msdn.com/b/patricka/archive/2010/01/14/....-checking-os-versions-in-code.aspx [msdn.com]
    When a new OS is released, a high percentage of applications don't install or don't run because they check for a specific OS version. The application or installer checks for a specific version number and exits if it isn't what is expected. I've heard claims that up to 50% of incompatibility issues are due to simple version checks. So many applications check the version that this influences how Microsoft increments version numbers. ..........Let's say you only want your application to install on XP. Your installer checks the OS version and exits if it's not equal to 5.1. There's a compatibility feature that has been in Windows since Windows 2000 called "shims". Shims "trick" API calls by acting like a legacy OS. The "version lie" shim returns whatever OS version you want. Therefore, the installer can be tricked and the application can be installed regardless of the version check. The "version lie" shim can be applied several different ways -- by the user via the Compatibility tab in Properties, the built-in "shim database", by enterprise administrators via the "ACT Toolkit", and by the "Program Compatibility Assistant".
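
    To make that concrete, here is a hedged sketch of the kind of brittle check being described: an installer that exits unless the version is exactly 5.1 (XP). Under a "version lie" shim, GetVersionExA simply reports 5.1, and the very same binary passes:

      -------------------------code
      #include <windows.h>
      #include <stdio.h>

      int main(void)
      {
          /* The size field must be set before calling GetVersionExA. */
          OSVERSIONINFOA vi = { sizeof(vi) };
          GetVersionExA(&vi);

          /* Brittle: rejects every OS that is not exactly 5.1,
             including all newer (perfectly capable) versions. */
          if (vi.dwMajorVersion != 5 || vi.dwMinorVersion != 1) {
              puts("This program requires Windows XP.");
              return 1;
          }
          puts("Installing...");
          return 0;
      }
      ---------------------------/code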

     
    Shims - How they work in Windows, App Compatibility and Version Checking
    http://blogs.msdn.com/b/cjacks/archive/2007/07/05/windows-vista-shim-internals-....-the-ramifications.aspx [msdn.com]
    I want to talk a little bit today about shims, specifically addressing how they work to address compatibility issues, and what the security ramifications are when you use a shim to address a compatibility issue. Merriam-Webster defines a shim as: "a thin often tapered piece of material (as wood, metal, or stone) used to fill in space between things (as for support, leveling, or adjustment of fit)" [...] the name shim comes exactly from this definition - we jam a thin piece of code between things - specifically, between the application's code and Windows code. This works specifically because we implement an Import Address Table (IAT) to link to DLLs - specifically, to Windows DLLs. Rather than talk about this, I figured I would just show this to you in a debugger.....
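
    As a hypothetical illustration of that mechanism (a real shim engine is far more involved), a program can redirect one of its own IAT slots so that calls to GetTickCount land in a replacement first; everything below except the documented PE structures is invented:

      -------------------------code
      #include <windows.h>
      #include <stdio.h>
      #include <string.h>

      typedef DWORD (WINAPI *tick_fn)(void);
      static tick_fn real_tick;

      static DWORD WINAPI fake_GetTickCount(void)
      {
          puts("shimmed!");          /* our code ran first */
          return real_tick();        /* then forward to the real API */
      }

      int main(void)
      {
          BYTE *base = (BYTE *)GetModuleHandleA(NULL);
          IMAGE_NT_HEADERS *nt =
              (IMAGE_NT_HEADERS *)(base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
          IMAGE_IMPORT_DESCRIPTOR *imp = (IMAGE_IMPORT_DESCRIPTOR *)(base +
              nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT]
                  .VirtualAddress);
          FARPROC target = GetProcAddress(GetModuleHandleA("kernel32.dll"),
                                          "GetTickCount");

          for (; imp->Name; imp++) {                 /* each imported DLL */
              if (_stricmp((char *)(base + imp->Name), "kernel32.dll"))
                  continue;
              IMAGE_THUNK_DATA *t = (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
              for (; t->u1.Function; t++) {          /* each imported function */
                  if ((FARPROC)t->u1.Function != target)
                      continue;
                  DWORD old;                         /* the IAT is read-only */
                  VirtualProtect(&t->u1.Function, sizeof(void *),
                                 PAGE_READWRITE, &old);
                  real_tick = (tick_fn)t->u1.Function;
                  t->u1.Function = (ULONG_PTR)fake_GetTickCount;
                  VirtualProtect(&t->u1.Function, sizeof(void *), old, &old);
              }
          }

          /* This call now goes through the patched IAT slot. */
          printf("tick = %lu\n", (unsigned long)GetTickCount());
          return 0;
      }
      ---------------------------/code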

     
    Change Program Compatibility Mode 'Not a valid Win32 application' System Requirement - Hexedit Hack:
    »» My EXE will not run.....I get the error message:
    »» "C:\...\blah.exe is not a valid Win32 application"

    The NT5 linker settings in Visual Studio were not set properly before compiling (or the developer doesn't give a fuck about NT5).
    You need to patch the EXE yourself using a hex editor.
    Look for hex sequence "06 00 00 00 00 00 00 00" (usually repeated only one time)
    Change to "05 00 00 00 00 00 00 00" or "05 00 01 00 00 00 00 00" (if two consecutive series, change both of them)
    Note: if this is a 64-bit EXE file, you’re supposed to instead change to "05 00 02 00 00 00 00 00" (both times).
    Save the EXE file ...Win2000/XP/2003 should now run the file ;-]
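
    Those bytes are the OS and subsystem version fields of the PE optional header; here is a hedged sketch of the same edit done programmatically (a hypothetical helper with minimal error handling, not a hardened tool):

      -------------------------code
      #include <windows.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
          FILE *f = fopen(argv[1], "r+b");
          if (!f) { perror("fopen"); return 1; }

          IMAGE_DOS_HEADER dos;
          fread(&dos, sizeof(dos), 1, f);        /* e_lfanew -> NT headers */

          IMAGE_NT_HEADERS32 nt;
          fseek(f, dos.e_lfanew, SEEK_SET);
          fread(&nt, sizeof(nt), 1, f);
          if (nt.Signature != IMAGE_NT_SIGNATURE) {
              fputs("not a PE file\n", stderr); fclose(f); return 1;
          }

          /* 6.0 -> 5.0, as in the hex edit above. These fields sit at the
             same offsets in PE32 and PE32+; per the note above, a 64-bit
             EXE wants 5.2 here instead. */
          nt.OptionalHeader.MajorOperatingSystemVersion = 5;
          nt.OptionalHeader.MinorOperatingSystemVersion = 0;
          nt.OptionalHeader.MajorSubsystemVersion       = 5;
          nt.OptionalHeader.MinorSubsystemVersion       = 0;

          fseek(f, dos.e_lfanew, SEEK_SET);
          fwrite(&nt, sizeof(nt), 1, f);
          fclose(f);
          puts("patched to 5.0");
          return 0;
      }
      ---------------------------/code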

     
    Changing a Program's Compatibility Mode in the Registry:
    »» I'm using WinXP and I needed to run this game:
    »» "The Incredible Machine 3" (Win95 game). The README in the package says
    »» that you need to set WINDOWS95/98 compatibility mode and 256 COLORS.
    »» How do I set this in WinXP? I see that compatibility tab is missing when
    »» I right click on the .EXE file...

    Do it like this example; make sure to edit the paths and names first:
      -------------------------code
      REGEDIT4
      
      [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
      "C:\\Program Files\\The Incredible Machine 3\\Machine 3.EXE"="~ 256COLOR WIN95"
      
      ---------------------------/code

    And here is the registry entry with all compatibility modes enabled:
      ---------------------------code
      REGEDIT4
      
      [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
      "C:\\Program Files\\The Incredible Machine 3\\Machine 3.EXE"="~ RUNASADMIN HIGHDPIAWARE 640X480 256COLOR WIN95"
      
      ---------------------------/code

     
    Editing .MSI files - Force Software to Install onto unsupported OS Versions using ORCA tool - Dave's Blog, 2012
    http://david-merritt.blogspot.com.au/2012/08/force-blocked-software-to-install-onto.html [blogspot.com.au]
    Sometimes programs are blocked from being installed onto certain versions of operating systems even though those programs may actually run fine without any issues once installed. Perhaps it is newer software and the vendor doesn’t support (either intentionally or unintentionally) an older version of the operating system with a new release of software i.e. ST5 Solid Edge License Manager on Windows Server 2003. .......or maybe the software is older software created before a newer version of the operating system existed i.e. ST2 SEEC Administrator on Windows 2008 R2. .......Blocked installs because of operating system version can be quite easily overcome by using Orca, a free Microsoft tool that allows you to quickly and easily modify and edit the .msi installer files.

     
    PowerCalc (Windows XP PowerToy) - hacked to work on NT6 systems (Vista, 7, 8, etc)
    http://blog.red-stars.net/technology/software/hacking-windows-xp-powertoy-calculator-to-run-in-vista [red-stars.net]
    Article which takes you step-by-step through the process of removing the built-in flags which prevent the Windows XP PowerToy from running on NT6 systems. You could apply the same knowledge to hack other software to run on other Windows versions.