
posted by LaminatorX on Sunday June 22 2014, @03:17PM
from the XML-on-Stone-Tablets dept.

I am sure there are many experienced professionals here who can give great suggestions ...

I am currently part of a health care project whose main requirement is to capture data with unknown attributes, using forms generated by the health care providers themselves. The second requirement is that data integrity is key and that the application will be used for 40+ years. They are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database.

What design considerations will make the system more "future proof" (if that's even possible)?
What questions should be asked of the client/Project Manager to make the system more "future proof"?

http://programmers.stackexchange.com/questions/215764/advice-on-designing-web-application-with-a-40-year-lifetime

http://arstechnica.com/information-technology/2014/06/how-to-design-a-web-application-with-a-40-year-lifetime/

  • (Score: 4, Insightful) by cosurgi on Sunday June 22 2014, @03:21PM

    by cosurgi (272) on Sunday June 22 2014, @03:21PM (#58711) Journal

    A plain text file is the winner here. It never gets old; you can always read it. And the overhead of reading such files diminishes as SSDs get faster and RAM gets cheaper - there is no reason to optimize the storage format.

    --
    #
    #\ @ ? [adom.de] Colonize Mars [kozicki.pl]
    #
    • (Score: 3, Informative) by JoeMerchant on Sunday June 22 2014, @05:14PM

      by JoeMerchant (3937) on Sunday June 22 2014, @05:14PM (#58741)

      Plaintext was the winner from 40 years ago to today.... the winner 40 years into the future will be very difficult to predict for the next 35 years.

      If you don't mind your interface being 80 years old at the end of the project's lifetime, then plaintext it is.

      Personally, I'd like my medical records to contain some image files, too - maybe a vanilla DICOM format?
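
      (For the curious: DICOM files are already easy to poke at with commodity tooling. A minimal Python sketch, assuming the third-party pydicom library and a hypothetical file name:)

          import pydicom   # third-party: pip install pydicom

          ds = pydicom.dcmread("scan.dcm")     # parse a DICOM file
          print(ds.PatientName, ds.Modality)   # standard DICOM attributes
          print(ds.pixel_array.shape)          # image pixels (requires numpy)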

      --
      🌻🌻 [google.com]
    • (Score: 2) by davester666 on Sunday June 22 2014, @05:28PM

      by davester666 (155) on Sunday June 22 2014, @05:28PM (#58746)

      Yes, that seems totally reasonable, since there aren't that many records generated, even by a single hospital.

      Why not just say "keep all the records in RAM". The text file can be a format for backups.

    • (Score: 3, Informative) by Hairyfeet on Sunday June 22 2014, @06:58PM

      by Hairyfeet (75) <{bassbeast1968} {at} {gmail.com}> on Sunday June 22 2014, @06:58PM (#58769) Journal

      I'd say RTF myself. Rich text has been around since 1987, the specs are published and supported by pretty much everybody, and it takes hardly anything to render and process while supporting more advanced features like date stamps and picture embedding.

      So I'd say RTF is the winner: more advanced features than plain TXT, while still supported by everybody.

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
    • (Score: 2) by mendax on Monday June 23 2014, @02:33AM

      by mendax (2840) on Monday June 23 2014, @02:33AM (#58851)

      Plain text is fine.... unless it's in IBM's EBCDIC. Keep in mind that there was not a standard character set until fairly recently. But having said this, I suspect that Unicode will be around for a long time to come, long after we're all dust in the wind.

      --
      It's really quite a simple choice: Life, Death, or Los Angeles.
    • (Score: 1) by Immerman on Monday June 23 2014, @02:23PM

      by Immerman (3985) on Monday June 23 2014, @02:23PM (#59003)

      I would mostly agree, though a simple, well-designed XML schema might be worth considering to facilitate parsing. Something designed to still be human-readable, with just enough markup to impose a rigorous, easily validated and well-understood file format - you don't want a stray piece of bitrot, or a poorly-written module added 20 years later to be able to silently corrupt a data file.

      Also, if using user-generated forms (and depending on exactly what you mean by that), it would probably be wise to incorporate something to identify near-duplicate fields between records. For example, if your data is essentially freeform key/value pairs, then keep a comprehensive list of all keys used throughout the database and, whenever some user creates a form incorporating a new key, be sure to alert them: "Key Abc has never been used before and may obscure future data retrieval. Possible alternate key names that have been used before include Xyz, Tuv, Qrs, ...; would one of them be suitable?"
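
      A minimal sketch of such a near-duplicate check, using Python's difflib (the key registry and key names are hypothetical):

          import difflib

          known_keys = ["blood_pressure", "heart_rate", "body_weight"]

          def check_new_key(key):
              # warn when a proposed form key looks like an existing one
              matches = difflib.get_close_matches(key, known_keys, n=3, cutoff=0.6)
              if matches:
                  print(f"Key {key!r} resembles existing keys {matches} - reuse one?")
              else:
                  known_keys.append(key)

          check_new_key("blood_presure")   # typo: suggests "blood_pressure"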

  • (Score: 3, Interesting) by kaszz on Sunday June 22 2014, @03:38PM

    by kaszz (4211) on Sunday June 22 2014, @03:38PM (#58717) Journal

    As the above poster says: plain text file format, and suitable documentation of both the format and the software. Back up using two different methods (try reading an 8" floppy these days..).

    Data integrity and the web seem like trying to get oil and water to mix. It won't be a good outcome, as "standards" change and the environment is complex and full of quick solutions by mindless people. But keeping your software simple and slightly paranoid helps.

  • (Score: 4, Insightful) by Horse With Stripes on Sunday June 22 2014, @03:54PM

    by Horse With Stripes (577) on Sunday June 22 2014, @03:54PM (#58721)

    Well, 40+ years is a long time for something like this so you need to make sure that your data - or at least a copy of it - is in a human readable form at all times. XML, JSON, or some other similar format will always be parseable so you know it will be able to migrate to the next platform. Text files (as your backup) are going to be important in the future. Daily exports to your emergency text-based data format will be essential.
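
    A daily export job can be almost trivially simple. A sketch in Python, with sqlite3 standing in for whatever database gets chosen (table and file names are hypothetical):

        import json, sqlite3

        def export_daily(db_path, out_path):
            # dump every row of the records table as one JSON object per line
            con = sqlite3.connect(db_path)
            con.row_factory = sqlite3.Row
            with open(out_path, "w", encoding="utf-8") as out:
                for row in con.execute("SELECT * FROM records"):
                    out.write(json.dumps(dict(row)) + "\n")
            con.close()

        export_daily("clinic.db", "backup-2014-06-22.jsonl")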

    You'll need to pick a database platform, because text files will not provide acceptable performance as the primary data store. Whatever OS, programming language, and database platform you choose, make sure they are well established, currently have a good percentage of industry use, and are not proprietary.

    And you need to have quarterly meetings for the next 40+ years to evaluate support and availability for:
    - the version of OS, language and DB that you are currently using.
    - the latest stable version available of OS, language and DB that you are currently using.
    - any OS, language or DB that may be a better long-term solution for your project.
    - any export or migration utilities that may be available for your current OS, language and DB just in case you need them.
    - that your current failover site(s) are functioning properly, have a copy of your current source code and can import your current database (or text files).
    - that your current Disaster Recovery / Business Continuation plan allows for you to switch to, or set up, a new instance of your current OS, language and DB.
    - that your current DR / BC copies of your data & source code work on a fresh install.

    It would also be nice to know that you have a piece of hardware in your possession that is configured to handle your web app just in case you need it.

    • (Score: 2) by JoeMerchant on Sunday June 22 2014, @05:18PM

      by JoeMerchant (3937) on Sunday June 22 2014, @05:18PM (#58743)

      My experience has been that the "living" (basically cloud based, like yahoo & gmail) systems have had the best longevity. They're continually backed up and periodically migrated to new hardware. Anything that's unplugged and stuck in a closet for a decade becomes a challenge to deal with when it is dusted off for re-animation.

      --
      🌻🌻 [google.com]
      • (Score: 2, Insightful) by Horse With Stripes on Sunday June 22 2014, @07:17PM

        by Horse With Stripes (577) on Sunday June 22 2014, @07:17PM (#58771)

        It shouldn't be stuffed in a closet - that's why I recommended quarterly reviews. And since this is health care data, HIPAA regulations mean they need to be careful where they store it.

    • (Score: 3, Insightful) by HiThere on Sunday June 22 2014, @07:23PM

      by HiThere (866) Subscriber Badge on Sunday June 22 2014, @07:23PM (#58772) Journal

      So are you recommending XPM for pictures? What about sound? au isn't particularly human readable. I think that text, sound, and pictures cover all current medical information, and actually sound isn't particularly important yet.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 1) by Horse With Stripes on Sunday June 22 2014, @08:54PM

        by Horse With Stripes (577) on Sunday June 22 2014, @08:54PM (#58791)

        I don't think sound is important yet, certainly not as important as images, but they can protect themselves by saving sound and images in multiple formats.

        The quarterly reviews will give them plenty of time to identify industry wide format changes so they can convert away from formats that are going to be deprecated into newer formats that support their needs. A checklist for the statuses of currently used and available formats will help make sure they don't end up with data in formats, or on platforms, that have been abandoned.

    • (Score: 2) by egcagrac0 on Monday June 23 2014, @02:29PM

      by egcagrac0 (2705) on Monday June 23 2014, @02:29PM (#59005)

      It would also be nice to know that you have a piece of hardware in your possession that is configured to handle your web app just in case you need it.

      It may seem stupid to point this out, but this should be run on a virtualization technology, just so that it's ready to move to new hardware more painlessly.

      Unfortunately, this makes "virtualization technology" another discussion point at the quarterly meeting, along with OS, language, and DB.

  • (Score: 3, Insightful) by Theophrastus on Sunday June 22 2014, @04:22PM

    by Theophrastus (4044) on Sunday June 22 2014, @04:22PM (#58725)

    Take a look at the ancient software still kicking about today: vi, emacs, sendmail, TeX, ... On the user side of things, the reason these persist over so many years is that they can be adapted for current needs. That is, consider a solid, well-documented extension language (preferably one already available and popular, not something specific to your software - Python or Lua, for example) and try to make as many patterns of usage as changeable as possible.
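
    A sketch of the idea in Python (the module and hook names are made up): the core application only knows the extension contract, never its contents.

        import importlib

        def load_extension(module_name, hook="register"):
            # load a site-local extension module and call its registration hook
            mod = importlib.import_module(module_name)
            getattr(mod, hook)()

        # a local "site_rules.py" defining register() could then override
        # validation, rendering, etc. without touching the core application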

    Oh, and the plain-text base of config/support files which one of the other commenters mentioned is vital for longevity. (I'm looking at you, systemd.)

  • (Score: 5, Insightful) by lgsoynews on Sunday June 22 2014, @04:25PM

    by lgsoynews (1235) on Sunday June 22 2014, @04:25PM (#58726)

    I think that the question is badly framed.

    The "web application" part is a red herring, as we can't know if the web as it exists today will make sense then, 40 years is a long time for this field.

    I think that the real problem is to identify what is the heart of any application. I identify 2 obvious things (nothing surprising if you are a bit experienced):

    • the data (work with an expert in DB modeling; I've done it once, the guy was GOOD, it saved a LOT of trouble later on)
    • the business rules (the same: work with an expert designer, use a rule engine, or write one)

    Whatever you do, do NOT let the programmers design the schema of the data. Few are those who have the required experience; ask someone who works on that topic only - it is very time-consuming (a full-time job). Failing to design this layer right is guaranteed to cause pain for years (I've seen it before).

    To which I add several major concerns:

    • work on a data import/export mechanism (for the exchanges with the data providers)
    • documentation (the big picture & use cases especially)
    • avoid anything vendor-specific & be careful of proprietary lock-in (it is deadly)
    • work on interfaces (abstract the layers) to ease the inevitable future migrations - see the sketch after this list
    • use standards (don't reinvent the wheel when it's not necessary)
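
    For the interfaces point above, a minimal sketch of a storage abstraction in Python (the names are illustrative, not a real library):

        from abc import ABC, abstractmethod

        class RecordStore(ABC):
            # callers depend on this interface, never on a vendor API

            @abstractmethod
            def save(self, record_id: str, payload: dict) -> None: ...

            @abstractmethod
            def load(self, record_id: str) -> dict: ...

        # a PostgresStore, FlatFileStore, or emulator-backed store can each
        # implement it; swapping vendors touches one class, not the application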

    Note that the display part is left aside, because it is the easiest part to change (and it will change).

    Basically, there is nothing new here: you separate as much as possible the concerns, so that the underlying technologies/modules can be changed -or emulated- without too much pain. Every project should try to strive toward that, there's nothing new here. In the case of such a long lived project, you need to put more though and means upfront.

    Basically, ask an experienced architect to do his (her) thing.

    Ex: the database. It's almost certain you will have to change it at least once. I've done it on moderately large projects; it's not that hard if you remain in the same kind of DB (relational, typically). Of course, which specific features of the database engine you use must be balanced against future migrations (ex: full-text search - that never works the same across vendors). Data migration is a topic in itself, but with a good data schema and someone who really knows his database, it is feasible; I've worked with people who did that very well (don't let your average developer do it - again, ask a specialist! It will pay for itself very fast).

    As for emulation, I remember for instance a very large telecom project that was migrated to an emulator (VMware-like) because the OS was obsolete. There were some issues, but it worked: you had the very old green-on-black text screen working in the emulator on modern computers used by the salespeople.

    In summary, there is nothing new here: as for any big project, you need the help of an experienced architect to set up the project. And work the architecture upfront, don't start coding at once, it will pay dividends later.

    • (Score: 1) by lgsoynews on Sunday June 22 2014, @04:27PM

      by lgsoynews (1235) on Sunday June 22 2014, @04:27PM (#58727)

      you need to put more though

      you need to put more thought, of course, as I should have :-)

    • (Score: 3, Interesting) by Nerdfest on Sunday June 22 2014, @06:26PM

      by Nerdfest (80) on Sunday June 22 2014, @06:26PM (#58761)

      Whatever you do, do NOT let the programmers design the schema of the data

      Conversely, don't let a database designer create the business objects used by the application. Database designers tend to have expertise in optimizing the persistence layer, but many of the design paradigms in a database cause horrendous trouble in application objects. Personally, I start with the business model (as we're generally trying to solve a business problem) and then create an ideal persistence model from that (which may be completely different). I think a big mistake a lot of people make is starting from the database model ... they generally end up with database-flavoured implementation details leaking into the object models.

      In many cases you'll be lucky to find a person with significant skill in only *one* of these areas, much less both, of course. Learning the best ways to do things in both domains is a great skill to have if you have the time to develop it.
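
      To illustrate the separation, a minimal Python sketch (all names hypothetical):

          from dataclasses import dataclass

          @dataclass
          class Patient:                    # business object: no persistence details
              patient_id: str
              name: str
              conditions: list

          def to_row(patient: Patient) -> dict:
              # mapping layer: the only place that knows the storage shape
              return {
                  "id": patient.patient_id,
                  "name": patient.name,
                  "conditions_csv": ",".join(patient.conditions),
              }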

    • (Score: 0) by Anonymous Coward on Sunday June 22 2014, @09:40PM

      by Anonymous Coward on Sunday June 22 2014, @09:40PM (#58799)

      Would you be offended if this post were sent to Atlassian as a job application? Because they could really use you... if for nothing else than applying everything here to their software.

  • (Score: 4, Insightful) by jackb_guppy on Sunday June 22 2014, @04:56PM

    by jackb_guppy (3560) on Sunday June 22 2014, @04:56PM (#58736)

    Older architects - who have seen everything. Looking to 50+ age group - 30 years or more in the business.
    Older developers - again same as above
    Older languages - shown stability and usefulness.
    Older databases - DB2 comes to mind. Also ISAM would be good - think simple single index with data, which could be a pointer to document.
    Older OS - more likely stable and "portable"
    Older transfer formats - correctly formatted CSV, Flat or Text files - both smaller footprint and universally accepted

    Think long term - what has proved itself over 30 to 40 years already? Everything else is a flash-in-the-pan.

    Web front end, "who cares?".
    App, "who cares?".
    Terminal or "green" screen, "who cares?".

    THEY ARE NOT THE SYSTEM. They can be changed and improved over time as standards change and improve.

    Other transfer formats are nice-to-haves (say XML or Excel). They are like front ends - actually they are front ends! Just machine versus human interfaces. THEY ARE NOT THE SYSTEM.

  • (Score: 4, Interesting) by TheLink on Sunday June 22 2014, @05:37PM

    by TheLink (332) on Sunday June 22 2014, @05:37PM (#58749) Journal

    Don't design the application to be used for 40 years. The data has to last 40 years, some workflow may remain similar for 40 years, but I doubt the app will. So design the schema and other DB stuff so it can be used for 40 years. Design it so you can migrate from it bit by bit over 40 years when necessary, change parts of it without too much difficulty and cost.

    For example, don't lock your app in to vendor-specific stuff. SQL Server, Oracle, etc. have DB-specific features; if you have to use them, do so in ways (wrappers, etc.) where you can replace the DB without lots of work.

    Same for the webservers/platforms/frameworks - don't get locked in. I won't be surprised if webservers are still around in some form 40 years from now (gopher and ftp are still around). But betting that ASP.Net will still be viable in 40 years is a different matter. So make the layer between the DB and the user interface modular and not too dependent on vendor-specific stuff.

    You can have a fancy web front end (so that function keys and other shortcuts work for fast mouseless data entry), but as long as things are not "locked in" and the backend isn't too crazy, 40 years later the users could be using a different client, on a different vendor's DB, but the 40 year old data is all there AND not too corrupted.

    And the data brings us to this: design your database not to make too many assumptions that may not hold true for 40 years - e.g. male/female. You probably need Unicode support too. Then there are date, time, and timezone considerations (people may not be in the same timezone; some might not even be on Earth[1] ;) ). Also beware that some things will overflow in 2038 ( https://en.wikipedia.org/wiki/Year_2038_problem [wikipedia.org] ) - pick stuff that will be fine and use it in ways that will be fine.
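
    One defensive convention, sketched in Python (purely illustrative): keep timestamps timezone-aware in UTC and serialize them as ISO 8601 text, which has no 32-bit rollover in 2038:

        from datetime import datetime, timezone

        recorded_at = datetime.now(timezone.utc)    # always timezone-aware UTC
        stored = recorded_at.isoformat()            # e.g. '2014-06-22T17:37:00+00:00'
        restored = datetime.fromisoformat(stored)   # round-trips losslessly
        assert restored == recorded_at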

    You may also wish to use UUIDs ( https://en.wikipedia.org/wiki/Universally_unique_identifier [wikipedia.org] ) for some primary keys instead of increasing numbers, since that allows easier scale-out (you could have DBs in different locations, even on different planets[1], and merge them more easily).
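
    The core of the UUID suggestion is one line in Python; the merge-friendliness comes from IDs being minted without any central counter:

        import uuid

        new_record_id = str(uuid.uuid4())   # random 128-bit ID, no coordination needed
        # two sites can mint IDs independently and merge their databases later
        # without worrying about collisions in practice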

    [1] A bit optimistic but if you weren't optimistic you wouldn't be designing this sort of app to last 40 years ;).

    • (Score: 2) by c0lo on Monday June 23 2014, @04:26AM

      by c0lo (156) Subscriber Badge on Monday June 23 2014, @04:26AM (#58876) Journal

      You may also wish to use UUIDs for some primary keys instead of increasing numbers, since that allows more easy scale-out (you could have DBs on different locations, even different planets[1] and merge them more easily).

      Overkill.
      64-bit integers will do just fine: 40 years = 1.26e+9 secs. Assuming the system needs to deal with a production of 1e+6 records/sec for every "planet" (let's call it a shard, shall we?), that means 1.26e+15 records/shard. This can be represented by a 51-bit number, so you could still scale your database horizontally to 8192 shards.
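
      The arithmetic above packs directly into code; a minimal sketch (field widths as in this comment, not a production scheme):

          def make_id(shard_id: int, sequence: int) -> int:
              # 13 shard bits (8192 shards) + 51 sequence bits = 64 bits;
              # a signed DB column would want one bit fewer somewhere
              assert 0 <= shard_id < 2**13 and 0 <= sequence < 2**51
              return (shard_id << 51) | sequence

          rec_id = make_id(shard_id=42, sequence=1_260_000_000_000_000)
          shard, seq = rec_id >> 51, rec_id & (2**51 - 1)   # unpack again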

      You wouldn't be the first to deal with unique IDs and sharding; have a look at snowflake [twitter.com] like [myemma.com] schemes [tumblr.com].
      The problems with UUIDs:

      • they don't cluster your indexes too well - records inserted close in time may land in very different areas of your storage (try running a "total at the end of month" report and you'll see quite significant differences in running time between sequential IDs and spread-spectrum IDs)
      • (maybe minor) until CPUs get to 128-bitness (which may take quite a while - the age of the universe, approx 4.3e+17 secs, still fits into 59 bits, so no "2k/2038 problems" there), comparing UUIDs may still cost more CPU cycles than comparing 64-bit longs
      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by TheLink on Monday June 23 2014, @06:44AM

        by TheLink (332) on Monday June 23 2014, @06:44AM (#58891) Journal

        The main point was the submitter may not want to use the "conventional" and popular increasing integer primary key method due to scaling issues. UUIDs were suggested because there are some standards, and they are likely to last 40 years or even more, without having to resort to too much "trickery and deceit".

        The examples you gave seem mainly to be good examples of how to cope once you've already screwed up, e.g. "Sharding PostgreSQL sequences through trickery and deceit":

        Or we could use something like a UUID. Unfortunately that would require some significant changes to our table definitions, and we have a lot of tables.

        If they had picked UUIDs at the start, they wouldn't have had to change their table definitions. Same if they had picked 64-bit unique IDs in the first place - they wouldn't have this problem.

        Whether the submitter picks UUIDs or 64-bit unique IDs, it's better not to start off doing things wrong like Twitter and your other examples.

        [UUIDs] don't cluster your indexes too well - records inserted close in time may land in very different areas of your storage (try running a "total at the end of month" report and you'll see quite significant differences in running time between sequential IDs and spread-spectrum IDs)

        What DBs work the way you mentioned? I'm assuming you're not doing things wrong like clustering tables on the UUID! If you don't do that, barring fragmentation of the file system, records inserted close in time should tend to be close to each other on storage (at a logical level - might not be close at a physical level).

        (maybe minor) until CPUs get to 128-bitness (which may take quite a while - the age of the universe, approx 4.3e+17 secs, still fits into 59 bits, so no "2k/2038 problems" there), comparing UUIDs may still cost more CPU cycles than comparing 64-bit longs

        Would the extra CPU-cycles really matter for this particular app? How much slower would using UUIDs be? 10%, 20% slower? It's a healthcare app not a real-time MMO with millions of users. Seems more likely that most database performance problems would be due to other things - bad algorithms, bad queries, no indexes, wrong indexes, DB misconfiguration, clustering the table on the wrong index. Such problems can cause things to be many times slower than the difference between using UUIDs vs 64 bit unique IDs.

        • (Score: 2) by c0lo on Monday June 23 2014, @01:12PM

          by c0lo (156) Subscriber Badge on Monday June 23 2014, @01:12PM (#58978) Journal
          Yeah, mate, whatever floats your boat.
          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 1) by kwerle on Sunday June 22 2014, @07:54PM

    by kwerle (746) on Sunday June 22 2014, @07:54PM (#58780) Homepage

    Both the web and healthcare [management] look a lot like they did 40 years ago, so I can't see any problem with trying to design a system for the next 40.

    Seriously. If you could not have predicted the present from 40 years ago, how on earth do you expect to predict the next 40? Whoever suggested this requirement does not understand how tech changes *or* healthcare changes.

    The only way to design a 40 year system for something like this is to design a team and workflow that will maintain the project for 40 years. If you can't make management understand that, then it's time to find new management.

  • (Score: 2) by Common Joe on Sunday June 22 2014, @08:14PM

    by Common Joe (33) <{common.joe.0101} {at} {gmail.com}> on Sunday June 22 2014, @08:14PM (#58785) Journal

    I see you're posting around and you've already got some good answers. Something else to consider: HTML is only about 20 years old [w3.org] and you want something that will last twice as long as that. Let me ask you a quick question: how many cars built in 1974 do you see still running around? Of those that are, why are they still running? I think you already know your answer. You're just looking for any little gems that can help you out greatly in the far future. You're doing it right and I wish you good luck. Here's my two cents.

    You'll need to focus first on data storage: the format of your data. Will you risk putting something in a database? Or will you use flat files, jpg and txt files? Personally, I think you can use databases and files, but I'd stay away from anything proprietary. Microsoft and Apple are about as long-lasting as computer companies get, and I'm not convinced either of them will still be around in 10 or 20 years. Data is the most important part to protect. Be prepared to move your data from time to time.

    How long will formats last? The more basic and less pretty the form your data is stored in, the longer it will last.

    Next is how long anything you code will last. Something needs to retrieve your data from the database and present it to the user. Those technologies will be completely different in five years. Hell, they'll probably be completely different in two years.

    If your bosses want this, then they'll have to fund maintenance for 40 years. No ifs, ands, or buts. They've agreed to fund you for now, but will they continue the funding in five or ten or twenty years? Or will they cut back the money and just let things coast?

    A mistake companies make is that they let technologies coast until the hardware breaks. When that happens, it may not be possible to get the old hardware and old software to retrieve the old data. Whoever is going to be responsible for this over the next 40 years will have to stay modern. That car from 1974 will work, but you have to change the oil often. You also have to do major overhauls from time to time. What you're looking at is no different. You'd also better be prepared for those accidents too.

    Good luck.

  • (Score: 3, Insightful) by Marneus68 on Sunday June 22 2014, @08:49PM

    by Marneus68 (3572) on Sunday June 22 2014, @08:49PM (#58789) Homepage

    This is how you do it. [motherfuckingwebsite.com]

  • (Score: 2) by khallow on Monday June 23 2014, @01:03AM

    by khallow (3766) Subscriber Badge on Monday June 23 2014, @01:03AM (#58835) Journal

    The 40+ year nature isn't actually all that relevant. Just look at all the crap that's been supported for many decades simply because it works. Nobody is going to forget how a website works in forty years. Use something that works, document it well, and use the advised human readable files. You just future-proofed your system.

  • (Score: 0) by Anonymous Coward on Monday June 23 2014, @12:47PM

    by Anonymous Coward on Monday June 23 2014, @12:47PM (#58967)

    The organization you're working for doesn't know what it's doing, and this project will fail. They relied on a contractor to architect this "oh so important" system that is supposed to last a laughable 40 years, rather than using an FTE to retain in-house knowledge, and they don't realize that the bulk of the myriad techs used in a web app change every 10-12 years anyway. Basic misuse of contractors. Fundamental misunderstanding of technology. Firing the contractor won't change any of this; at the least the PM and likely at least one manager must be fired, but it's likely all downhill from here anyway, since incompetence begets incompetence and incompetence hires incompetence.

    Ensure the data are well built and secure, notify management that web interfaces change routinely, and plan accordingly.

  • (Score: 2) by Aiwendil on Monday June 23 2014, @01:22PM

    by Aiwendil (531) on Monday June 23 2014, @01:22PM (#58983) Journal

    As many others have stated, the notion of keeping an interface the same for 40 years is bordering on insane, but in case you must do this, consider another approach.

    Write it as a script.

    And by that I mean: have the script engine's functions _clearly_ defined by one team, the engine written by another team, and the interface built on it by a third team; disallow any communication between the teams that isn't through formal channels [or the specs], and have all this communication end up in hardcopy backups.

    Also add as a requirement that the same script should work on different platforms if only the engines are changed. And require the testing of the engines to be performed on at least two different platforms - to break assumptions/quick hacks.

    [Testing - this is the important part]
    For instance, allow the primary platform to be a Raspberry Pi (ie, non-x86) running Linux with X.org and an internet connection, and test everything there; then set the secondary platform to be DOS (ie, x86) with a non-internet serial connection. Doing it like this will ensure the engines are kept up to snuff, and not allowing the engine-writers and the interface-writers to talk will force the documentation to be sufficient.

    The point of this approach is to separate the interface and data from the underlying platform, thereby reducing the problem to "only" having to reimplement the environment when it is lost.

    (Or just write the entire thing in Ada, set it to _very_ strict compiler settings, and disallow any hardcoded paths and any use of libraries that aren't in the standard - and keep a hardcopy of the Ada specs around - it amounts to the same thing in the end. But remember the testing above.)

    Just know that any non-plaintext database _will_ be rendered obsolete in less than 40 years (so if you want to use this then write it according to the requirements above)

    And keep in mind that even making the common assumption of the machine being MSB or LSB will break how a WORD is handled, so specify every datatype down to the byte (and not even this is good enough in some instances; I'm currently cursing the "smart" compression of a file format [developed in the early 80s] in which a DATATYPE can be anything between 6 and 64 bits, depending on how the bitstream is shaped - and no, the next DATATYPE does not start on a byte boundary).
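
    Pinning the byte order in the format spec removes that ambiguity. In Python's struct module, for example, the endianness prefix makes it explicit (values illustrative):

        import struct

        value = 1_000_000
        big = struct.pack(">I", value)      # MSB-first ("network order"), 4 bytes
        little = struct.pack("<I", value)   # LSB-first: same value, different bytes
        assert big != little
        assert struct.unpack(">I", big)[0] == struct.unpack("<I", little)[0] == value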

  • (Score: 1) by pendorbound on Tuesday June 24 2014, @02:12PM

    by pendorbound (2688) on Tuesday June 24 2014, @02:12PM (#59415) Homepage
    FTFA:

    Migrating from one version control system to another will almost always lose your check-in comments.

    WTF? If you can't migrate history with comments, dates, and users, you're doing it wrong! I've migrated several repositories from CVS->SVN->GIT over the years and also StarTeam->SVN. No loss of history in any of the migrations.

    Step one for a long-lived application: Keep a competent system administrator on staff (or at least on call) at all times!