Ars Technica looks at Fortran and some newer number-crunching languages in Scientific computing's future: Can any coding language top a 1950s behemoth?
This state of affairs seems paradoxical. Why, in a temple of modernity employing research instruments at the bleeding edge of technology, does a language from the very earliest days of the electronic computer continue to dominate? When Fortran was created, our ancestors were required to enter their programs by punching holes in cardboard rectangles: one statement per card, with a tall stack of these constituting the code. There was no vim or emacs. If you made a typo, you had to punch a new card and hand the stack to the computer operator again. Your output came back on a heavy pile of paper. The computers themselves, far less powerful than today's smartphones, were giant installations that required entire buildings.
(Score: 3, Interesting) by Dr Ippy on Friday May 09 2014, @05:15PM
I started out with Fortran (1970s) but later (1985) switched to C, as Fortran was no longer "flavour of the month" (as we used to say in those days).
As a scientific / engineering programmer, just about all my programs have had this structure:
1. Read input data from file(s)
2. Crunch data
3. Write output data to file(s)
Sometimes this is inside a loop, until the input data runs out.
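That read-crunch-write pattern can be sketched in a few lines. The commenter works in Fortran and Perl, but here is a minimal Python illustration of the same shape; the file names, the `crunch` function, and the summing placeholder are all hypothetical stand-ins, not anything from the original post:

```python
import sys

def crunch(values):
    # Placeholder for the real number-crunching step --
    # here it just sums the inputs.
    return sum(values)

def main(infile, outfile):
    # 1. Read input data from file
    with open(infile) as f:
        values = [float(line) for line in f if line.strip()]
    # 2. Crunch data
    result = crunch(values)
    # 3. Write output data to file
    with open(outfile, "w") as f:
        f.write(f"{result}\n")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

No objects, no recursion: a straight imperative pipeline with one subroutine, which is exactly the style the comment describes. Wrapping `main` in a loop over multiple input files covers the "until the input data runs out" case.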
With this form of program, things like objects, functional programming, and recursion leave me cold. They're just not necessary for what I do. Essentially my style is imperative programming with subroutines.
My language of choice for the past 15 years or so has been Perl, usually ignoring its kludged-on object orientation. Perl is simple (though it can be difficult to read if not well written and well commented -- but that's true for a lot of languages), predictable and efficient.
I've rarely used libraries. Who reinvents the wheel understands the wheel.
Yes, I'm old fashioned. That's what comes of being born before Fortran. ;-)
This signature intentionally left blank.