Ars Technica looks at Fortran and some newer number-crunching languages in Scientific computing's future: Can any coding language top a 1950s behemoth?
This state of affairs seems paradoxical. Why, in a temple of modernity employing research instruments at the bleeding edge of technology, does a language from the very earliest days of the electronic computer continue to dominate? When Fortran was created, our ancestors were required to enter their programs by punching holes in cardboard rectangles: one statement per card, with a tall stack of these constituting the code. There was no vim or emacs. If you made a typo, you had to punch a new card and give the stack to the computer operator again. Your output came to you on a heavy pile of paper. The computers themselves, about as powerful as today's smartphones, were giant installations that required entire buildings.
(Score: 5, Interesting) by c0lo on Friday May 09 2014, @10:17AM
All memory allocation was static - yes, all the necessary memory was determined at compile time. No structures (just arrays), no pointers - this means absolutely trivial memory-addressing arithmetic.
Want more? Fortran 77 did not allow recursion (it was introduced only in the F90 spec) - thus the juggling of parameters and return addresses on the stack was also minimal, and fully known after compilation.
Even more? If you didn't have enough memory, tough luck... there was no memory swapping/paging as such - you'd need to spit your pre-crunched data out onto tapes and reload it later, which, understandably, you would hate to do. After looking at MIX [wikipedia.org], why do you think there are so many pages dedicated to computing algorithm complexity and memory footprint in TAoCP? Measure zillions of times and cut once.