
posted by cmn32480 on Monday November 30 2015, @08:19AM   Printer-friendly
from the thought-provoking dept.

In 1999, Butler Lampson gave a talk about the past and future of "computer systems research" [PDF]. Here are his opinions from 1999 on "what worked".

Yes

Virtual memory
Address spaces
Packet nets
Objects / subtypes
RDB and SQL
Transactions
Bitmaps and GUIs
Web
Algorithms

Maybe

Parallelism
RISC
Garbage collection
Reuse

No

Capabilities
Fancy type systems
Functional programming
Formal methods
Software engineering
RPC
Distributed computing
Security

Basically everything that was a Yes in 1999 is still important today.

The article is a current snapshot of those issues. Do you agree?


Original Submission

  • (Score: 4, Insightful) by mendax on Monday November 30 2015, @08:39AM

    by mendax (2840) on Monday November 30 2015, @08:39AM (#269636)

    RPC works, as anyone here who is worth his or her salt knows. It continues to evolve and is very useful, although SOAP and WSDL are things that Satan shat out upon the world. Oh, and functional programming is quite useful in certain limited circumstances and can simplify code in some cases. But a lot of functional programming is just plain nasty.

    --
    It's really quite a simple choice: Life, Death, or Los Angeles.
    • (Score: 2) by Hairyfeet on Monday November 30 2015, @11:20AM

      by Hairyfeet (75) <{bassbeast1968} {at} {gmail.com}> on Monday November 30 2015, @11:20AM (#269672) Journal

      " But a lot of (insert language) programming is just plain nasty"...FTFY. I don't care if its VB or JAVA, Functional or OO, if you have a crappy programmer writing the code? Then you WILL get crappy code, the language has jack and squat to do with it! As long as the code is being written by humans? Then you will have good and shitty code, i don';t care what language or style it is.

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 1, Interesting) by Anonymous Coward on Monday November 30 2015, @01:25PM

        by Anonymous Coward on Monday November 30 2015, @01:25PM (#269702)

        True, but even for a good programmer, writing something in a purely functional way can be quite nasty. This has less to do with the quality of the end result and more with the mental gymnastics needed to get there.

        That said, lots of mental gymnastics during coding often leads to incomprehensible code, so pure functional languages are not always useful, imho. They can be very useful sometimes (especially if you need to parallelize stuff), and imho it is worth having done something in a purely functional way at least once, since it makes you a better programmer.

        • (Score: 3, Interesting) by Anonymous Coward on Monday November 30 2015, @05:42PM

          by Anonymous Coward on Monday November 30 2015, @05:42PM (#269807)

          Interestingly enough, I've seen many elementary kids learn how to program and they usually pick up functional programming better than they pick up iterative. Their brains are already wired to break things down into easier-to-understand parts, and side effects tend to be confusing. For example, when they programmed factorials, every single kid did something to the effect of:

          def fact(number):
              if number == 1:
                  return 1
              else:
                  return fact(number - 1)

          Another example: when they were doing Fibonacci sequences, almost all of them had the line "fib(x-1)+fib(x-2)"; only 2 kids did it iterative, and it obviously took a lot more trial and error. Even things that are easier done iteratively (like sorting) they usually try to do in a functional manner first, before just looping through the elements again and again. I think I've only seen 1 kid ever sort functionally with a divide and conquer algorithm.

          • (Score: 2, Disagree) by maxwell demon on Monday November 30 2015, @08:43PM

            by maxwell demon (1608) on Monday November 30 2015, @08:43PM (#269900) Journal

            Your function can be simplified to:

            def fact(number):
               if number >=1:
                  return 1
               else:
                  stackoverflow()

            SCNR

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 3, Informative) by Marand on Monday November 30 2015, @09:15PM

              by Marand (1081) on Monday November 30 2015, @09:15PM (#269911) Journal

              Sure, if your language of choice doesn't optimise tail call recursion. In the context of functional programming languages, this is a common form of recursion that is optimised to not blow the stack, even in languages that can't do more general tail call elimination (such as JVM-based functional langs like Clojure and Scala). Bad tail recursion in one can cause an infinite loop, '10 goto 10' style, but won't overflow.

              As this is a core concept of most functional programming, you apparently tried to be witty about a topic you don't know much about because you're used to imperative languages and assumed all languages have the same limitations of your preferred one(s).
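
              To spell the distinction out, here is a rough sketch in Python (which notably does not eliminate tail calls itself); the function names are just illustrative. In the tail-recursive form the recursive call is the last thing the function does, so a TCE-capable compiler can turn it into the equivalent loop and the stack never grows.

              # Tail-recursive form: all the work is done before the recursive call.
              def fact_tail(n, acc=1):
                  if n <= 1:
                      return acc
                  return fact_tail(n - 1, acc * n)

              # What tail-call elimination effectively compiles that down to: a loop.
              def fact_loop(n):
                  acc = 1
                  while n > 1:
                      acc *= n
                      n -= 1
                  return acc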

              • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @10:33AM

                by Anonymous Coward on Tuesday December 01 2015, @10:33AM (#270116)

                Sure, if your language of choice doesn't optimise tail call recursion.

                The language he used was very obviously Python. Python does not do tail recursion elimination.

                As this is a core concept of most functional programming, you apparently tried to be witty about a topic you don't know much about because you're used to imperative languages and assumed all languages have the same limitations of your preferred one(s).

                You apparently like jumping to conclusions.

                1. As I wrote above, the code was very obviously Python code, which implies having no tail recursion elimination.
                2. Contrary to your assumptions, Python is not my preferred programming language. It is, however, the language used in the post I replied to.
                3. Contrary to your assumptions, I do know about tail recursion elimination. Indeed, I initially put an infinite loop there; but as I said, Python does not do tail recursion elimination, and therefore I changed it to a stack overflow before posting.
                4. I'm not sure what to make of the fact that you, of all the possibilities, chose to use "10 goto 10" to describe an infinite loop, but it certainly doesn't reflect well on you. I would have chosen something like while (true) {} or, keeping with the Python language, while True: pass — but actually I would not have written explicit code for the loop at all, as I assume everyone here knows what an infinite loop is.
                5. What I intended to be witty about was that the function as given obviously does not calculate the factorial (which I assume was intended, given the name). All that tail recursion elimination talk is actually a red herring.
                • (Score: 2) by Marand on Tuesday December 01 2015, @11:46AM

                  by Marand (1081) on Tuesday December 01 2015, @11:46AM (#270130) Journal

                  I probably shouldn't even respond considering the above post reeks of all kinds of mad, so you likely won't take any further response civilly either, but here goes anyway...

                  1. AC never specified a language and was talking about functional programming in a general context, so I treated the snippet as pseudocode. It's written in a generic enough way (and Python has that generic pseudocode look to it in small chunks) that, without mentioning Python explicitly, it shouldn't really be assumed either way.

                  Likewise for the followup that had a call to "stackoverflow()", since that reads more as pseudocode than as a commonly implemented call in any language. (Does Python actually have a stackoverflow() function? I've never heard of such.)

                  2. See #1

                  3. See #1

                  4. Was there something incorrect in my statement about the infinite loop? I was describing a process, not writing Python code, so I chose the simplest construct/example possible with the intent of being understandable to anybody. It's basically a jump, sharing some similarity to what happens to recursion after TCE. It also had the benefit of being short to type, which is always a bonus when using a tablet touchscreen. All I see here is nitpicking with a vague insinuation that my statements aren't valid because I mentioned the dreaded goto.

                  5. It's not a red herring. Nobody explicitly mentioned Python, the only discussion was "functional programming" in a general sense, and your response -- assuming you're the same poster but using AC to avoid a potential karma hit for the combative response -- as a followup gave the impression that you were implying tail recursion in FP is problematic because of overflows.

                  Maybe it's all just a communication issue because of a lack of clarity and not enough thought about how it would read to others. Though, the AC response afterward makes it look more like an attempt to save face after a mistake by arguing and insinuation, if it's really the same person.

                  Still, I have no way to verify that the AC post is maxwell demon posting anonymously, so I'll take the charitable option and assume it's all a communication problem for now.

            • (Score: 0) by Anonymous Coward on Monday November 30 2015, @09:33PM

              by Anonymous Coward on Monday November 30 2015, @09:33PM (#269916)

              Oops, that else return should be:

              return number * fact(number - 1)

              You also need to keep in mind that elementary students have no concept of negative numbers.

          • (Score: 0) by Anonymous Coward on Monday November 30 2015, @09:57PM

            by Anonymous Coward on Monday November 30 2015, @09:57PM (#269931)

            thats recursive. or is recursion just a special case of functional...

            • (Score: 2) by vux984 on Tuesday December 01 2015, @12:40AM

              by vux984 (5045) on Tuesday December 01 2015, @12:40AM (#269966)

              thats recursive. or is recursion just a special case of functional...

              Functional doesn't have "looping" except as a form of recursion. So anything that needs to loop in functional ends up being recursive.
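
              For instance, summing a list (a rough Python sketch; a real functional language would typically use a fold or pattern matching rather than list slicing):

              # Imperative loop...
              def total_loop(xs):
                  s = 0
                  for x in xs:
                      s += x
                  return s

              # ...and the recursion a functional language would use instead.
              def total_rec(xs):
                  return 0 if not xs else xs[0] + total_rec(xs[1:])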

          • (Score: 2) by vux984 on Tuesday December 01 2015, @01:03AM

            by vux984 (5045) on Tuesday December 01 2015, @01:03AM (#269968)

            Despite your attestation, I have never seen kids implement factorials recursively by default. I have always seen them doing it with a loop. To get them to do it recursively, I think you'd have to really lead them by the nose to a recursive definition.

            For example, if you told them that

            1! = 1
            n! = n*(n-1)!

            And then demonstrated that:
            2! = 2 * 1!
            3! = 3 * 2!
            4! = 4 * 3!

            Then I wouldn't be surprised if they reached a recursive solution.

            But if you told them n! was defined as (where Π is the capital Pi, aka the product symbol):

                n! = Π i  for i = 1 to n

            And that therefore:

            1! = 1
            2! = 1 * 2
            3! = 1 * 2 * 3
            ...
            n! = 1 * 2 * 3 * ... * n

            You will probably end up with a loop with a counter.
            Look up factorial on Wikipedia; it gets around to a recursive definition eventually, but it starts out with the (in my opinion) much more common iterative one first.

            As for Fibonacci, sure, because the usual definition for fib is always recursive; I don't think I've ever seen it initially presented as anything but a recursion. Again, Wikipedia bears this out... it shows the recursive definition almost immediately.

            I think I've only seen 1 kid ever sort functionally with a divide and conquer algorithm.

            I generally only ever see kids sort via a variation of:

            list1 = items to be sorted
            list2 = empty

            while list1 is not empty
                find the smallest item in list1 (scanning all elements)
                remove it from list1
                append it to list2
            end

            Although I've also seen a bubble sort; or a variation on it.
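
            In Python, that build-a-new-list approach looks roughly like this (a sketch with made-up names, not anyone's actual classroom code):

            def kid_sort(items):
                remaining = list(items)        # work on a copy
                result = []
                while remaining:
                    smallest = min(remaining)  # scan all remaining elements
                    remaining.remove(smallest)
                    result.append(smallest)
                return result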

            • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @05:50PM

              by Anonymous Coward on Tuesday December 01 2015, @05:50PM (#270272)

              Could be the difference in how they are taught factorials, but the elementary school around here does the count-down method. This means that 4! is always written as 4*3*2*1, so maybe that consistency in writing, and counting down out loud, makes the recursive version click more easily.

              As to the second point, that was to point out that they don't do everything recursively. Things like sorting are usually done, as you suggested, with an insertion sort into a new list or bubble sorting, but those require looping over the lists again and again. Only once did I see a recursive, functional way to sort, basically with a line like "return MoveHighest(sublist),highest", and that stuck out in my mind because it was so odd; I feel like if I saw it again, I'd remember it.

          • (Score: 2) by bzipitidoo on Tuesday December 01 2015, @06:05AM

            by bzipitidoo (4388) on Tuesday December 01 2015, @06:05AM (#270047) Journal

            Are you aware of how bad "fib(x-1)+fib(x-2)" can be? Try that in C, and you can easily run into trouble. If the language does not memoize function results (which is most languages), and the programmer does not know to do it, the most obvious implementation makes an exponential number of calls to compute the nth Fibonacci number, instead of only about n. That quickly becomes intractable even for relatively small Fibonacci numbers.
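
            A rough Python sketch of the difference, using the standard library's lru_cache for the memoized version:

            from functools import lru_cache

            def fib_naive(n):
                # Recomputes the same subproblems over and over: exponentially many calls.
                return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

            @lru_cache(maxsize=None)
            def fib_memo(n):
                # Each fib(k) is computed once, so only about n distinct calls are made.
                return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)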

            • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @05:32PM

              by Anonymous Coward on Tuesday December 01 2015, @05:32PM (#270264)

              Yes I am; that is why I program wrappers around what they do, like parsing the input and implementing things like an LRU cache.

      • (Score: 3, Insightful) by Thexalon on Monday November 30 2015, @03:48PM

        by Thexalon (636) on Monday November 30 2015, @03:48PM (#269756)

        if you have a crappy programmer writing the code? Then you WILL get crappy code, the language has jack and squat to do with it!

        The language does have something to do with it. That's because some languages have constructs that make it easy to write good code, and other languages have constructs that make it easy to write lousy code.

        For what it's worth, my assessment of functional programming: it's very good for teaching one very important lesson - passing data up and down the stack (i.e. function arguments) is much easier to control and test and debug than managing data on the heap (i.e. global state). Why is that easier to manage, you ask? Because when you hit a bug that is caused by an erroneous data value being used in your function (a not-uncommon situation), you can find out how any value ended up there by working your way up the call stack to learn where the bad data was introduced, rather than searching your code base for all manipulations of a global state variable and having to dig through to figure out which of your 3,056 results was the culprit.
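
        A toy illustration of the difference (a Python sketch; the names are made up):

        # Global-state style: any of hundreds of call sites could have set this,
        # so a bad balance means grepping the whole code base.
        balance = 0
        def apply_fee_global():
            global balance
            balance -= 25

        # Argument-passing style: a bad value came in from the caller, so the
        # call stack in the debugger tells you exactly where to look.
        def apply_fee(balance, fee=25):
            return balance - fee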

        But once you understand that lesson and do everything in your power to minimize global state, then it's time to abandon functional programming and instead write procedural or object-oriented programs taking advantage of the techniques you learned when writing functional programs.

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
        • (Score: 2) by sjames on Monday November 30 2015, @06:17PM

          by sjames (2882) on Monday November 30 2015, @06:17PM (#269824) Journal

          I still think there is value in functional programming beyond that, though not really in the pure form.

          For example, Python has made threaded programming safer and easier for pretty much anyone, but at the cost of the Global Interpreter Lock. Alas, the GIL means that unless your program calls into functions written in C that explicitly release the lock, it will be logically concurrent but serialized in practice.

          But Python does have a number of immutable types, even if many programmers haven't noticed. To the point that if you write a="foo" and b="foo", there will be only one "foo" that a and b both reference. If you then write a+='!', a will refer to a new object and b will still be a reference to the original "foo".
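
          A quick illustration (the sharing of a single "foo" is a CPython interning detail, but the rebinding behaviour is guaranteed):

          a = "foo"
          b = "foo"
          print(a is b)   # True in CPython: both names reference one interned "foo"
          a += '!'
          print(a, b)     # foo! foo -- a now refers to a new object, b is untouched
          print(a is b)   # False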

          Somewhere in that there is probably room in a future implementation of Python to allow functions declared as functional to release the GIL until they return, allowing true concurrency on a multi-core system but maintaining the easy thread programming. Or perhaps even a general re-arrangement so the GIL is only taken when altering a global reference or operating on a mutable type.

          Like many concepts in programming, if you relax the purism and don't try to apply it to everything, academic concepts finally become useful.

          • (Score: 2) by Marand on Monday November 30 2015, @11:57PM

            by Marand (1081) on Monday November 30 2015, @11:57PM (#269961) Journal

            I still think there is value in functional programming beyond that, though not really in the pure form. ... Like many concepts in programming, if you relax the purism and don't try to apply it to everything, academic concepts finally become useful.

            Yup, like I said in another comment, I found the hardcore academic "pure functional" thing off-putting, but have since found the more practical, impure functional approach to be very nice. Clojure is, IMO, a good example of how to approach FP while still retaining access to mutability and creation of impure functions. With Clojure, purity and immutability are the defaults, requiring you to make a deliberate decision if you want something else. It even takes the concept beyond primitives: all data structures are immutable, so even with hash-maps and vectors, your functions return copies instead of doing in-place mutation.

            However, it doesn't prevent you from creating impure functions or mutable data structures if you need them. That means it's easier to take a functional approach without excessively complicating matters if you run into a situation where you need mutability or side-effects. Once you determine there's a performance bottleneck, or you need IO, you can limit your "impure" code to a small part of the program, which helps a lot with limiting the need for locks and reducing potential bugs.
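
            The same habit translated into plain Python (just a sketch of the convention, not Clojure's actual semantics): return new collections instead of mutating the ones you were handed.

            def add_score(scores, player, points):
                # Pure style: build and return a new dict; the caller's dict is untouched.
                return {**scores, player: scores.get(player, 0) + points}

            old = {"ada": 1}
            new = add_score(old, "ada", 2)
            print(old, new)   # {'ada': 1} {'ada': 3}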

            I think that, for anyone interested in concurrency, Clojure is a good place to start. Erlang's made for it too, though I think it's not as easy to get into as Clojure.

        • (Score: 2) by Wootery on Monday November 30 2015, @08:01PM

          by Wootery (2341) on Monday November 30 2015, @08:01PM (#269883)

          Why is that easier to manage, you ask? Because when you hit a bug that is caused by an erroneous data value being used in your function (a not-uncommon situation), you can find out how any value ended up there by working your way up the call stack to learn where the bad data was introduced, rather than searching your code base for all manipulations of a global state variable and having to dig through to figure out which of your 3,056 results was the culprit.

          What you're really saying then is that iterative languages' debuggers need to be better at finding the origin of a value. I agree, I just don't think it's really a criticism of iterative programming itself. (I believe some modern IDEs can do this, but I forget which.)

          • (Score: 2) by Thexalon on Monday November 30 2015, @08:18PM

            by Thexalon (636) on Monday November 30 2015, @08:18PM (#269888)

            There are plenty of other advantages of stack storage over heap storage though:
            - Unit testing becomes a lot easier.
            - Thread-safety becomes a lot simpler.
            - Much greater visibility of internal dependencies.

            And I'm not arguing that iterative programming sucks, I'm arguing that there are valuable lessons to be learned by studying functional programming. Well, that, and that a big blob of global data is a terrible design idea.

            --
            The only thing that stops a bad guy with a compiler is a good guy with a compiler.
            • (Score: 2) by Wootery on Tuesday December 01 2015, @01:29PM

              by Wootery (2341) on Tuesday December 01 2015, @01:29PM (#270159)

              I don't follow. We weren't discussing memory-management, we were discussing functional vs imperative.

              Anyway, functional languages' implementations almost always make use of a garbage-collected heap, and are not purely stack-based. See https://en.wikipedia.org/wiki/Funarg_problem [wikipedia.org]

              Also, it's imperative programming, not iterative (though I see I made the same mistake in my comment above).

              - Unit testing becomes a lot easier.

              You mean to say pure functions are easier to test than imperative-style functions with side-effects? I agree.

              - Thread-safety becomes a lot simpler.

              Sure -- you can't really go wrong if your language has no mutation -- but functional programming doesn't have a good track record in real-world high-performance, highly-parallel applications.

              Much greater visibility of internal dependencies.

              How do you mean?

              Personally I'd add to your list the enhanced ability of functional languages to leverage type systems. Everything you do accepts a type and returns a type. Compare that with, say, C, where there's no way the type system can ensure you've called (for instance) the necessary initialisation functions.

              a big blob of global data is a terrible design idea.

              Of course, but no-one contests that. It's simply bad design.

    • (Score: 4, Informative) by jcross on Monday November 30 2015, @02:13PM

      by jcross (4009) on Monday November 30 2015, @02:13PM (#269723)

      In the presentation it reads "RPC (except for Web)", so I think he's noting that it does work, but hadn't at the time been a great fit for desktop-style software. I think that assessment may no longer be true, however, as an increasing number of desktop apps depend on remote APIs, even if those APIs are usually sourced over HTTP, and hence could be seen to fall under the "web" umbrella.

    • (Score: 5, Funny) by VLM on Monday November 30 2015, @03:02PM

      by VLM (445) Subscriber Badge on Monday November 30 2015, @03:02PM (#269745)

      But a lot of functional programming is just plain nasty.

      When OO was exclusively, nearly 100% dominant, you ended up with a lot of "everything can only be programmed OO" even if it's a terrible idea. See this example and many others:

      https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition [github.com]

      (the above link is why I refuse to ever take a Java job, never have, never will, it's the COBOL of a new generation)

      I don't think "programming in general" will ever be so screwed up again that everything will have to be written functional even stuff that doesn't fit the functional paradigm. Or given that functional owns about 2% of the programming world, being worried about it staging a coup and taking over 100% of the market including where it doesn't fit is unlikely to be a serious concern.

      • (Score: 5, Insightful) by bradley13 on Monday November 30 2015, @03:54PM

        by bradley13 (3053) on Monday November 30 2015, @03:54PM (#269757) Homepage Journal

        FizzBuzz Enterprise Edition :-)

        Overengineering has nothing specifically to do with Java; it's an industry problem. Hand a problem to a developer group that has a set of frameworks they always use, have the QA department enforce documentation and testing standards regardless of applicability, have the bosses apply a strict development methodology to anything that doesn't move fast enough, and there you go...

        That said, it does seem to be affecting Java a lot lately. I recently had the misfortune of discovering the "Optional" class in Java 8 [oracle.com]. No longer do we have to write

              if (x != null && x.doSomething())

        No, now we can write

              if (opt != null && opt.isPresent() && opt.get().doSomething())

        Of course, variables of type Optional are never supposed to be null. That and five bucks will get you a cup of coffee. Yeah, I know Optional is prettier with lambdas. Don't get me started on the idiocy of introducing lambda expressions into an imperative language... I'd rather just switch to Scala.

        --
        Everyone is somebody else's weirdo.
      • (Score: 2) by termigator on Monday November 30 2015, @06:44PM

        by termigator (4271) on Monday November 30 2015, @06:44PM (#269836)

        Very funny, including the lack of javadoc, like most enterprise source code.

      • (Score: 2) by mendax on Monday November 30 2015, @08:20PM

        by mendax (2840) on Monday November 30 2015, @08:20PM (#269889)

        (the above link is why I refuse to ever take a Java job, never have, never will, its the COBOL of a new generation)

        Having taken a Java job in the past what can I say? I love Java in principle, but only for projects where it makes sense. If you're forced to use the JVM and JRE, there's Groovy, Scala, and Jython. With Groovy, you can write old-fashioned procedural code to your heart's content without a hint of the OO underpinnings. In fact, I love Groovy because of that simplicity. It makes it easy to write something "in Java" that is quick and dirty without having to deal with the other shit. Oh, and Groovy and its closures let you do all the functional programming you want, even if it makes absolutely no sense.

        --
        It's really quite a simple choice: Life, Death, or Los Angeles.
      • (Score: 2) by darkfeline on Tuesday December 01 2015, @01:58AM

        by darkfeline (1030) on Tuesday December 01 2015, @01:58AM (#269985) Homepage

        I think functional programming is more relevant than OOP.

        There simply aren't that many places where OOP code reuse is applicable. For designing frameworks, event handlers, and such, OOP inheritance and subclassing make sense. Everywhere else, code reuse is better done via libraries and function composition (a.k.a. functional programming).

        When you start writing stuff like AbstractFooer, you're really writing functional style code in the guise of OOP.

        --
        Join the SDF Public Access UNIX System today!
      • (Score: 1, Insightful) by Anonymous Coward on Tuesday December 01 2015, @11:28AM

        by Anonymous Coward on Tuesday December 01 2015, @11:28AM (#270126)

        If that intentionally bloated code can get you to avoid Java, then I easily can get you to avoid C as well. Because, of course, I can also write a ridiculously complicated C program for FizzBuzz.

        Here's a start:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <assert.h>
         
        typedef enum fizzbuzz_enum { FIZZBUZZ, FIZZ, BUZZ, NUMBER, FIZZBUZZ_END } fizzbuzz_enum_t;
         
        #define UNIT_FACTOR 1
        #define FIZZ_FACTOR 3
        #define BUZZ_FACTOR 5
        #define FIZZBUZZ_FACTOR (FIZZ_FACTOR * BUZZ_FACTOR)
         
        #define FIZZ_STRING "Fizz"
        #define BUZZ_STRING "Buzz"
        #define FIZZBUZZ_STRING FIZZ_STRING BUZZ_STRING
         
        #define FIRST_NUMBER 1
        #define LAST_NUMBER 100
         
        #define ONE_PAST_LAST_NUMBER (LAST_NUMBER + 1)
         
        int fizzbuzz_factors[FIZZBUZZ_END] = {
          FIZZBUZZ_FACTOR,
          FIZZ_FACTOR,
          BUZZ_FACTOR,
          UNIT_FACTOR
        };
         
        int is_divisible_by(int n, int k)
        {
          return n % k == 0;
        }
         
        fizzbuzz_enum_t determine_output_type(int n)
        {
          int i;
         
          for (i = FIZZBUZZ; i < FIZZBUZZ_END; i++)
          {
            if (is_divisible_by(n, fizzbuzz_factors[i]))
            {
              return (fizzbuzz_enum_t)i;
            }
          }
          assert(!"This should never be reached");
        }
         
        int required_characters(int n)
        {
          int chars;
         
          chars = 1;
          if (n < 0)
          {
            chars++;
            n = -n;
          }
         
          while (n >= 10)
          {
            chars++;
            n /= 10;
          }
         
          return chars;
        }
         
        char* num_to_string(int n)
        {
          int size;
          char* string;
         
          size = required_characters(n) + 1;
          string = malloc(size);
          if (string)
          {
            snprintf(string, size, "%d", n);
          }
         
          return string;
        }
         
        char* fizzbuzz_string(int n)
        {
          fizzbuzz_enum_t output_type;
          char* text;
         
          output_type = determine_output_type(n);
          switch(output_type)
          {
          case FIZZBUZZ:
             text = strdup(FIZZBUZZ_STRING);
             break;
          case BUZZ:
             text = strdup(BUZZ_STRING);
             break;
          case FIZZ:
             text = strdup(FIZZ_STRING);
             break;
          case NUMBER:
             text = num_to_string(n);
             break;
          default:
             assert(!"This should never be reached");
          }
         
          return text;
        }
         
        void print_fizzbuzz(int n)
        {
          char* str = fizzbuzz_string(n);
          printf("%s\n", str);
          free(str);
        }
         
        int main()
        {
          int n;
         
          for (n = FIRST_NUMBER; n < ONE_PAST_LAST_NUMBER; n++)
          {
            print_fizzbuzz(n);
          }
        }

        And in true corporate tradition, I did not properly test the code; bugs are to be found by the customers. ;-)

    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @03:41PM

      by Anonymous Coward on Monday November 30 2015, @03:41PM (#269755)

      You may want to skip to the end of the article. He says 'yes' now for RPC.

      Back in 1999, RPC was not really there yet. In fact it was a massive pain to get to work correctly; I know, I built a couple of those systems at that time. Now you slap some SOAP/JSON interpreters in there and call it a day. XML was the key to getting RPC to work correctly.

  • (Score: 2, Interesting) by driverless on Monday November 30 2015, @08:41AM

    by driverless (4770) on Monday November 30 2015, @08:41AM (#269637)

    Some of the categories are a bit vague, though: how would you define "software engineering" or "algorithms"? More to the point, what's the definition of "not working" for the category "algorithms"? I'm guessing for "software engineering" the justification is that we still write software that's buggy and unreliable, therefore the discipline of software engineering isn't very mature/helping much (compared to something like civil engineering, where we can say with some certainty that if we do X, Y, and Z then the result will hold up to A, B, and C, with a lifetime of 50 years).

    • (Score: 4, Insightful) by VLM on Monday November 30 2015, @02:55PM

      by VLM (445) Subscriber Badge on Monday November 30 2015, @02:55PM (#269740)

      The CivEng analogy of what programmers call software engineering would be making heavy construction equipment operators spend half their day attending ISO9000 communist-style struggle sessions and doing Tai Chi together, for no reason other than someone in management read in a magazine that someone else did it, and then spend the other half of the day pondering why the hole isn't getting dug to schedule. The schedule that was made up by sales telling the client whatever they wanted to hear, of course.

      "OK so you'd like 150 miles of new interstate highway complete with overpasses and onramps installed by next Monday, and you're asking will that be a problem, well I'm not seeing any problems with cashing the commission check for that, no problem at all, so just sign right here"

    • (Score: 2) by sjames on Monday November 30 2015, @06:52PM

      by sjames (2882) on Monday November 30 2015, @06:52PM (#269841) Journal

      There are a number of reasons for that. Imagine if civil engineers had to design a building such that it could be 'installed' anywhere, and the building needed to be equally suitable as a residence, a restaurant, and a business office. Not sure if electricity will be available there, so better design an optional power plant module out back.

      BTW, since we don't know the difference between design and physical construction, we'll be wanting you to tell us up front the exact day the building can open and how much it will cost.

      And BTW, we're not listening to any of that crap about a so-called PE signing off. If we decide it will be 500 floors high and made of spun sugar you damned well better provide or we'll get those guys in India to do it!

  • (Score: 4, Interesting) by Anonymous Coward on Monday November 30 2015, @08:49AM

    by Anonymous Coward on Monday November 30 2015, @08:49AM (#269640)
    Security is useful. But nobody really cares in real life.

    It's just like a determined thief can steal most cars. Just secure your stuff better than the average person and if "stuff" still happens, go work it out with insurance (and restore from backups). Everyone is so used to "confidential" stuff leaking out anyway.

    And if you ask me, the phrase "Identity Theft" is to brainwash more people into thinking it should be their problem, when in actual fact it should be "Fraud" and the Bank's problem (or whoever got fooled). If someone is stealing money from a bank account by using publicly available data, it should be the bank's responsibility to fix.
    • (Score: 1, Touché) by Anonymous Coward on Monday November 30 2015, @09:21AM

      by Anonymous Coward on Monday November 30 2015, @09:21AM (#269646)

      But nobody really cares in real life.

      Plenty of people care.

      • (Score: 4, Interesting) by anubi on Monday November 30 2015, @10:20AM

        by anubi (2828) on Monday November 30 2015, @10:20AM (#269655) Journal

        I believe if we cared as much about our software infrastructure as about our physical infrastructure, we would be having all these security breaches about as often as we have bridges and buildings collapsing.

        Would we buy a concrete foundation if the vendor insisted we agree to a "hold harmless" clause and had his crony congresscritters make it illegal to verify the concrete? We will not pick up food from the sidewalk and eat it, but we will quite happily download executables into our machines - as a matter of fact, most of us will tell our machines to eat whatever someone we have never met, and have agreed to hold completely harmless, serves us (enabling JavaScript and visiting God-knows-what website).

        We are in this mess because WE are not making a stink about it.

        The parties interested in forcing us to eat whatever they serve are winning because they lobby the politicians, while WE are flat dropping the ball by not organizing and getting people voted into power who will represent US, not the lobbyists. Although WE have the vote, most of us still think words spoken through corporate-sponsored microphones deserve our vote. It's high time we drop all this "the honorable Mr. So-and-So" crap and ask the congressmen blunt, pointed questions as to why they are imposing the wish-list of a few onto the rest of us, while letting those few off the same hook they want us to swallow.

        My top pet peeves are:

        * Can't verify what's in it, but rightsholder of it held harmless against what it does.
              ( would you eat food you are not allowed to verify what it is? )

        * Their hidden tinyprint is legally binding, but if I put tinyprint on my payment check... its not.

        * Why is it "property" for keeping others out of the playing field, but not "property" as in "property tax"?

        --
        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
        • (Score: 4, Insightful) by Geezer on Monday November 30 2015, @10:42AM

          by Geezer (511) on Monday November 30 2015, @10:42AM (#269660)

          Your questions and observations can easily be explained by applying the age-old maxim, "Follow the money."

          We can stink the place up to high heaven, but the powers that be will just invest in air fresheners until security becomes more monetarily significant to the industry than just another pesky cost center in their budgets.

          Does that make it right? Of course not, but since when is anything in business driven by being right?

          Statutory requirements and regulatory protocols? Not as long as Wall Street owns everybody in Washington except maybe Bernie Sanders and my nephew (he's a veterinarian).

          Always follow the money.

        • (Score: 4, Insightful) by zocalo on Monday November 30 2015, @12:05PM

          by zocalo (302) on Monday November 30 2015, @12:05PM (#269680)

          I believe if we cared as much about our software infrastructure as about our physical infrastructure, we would be having all these security breaches about as often as we have bridges and buildings collapsing.

          There's a difference in scale, sure, but there is a big problem with crumbling infrastructure in the US, and several pieces have outright collapsed with little or no warning. The reason for that isn't the construction - it's the lack of maintenance and the failure to replace things before they have outlasted their planned-for lifespan and are essentially living on borrowed time. We have exactly the same problem with software: maintenance (fixing bugs) is expensive, far too many vendors will only do it when forced to by an exploit in the wild (and sometimes not even then), and users continue to use software long after it is declared out of support by the vendor, as with all the Windows XP holdouts. I actually don't see any difference in the amount of care - there are just more people not caring about the software they have installed, and hence more problems, than there are people who ought to be caring about the bridges but are not (or just don't have the money to deal with the problem).

          --
          UNIX? They're not even circumcised! Savages!
        • (Score: 2, Interesting) by Anonymous Coward on Monday November 30 2015, @12:17PM

          by Anonymous Coward on Monday November 30 2015, @12:17PM (#269684)

          I believe if we cared as much about our software infrastructure as about our physical infrastructure

          But we don't really care about our physical infrastructure either. When the US decided that billions needed to be pumped into the economy when it was on the verge of collapsing (and this was not long after several very high profile infrastructure failures), there was a desire from the President and some others to put that money into physical infrastructure (remember "shovel ready"?). It never happened, because the Republicans fought it tooth and nail on the grounds that it would be "giving money to Democrats" in the form of working class jobs that, GASP!, might even be union jobs (though that didn't stop them from taking credit for any projects that were funded [politifact.com]). So here we are, still with aging and shitty infrastructure, because people didn't hold politicians' feet to the fire for that, any more than they do for anything else.

        • (Score: 3, Insightful) by sjames on Monday November 30 2015, @07:19PM

          by sjames (2882) on Monday November 30 2015, @07:19PM (#269862) Journal

          A more apt comparison might be burglary. We regularly put strong locks on glass doors and use alarm systems that can be disabled with a sledgehammer before they make a peep.

          The real issue is that it's much easier to get away with computer intrusion than burglary, mostly because you can do it from foreign jurisdictions that don't really care.

          • (Score: 0) by Anonymous Coward on Monday November 30 2015, @11:21PM

            by Anonymous Coward on Monday November 30 2015, @11:21PM (#269950)

            alarm systems that can be disabled with a sledgehammer

            If you try to drill through a UL-certified bell box, your bit will short-circuit the outer box to the inner box and will set off the system.
            Dent it significantly and the same thing happens.
            (Good luck denting that super-duty thickness steel.)

            Every properly-installed remote control for an alarm system has an anti-tamper switch behind its faceplate.
            Try to remove the screws securing it to the bulkhead and you will set off the system.
            Hit it with a hammer and you will get the same result.

            Even if you somehow manage to gain physical access to the alarm panel (the main thing that everything reports to) and you try to crowbar the door open, there is an anti-tamper switch there that will set off the system.

            glass doors

            If you have an expanse of glass that, if smashed, leaves a large enough void to get a small child through and you don't have that protected with lead foil [google.com] or a glass bug, [google.com] then your alarm installer is clueless.

            .
            ...and, getting back on topic, a M$.com page that mentions security (which MICROS~1 clearly thinks can be pasted onto the side after the software development is all finished) is just hilarious.

            ...and big props to the "He's talking out of his rear end" AC (below) as well as the response of jcross.

            -- gewg_

            • (Score: 2) by sjames on Tuesday December 01 2015, @12:09AM

              by sjames (2882) on Tuesday December 01 2015, @12:09AM (#269964) Journal

              There are alarms like you speak of, but I've seen an awful lot more of the kick-the-door-in-and-smash-the-PLASTIC-panel-within-60-seconds-to-silence-it type, including in commercial settings.

            • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @06:53AM

              by Anonymous Coward on Tuesday December 01 2015, @06:53AM (#270061)

              The one I thought was so funny was a business owner who put up a very obvious alarm bell box to thwart some thieves that were breaking into his pawn shop. The shop was broken into again, and his bell box was found full of hardened spray-foam, something like "Great Stuff".

              The way I see it, if I am going to wire a place to catch them in the act, I put up something obvious that will attract them, and them messing with the bait is what triggers the covert system. The fancy-looking "camera" blatantly facing the register does not even have to work. But if it is so much as touched, they are in the exact position I would need to get pretty good shots of their faces with a covert pinhole camera.

    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @04:08PM

      by Anonymous Coward on Monday November 30 2015, @04:08PM (#269761)

      > It's just like a determined thief can steal most cars.

      If a determined thief could steal all the cars in the country overnight, then yeah, it's just like that.

      Networking and automation are such enormous risk multipliers that it just isn't in the same class as physical exploitations.

    • (Score: 3, Insightful) by sjames on Monday November 30 2015, @07:07PM

      by sjames (2882) on Monday November 30 2015, @07:07PM (#269853) Journal

      And if you ask me, the phrase "Identity Theft" is to brainwash more people into thinking it should be their problem, when in actual fact it should be "Fraud" and the Bank's problem (or whoever got fooled).

      Exactly right! "Identity theft" is one person defrauding a creditor by fooling them into thinking they were dealing with someone else. It is not that someone else's responsibility to fix the problem. If the creditor continues to pursue the debt with the third party once it is denied, they are then guilty of fraud and harassment. Meanwhile, if that fraud damages that someone else's credit rating, then the credit reporting companies have committed libel. That is, they published disparaging information knowing that it would cause material harm. Given how common "identity theft" is, they cannot claim reasonable belief that it was true as a defense.

  • (Score: 5, Informative) by Anonymous Coward on Monday November 30 2015, @08:52AM

    by Anonymous Coward on Monday November 30 2015, @08:52AM (#269641)

    His history is wrong. His knowledge of tech is wrong. I only got as far as his "RISC" section before I was pulling my hair out and shouting "you haven't got a clue what you are talking about". Parallelism is a great success - every GPU, which means pretty much every desktop PC and every phone or tablet, demonstrates the success of parallelism. DEC's FX!32 emulation was faster at integer, but slower at FP, as Alphas didn't have 80-bit FPUs. NT was first demonstrated on Alphas, not x86. IBM did not pull out of the Power Consortium, contrary to what the article implies. He appears to not know that the "R" in ARM stands for "RISC", and that there are way more ARM processors out there than x86 by a significant factor, and therefore RISC is an enormous success. And those are just the things I can remember without switching back to the other tab to refresh my memory of the other things he said which were bogus.

    • (Score: 2) by mth on Monday November 30 2015, @10:55AM

      by mth (2848) on Monday November 30 2015, @10:55AM (#269667) Homepage

      This was a talk from 1999 though. Parallelism hadn't made it into the mass market yet. RISC was being looked at for high performance, while it succeeded at low power instead.

      I also don't know what "worked" is supposed to mean exactly. Software Engineering certainly helps in managing large projects, but it is often done poorly. Security is starting to be taken seriously, but it is also often done poorly. So the execution of these topics leaves a lot to be desired, but that doesn't mean they're not important topics.

      • (Score: 0) by Anonymous Coward on Monday November 30 2015, @11:10AM

        by Anonymous Coward on Monday November 30 2015, @11:10AM (#269671)

        This was a talk from 1999 though.

        But the "article" itself, which is what I assume the GP meant, was from just a few days ago.

        Or at least I assume it was, since there's no fscking date listed anywhere on the blog post.

    • (Score: 4, Interesting) by theluggage on Monday November 30 2015, @01:07PM

      by theluggage (1797) on Monday November 30 2015, @01:07PM (#269693)

      He appears to not know that the "R" in ARM stands for "RISC",

      He also omitted (...and the modern article mentions it but dismisses it as 'nitpicking' because it doesn't fit their argument) that the Pentium Pro (the granddaddy of the current Core i chips) replaced the traditional CISC design with an x86 instruction decoder feeding a RISC core. The only reason this sort of over-engineering was feasible was the huge mass of legacy x86 code stemming from an Intel monopoly so big that even Intel couldn't compete with it with their own Itanium. Microsoft didn't help by pulling the Alpha/SPARC/PPC versions of Windows. Today's x86 chips may bear little resemblance to RISC, but historically RISC was a key influence on their design.

      Parallelism - well, it certainly isn't dead, but I don't think the quad core in your phone represents the sort of massively parallel wonderland that was being contemplated in the 90s with the Transputer and all that. The GPU in your phone, now, that's another matter, but using GPUs for anything other than graphics and image processing hasn't quite gone mainstream yet.

      • (Score: 2, Funny) by Anonymous Coward on Monday November 30 2015, @02:43PM

        by Anonymous Coward on Monday November 30 2015, @02:43PM (#269734)

        Imagine a Beowulf Cluster of phone GPU's.

    • (Score: 4, Informative) by VLM on Monday November 30 2015, @02:29PM

      by VLM (445) Subscriber Badge on Monday November 30 2015, @02:29PM (#269728)

      RISC was a success everywhere. But you have to be a real old timer to see the trendline.

      There's a classic graph I can't seem to find of transistor count vs instruction set size, and it's a crazy correlation until the 80s. So a tiny PDP-8 has like 7 machine language opcodes (and admittedly some crazy hardware-accelerated IO which is kinda opcode-ish, but whatever). Believe it or not, although it smells Turing-tarpit-like, you can do damn near anything with a PDP-8, just in a somewhat convoluted manner. And a Z80 has a lot of transistors and hundreds of opcodes. And something like an IBM/360 series mainframe has machine language support for insanely complicated stuff (which people mostly don't use).

      So the RISC theory is that a modern 2015 processor with a bazillion transistors, on the CISC trajectory, should have a million opcodes and be directly, simultaneously executing intermixed LISP and tokenized BASIC and JavaScript in silicon, if those trends had continued, which they haven't.

      Two things broke CISC permanently. First was scaling: you can lead a horse to water but... so even something like an IBM mainframe isn't taken total advantage of, and some dumb bastard will write his own software floating point because he can't be bothered to learn the built-in floating point, or WTF.

      The second thing that broke CISC permanently was not understanding that legacy code would require that a 2015 processor can't use a bazillion transistors to implement Prolog-in-silicon, because there are two dozen shitty legacy addressing modes and operating modes that all have to be 100% supported to sell the thing, so in theory you could execute unchanged a binary first compiled on an original genuine 64K PC in the early 80s, and everything in between. Legacy support is where the bazillion transistors went. And non-x86-legacy chips put their transistors toward speed, so your phone can spend 99.99999% of its time in zero current idle mode by being really Fing hardware-accelerated fast the 0.000001% of the time it's awake and running full blast.

      The instruction set of a 2015 x86 processor isn't that much more complex than a 1978 x86 series processor. The possible addressing modes and virtualization modes are where the insane complexity now lies. That's the insight of RISC: beyond a certain point we're not gonna put more transistors into microcode decoding, more or less.

      Another example of RISC being a success everywhere can be seen in some long-lived processor families. Look at something like the PIC family: for two (or more?) decades the instruction set has not expanded, although on-chip flash has gone from 256 bytes to 256K and on-chip RAM has gone from something like 16 bytes to 128K (probably more). Yet the instruction set complexity is nearly constant, not increased by a factor of a thousand or so.

      • (Score: 2, Interesting) by pinchy on Tuesday December 01 2015, @01:13AM

        by pinchy (777) on Tuesday December 01 2015, @01:13AM (#269971) Journal

        That transistor count is skewed way too much by the cache.

        What I would like to see is someone actually take the cache and cores into account with these total transistor counts and see what's dedicated to the actual CPU logic. I bet it hasn't grown all that much.

        • (Score: 2) by VLM on Tuesday December 01 2015, @12:11PM

          by VLM (445) Subscriber Badge on Tuesday December 01 2015, @12:11PM (#270136)

          Another way to phrase it: RISC won. In the 60s/70s your opcode set was defined more or less by the number of transistors you could afford, so a 7094 had a slightly more elaborate instruction set than a PDP-8. Once transistor count was no serious limit, you'd assume the "starved" PDP-8 type people would gorge themselves, and decades later, with orders of magnitude more transistors, we'd have hyper-CISC; but no, there seems to be a rational limit.

    • (Score: 4, Insightful) by jcross on Monday November 30 2015, @02:32PM

      by jcross (4009) on Monday November 30 2015, @02:32PM (#269732)

      Yes, and I think the reason for this is that in a Microsoft-centric worldview, many of these things did not work well. The original presentation is hosted on an MS website, and if you look at the about section of the blog, the blogger works for MS as well. I guess there's little excuse for ignoring GPUs, but the ARM revolution is not running Windows much. Also, the "Reuse" category is fairly ridiculous, since reuse is (and was in 1999) extremely successful in the open source world; it just never worked well via OLE/COM because dependency management sucked and interfaces made by anyone but MS were rarely spec'd well enough. If you read the whole article with a Windows bias, it starts to make more sense.

    • (Score: 2) by sjames on Monday November 30 2015, @07:30PM

      by sjames (2882) on Monday November 30 2015, @07:30PM (#269865) Journal

      Parallelism was supposed to make everything faster and become as easy to use as serial programming. We have the hardware because it was the only way to keep moving forward, but most software is still single threaded, or it calls a carefully crafted library written by a specialist to do the parallel stuff on the GPU. Most of it is "embarrassingly parallel" stuff.

      We need a sort-of column.

  • (Score: 2, Insightful) by Anonymous Coward on Monday November 30 2015, @09:06AM

    by Anonymous Coward on Monday November 30 2015, @09:06AM (#269644)
    Yeah formal methods seem to be trying to fix the wrong problem.

    The main problem isn't that programs/designs aren't doing what programmers/designer specified. Unless there's a compiler bug the compiled code is doing what the programmer specified through the source code.

    The problem is usually the programmer/designer specified something that was wrong. Or the requirements were wrong or "incomplete".

    Formal methods aren't going to solve that.

    Making it easier for a programmer to notice or avoid common mistakes might help more than creating new esoteric formal languages to describe stuff.

    Attempting to write the same stuff in a different enough language could also help a programmer notice mistakes just as well as using formal methods.
    • (Score: 2) by mth on Monday November 30 2015, @10:42AM

      by mth (2848) on Monday November 30 2015, @10:42AM (#269659) Homepage

      The main problem isn't that programs/designs aren't doing what programmers/designer specified. Unless there's a compiler bug the compiled code is doing what the programmer specified through the source code.

      Source code is a very low-level specification. Complex programs will have a high-level specification as well and formal methods can help in determining whether the source code satisfies the high-level specification.

      The problem is usually the programmer/designer specified something that was wrong. Or the requirements were wrong or "incomplete".

      Formal methods aren't going to solve that.

      I partially agree: formal methods indeed won't catch wrong requirements or specifications, but there are also plenty of bugs that are introduced in the implementation step.

      I think the main problem with formal methods is that they require a lot of training to use and applying them takes a lot of effort. So in practice they are only used in situations where avoiding bugs is allowed to cost extra. Which is unfortunately quite rare.

      Maybe things will change when application of formal methods is automated more. It would still require writing a formal specification, but if the code can be generated automatically from there, it would be like using a higher-level language and could save time rather than cost time.
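
      As a concrete, lightweight cousin of what's being described (not formal verification proper), here is a minimal C sketch where the property is stated separately from the implementation and merely checked at runtime with assert; a formal tool would instead prove it for all inputs:

          #include <assert.h>
          #include <stddef.h>

          /* "Specification": after sort(), the array is in non-decreasing order. */
          static int is_sorted(const int *a, size_t n) {
              for (size_t i = 1; i < n; i++)
                  if (a[i - 1] > a[i]) return 0;
              return 1;
          }

          /* Implementation: a plain insertion sort. */
          static void sort(int *a, size_t n) {
              for (size_t i = 1; i < n; i++) {
                  int key = a[i];
                  size_t j = i;
                  while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
                  a[j] = key;
              }
          }

          int main(void) {
              int a[] = {3, 1, 2};
              sort(a, 3);
              assert(is_sorted(a, 3));  /* check the post-condition on this one input */
              return 0;
          }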

  • (Score: 4, Insightful) by zugedneb on Monday November 30 2015, @09:40AM

    by zugedneb (4556) on Monday November 30 2015, @09:40AM (#269651)

    Article has no value.
    It is still too early in the "history" of computation to say anything.
    Most of the people who were around in the "beginning" are still around...

    There is no reason for functional programming, and fancy types and shit.
    C is a functional programming language.
    Even the book on the subject says: write small, well defined and efficient functions. Let the main program consist of calls to these functions.
    If people would actually do this, we would have no problems today.
    And, yes, even boundary checks are included in "well defined"...
    And, also, a new language is actually functions implemented in a different language...
    So C + some parsers should cover every need...

    As for hardware: as long as the average consumer did not need fluid dynamics in games, we were stuck with what we had. Now, there is a difference.

    The next step, the true beginning, is neural interface, and "neural" computation - with some sort of dedicated hardware, perhaps.
    There is where the serious math and parallelism will show up...

    Let's talk in 50 years, if anyone is still around... I will be hitting 90 :D
       

    --
    old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
    • (Score: 2) by Runaway1956 on Monday November 30 2015, @11:03AM

      by Runaway1956 (2926) Subscriber Badge on Monday November 30 2015, @11:03AM (#269669) Journal

      Agreed. It takes time for any technology to mature. A single lifetime is nothing, really. Which event in history was the tipping point? Enigma? IBM and NASA's work in the race to the moon? The development of Unix? Which event made today's computing world inevitable? Whichever event one chooses, it was just yesterday. Even in 1945, with Enigma being mothballed, IBM couldn't have predicted that every home in the developed world would have a cell phone, digital televisions, and 64 bit computers today. My house contains more computing power today than all of mankind possessed in 1945. Or 1965, for that matter.

      Computing is still in its infancy. None of us can guess where it's going to be in another 50 years.

      How many generations of mankind crafted pottery, before someone stumbled over the idea of crafting glass? How many generations crafted bronze, before they stumbled over iron? How many more, before the secrets of steel were discovered?

      I have to admit that an awful lot of important work has been done in the last 150 years, but the technology remains in its infancy. CPU cooling, for instance, was pretty much unnecessary only 25 or 30 years ago. My 8088 didn't even have cooling fins. Then came heatsinks with fins, then fans were added. Tomorrow? No fins again, or maybe carbon nanotube fins. http://spectrum.ieee.org/computing/hardware/intelled-team-demonstrates-first-chipscale-thermoelectric-refrigerator [ieee.org]

      But that's not even the end of the road. I can't find the article about putting holes right through the CPU and pumping a refrigerant through the chip. This article is closely related - https://forums.geforce.com/default/topic/438178/the-geforce-lounge/tiny-refrigerator-taking-shape-to-cool-future-computers/ [geforce.com]

    • (Score: 2, Informative) by Anonymous Coward on Monday November 30 2015, @01:40PM

      by Anonymous Coward on Monday November 30 2015, @01:40PM (#269708)

      C is a procedural programming language, not a functional programming language.

      Your comment has no value.

      • (Score: 1) by zugedneb on Monday November 30 2015, @02:18PM

        by zugedneb (4556) on Monday November 30 2015, @02:18PM (#269726)

        There is always one amongst the sheep...

        --
        old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @02:24PM

      by Anonymous Coward on Monday November 30 2015, @02:24PM (#269727)

      C is a functional programming language.

      You have no idea what "functional programming language" means. How do you write a higher order function in C?

      • (Score: 3, Insightful) by zugedneb on Monday November 30 2015, @02:42PM

        by zugedneb (4556) on Monday November 30 2015, @02:42PM (#269733)

        What the fuck is this onslaught of Anon Cowards?

        You do it through a parser that you define for any specific task...

        Let me show you how in 3 easy steps:
        1: write wtf u want according to your über rules
        2: parse it
        3: execute the resulting function

        You could even say, for trendy kiddies, a program writes a program. =)

        So, no, by and large I am not a trendy person. In the end, everything will be a string of symbols.

        --
        old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
        • (Score: 5, Insightful) by lentilla on Monday November 30 2015, @03:24PM

          by lentilla (1770) on Monday November 30 2015, @03:24PM (#269754)

          zugedneb, you are not making a strong case to back up your assertion that "C is a functional language". The challenge in my grandparent's post was valid: How do you write a higher order function in C?

          One could, of course, write "a parser that you define for any specific task", but then you wouldn't really be programming in C, would you? (Cross reference Greenspun's Tenth Rule [wikipedia.org] for amusement.)

          C is a functional programming language at the same level that my bicycle is a motor vehicle. It has a motor (me), and is a vehicle. Perhaps technically correct, but practically only after a lot of tinkering.

          By the time you've written a parser, you would have been better off using another language. I was especially amused by your "a parser that you define for any specific task"... by which you have highlighted another hallmark of functional programming: abstraction.

          So, barring the introduction of much more compelling evidence, we might just have to conclude there was a miscommunication. C is not a functional language.
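
          For readers following this exchange, a minimal sketch of what standard C does offer here: function pointers let a function take another function as an argument, but there are no closures, so it is at best a limited approximation of higher-order functions:

              #include <stdio.h>

              /* "Higher-order" in the weak sense available to C: f is passed in
               * as a function pointer. There is no way for f to capture local state. */
              static int apply_twice(int (*f)(int), int x) {
                  return f(f(x));
              }

              static int inc(int x) { return x + 1; }

              int main(void) {
                  printf("%d\n", apply_twice(inc, 3));  /* prints 5 */
                  return 0;
              }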

    • (Score: 2) by VLM on Monday November 30 2015, @02:46PM

      by VLM (445) Subscriber Badge on Monday November 30 2015, @02:46PM (#269737)

      C is a functional programming language.

      C is basically formalized, formatted, and theoretically portable PDP-11 assembly language. It's actually pretty funny how, for 40 years, compilers have grown large to handle C idioms, yet on the original -11 C is ridiculously more like an assembler than a compiler. From memory you can do x++ and stuff like that in assembly on an -11, or at least I think you could. I distinctly remember a freakish way the -8 could do x++ using a memory mapped range for indirect addressing but not ++x or x-- or others, and I'm pretty sure the -11 was more orthogonal, as it was generally better in all ways, being more or less a more orthogonal 16-bit version of the -8, sorta kinda conceptually handwavey.

      Anyway, PDP-11 assembly language aka "C" being executable on any other Turing complete machine ever made, at some admittedly minimal performance hit, and anything being writable in C, the only distinction among languages aside from syntactic sugar is the (now politically incorrect) BDSM limitations or Turing tarpit games.

      So instead of pretending all values are immutable in a C program to write in a functional paradigm, you actually have a BDSM preprocessor violently enforce that limitation on the programmer and call it "real functional". Or instead of not doing stupid pointer arithmetic that universally leads to errors, you have the BDSM preprocessor force you not to use pointers and call it Java.

      The syntactic sugar side is: if you're just going to write most of a LISP in C, which isn't that hard, and then write your code in the Lisp-like DSL, you may as well write the damn thing in LISP directly. Or C++ OO is basically an enforced, formalized documentation and pointer data structure for C; C++ mostly lives at the C preprocessor level of the compiler. So you can write perfectly object oriented paradigm code in C, although it's a pointless PITA.

      Anyway, all languages are just a syntactic sugar or BDSM variation on C, which is just PDP-11 assembly code. Which makes sense, because any Turing complete language can compute anything any OTHER Turing complete language can; there is no hierarchy of computation (at that level).

      So yeah you can write in any paradigm in C, it just might be a huge PITA but it can be done.
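
      A minimal sketch of the "OO is a pointer data structure for C" point above, assuming nothing beyond hand-rolled structs of function pointers (roughly what a C++ vtable amounts to):

          #include <stdio.h>

          /* Object-orientation by hand: data plus a table of function
           * pointers standing in for virtual methods. */
          struct shape;
          struct shape_ops { double (*area)(const struct shape *); };
          struct shape { const struct shape_ops *ops; double w, h; };

          static double rect_area(const struct shape *s) { return s->w * s->h; }
          static const struct shape_ops rect_ops = { rect_area };

          int main(void) {
              struct shape r = { &rect_ops, 3.0, 4.0 };
              printf("%f\n", r.ops->area(&r));  /* "virtual" dispatch prints 12.000000 */
              return 0;
          }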

      • (Score: 2) by zugedneb on Monday November 30 2015, @03:05PM

        by zugedneb (4556) on Monday November 30 2015, @03:05PM (#269747)

        You know you are old when you start to hate for real... Like, really really hate...

        So yeah you can write in any paradigm in C, it just might be a huge PITA but it can be done.

        So you take some knowledge of abstract algebra, logic and C.
        You define some rules of how to build a function from other functions, and write a parser for it that is written in C.

        The New Kids Of The Block says: whoa, that's not C anymore.

        I say, no bro, that is me knowing how to use C.

        By and large, this is how communism fails. People did not feel special enough, so they were not motivated. It would have failed without the cold war and the animal leaders also...

        In the end, people need the geniuses who hold the patents on the rounded corners on a tablet...

        --
        old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
        • (Score: 2) by lentilla on Monday November 30 2015, @04:00PM

          by lentilla (1770) on Monday November 30 2015, @04:00PM (#269759)

          I replied to one of your comments in a related thread, but now I begin to suspect I could have a decent guess at what some probable responses might be! You certainly have an interestingly complex world-view!

          The New Kids Of The Block says: whoa, that's not C anymore.

          And I'd probably be with them - whilst I haven't yet approached my "arthritic fingers" stage, I certainly wouldn't consider myself a New Kid.

          Programming, like many other disciplines, really boils down to "the right tool for the job". Certainly C figures strongly in that mix (and sometimes because it is the only tool for the job). But I would say that writing glorious towers of task-specific C code, worthy of an Obfuscated C Contest, is about as annoying as New Kids harping on about their gee-whiz Zed Triple-Plus language just invented yesterday. Both positions are arrogant and lacking in wisdom. At least the New Kids can be forgiven for not knowing better.

          You know you are old when [...]

          To echo a common refrain in many of these comments: programming is in its infancy. The Curmudgeons and the New Kids both have something to offer - after all, the ranks of the Curmudgeons are filled from yesterday's New Kids. No doubt there were similar arguments about C, circa 1980, except the opposition invoked paper tape and hand-optimised machine code.

          The future of programming will be a continuous iterative process of refactoring. There will be dead ends. We should look both forwards: to see what the New Kids come up with, and backwards: to prevent repeating past mistakes and to always ensure we build on a solid foundation. Not everything the New Kids come up with is rubbish - new ideas are the future and part of the growth and evolution of computing.

          • (Score: 2) by zugedneb on Monday November 30 2015, @04:59PM

            by zugedneb (4556) on Monday November 30 2015, @04:59PM (#269787)

            I like u =)

            But let me give an example of name-for-vanity: Lisp.

            A list is an algebraic structure.
            U define how to merge lists, put a list into a list, extract a list, or subtract a list.
            U use recursion.

            Now u got lisp. A language that deals with the syntax of list-within-lists.

            But, you can also do Trees.
            Graphs within graphs, where every node is a graph in itself. Useful when describing situations with processes within processes, as in biology or other simulations.

            All these are algebraic structures, and you can make a syntax dealing with them.
            If you find them sufficiently general, you can make a new programming language based on them. It is ok.

            But, do not think that you are into the new shit, or that you are an inventor.
            Everything a computer does is discrete math in all its glory...
            There is no need to pretend to be trendy or cool =)

            --
            old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
        • (Score: 2) by sjames on Monday November 30 2015, @08:42PM

          by sjames (2882) on Monday November 30 2015, @08:42PM (#269899) Journal

          The problem is, by your definition, Pascal, BASIC, FORTRAN, COBOL, and assembly are all functional languages.

          They are languages that can be used to implement a functional language compiler or interpreter (with varying degrees of difficulty and sanity). That's not the same as BEING a functional language.

          Java may be implemented in C. It has very C-like syntax. That does not mean Java == C.

    • (Score: 2, Funny) by nitehawk214 on Monday November 30 2015, @04:36PM

      by nitehawk214 (1304) on Monday November 30 2015, @04:36PM (#269778)

      It is still too early in the "history" of computation to say anything.
      Most of the people who were around in the "beginning" are still around...

      So you are saying that if we kill Donald Knuth he will become more powerful than we could possibly imagine?

      --
      "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
      • (Score: 2) by zugedneb on Monday November 30 2015, @04:52PM

        by zugedneb (4556) on Monday November 30 2015, @04:52PM (#269785)

        No, but computation has some paradigms...
        Knuth is a mathematician within what is called discrete and combinatorial mathematics.
        A lot, if not all of his work covers what can be done with a digital synchronous computer.

        There are other forms of hardware, of which the above-mentioned hardware is a subset.
        In 100 years, say, we will know what the really hard things are, and what works and not.
        In 500 even more so.

        --
        old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
        • (Score: 1) by nitehawk214 on Monday November 30 2015, @05:13PM

          by nitehawk214 (1304) on Monday November 30 2015, @05:13PM (#269797)

          I don't know if I can agree with you. Who says what the hard problems will be in 100 or 500 years?

          --
          "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
  • (Score: 3, Insightful) by Justin Case on Monday November 30 2015, @10:56AM

    by Justin Case (4239) on Monday November 30 2015, @10:56AM (#269668) Journal

    Obviously GUIs are not yet in the "Yes" column.

    I can't imagine any other reason why it is necessary to upend them every 6 months or so.

    • (Score: 4, Insightful) by Thexalon on Monday November 30 2015, @12:35PM

      by Thexalon (636) on Monday November 30 2015, @12:35PM (#269686)

      I can't imagine any other reason why it is necessary to upend them every 6 months or so.

      I can: Upending UI standards every 6 months or so is an excellent way of keeping UI designers employed! Imagine a world where there were a couple books' worth of things to learn and then anybody could come up with a good UI, and you'll see why the UI folks are constantly changing their mind.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 0) by Anonymous Coward on Monday November 30 2015, @04:57PM

        by Anonymous Coward on Monday November 30 2015, @04:57PM (#269786)

        That's not the reason: UI changes keep consumers buying new versions. No one pays to upgrade to the next version for more stability and security. You can't see stability and security. Customers don't understand under-the-hood improvements.

    • (Score: 1) by Illop on Monday November 30 2015, @04:42PM

      by Illop (2741) on Monday November 30 2015, @04:42PM (#269781)

      Because flat is in for this winter/spring. Next year the style will be pixelized gradient or some garbage. Does nothing for workflow productivity; in fact I was losing my window edges in KDE Plasma 5 due to no contrast whatsoever. Can't stand it, sticking with fluxbox, get off my lawn, etc etc etc......

      • (Score: 0) by Anonymous Coward on Monday November 30 2015, @07:26PM

        by Anonymous Coward on Monday November 30 2015, @07:26PM (#269863)

        Next year the style will be pixelized gradient or some garbage.

        That's a fun Soylent poll idea. "What do you predict the next UI design trend will be?"

        I don't think gradients will make a comeback soon, they went out of style in web design only a few years ago.

    • (Score: 2) by meisterister on Monday November 30 2015, @08:26PM

      by meisterister (949) on Monday November 30 2015, @08:26PM (#269892) Journal

      It's actually kind of funny to see them slide into the "no" column over time.

      In the 1990s, aside from possible improvements due to increasing screen resolution or storage space, UI research was done. There were exactly two big paradigms that people emulated: Windows 95 and Mac OS. Windows 95 was designed after years of expensive, iterative user testing and research (rather than some gut feeling or other such bullshit justification) and Mac OS' UI was mostly around due to its own legacy clout.

      The 9x UI lasted as the default for five releases of Windows (and was present all the way up to NT 6.2) and influenced a number of other operating systems as well.

      --
      (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
      • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @08:55AM

        by Anonymous Coward on Tuesday December 01 2015, @08:55AM (#270094)

        Props for using the internal name of the version of Windows that introduced deep DRM integration and UAC.

        I think they made the internal name match the marketing name for subsequent versions though. Apparently that is why Windows 9 was skipped: badly-coded software would detect it as Win9x.

    • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @08:19AM

      by Anonymous Coward on Tuesday December 01 2015, @08:19AM (#270091)

      GUIs helped "casual" users but hurt power users. Command-oriented interfaces are on average fewer keystrokes/movements and more scriptable (if done right).

  • (Score: 0) by Anonymous Coward on Monday November 30 2015, @12:03PM

    by Anonymous Coward on Monday November 30 2015, @12:03PM (#269679)

    What became visibly popular isn't necessarily what worked.

    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @12:36PM

      by Anonymous Coward on Monday November 30 2015, @12:36PM (#269687)

      systemd: It works, bitches!

  • (Score: 5, Interesting) by PizzaRollPlinkett on Monday November 30 2015, @12:11PM

    by PizzaRollPlinkett (4512) on Monday November 30 2015, @12:11PM (#269683)

    The issues facing us in 2016 are not technical. We have the technical ability to write software. There's something ... else ... going on.

    I upgraded to Fedora 22 recently. I've had Fedora machines up for months at a time, running KDE4, basically from one extended power outage to another. Since I upgraded, I barely have them up a few days before a crash, freeze, lockup, or something. It's the same hardware as before the upgrade, so it can't be my hardware. Linux and KDE5 are regressing back a decade in stability. What's going on? Why would stuff that works suddenly be ripped out and replaced? Then you get things like Firefox or Gnome which are completely ruined. One or two people don't like System D. (I'm kind of neutral about it, other than it isn't UNIX.)

    Android uses every design pattern, best practice, technique, and so on, but is a steaming pile of beta mess that barely works. Developing for Android is not fun or easy. All the stuff on that guy's list has not helped it one bit. It's ironically both overengineered and impossible at the same time.

    I also upgraded my laptop to the latest Mac OS, which is remarkably unusable. Things that used to be right there on the screen are hidden inside popups and other stuff. Apple used to get UI right, but they're regressing badly.

    To me, as an old guy who has been around a while, the past few years have seen giant regressions in stability and usability. It's like everything has turned upside down and all the progress over the past few decades is being undone. I haven't even gotten into one-page web sites or anything. We have more technology and better practices, but none of it seems to be helping.

    --
    (E-mail me if you want a pizza roll!)
    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @12:38PM

      by Anonymous Coward on Monday November 30 2015, @12:38PM (#269689)

      There's something ... else ... going on.

      You mean like Reptilians? I knew there was something off about Poettering.

    • (Score: 4, Insightful) by drgibbon on Monday November 30 2015, @02:05PM

      by drgibbon (74) on Monday November 30 2015, @02:05PM (#269719) Journal

      Well, you could always slow down [slackware.com] a little :) Slackware 14.2 is on the horizon, and there will be no KDE 5, simply because it's not ready yet. No systemd because it's not required and probably more hassle than it's worth (and it represents massive change). I use pretty new hardware (X99 board), and my Slack 14.1 system with KDE 4.14 is completely stable. It's not exactly exciting either, but it works. I expect the same will be true with 14.2 also (and no doubt there are other Linux distributions with a serious focus on stability).

      In the software world the "more! stuff! faster!" hamster wheel of the modern world is in general taking over, but honestly, I don't see it bringing all that much to the table.

      --
      Certified Soylent Fresh!
    • (Score: 2, Interesting) by Anonymous Coward on Monday November 30 2015, @02:44PM

      by Anonymous Coward on Monday November 30 2015, @02:44PM (#269735)

      There's something ... else ... going on.

      Yes it is called a lack of engineering.

      I recently contributed to a project (yeaaah! first time!). But the whole process was rather contorted. The source code is a hot mess. It is hundreds of small hacks upon hacks, upon libraries that are no longer supported, upon more hacks.

      You can build systems that way. They will work. They may even work pretty well. But underneath will be a large mess that no one can understand and few people will contribute to.

      It took me literally 2 months of investigation with this code to create the *tiny* one line fix I had. This bug had been in there for nearly 2 years. People were looking at other subsystems in the program to fix it. EVEN THOSE did not do the job properly.

      I will not name names on the project, because I think it can be fixed and I do not want to hurt feelings. But there is a large mess of code out there, and this project does not stand alone. I can see the others because this project pulls in many other projects, and in many cases those bits of code are very poorly written. Yet everyone depends on them. Very few actually look into them. Sure, I can bring up the code for the entire Linux stack, tools and all. But am I going to go digging into it and make sure it is right? It seems right. But unless something breaks I do not really care much. And even if it breaks, am I going to take the time to fix it? I have 50 other projects I want to work on.

      I have come across major system libraries that we all use. They have descriptive variable names like 'i/j/k', scope control problems, and interfaces that leak details up the stack. What the BSD dudes did to OpenSSL needs to happen across the board. Much of what we depend on was written by a few college kids who did not have the experience to say 'oh, do it this way vs that way'. You can apply this problem to *many* of our core libraries that we depend on every day. They 'work' so we don't mess with them. When the truth is they are full of security holes and need to be properly refactored into understandable code.

      When I say understandable I mean someone with a bit of skill can pick up the code base and have a good idea of the lay of the land in 1-2 weeks. Not some sort of style fight. I mean decently named variables and some guide posts of comments. Whatever new-age idiot thought comments were a bad idea should be beaten with their own keyboard. Comments tell me what you were thinking and whether that bit of gnarly code should be fixed. Usually when I hear 'no comments' I know the code will be jam packed with dumb things that no one but the author knows how they work. I am clever enough to figure it out. But why am I spending 3 hours unwinding 2-3 lines of code when a nice if/else would have compiled to the same thing? It is a poor use of my time.

    • (Score: 2) by opinionated_science on Monday November 30 2015, @03:03PM

      by opinionated_science (4031) on Monday November 30 2015, @03:03PM (#269746)

      Not wanting to troll, but what is making your machine so unstable? Let me put something out there: are you using a UPS? So many problems go away once power spikes are eliminated...

      I sit here typing this on Debian using KDE5 and plasma widgity things...

      • (Score: 2) by PizzaRollPlinkett on Tuesday December 01 2015, @12:11PM

        by PizzaRollPlinkett (4512) on Tuesday December 01 2015, @12:11PM (#270134)

        I don't know what's specifically going on, because in the morning my machine has frozen so there are no error messages. I know the exact same hardware ran older Fedora with KDE4 just fine, but since I upgraded to F22 and KDE5 it's been hard to keep these machines up for any length of time without lockups or crashes. I discovered how to restart KDE5's compositor without reloading my whole session, which helps, but I never had to do that with KDE4 and didn't even know how. I'm not sure why KDE5 is so unstable. After a few days, right-click menus, tooltips, and dialog boxes are solid black. My other machine freezes. Hopefully things will get more stable as time goes on, but KDE5 seems a strange leap backwards considering how great KDE4 was. Did they scrap the code and start over?

        Yes, I do use a UPS, but not for extended power outages because the batteries are so expensive I don't want to drain them.

        --
        (E-mail me if you want a pizza roll!)
        • (Score: 2) by opinionated_science on Tuesday December 01 2015, @05:18PM

          by opinionated_science (4031) on Tuesday December 01 2015, @05:18PM (#270258)

          Sorry you have extended outages!!! I think I have only had one every few years. I will say, though, having a UPS *just* for mobiles greatly saves the anxiety of an outage. The battery on a mobile device is so much smaller than the draw of a desktop!

          If you want to debug KDE5 (on topic) there are symbol libraries you can install and an option in KDE to "report crashes". If you turn that on, you might at least get an objective measure of "hang" or crash.

          I should add that a hang on my system was caused by webmin (an admin tool) calling parted (a disk partition tool) every 20 minutes to look for changed devices - I disabled it as this is a workstation. It took me a year to find, but it was definitely causing my machine to lock up!

    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @04:10PM

      by Anonymous Coward on Monday November 30 2015, @04:10PM (#269764)

      It's why they don't make things like they used to anymore.

      That profit model of providing quality was ruined when it was discovered that people will buy cheaply made, cheaper stuff more frequently when it's described as a value to them. Or, raise the price of the good stuff beyond the grasp of most, and few will buy something that lasts a long time; instead they end up with a "subscription" to a consumable product which used to enjoy a level of permanence.

      Now, we see software doing the same thing.

      People whined when it took five years for Vista to come out. I enjoyed the lull because it allowed for hardware to come out that could actually run XP well, many things were designed with knowledge on how XP worked as opposed to guessing, etc...

      Vista in hindsight was ahead of its time; by the time Windows 7 came out, the hardware to run Vista was common and 7 could run fine as a result (being similar OSes).

      Many of the changes you mention in free software are clearly new and different; there is a mix of people wanting to put their own stamp on it, as well as hooks for monetization. Amazon search results in local file searches in Ubuntu were one example; all of the crap built into Firefox was another.

      People began to praise Chrome despite it being a marketing delivery tool that also browsed the web. You couldn't even make a typo in the URL bar without it resulting in advertisements, and Google remembering that, rather than having made a mistake, you were interested in whatever typo you made.

      Yet it was uncluttered and clean, like how people describe the new AMD driver suite for their video cards (most of the complicated customization features are missing but it has social integration! Awesome just what I wanted my video driver to have! FACEBOOK ACCESS!)

      You should really try out Windows 10; I think you'll cry and go back to Linux.

      As for me, I never outgrew Windows 2000, but it was in that timeframe that I reached my professional peak of learning/data absorption, so I am good with that GUI (and the command line as well). I am sure people can work with tiles and ribbons, but I am not even old and I refuse to even try. It will slow me down, and I charge by the hour. It would make me money to be crippled for a while. But since it isn't clear why, and what benefits I would get from making these drastic OS changes to myself (aside from needing to then subscribe to the latest Office if I wanted to be consistent)...

      I just wish I was young, rich and good looking, so that I could retire.

      • (Score: 1) by WillR on Monday November 30 2015, @04:59PM

        by WillR (2012) on Monday November 30 2015, @04:59PM (#269788)

        But since it isn't clear why, and what benefits I would get from making these drastic OS changes to myself

        "Not getting OMGWTFPWNED by some botnet exploiting a bug that was patched a decade ago, but is still actively targeted thanks to all the other tech luddites who won't just fscking move on from Windows XP or Firefox 4 or whatever other unsupported old software they're clinging to" springs to mind...

        • (Score: 0) by Anonymous Coward on Monday November 30 2015, @06:10PM

          by Anonymous Coward on Monday November 30 2015, @06:10PM (#269817)

          Not the original AC, but:

          The problem is that new systems come pwned right from the factory.

          The GNU/Linux world does not appear to be immune (with systemd and interface redesigns).

          People get tired of being on an upgrade treadmill.

    • (Score: 3, Interesting) by theluggage on Monday November 30 2015, @05:16PM

      by theluggage (1797) on Monday November 30 2015, @05:16PM (#269799)

      There's something ... else ... going on.

      Yes - it's this: after 35 years of personal computers, most software ought to be pretty well "finished". Maybe whole new applications, unthinkable a few years ago, will come along and demand radical new approaches, but your wordprocessor, spreadsheet, text editor, 2d paint program, and vector editor are pretty much ready to have forks stuck in 'em. They work. They ain't broke, they don't need fixing, and the ones that DO need fixing only need fixing because some idiot in 2005 gratuitously built in a web browser or email client that is now a security hole.

      Unfortunately, that approach neither keeps big business in profit nor gets enthusiast open source programmers out of bed in the evening. So, lots of things that ain't broke get fixed anyway and (naturally) end up broken.

      I mean, back in the 1990s, when Linux was new and rapidly developing from a hack only a hobbyist could love into a fully-featured OS, then maybe you needed a new major version every 6 months... but it really should be settling down now.

      That, and the idea of automatic software updates (or update nag boxes) has gone mad. Folks: send me an update nag box if there is a real threat of instant remote pwnage. Otherwise I'll go looking for updates if/when I hit a bug, or when I have a spare day or two to try out a shiny new version without blowing a deadline.

      Oh and, please: the point of 3D buttons was to distinguish controls from content & decorations, the point of including rarely-used commands on menus was so that you could find them when you did need them (or tell others to use them), and who the f**k cares that the save icon still looks like a floppy disc when everybody knows what it means? I know, let's change the universally-recognised 'speed camera' roadsign - you know, the one that looks like something from the 1930s - to a plain black rectangle to reflect the design of modern cameras. Better, let's update the "Ladies' Lavatory" sign to reflect the fact that many women wear trousers or, I know, let's replace emoticons like ':-)' with full-colour icons so we can all get tied in knots about what race/gender they represent... Oh, wait, that last one actually happened.

             

      • (Score: 1) by termigator on Monday November 30 2015, @07:01PM

        by termigator (4271) on Monday November 30 2015, @07:01PM (#269848)

        > Oh and, please: the point of 3D buttons was to distinguish controls from content & decorations

        This. The flat crap style lacks visual cues on what can be activated and interacted with. The lack of boundaries imposes a larger cognitive load to distinguish what actions are available.

        Simple visual effects that provide a 3D aspect to an interface leverage our innate, 3D-based visual cognition to quickly perceive the different components of what we are looking at and the boundaries of actionable items.

        • (Score: 0) by Anonymous Coward on Monday November 30 2015, @08:34PM

          by Anonymous Coward on Monday November 30 2015, @08:34PM (#269894)

          Oh and, please: the point of 3D buttons was to distinguish controls from content & decorations

          This. The flat crap style lacks visual cues on what can be activated and interacted with.

          I have never used a touch OS that wasn't completely unintuitive. Everything is hidden behind obscure multi-finger motions and swipes, or 'if you press this for longer than normal, it does something different'-type buttons. What's wrong with File Edit View?

          • (Score: 2) by soylentsandor on Tuesday December 01 2015, @03:43PM

            by soylentsandor (309) on Tuesday December 01 2015, @03:43PM (#270204)

            I have never used a touch OS that wasn't completely unintuitive. Everything is hidden behind obscure multi-finger motions and swipes, or 'if you press this for longer than normal, it does something different'-type buttons. What's wrong with File Edit View?

            Maybe you should try a desktop OS for a change. And no, Windows 8 doesn't count.

    • (Score: 2) by urza9814 on Tuesday December 01 2015, @05:07PM

      by urza9814 (3954) on Tuesday December 01 2015, @05:07PM (#270252) Journal

      I upgraded to Fedora 22 recently. I've had Fedora machines up for months at a time, running KDE4, basically from one extended power outage to another. Since I upgraded, I barely have them up a few days before a crash, freeze, lockup, or something. It's the same hardware as before the upgrade, so it can't be my hardware. Linux and KDE5 are regressing back a decade in stability. What's going on? Why would stuff that works suddenly be ripped out and replaced? Then you get things like Firefox or Gnome which are completely ruined. One or two people don't like System D. (I'm kind of neutral about it, other than it isn't UNIX.)

      When KDE4 was first released, I recall reading the exact same complaints for months and months. KDE3 was rock stable, and KDE4 was buggy, crashing garbage that regressed in stability. And now here you are using KDE4 as the gold standard of stability and saying the brand new KDE5 is crap.

      Is the problem really that the new software sucks, or is the problem that your distro is shipping defaults that you don't feel are ready yet?

  • (Score: 1) by loic on Monday November 30 2015, @12:59PM

    by loic (5844) on Monday November 30 2015, @12:59PM (#269690)

    Just so that this guy knows, nowadays, Intel CPUs are basically RISC CPUs with dynamically loaded microcode which creates some kind of virtual specialization. The high level instructions are CISC, but the microcode stuff is RISCish.

    • (Score: 2) by TheRaven on Monday November 30 2015, @01:18PM

      by TheRaven (270) on Monday November 30 2015, @01:18PM (#269698) Journal
      Given that 'no microcode' was one of the core design goals of RISC, I can't help feeling that you're missing the point.
      --
      sudo mod me up
      • (Score: 0) by Anonymous Coward on Monday November 30 2015, @01:50PM

        by Anonymous Coward on Monday November 30 2015, @01:50PM (#269714)

        The real value of RISC existed when chip real estate meant that you had little room to spare and had to use the existing transistors/gates as efficiently as possible. RISC had a performance advantage over CISC on small chips but pushed a little bit of the complexity of the code to the compiler instead of the CPU.

        Now that there are a bazillion transistors per CPU and even complex instructions can execute in a single clock tick, there is less of an advantage to RISC. If you look at the fact that Intel is making chips with 4 times the areal density that ARM uses, you can see that ARM chips can still leverage RISC to some extent. OTOH, if you want elebenty-bazzillion cores/chip, RISC will still be useful.

        In the future, RISC will have little advantage, except in the eyes of nerds that obsess over minutiae.

        • (Score: 2) by TheRaven on Monday November 30 2015, @06:23PM

          by TheRaven (270) on Monday November 30 2015, @06:23PM (#269827) Journal

          You would be correct, if not for one minor point: power consumption. The decoder is one bit of the CPU that you have to have powered whenever you're executing any instructions. Xeons avoid this slightly by caching micro-op traces in hot loops, but generally you're taking the power consumption hit from a complex decoder all of the time. The only way for CISC to recover this is by having a sufficiently dense encoding that you can get away with a smaller L1 and maintain the same L1 hit rate as a RISC chip.

          This is particularly important if you remember that we hit the end of Dennard Scaling a few generations ago. The number of transistors per die is still going up, but the number that you can power (for a given power budget) is barely moving. This means that the best use of transistors is to provide functionality that gives a big speedup when in use, but can be powered down when not. This means that the current best power/performance comes from SoCs with lots of specialised (RISC) accelerator cores.

          --
          sudo mod me up
      • (Score: 0) by Anonymous Coward on Monday November 30 2015, @03:19PM

        by Anonymous Coward on Monday November 30 2015, @03:19PM (#269753)

        My understanding is that microcode is used to execute the legacy CISC instructions which today's compilers don't generate anymore. Basically the x86_64 is a RISC chip with a smaller-than-usual number of registers, a legacy segmented address model that nobody uses, and legacy instructions which are supported by microcode.

        • (Score: 2) by TheRaven on Monday November 30 2015, @06:25PM

          by TheRaven (270) on Monday November 30 2015, @06:25PM (#269828) Journal
          Your understanding is completely wrong. Very few of the newer x86 instructions map to a single micro-op (especially when you consider things like transactional memory). On top of that, Intel chips also do micro-op fusion, so they're combining RISC micro-ops into more complex operations to actually execute. That's ignoring the really complicated things that the microcode does, such as recognise memcpy-like loops and replace them with microcoded fast paths (unless there is a hardware breakpoint or watchpoint active that may trigger, which is why you can sometimes see very different performance characteristics when code is running under a debugger). Modern x86 chips are still very much CISC chips.
          --
          sudo mod me up
          • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @03:46AM

            by Anonymous Coward on Tuesday December 01 2015, @03:46AM (#270014)

            I'll look into it more then. Thanks.

      • (Score: 2) by theluggage on Monday November 30 2015, @04:20PM

        by theluggage (1797) on Monday November 30 2015, @04:20PM (#269773)

        Given that 'no microcode' was one of the core design goals of RISC, I can't help feeling that you're missing the point.

        Don't think of it as microcode: think of it as a hardware implementation of SoftWindows [wikipedia.org] or whatever other kludgey PC emulator you ended up needing on your lovely RISC machine to run those pesky bits of legacy DOS code that you just couldn't work around.

  • (Score: 0) by Anonymous Coward on Monday November 30 2015, @01:14PM

    by Anonymous Coward on Monday November 30 2015, @01:14PM (#269695)

    I knew this place was full of neckbeards but I am shocked to have such solid confirmation that nobody here learned anything about programming since the 70s.

    This place is specifically hostile to anything that wasn't hacked together in a basement using nothing higher-level than pre-1989 C.

    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @04:14PM

      by Anonymous Coward on Monday November 30 2015, @04:14PM (#269767)

      I have been computing since the 80s, and I think the list is pretty stupid.

      Distributed computing and Security are on the no list?

      I guess the harder it is the less people like it.

      It sounds like the guy was an alpha male/prima donna where he worked and would have nothing to do with anything he wasn't good at, out of fear he would no longer be seen as any good and would be forgotten.

      We at least have this list--which I propose should be promptly forgotten.

  • (Score: 0) by Anonymous Coward on Monday November 30 2015, @02:54PM

    by Anonymous Coward on Monday November 30 2015, @02:54PM (#269739)

    Distributed systems and functional programming are now in the "Maybe" column.

    RISC is now a "Yes".

    Some would argue that RDB and SQL are now a "Maybe".

    • (Score: 1) by Myrddin Wyllt on Monday November 30 2015, @11:37PM

      by Myrddin Wyllt (5849) on Monday November 30 2015, @11:37PM (#269956)

      Some would argue that RDB and SQL are now a "Maybe".

      Some might argue that they're no longer the only game in town, but it would be hard to make a case for removing them from the 'Yep, that works!' pile.

      Better to stick a new entry for NoSQL etc. in the 'Maybe' pile.

      • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @01:15AM

        by Anonymous Coward on Tuesday December 01 2015, @01:15AM (#269972)

        It's too early to say if NoSQL is upending most RDBMS. The fact that NoSQL is successful in some cases doesn't give us enough data points to know if it's a nice niche product or if it's time for RDBMS to go. There are also lots of failure stories as an org matures and realizes it wants some of RDBMS's "protections".

        It seems people like some features of RDBMS and some features of NoSQL, but neither is a clear winner. Maybe a third kind of thing that takes the best of both will emerge. In that case, RDBMS will not have "died", but rather evolved. (I myself would like to see more experiments with "Dynamic Relational", which is kind of a hybrid between the "stiff" current RDBMS and relaxed structuring.)

  • (Score: 0) by Anonymous Coward on Monday November 30 2015, @02:55PM

    by Anonymous Coward on Monday November 30 2015, @02:55PM (#269742)

    And if you had any idea WHO he is, you wouldn't have said something this stupid, kid.

    • (Score: 0) by Anonymous Coward on Monday November 30 2015, @04:32PM

      by Anonymous Coward on Monday November 30 2015, @04:32PM (#269777)

      you wouldn't have said something this stupid, kid.

      Psssh... nothin personnel... kid.

    • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @10:15AM

      by Anonymous Coward on Tuesday December 01 2015, @10:15AM (#270110)

      And here we have the ultimate exemplar of the logical fallacy of Appeal to Authority.

      Not just that, but you also demonstrate having *precisely no brain*, as the comment you refer to is not addressing the Lampson list from 1999 but the article which claims to update it, which is written by someone who refers to Lampson in the third person, and so isn't Lampson.

  • (Score: 3, Informative) by purpleland on Monday November 30 2015, @04:09PM

    by purpleland (5193) on Monday November 30 2015, @04:09PM (#269762)

    Functional programming concepts have served me very well for over twenty years, and it is good to see them making their way into more mainstream languages like Java.

    My favorite example is how concisely you can describe quicksort in Haskell compared with C - see http://stackoverflow.com/questions/7717691/why-is-the-minimalist-example-haskell-quicksort-not-a-true-quicksort/ [stackoverflow.com] for example and discussion. If you do a lot of work with algorithms, prototyping in functional programming is a fantastic tool - it helps you think in a more abstract fashion about how an algorithm should behave and not (yet) get bogged down by implementation specifics like memory management.

    Modern functional programming frameworks like https://www.playframework.com/ [playframework.com] are excellent - well worth tinkering with if you aren't aware of them. If you're doing work in Java, one interesting approach is to prototype and unit test in Scala.
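
    For contrast, the C side of that comparison, assuming nothing beyond the standard library's qsort: even with the sorting itself delegated, you still write the comparator and element-size plumbing that the Haskell one-liner hides:

        #include <stdio.h>
        #include <stdlib.h>

        /* Comparator for qsort: returns <0, 0, >0 without risking overflow. */
        static int cmp_int(const void *a, const void *b) {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }

        int main(void) {
            int v[] = {3, 1, 4, 1, 5, 9, 2, 6};
            size_t n = sizeof v / sizeof v[0];
            qsort(v, n, sizeof v[0], cmp_int);
            for (size_t i = 0; i < n; i++)
                printf("%d ", v[i]);
            printf("\n");
            return 0;
        }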

  • (Score: 0) by Anonymous Coward on Monday November 30 2015, @05:26PM

    by Anonymous Coward on Monday November 30 2015, @05:26PM (#269803)

    If Software Engineering is high level management and delivery processes, sure, that's just a pain. But this constant pretending that what a lot of us are doing is computer science, with the attendant interview questions, isn't helpful either. Engineering is a lofty term, but I'm not talking about trivial web monkeying either; there is some practice of software development that calls for best practices, measurements, comparative study, specialized instruction.

    I know it's not as interesting as showing how terse and immutable you can make a fibonacci generator, but it's what a lot of us do for our day jobs, and it deserves better than random books on learning this or that language, or OO books that are a walk through an idyllic Smalltalk paradise with ideas that shatter against our workaday Java applications.

    This is what I thought the Software Engineering movement was about: providing a structure for understanding this practical application of the fruits of computer science, one that stands apart from it and requires more understanding of how to structure, plan, and implement a working system, and very little about red-black trees.

  • (Score: 2) by hendrikboom on Monday November 30 2015, @06:16PM

    by hendrikboom (1125) Subscriber Badge on Monday November 30 2015, @06:16PM (#269822) Homepage Journal

    The trouble with functional programming is fundamentalism. There's a fundamentalist academic stream of functional programmers who have gone to great lengths to manage to program without side effects. They have been trying for a long time to monopolise the phrase 'functional programming' as a marketing term for "side-effect-free", rather than its more natural meaning of 'programming using functions as first-class values'.

    If you go by the fundamentalist meaning, functional programming is dead. The academics are able to achieve great things with enormous effort in languages like Haskell, you get very effective abstractions, and there are a lot of practical lessons to be learned from it, but it will never take over in an everyday practical way.

    But if you look at the less purist, less fundamentalist functional programmers, you find programmers that do use side effects. They use variables that change their values, even! And they use languages like Lisp and OCaml, and can achieve comprehensibility and efficiency without hugely arcane and inscrutable optimisation going on behind the scenes.

    So with the fundamentalist view of 'functional', I agree that it is not ready for most practical programming, although it does have a few lessons to teach.

    But with the more liberal view, it *is* ready for most practical programming, although there are niche applications that require more detailed control of the hardware.

    Most programmers are so turned off by the fundamentalist propaganda that they never get to see the value of liberal functional languages like OCaml.

    -- hendrik

    • (Score: 2) by Marand on Monday November 30 2015, @11:05PM

      by Marand (1081) on Monday November 30 2015, @11:05PM (#269947) Journal

      Most programmers are so turned off by the fundamentalist propaganda that they never get to see the value of liberal functional languages like OCaml.

      This was the case with me for a very long time, unfortunately. Early on, before I even knew what functional programming was, I had been using a lot of FP practices and style when writing Perl code. For example, I always made subs that returned values, avoiding side effects where possible, and I got a lot of use out of first-class functions (stuffing sub refs in variables, passing them around, etc.) despite never having heard of the term at the time or knowing it was a special thing. I just thought every language was supposed to work like that, and was surprised when I started using more languages and realising I couldn't do all the cool things I expected.

      But later, when I learned about functional programming, I dismissed it for a long time because the fundamentalists you mention made me think FP only meant pure-functional, which made the whole thing seem clunky and academic. I've spent years not liking the OOP-heavy style that's taken over because it didn't mesh well with my way of thinking, not realising I could have been using more practical functional languages that work closer to how I think.

  • (Score: 2) by Gravis on Monday November 30 2015, @06:34PM

    by Gravis (4596) on Monday November 30 2015, @06:34PM (#269830)

    Garbage collection has not worked, as evidenced by every admin who has ever had to restart a daemon written in Java because it ate all the memory. It's a pretty common issue if you don't have hawk-eyed Java coders, which are a rare bird for sure.

    • (Score: 2) by hendrikboom on Monday November 30 2015, @07:11PM

      by hendrikboom (1125) Subscriber Badge on Monday November 30 2015, @07:11PM (#269856) Homepage Journal

      And the advantage of not having garbage collection is that your non-hawk-eyed programmer will never reach the level of reliability in storage allocation and deallocation where you would even consider using the code. You'd be restarting the daemon over and over and over again after crashes and rarely getting to use it.

      You'd need to hire hawk-eyed programmers, who might be expensive.

      Seriously, garbage collection does not solve all storage allocation issues. In particular, it will not solve the problem of an unboundedly growing data structure.
      It will solve the problem of algorithms (such as those used in computer algebra) in which the question of when storage is still needed becomes arcanely complicated.

      But even if you are not doing computer algebra, it can save a lot of development time -- the time you are not spending tracking down dangling pointers. But yes, you still have to watch out for building infinite data structures. That'll kill the project on any machine with finite memory, garbage collected or not.

      • (Score: 2) by Gravis on Monday November 30 2015, @09:51PM

        by Gravis (4596) on Monday November 30 2015, @09:51PM (#269925)

        And the advantage of not having garbage collection is that your non-hawk-eyed programmer will never reach the level of reliability in storage allocation and deallocation where you would even consider using the code. You'd be restarting the daemon over and over and over again after crashes and rarely getting to use it.

        Exactly, which means it won't get off the dev server. It's sink or swim, but garbage collection is a life preserver for someone who is "inexplicably" becoming heavier. It keeps you afloat... until it doesn't.

        You'd need to hire hawk-eyed programmers, who might be expensive.

        I disagree. Memory management isn't all that hard, and there are many free tools to track down issues in languages like C and C++. However, you are never going to get people to do proper memory management if it seems like it's just magic.
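
        As an illustration of the kind of issue those tools catch, here is a minimal, hypothetical sketch of a common C leak (an early return that skips free()), shown with the usual fix of a single owning exit path; a checker such as valgrind --leak-check=full would flag the unfixed version:

            #include <stdlib.h>
            #include <string.h>

            static int process(const char *input) {
                char *buf = malloc(strlen(input) + 1);
                if (buf == NULL) return -1;
                strcpy(buf, input);

                int rc = 0;
                if (buf[0] == '\0')
                    rc = -1;        /* was: return -1;  -- which leaked buf */

                free(buf);          /* one owner, one release */
                return rc;
            }

            int main(void) { return process("hello") == 0 ? 0 : 1; }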

  • (Score: 0) by Anonymous Coward on Monday November 30 2015, @08:24PM

    by Anonymous Coward on Monday November 30 2015, @08:24PM (#269890)

    Duh
    Virtual memory
    Address spaces
    Algorithms

    Yes
    Packet nets
    Web
    Parallelism
    RISC
    RPC/Distributed computing
    Security
    Transactions
    Fancy type systems
    Reuse
    RDB

    Maybe
    Objects/subtypes
    SQL
    Bitmaps and GUIs
    Garbage collection
    Capabilities (?)
    Functional programming
    Formal methods
    Software engineering

    • (Score: 0) by Anonymous Coward on Thursday December 03 2015, @01:51AM

      by Anonymous Coward on Thursday December 03 2015, @01:51AM (#271097)

      In what sense are bitmaps not an unquestionable success? You're looking at a raster display right now, aren't you?

  • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @01:00AM

    by Anonymous Coward on Tuesday December 01 2015, @01:00AM (#269967)

    Regarding objects/OOP: I'd have to give it a "yes" with a big footnote. It turned out NOT to work well when used as originally described, which is mostly domain modelling. However, it has been relatively successful at computation-side frameworks, such as API's for system services, networking, databases, GUI's etc. People used to pull their hair out trying to use it for non-trivial domain modelling.

    (However, large GUIs are too complex for OOP in my opinion, and perhaps we should look at using an RDBMS or something similar for bulk GUI attribute management and event dispatching. Different tasks need different virtual groupings and/or filterings of GUI info. OOP doesn't give you that, at least not without reinventing a database of some kind.)

    Regarding both FP and parallelism, I put them into the category of "usability problems" similar to this quote from the article:

    "I take Lampson’s view, which is that if the vast majority of programmers are literally incapable of using a certain class of technologies, that class of technologies has probably not succeeded."

    Both FP and parallel programming have been difficult for many rank-and-file developers to master. They seem to go against natural human instincts. "You are just not trying and studying hard enough" doesn't fly because other successful technologies didn't require overhauling one's mind.

    (There will always be niches that need experts in such things, but I'm generally ignoring niches here unless the technologies were originally presented as niche-specific, in which case they probably shouldn't make a "general IT" list like the one being described.)

    As far as parallel programming goes, in practice it's often taken care of by the RDBMS. An RDBMS automatically parallelizes many tasks without the query writer having to think much about it, other than perhaps handling ACID and transaction-related issues. Not all tasks can take advantage of the RDBMS's built-in parallelism, but a good many can, and perhaps even more could if RDBMSs added certain features, such as graph traversal, or if one learns how to rework common "computer science" algorithms to be RDBMS-friendly.
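
    A minimal sketch of that point, assuming a hypothetical orders(region, amount) table and a PostgreSQL JDBC driver on the classpath (any RDBMS would do): the aggregation is expressed declaratively, and whether the scan, grouping, and sort actually run in parallel is the engine's decision -- the caller never touches a thread.

      import java.sql.*;

      // Hypothetical example: the query writer states *what* is wanted; the
      // database engine decides *how* (including any parallel execution).
      public class RegionTotals {
          public static void main(String[] args) throws SQLException {
              String url = "jdbc:postgresql://localhost/shop"; // invented example database
              String sql = "SELECT region, SUM(amount) AS total "
                         + "FROM orders GROUP BY region ORDER BY total DESC";
              try (Connection c = DriverManager.getConnection(url);
                   Statement s = c.createStatement();
                   ResultSet rs = s.executeQuery(sql)) {
                  while (rs.next()) {
                      System.out.println(rs.getString("region") + ": " + rs.getLong("total"));
                  }
              }
          }
      }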

    • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @04:54PM

      by Anonymous Coward on Tuesday December 01 2015, @04:54PM (#270242)

      Clarifications:

      Re: "...when used as originally described, which is mostly domain modelling"

      Should be "which mostly was domain modeling". An example of domain modelling would be "composition" modelling of an organization hierarchy such as:

      -Organization
      -- Region
      --- Office
      ---- Department
      ----- Employee
      ------ Projects
      ------- Sub-tasks
      etc...

      Generally, OOP has been messy when used to model such "business objects". One often needs operations or views that don't fit the hierarchy. It's similar to the reason RDBMSs supplanted "network" databases and hierarchical databases.
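
      A small Java sketch of why, with invented class names mirroring the hierarchy above: the composition is easy to write down, but any view that cuts across it forces a walk of the whole tree, which is exactly the kind of question an RDBMS answers with a join and a WHERE clause.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical "composition" model of the hierarchy above.
        class Organization { List<Region> regions = new ArrayList<>(); }
        class Region       { List<Office> offices = new ArrayList<>(); }
        class Office       { List<Department> departments = new ArrayList<>(); }
        class Department   { List<Employee> employees = new ArrayList<>(); }
        class Employee     { String name; List<String> projects = new ArrayList<>(); }

        class CrossCuttingQuery {
            // "Everyone on project X, regardless of region/office/department"
            // doesn't fit the hierarchy: we have to traverse all of it.
            static List<Employee> onProject(Organization org, String project) {
                List<Employee> result = new ArrayList<>();
                for (Region r : org.regions)
                    for (Office o : r.offices)
                        for (Department d : o.departments)
                            for (Employee e : d.employees)
                                if (e.projects.contains(project))
                                    result.add(e);
                return result;
            }
        }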

      Re: "...rework common "computer science" algorithms to be RDBMS-friendly."

      An example would be reworking the "A* search algorithm" to use more of an RDBMS's native abilities to search, filter, join, and sort in parallel. How to do that, I don't really know; it's an example of the kind of research and pondering needed.

      In some cases perhaps a "lossy" version of an algorithm can be devised such that it's not a perfect match to the non-RDBMS version but is "good enough". AI and optimization problems, like the "traveling salesman" problem, could be an area where some "imperfection" is acceptable in order to take advantage of the abilities of existing RDBMSs (and their parallelism) to reduce the calculation time.

  • (Score: 2) by naubol on Tuesday December 01 2015, @01:18PM

    by naubol (1918) on Tuesday December 01 2015, @01:18PM (#270156)

    Capabilities -- What, like OpenGL uses?
    Fancy type systems -- Things like Scala are powering some of the world's finance software now
    Functional programming -- Uhh, functional is used everywhere and being incorporated into C++11 and Java 8
    Formal methods -- Seriously? Used in all sorts of science applications where things have to be right
    Software engineering -- hahahahaha
    RPC -- zer mer gawds, the internets is built on this! of course this works!
    Distributed computing -- google? netflix? amazon? all the banks!
    Security -- ok, well, you can say this works in the same way that real-world security works: it doesn't prevent all the crimes, but it probably prevents most of them

    Oh, give me that old time religion.

    • (Score: 0) by Anonymous Coward on Tuesday December 01 2015, @05:01PM

      by Anonymous Coward on Tuesday December 01 2015, @05:01PM (#270246)

      functional is used everywhere and being incorporated into C++11 and Java 8

      In my opinion it's partly because functional is in "fad mode" right now, and partly because Java has a poor OOP model that makes it difficult to "attach" event handlers to objects. If it had a better OOP model, lambdas and the like wouldn't be needed to attach events; FP is compensating for weak areas of the language. (I haven't really looked into this issue with C++.)
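
      For concreteness, a small sketch of the "attach event handlers to objects" case (Swing used purely as a familiar example): the pre-Java-8 anonymous inner class and the Java 8 lambda do the same job, and the lambda is the piece of "functional" syntax the parent comment is pointing at.

        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;

        public class LambdaHandlerDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("demo");
                    JButton button = new JButton("Click me");

                    // Pre-Java-8 style: verbose anonymous inner class.
                    button.addActionListener(new java.awt.event.ActionListener() {
                        @Override
                        public void actionPerformed(java.awt.event.ActionEvent e) {
                            System.out.println("clicked (inner class)");
                        }
                    });

                    // Java 8 style: a lambda standing in for the one-method interface.
                    button.addActionListener(e -> System.out.println("clicked (lambda)"));

                    frame.add(button);
                    frame.pack();
                    frame.setVisible(true);
                });
            }
        }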