
posted by Fnord666 on Monday July 17 2017, @03:27PM   Printer-friendly
from the swear-on-a-stack-of-K&Rs dept.

At The Guardian, Cathy O'Neil writes about why algorithms can be wrong. She classifies the reasons into four categories on a spectrum ranging from unintentional errors to outright malfeasance. As algorithms now make a large portion of the decisions affecting our lives, scrutiny is ever more important, and she provides multiple examples of their impact in each category.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by meustrus on Monday July 17 2017, @04:22PM (4 children)

    by meustrus (4961) on Monday July 17 2017, @04:22PM (#540369)

    I agree with your assertion that you can't cover all cases. But you're wrong about one very important thing:

    Automated testing is an adjunct to manual testing and other verification methods, not a replacement

    Automated testing is not distinct from "manual" testing. You run manual tests to discover things about the system, then you turn your manual tests into automated tests. Otherwise, you have wasted your time, because the next release (which for a continuously integrated product like most big companies are producing will be out there tomorrow) could fail the manual tests you just did and you wouldn't know.
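    One way to picture "turning a manual test into an automated one" is to capture the exact checks a tester would run by hand as assertions. A minimal sketch in C; the `discount_percent` function is hypothetical, standing in for whatever behavior the manual test exercised:

    ```c
    #include <assert.h>

    /* Hypothetical function under test. */
    static int discount_percent(int quantity)
    {
        return quantity >= 10 ? 15 : 0;
    }

    int main(void)
    {
        /* The same checks a tester would perform by hand once,
           captured so every future release re-runs them. */
        assert(discount_percent(9) == 0);   /* below threshold: no discount */
        assert(discount_percent(10) == 15); /* at threshold: bulk discount  */
        return 0;
    }
    ```

    The point is that the manual exploration finds the interesting cases, and the assertions preserve that knowledge so tomorrow's build can't silently regress.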

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
  • (Score: 0) by Anonymous Coward on Monday July 17 2017, @06:00PM (3 children)

    by Anonymous Coward on Monday July 17 2017, @06:00PM (#540434)

    QA needs to be a separate entity from development, not on a lower level, just a separate level.

    You are implying that QA won't know how the system is supposed to work because it is changing, and that the automated tests will bear most of that burden. This really takes QA out of the software lifecycle loop. They need to be the final stage, and updating their tests is part of the SDLC. And it's not as if automated tests are always complete and up to date, where you write them once and that work is finished. OH NO... the automated tests can still give a result of "passes" after a code change, by coincidence, when it should actually fail. This is because automated tests, as I said before, do not test everything. There is no way they can.

    An important fact that automated test evangelists leave out is the maintenance cost of those tests, which must be updated and debugged just like the rest of the code. If you automate TOO much, you lose productivity to excessive test code work instead of the system you are getting paid to deliver. Yes, too much automated testing is as bad as too little. Automated tests are brittle--it's the nature of the beast. The more they test, the more brittle they are. It's a judgement call as to when the optimum test coverage has been reached.

    • (Score: 0) by Anonymous Coward on Tuesday July 18 2017, @03:46AM (1 child)

      by Anonymous Coward on Tuesday July 18 2017, @03:46AM (#540756)

      I felt the way you did, until I learned about "test driven development".

      Rules:

      • Write the Test case first. Stop when you get an error.
      • Write the code that passes the test case. Stop when you pass the test.
      • Repeat

      The above algorithm, if followed religiously, results in full test coverage.
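      The red-green cycle above can be sketched in C. The `add` function here is purely illustrative; step 1 is writing the assertion before the function exists and watching the build fail, step 2 is writing just enough code to make it pass:

      ```c
      #include <assert.h>

      /* Step 2: the minimal code that satisfies the test below.
         (Before this function existed, the test failed to link --
         that was the "stop when you get an error" step.) */
      static int add(int a, int b)
      {
          return a + b;
      }

      int main(void)
      {
          /* Step 1: the test case, written first. */
          assert(add(2, 2) == 4);
          /* Step 3: repeat with the next requirement. */
          assert(add(-1, 1) == 0);
          return 0;
      }
      ```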

      • (Score: 0) by Anonymous Coward on Tuesday July 18 2017, @10:02AM

        by Anonymous Coward on Tuesday July 18 2017, @10:02AM (#540887)

        The above algorithm, if followed religiously, results in full test coverage.

        You are aware that full test coverage for

        unsigned times(unsigned a, unsigned b) { return a*b; }

        would, on a 32-bit architecture, consist of 2^64 test cases?
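        Since exhaustively enumerating two 32-bit inputs is infeasible, a test for `times` can only sample the input space; a common compromise is boundary and overflow cases. A sketch, assuming 32-bit `unsigned` (so arithmetic wraps modulo 2^32):

        ```c
        #include <assert.h>
        #include <limits.h>

        unsigned times(unsigned a, unsigned b) { return a * b; }

        int main(void)
        {
            /* Full coverage would be 2^64 cases; instead, sample the
               boundaries of the input space. */
            assert(times(0, UINT_MAX) == 0);
            assert(times(1, UINT_MAX) == UINT_MAX);
            /* Unsigned overflow is well-defined and wraps:
               with 32-bit unsigned, UINT_MAX * 2 == UINT_MAX - 1. */
            assert(times(UINT_MAX, 2) == UINT_MAX - 1);
            return 0;
        }
        ```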

    • (Score: 2) by meustrus on Tuesday July 18 2017, @02:56PM

      by meustrus (4961) on Tuesday July 18 2017, @02:56PM (#540954)

      the automated tests can still give a result of "passes" after a code change by coincidence when it should actually fail.

      Sure, if your tests are too low-level. But I'm advocating for system-level acceptance tests, not unit tests. The problem with those is far more often that they tend to give too many false negatives, due to brittle test frameworks and the difficulty in getting a machine to simulate human interactions with the product interface. And tests can never cover the interface itself, since organization, positioning, and styling are all visual elements affected by human psychology more than anything else. UI/UX testing and verification is another thing that will need to be done manually. But with appropriate acceptance tests, you can still be sure that the interface performs as it was designed to.

      An important fact that automated test evangelists leave out is the maintenance cost of those tests which must he updated and debugged just like the rest of the code. If you automate TOO much, you lose productivity to excessive test code work instead of the system you are getting paid to deliver.

      And if your tests are that complicated and difficult to maintain, that's because making changes to your system has unpredictable consequences. That's its own problem, and it needs something more drastic than testing to fix. If you make a change and half your tests break, you might be annoyed that you have to go fix a bunch of tests. But I'd be more concerned that without those tests, we'd all be spending the next year discovering bugs and losing productivity to fixing them as they come up as "must fix NOW".

      Not only that, but you are potentially losing business value, be it customer retention, marketing success, legal exposure, or actual ability to make sales. You might not feel that pain, but those things affect the ability of the business itself to function and threaten your job and everyone else's. The tests didn't make you lose productivity. They just made you deal with your chaos monkey of a system before it caused real problems, instead of letting those problems simmer until they either demand your nights and weekends to fix or destroy the business.

      --
      If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?