At The Guardian, Cathy O'Neil writes about why algorithms can be wrong. She classifies the reasons into four categories on a spectrum ranging from unintentional errors to outright malfeasance. As algorithms now make a large portion of the decisions affecting our lives, scrutiny is ever more important, and she provides multiple examples in each category of their impact.
(Score: 2) by meustrus on Monday July 17 2017, @03:49PM (17 children)
An algorithm is a mathematical description of a solution to a problem. It is, ultimately, a shortcut to something a human could do with great effort. However, one thing that gets ignored with the algorithm is verification. That's fine if the algorithm is correct, but how do you know that beforehand?
The answer is automated testing. This is a necessary step to verify your algorithm meets real-world needs, but it is also still ignored by much of the industry as "too costly". Not only that, but among those that do maintain automated tests often don't make them complete enough to cover every known scenario, let alone eliminate dangerous unknowns.
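As a minimal sketch of what "covering every known scenario" means in practice (the `median` function here is a made-up example, not anything from the article), an automated test pins each known behavior down as an assertion:

```python
# Hypothetical example: the "algorithm" under test is a simple
# median function; each assertion encodes one known scenario.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:  # odd count: take the middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: average the two middle

# Known scenarios, including edge cases, verified on every run.
assert median([3]) == 3              # single element
assert median([1, 2, 3, 4]) == 2.5   # even count
assert median([5, 1, 9]) == 5        # unsorted input
assert median([-2, -2, 0]) == -2     # duplicates and negatives
```

The point is that once a scenario is written down like this, it is re-verified for free on every future change.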
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
(Score: 1, Insightful) by Anonymous Coward on Monday July 17 2017, @04:12PM (9 children)
This is something that automated test evangelists get VERY WRONG. It is not possible, not even in theory, to cover all cases with automated tests. The combinatorial explosion of all the different existing states or inputs that could result in a given reached state means the number of test cases is impossibly high to cover them all. Instead, you must settle for testing a subset. That's the general case; for a special case, it *might* be possible to cover all the inputs. Automated testing is an adjunct to manual testing and other verification methods, not a replacement, and not a magic bullet.
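One common way to settle for a subset (sketched here in Python; the `clamp` function and the invariant checked are illustrative, not from the thread) is to sample random inputs and verify a property that must hold for all of them:

```python
import random

def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

random.seed(0)  # reproducible random subset of the input space
for _ in range(10_000):
    # Sample three 32-bit signed integers; exhaustive coverage would be 2**96 cases.
    x, lo, hi = (random.randint(-2**31, 2**31 - 1) for _ in range(3))
    if lo > hi:
        lo, hi = hi, lo
    result = clamp(x, lo, hi)
    # Invariant: the result always lies within the interval.
    assert lo <= result <= hi
```

This tests a tiny fraction of the input space, which is exactly the point: a well-chosen subset plus an invariant buys confidence without pretending to be exhaustive.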
(Score: 3, Insightful) by meustrus on Monday July 17 2017, @04:22PM (4 children)
I agree with your assertion that you can't cover all cases. But you're wrong about one very important thing:
Automated testing is not distinct from "manual" testing. You run manual tests to discover things about the system, then you turn your manual tests into automated tests. Otherwise, you have wasted your time, because the next release (which for a continuously integrated product like most big companies are producing will be out there tomorrow) could fail the manual tests you just did and you wouldn't know.
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
(Score: 0) by Anonymous Coward on Monday July 17 2017, @06:00PM (3 children)
QA needs to be a separate entity from development, not on a lower level, just a separate level.
You are implying that QA won't know how the system is supposed to work because it is changing--the automated tests will bear most of that burden. This really takes QA out of the software lifecycle loop. They need to be the final stage, and updating their tests is part of this SDLC. And it's not as if automated tests are always complete and up to date--write them once and the work is finished? OH NO... the automated tests can still give a result of "passes" after a code change, by coincidence, when it should actually fail. This is because automated tests, as I said before, do not test everything. There is no way they can. An important fact that automated test evangelists leave out is the maintenance cost of those tests, which must be updated and debugged just like the rest of the code. If you automate TOO much, you lose productivity to excessive test code work instead of the system you are getting paid to deliver. Yes, too much automated testing is as bad as too little. Automated tests are brittle--it's the nature of the beast. The more they test, the more brittle they are. It's a judgement call as to when the optimum test coverage has been reached.
(Score: 0) by Anonymous Coward on Tuesday July 18 2017, @03:46AM (1 child)
I felt the way you did, until I learned about "test driven development".
Rules:
1. Write no production code except to make a failing test pass.
2. Write no more of a test than is sufficient to fail.
3. Write no more production code than is sufficient to make the failing test pass.
The above algorithm, if followed religiously, results in full test coverage.
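One red-green cycle of that algorithm might look like this (the `fizzbuzz` function is an invented example): the test is written first and fails, then just enough code is written to make it pass.

```python
# Step 1 (red): write a failing test for behavior that doesn't exist yet.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write only enough production code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test; every line of fizzbuzz exists because a test demanded it.
test_fizzbuzz()
```

In this discipline "full coverage" means every production line is exercised by some test, which, as the reply below notes, is not the same thing as covering every possible input.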
(Score: 0) by Anonymous Coward on Tuesday July 18 2017, @10:02AM
You are aware that full test coverage for something as simple as `a * b` would, on a 32-bit architecture, consist of 2^64 test cases?
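The arithmetic behind that figure is easy to check: two independent 32-bit inputs give 2^32 * 2^32 combinations.

```python
# Each 32-bit operand has 2**32 possible values; the inputs are independent.
cases = 2**32 * 2**32
assert cases == 2**64  # about 1.8e19 test cases
```

At a billion tests per second that is still roughly 585 years of runtime, which is why exhaustive coverage is dismissed out of hand here.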
(Score: 2) by meustrus on Tuesday July 18 2017, @02:56PM
Sure, if your tests are too low-level. But I'm advocating for system-level acceptance tests, not unit tests. The problem with those is far more often that they tend to give too many false negatives, due to brittle test frameworks and the difficulty in getting a machine to simulate human interactions with the product interface. And tests can never cover the interface itself, since organization, positioning, and styling are all visual elements affected by human psychology more than anything else. UI/UX testing and verification is another thing that will need to be done manually. But with appropriate acceptance tests, you can still be sure that the interface performs as it was designed to.
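A toy illustration of the unit-test vs. acceptance-test distinction (every name here is invented): the acceptance test drives the system only through its public entry point and asserts on a whole user scenario, never on internals.

```python
# Toy "system": a tiny shopping-cart service with one public entry point.
class CartService:
    def __init__(self):
        self._items = {}  # internal detail; tests should not reach in here

    def handle(self, command, name=None, price=None):
        """Public entry point; the only surface an acceptance test touches."""
        if command == "add":
            self._items[name] = price
        elif command == "remove":
            self._items.pop(name, None)
        elif command == "total":
            return sum(self._items.values())

# Acceptance test: exercises a complete user scenario via the public API only.
svc = CartService()
svc.handle("add", "book", 12.50)
svc.handle("add", "pen", 1.25)
svc.handle("remove", "pen")
assert svc.handle("total") == 12.50
```

Because the test never touches `_items` directly, the internals can be rewritten freely without breaking it, which is what makes system-level tests less brittle than fine-grained unit tests.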
And if your tests are that complicated and difficult to maintain, that's because making changes to your system has unpredictable consequences. That's its own problem, and it needs something more drastic than testing to fix. If you make a change and half your tests break, you might be annoyed that you have to go fix a bunch of tests. But I'd be more concerned that without those tests, we'd all be spending the next year discovering bugs and losing productivity to fixing them as they come up as "must fix NOW".

Not only that, but you are potentially losing business value, be it customer retention, marketing success, legal safety, or the actual ability to make sales. You might not feel that pain, but those things affect the ability of the business itself to function, and they threaten your job and everyone else's. The tests didn't make you lose productivity. They just made you deal with your chaos monkey of a system before it caused real problems, instead of letting those problems simmer until they either demand your nights and weekends to fix or destroy the business.
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
(Score: 2) by FatPhil on Monday July 17 2017, @04:46PM (3 children)
One thing I liked about the previous job I had was that the manual testing regimen included 30 minutes of "free testing" (a test was typically blocked to take 2 hours) where the tester was encouraged to just *use* the thing, try doing different stuff, play. Things discovered this way that could be easily described would later be added to the automated tests if possible, and to the manual tests if not.
Know the limits of the techniques and methods used - there are no silver bullets.
The biggest mistake by managers who are in charge of managing resources is to hire test monkeys. A smart tester is worth at least two dumb testers.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by meustrus on Monday July 17 2017, @06:39PM (2 children)
More like negative two dumb testers, as in it takes one smart tester just to undo the damage caused by the dumb ones. Your dumb QAs will just give you a false sense of security, often driven more by their manager's goals to look good than by actual quality. You need to be able to trust QA as much as your developers, and if you are hiring "dumb" manual QA you won't.
This isn't a dig at anybody in QA. Anybody can be hired as a "dumb" QA and be given none of the resources or training needed to do a good job. Delivering real quality anyway in such a role means fighting your managers, and in many places those people just get fired. It's the management mentality that's really to blame.
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
(Score: 0) by Anonymous Coward on Tuesday July 18 2017, @08:29AM (1 child)
I always say that the testers need to be smarter and better than their developers. How else can they find problems that the developers missed? Making proper automated tests, especially integration style tests, is difficult.
I think achieving 100% coverage is possible, but no company is ever going to spend that much on quality. And for many a software product, this is not that big of a problem; your testing really needs to reflect what you are building, and how.
There are also many layers to testing. If your developers haven't done any significant testing themselves, you've already lost.
(Score: 2) by Immerman on Tuesday July 18 2017, @04:29PM
As someone pointed out above - full coverage for any non-trivial program is not possible, as it implies testing *all* possible inputs for unexpected corner cases - and the combinatorics involved in that explode rapidly. Even just "a*b" using 32-bit values has 2^64 possibilities that would have to be tested.
(Score: 2) by meustrus on Monday July 17 2017, @04:18PM (6 children)
Also (who RTFAs around here?) apparently the article is actually about AI, which is NOT the same thing as algorithms. The summary is shit for equating the two.
Even so, the answer is still the same: have a suite of automated tests (but with more variance and tolerance) that verify expected results in every known edge case, and try as hard as you can to eliminate the unknowns. If you do it right, your tests will then describe invariant expectations about the outcome.
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
(Score: 2) by The Mighty Buzzard on Monday July 17 2017, @04:27PM (5 children)
Until the AI learns to tell the difference between actual use and testing and lie selectively...
My rights don't end where your fear begins.
(Score: 1, Informative) by Anonymous Coward on Monday July 17 2017, @04:44PM
I just want to chime in here, you are presuming the existence of a true AI and then you presume the AI would have some motivation to lie about software testing. These observations point to only one obvious conclusion: you are an idiot.
(Score: 0) by Anonymous Coward on Monday July 17 2017, @04:46PM
Just don't let VW train the AI. :-)
(Score: 2) by HiThere on Monday July 17 2017, @04:49PM (1 child)
You are making assumptions about its goal structure. It shouldn't *want* to tell the difference. If an AI lies, it's because it has been trained to lie, probably via the specifications. This isn't the same as being biased, which is impossible to remove and can only be minimized...and even then only if you realize that your training data is biased. But if, say, the specs say that you need to get a certain percentage approved, then the net can (and probably will) learn to lie in order to meet that goal. This is an error in the specification of the goal...and unfortunately it isn't uncommon.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by maxwell demon on Monday July 17 2017, @07:07PM
No. Learning to lie would mean the net would learn what the correct response would be, but decide to give another response. While in reality the net will work out how to meet the given goal, and therefore it will give what, according to its programming, is the right answer, even though it is not the right answer according to what we actually want. Or in other words, the network is not lying, it is misinformed.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by meustrus on Monday July 17 2017, @06:23PM
Normally you would isolate the learning data from the testing data, and only an isolated copy of the AI would be used on the test data and then terminated. The AI will never have experienced the test data before.
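A minimal sketch of that isolation (the split fraction and seed are arbitrary choices, not anything prescribed in the thread): partition the data so the evaluated copy never sees a test sample during training.

```python
import random

def train_test_split(samples, test_fraction=0.2, seed=42):
    """Partition samples so test data is never seen during training."""
    shuffled = samples[:]  # copy; don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, test = train_test_split(data)

# The two sets partition the data: nothing leaks from test into training.
assert len(train) + len(test) == len(data)
assert set(train).isdisjoint(set(test))
```

The held-out set only gives an honest estimate as long as nothing about it, not even indirectly via repeated tuning against it, feeds back into training.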
There are two situations where this breaks down:
If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?