posted by martyb on Monday May 25 2015, @05:42PM   Printer-friendly
from the venn-diagrams-are-your-friend dept.

Karl Popper came up with the idea in the 1930s that scientists should attempt to falsify their hypotheses rather than to verify them. The basic reasoning is that while you cannot prove a hypothesis to be true by finding any number of confirming instances (though confirming instances do make you more confident in its truth), you can prove a hypothesis to be false by finding one valid counter-example.

Now Orin Thomas writes at WindowsITPro that you've probably diagnosed hundreds, if not thousands, of technical problems in your career, and that Popper's insights can serve as a valuable guide to avoid spending a couple of hours chasing solutions that turn out to be an incorrect answer. According to Thomas, when troubleshooting a technical problem many of us "race ahead" and use our intuition to reach a hypothesis as to a possible cause before we've had time to assess the available body of evidence. "When we use our intuition to solve a problem, we look for things that confirm the conclusion. If we find something that confirms that conclusion, we become even more certain of that conclusion. Most people also unconsciously ignore obvious data that would disprove their incorrect hypothesis, because the first reaction to a conclusion arrived at through intuition is to try and confirm it rather than refute it."

Thomas says that the idea behind using a falsificationist method is to treat your initial conclusions about a complex troubleshooting problem as untrustworthy and rather than look for something to confirm what you think might have happened, try to figure out what evidence would disprove that conclusion. "Trying to disprove your conclusions may not give you the correct answer right away, but at least you won’t spend a couple of hours chasing what turns out to be an incorrect answer."
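
A minimal sketch of that workflow in code (the hypotheses and checks below are invented for illustration, not taken from the article): keep only the hypotheses that survive a test designed to refute them.

def surviving_hypotheses(hypotheses):
    """Return the hypotheses that could not be refuted.

    Each entry pairs a description with a check that tries to DISPROVE it:
    the check returns True if the hypothesis survives, False if it is refuted.
    """
    survivors = []
    for description, refutation_test in hypotheses:
        if refutation_test():
            survivors.append(description)
        else:
            print("refuted:", description)
    return survivors

# Example with stand-in checks; a real check might ping a host or inspect a log.
hypotheses = [
    ("DNS is misconfigured", lambda: True),    # nothing has disproved this yet
    ("disk is full",         lambda: False),   # refuted, say, by checking free space
]
print("still plausible:", surviving_hypotheses(hypotheses))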

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Funny) by Bot on Monday May 25 2015, @06:26PM

    by Bot (3902) on Monday May 25 2015, @06:26PM (#187680) Journal

    The art of troubleshooting, below.

    If it runs on windows, the problem is windows, reboot and hope for the best.
    If it runs on OSX, the problem is that you have to spend $$$ and update.
    If it runs on GNU/Linux, the problem is you.
    If it runs on systemd/linux: the problem is this [youtube.com]

    --
    Account abandoned.
    • (Score: 0) by Anonymous Coward on Monday May 25 2015, @08:07PM

      by Anonymous Coward on Monday May 25 2015, @08:07PM (#187717)

      I don't know if it was the juxtaposition or what, but I was laughing out loud for way too long.

    • (Score: 2) by fritsd on Monday May 25 2015, @08:38PM

      by fritsd (4586) on Monday May 25 2015, @08:38PM (#187724) Journal

      Sheesh, it isn't THAT difficult to remove, it only took me 3 days of work troubleshooting and rebuilding Debian packages :-)

      P.S.: Devuan is *not* a joke, but it is still in "alpha".

      • (Score: 2) by Gravis on Monday May 25 2015, @09:16PM

        by Gravis (4596) on Monday May 25 2015, @09:16PM (#187735)

        P.S.: Devuan is *not* a joke, but it is still in "alpha".

        Unfortunately, Devuan is vaporware, because they run off developers who want to help.

        • (Score: 2) by Bot on Tuesday May 26 2015, @04:35PM

          by Bot (3902) on Tuesday May 26 2015, @04:35PM (#188120) Journal

          I run void linux, and stuff in docker containers. Runit > systemd for me so far.

          --
          Account abandoned.
  • (Score: 2) by tibman on Monday May 25 2015, @06:29PM

    by tibman (134) Subscriber Badge on Monday May 25 2015, @06:29PM (#187682)

    Most of the time it is a failure to apply Diax's Rake [wikia.com] : )

    --
    SN won't survive on lurkers alone. Write comments.
  • (Score: 0) by Anonymous Coward on Monday May 25 2015, @06:46PM

    by Anonymous Coward on Monday May 25 2015, @06:46PM (#187690)

    I think there used to be a game called "Mastermind".

  • (Score: 3, Insightful) by MichaelDavidCrawford on Monday May 25 2015, @06:54PM

    what I want to teach all the young people:

    Use assertions before you use debuggers.
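
    For instance (a made-up illustration, not the poster's code), an assertion states the assumption at the point of use and fails loudly long before a debugger is needed:

    def average(values):
        # Fail fast with a clear message instead of a confusing ZeroDivisionError later.
        assert len(values) > 0, "average() called with an empty list"
        return sum(values) / len(values)

    print(average([2, 4, 6]))   # 4.0
    # average([])               # AssertionError pointing straight at the bad call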

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 4, Insightful) by frojack on Monday May 25 2015, @06:59PM

    by frojack (1554) on Monday May 25 2015, @06:59PM (#187694) Journal

    Once you are in troubleshooting mode, whatever hopes and expectations of perfection you had are dashed and gone.
    Now you need to find out what caused this particular trouble, fix that, then think of where something similar could occur and check those places as well.

    Troubleshooting is not the same as quality control or system testing. In the latter two cases you have no idea what you are looking for. In the former, it's broke, and you have some clues to prove it.

    Different process.

    --
    No, you are mistaken. I've always had this sig.
  • (Score: 3, Interesting) by VLM on Monday May 25 2015, @07:25PM

    by VLM (445) Subscriber Badge on Monday May 25 2015, @07:25PM (#187700)

    When we use our intuition to solve a problem, we look for things that confirm the conclusion.

    No, that's called being an incompetent troubleshooter. The really good ones use their intuition to find the best spot to start (unconscious, parallel-processed absorption of as much raw data as they can get) and then unleash the Popper, or however you want to phrase it, at that experience-based starting point. That used to just be called experienced problem isolation, or finding and testing at a demarcation point, or any number of other ways to put it. If doing what's always worked while re-naming it "Popper style testing" works, well, that's cool even if it's not new. Save the blind panic and thrashing assumptions around randomly for when "real troubleshooting" doesn't work.

    A question: Is reading Popper worth the time? I have not, and in my infinite spare time I could make time if convinced it's worth it. Or, a more concrete way to phrase the question: someone here who has read some Popper, when you were done did you feel it was worth the time, or did you regret the time invested? The problem with a dude who was heavily influential half a century ago is that what might have been controversial or thought-provoking half a century ago is going to be "duh" to a fish who grew up in that water, and therefore not worth the time.

    • (Score: 2) by Snotnose on Monday May 25 2015, @09:48PM

      by Snotnose (1623) on Monday May 25 2015, @09:48PM (#187746)

      This is what I came here to say. Maybe the first year or two I used intuition to guess where the problem was, but any competent debugger quickly learns to gather data, think about it, possibly gather more data, then find and fix the problem.

      CSB. In the '80s a couple of co-workers and I were meeting for drinks. I got there first and grabbed a 4-person table. Some dude I didn't know asked if he could sit there; as I only expected 2 others and no other tables were available, I said "sure!" My other 2 buds arrive and, soon enough, we start talking about work. We had a nasty problem, knew what it was, and had no idea how to fix it. The dude asked, "You've got the source code, don't you?" Yeah, we actually wrote the source code. "Well, why don't you fix it with the editor!"

      --
      Why shouldn't we judge a book by its cover? It's got the author, title, and a summary of what the book's about.
    • (Score: 5, Interesting) by aristarchus on Monday May 25 2015, @10:52PM

      by aristarchus (2645) on Monday May 25 2015, @10:52PM (#187768) Journal

      A question: Is reading Popper worth the time?

      Probably not. The issue Popper was dealing with was the nature of verification in science. Application to troubleshooting is a kind of limited case, usually with a much smaller range of possibilities where the process of elimination makes sense. And worse, once you fix the problem you don't really care whether the fix was in fact what fixed the problem (so long as it stays fixed).

      Popper clarifies the logical validity of the Hypothetico-deductive method of science. We start out with a problem, form a hypothesis (possible explanation), and then devise an experimental test of the hypothesis. The "intuition" people are speaking of here belongs to the "knack" for forming reasonable hypotheses, what Charles Sanders Peirce called "abductive" reasoning. Not part of Popper's Falsificationism, but it does relate to the real problem.

      The issue is inferences based on conditional (hypothetical) statements. If the hypothesis is correct, we necessarily get these particular results. Probably throw in a ceteris paribus here, because another factor is experimental design that rules out things like contamination, etc. So, we run the experiment, get the predicted results, WooHoo! Our hypothesis is true! This is where Popper says, not so fast. If the hypothesis is correct, then you must get the deduced results. But that does not mean you can go the other way, that is, that getting the results means the hypothesis is true. Technically, this is the formal fallacy of affirming the consequent in symbolic logic.

      On the contrary, however, there is the other valid form of reasoning with hypotheticals, and this is the one used in troubleshooting. You form a hypothesis, as above, and deduce a test; if the test fails, your hypothesis is wrong, necessarily. Car analogy! Car won't start, I intuit the hypothesis of a bad coil. If the coil is bad, the spark plugs would have no spark. Take out one plug, crank, and observe. Good spark. Therefore, the coil is not the problem. This is the logical form modus tollens. So in the philosophy of science, Popper's position is that theories (hypotheses writ large) can only be definitively disproven.
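
      In schematic form (my notation, not the poster's), the invalid and the valid patterns are:

      \[
        (H \to O) \land O \;\not\vdash\; H \qquad \text{(affirming the consequent: invalid)}
      \]
      \[
        (H \to O) \land \neg O \;\vdash\; \neg H \qquad \text{(modus tollens: valid)}
      \]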

      This is related to Quine's "underdetermination thesis", which basically says that for any particular experimental result there is always more than one potential hypothesis that could be correct, and positive results do not tell us which one! This is why scientists can hold very different theories considered to be true at any one point in history, and it leads to Thomas Kuhn's notion of paradigm shifts: old theories are never disproven, just sooner or later all their adherents pass away. So, see where confirmation bias is in all of this?

      Take away: theories are never deductively proven, only disproven. But that doesn't mean we can believe whatever we like. Theories are inductively proven, which means never conclusively, but the more experimental designs we have that give positive results for a theory, the more repetitions of experiments to rule out flukes, and the more competing theories are disproved, the more likely the last theory standing is true. Nice thing about troubleshooting is we reach an end state where either the problem is solved and we don't really care why (it's like medicine!) or we junk the whole thing and start over. In science, we can't say in advance what a "fix" would be.

      Sorry to go on so long, hope this answers your question.

      • (Score: 0) by Anonymous Coward on Tuesday May 26 2015, @09:58AM

        by Anonymous Coward on Tuesday May 26 2015, @09:58AM (#187944)

        Nice thing about troubleshooting is we reach an end state where either the problem is solved and we don't really care why

        Really? I definitely have a bad feeling if something works and I don't know why. It can of course be reasonable to apply a fix you don't really understand, just because it is more important to get the problem fixed now. But that's not the same as not caring; it simply means prioritizing.

    • (Score: 0) by Anonymous Coward on Tuesday May 26 2015, @12:15PM

      by Anonymous Coward on Tuesday May 26 2015, @12:15PM (#187977)

      Niave Popper doesn't really work. The reason is that you can't test a theory in isolation; there are always auxiliary assumptions being made (like properly functioning equipment).
      The logic goes
      (T AND A) implies O
      where T = Theory; A = Assumptions; O = Observation

      With falsification you do not see the expected observation (~O)

      ~O Therefore ~T OR ~A

      See the problem?

      http://www.tc.umn.edu/~pemeehl/147AppraisingAmending.pdf [umn.edu]

      • (Score: 2) by aristarchus on Saturday May 30 2015, @08:36AM

        by aristarchus (2645) on Saturday May 30 2015, @08:36AM (#190028) Journal

        I would take your critique more seriously if you spelled naïve correctly. But yes, your critique does go to Quine's position, that one can always adjust assumptions to take into account experimental refutations. Of course, where does that leave us? Republicans could be correct? The horror! The Horror! (cf. Joseph Conrad, _Heart of Darkness_, or just watch "Apocalypse Now".)

        • (Score: 2) by Yog-Yogguth on Saturday May 30 2015, @11:52PM

          by Yog-Yogguth (1862) Subscriber Badge on Saturday May 30 2015, @11:52PM (#190254) Journal

          You're both wrong :)

          Doing so (Quine etc.) would make it an ad hoc hypothesis and thus fail the falsifiability requirement, i.e. it is no longer possible to falsify the hypothesis once you make an exception to explain away its “validity”; it has all been reduced to a tautology.

          If that has been done “well enough” and the tautology is byzantine enough it will be published or taken as “true” or make a career even though it is anti-scientific ritualistic cargo cult “science”.

          More Popper = more science.

          Kudos for mentioning Conrad though; it's a good book, and it's hard not to drag that movie into it even though it's set on another continent and has a different story and is only ever so slightly related (more inspired than derived).

          --
          Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
  • (Score: 2) by PizzaRollPlinkett on Monday May 25 2015, @07:41PM

    by PizzaRollPlinkett (4512) on Monday May 25 2015, @07:41PM (#187704)

    So what in the world is WindowsITPro and why should we take troubleshooting advice from them? The name doesn't immediately inspire confidence and I've never heard of them before. A Windows source talking about troubleshooting could be really good, since Windows is notorious for its problems, so maybe they have more experience than, say, a Linux person who sets up a box and basically forgets about it for years.

    I'm not saying this Popper stuff wouldn't work, but it sounds like it would take too long. Experienced troubleshooters zero in quickly on the part of the system that is causing the problem, and do it intuitively because they've done it so many times.

    The worst kind of troubleshooting is the "try this, try that" approach where someone wants you to reboot, or rename a DLL, or install something, or change a file - totally random things that even if they worked you wouldn't know why. People who have that approach make me cringe.

    --
    (E-mail me if you want a pizza roll!)
    • (Score: 1) by redneckmother on Monday May 25 2015, @07:58PM

      by redneckmother (3597) on Monday May 25 2015, @07:58PM (#187710)

      I gave up trying to support closed systems long ago. Now, I tell people "I don't do windows."

      --
      Mas cerveza por favor.
    • (Score: 2) by aristarchus on Monday May 25 2015, @08:04PM

      by aristarchus (2645) on Monday May 25 2015, @08:04PM (#187715) Journal

      A Windows source talking about troubleshooting could be really good, since Windows is notorious for its problems,

      Long ago I ran Windows, pre-linux era. One day I was the recipient of a Blue Screen of Death, and the message written there, excited on the phosphors of the living CRT itself, was: "There has been an undetectable error in your system." For years I have marveled over the usefulness of that message, but what really boggled my mind was, if the error was in fact undetectable, how did Windows know it was there?

      • (Score: 4, Touché) by Dunbal on Monday May 25 2015, @08:11PM

        by Dunbal (3515) on Monday May 25 2015, @08:11PM (#187718)

        "Kernel panic: fatal exception" is much more expressive.

        • (Score: 3, Funny) by Marand on Monday May 25 2015, @11:06PM

          by Marand (1081) on Monday May 25 2015, @11:06PM (#187772) Journal

          "Kernel panic: fatal exception" is much more expressive.

          What's wrong with that? Everyone knows kernels are skittish and easily frightened, so it should surprise no one that it crashed when it encountered a dead exception.

          If you think that's bad, wait until you see how panicked it gets when it notices lp0 is on fire [wikipedia.org].

      • (Score: 3, Funny) by maxwell demon on Monday May 25 2015, @08:22PM

        by maxwell demon (1608) on Monday May 25 2015, @08:22PM (#187721) Journal

        but what really boggled my mind was, if the error was in fact undetectable, how did Windows know it was there?

        Simple: No error could be detected for some time. Being Windows, there was no chance that there was no error for such a long time, therefore there must have been an undetectable error, or else it would have been detected. ;-)

        --
        The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 3, Funny) by PizzaRollPlinkett on Monday May 25 2015, @10:18PM

        by PizzaRollPlinkett (4512) on Monday May 25 2015, @10:18PM (#187758)

        I don't know what that means, either, but reading about software finding an undetectable error has pushed me into a new state of transcendence. I am going to have to start using Windows. I didn't know it was so profoundly powerful. It's like they skipped self-aware AI and went straight to AI sunyata!

        --
        (E-mail me if you want a pizza roll!)
  • (Score: 3, Interesting) by unzombied on Monday May 25 2015, @09:09PM

    by unzombied (4572) on Monday May 25 2015, @09:09PM (#187734)

    A troubleshooting old-timer has a knack for tracking down a problem's source (be it hardware, software, person, or some combination) based on years of experience. To an outsider this looks like art, or even magic, but it is the same intuition that you see in every specialty. Mechanics, doctors, lab workers, teachers, therapists, appliance repair folk all have it (okay, maybe not all).

    Sometimes everyone gets stumped and makes 16 random changes at the same time, 1 of which works. The beginner changes them back 1 at a time, with an average solve after 8 tries. The better informed change back 8 at a time, then, depending on the outcome, change 4 of the reverted 8 or revert 4 in the second group of 8, solving in 4 tries. (Binary divide and conquer.)
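
    A rough sketch of that halving strategy (my code, not the poster's; still_broken(applied) stands in for whatever re-test you run with only a subset of the changes applied, and exactly one change is assumed responsible):

    def find_fixing_change(changes, still_broken):
        """Binary-search for the single change whose presence fixes the problem."""
        candidates = list(changes)
        while len(candidates) > 1:
            half = candidates[:len(candidates) // 2]
            if still_broken(half):
                # With only this half applied the problem persists, so the fix is in the other half.
                candidates = candidates[len(candidates) // 2:]
            else:
                candidates = half
        return candidates[0]

    # With 16 candidates this takes log2(16) = 4 re-tests, matching the "4 tries" above.
    changes = ["change %d" % i for i in range(16)]
    print(find_fixing_change(changes, lambda applied: "change 11" not in applied))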

    Karl Popper's false hypothesis idea, it seems to me, is helpful in figuring out which 16 of the many changes a troubleshooter could make. Useful in avoiding the rabbitless rabbit holes of cognitive bias.

    • (Score: 1, Interesting) by Anonymous Coward on Monday May 25 2015, @10:04PM

      by Anonymous Coward on Monday May 25 2015, @10:04PM (#187754)

      16 changes at one time indicates someone who has no clue what they are doing.
      1 change at a time and 1 change needed, now that is a troubleshooter.

      • (Score: 2) by Tramii on Tuesday May 26 2015, @07:22PM

        by Tramii (920) on Tuesday May 26 2015, @07:22PM (#188205)

        To be fair, sometimes making multiple changes is called for. Editing code is fast, but the compiling and deploying steps can take a significant amount of time. Some builds can take an hour or more. If you know the problem is due to a certain function call, and there are hundreds of calls to that function, you probably would be better served to comment out multiple lines at once to quickly narrow down the list of culprits.

    • (Score: 0) by Anonymous Coward on Tuesday May 26 2015, @05:38PM

      by Anonymous Coward on Tuesday May 26 2015, @05:38PM (#188160)

      A binary search looks very good on paper and is indeed useful for troubleshooting some problems. But more typically there are interactions between the different changes, sometimes highly nonlinear. In software, for example, I recommend trying one thing at a time. And for the love of all that is good: only one change per commit.

  • (Score: 1) by wasosa on Thursday May 28 2015, @12:41AM

    by wasosa (5269) on Thursday May 28 2015, @12:41AM (#188885)

    Nothing revolutionary, just a nice and concise collection of how it should be done: http://debuggingrules.com/ [debuggingrules.com]