  • (Score: 3, Interesting) by VLM on Wednesday June 29 2016, @01:30PM

    by VLM (445) Subscriber Badge on Wednesday June 29 2016, @01:30PM (#367564)

    The problem with the word "recipe" is that people who are bad cooks can only use recipes that are like machine code for human robots: "rotate wrist joint to mix boxed cake mix 50 times" and so forth. But people who can cook think recipes are ratios and rules of thumb, like "chicken fajitas should have about a cup of marinade per pound of meat, and the ratio of citrus juices seems to have an optimum around 1:1:1 of orange, lime, lemon", plus some other stuff about how to marinate meat without giving people food poisoning, and the ideal ratio of fresh pressed garlic to fresh ground pepper (which I feel is near 1:1, but some people flip the heck out about that), etc.

    People who can't cook think that people who can cook have merely unthinkingly memorized a larger number of mindless recipes than they have memorized (which is probably about zero). In the real world, what actually happens is that people who can cook don't even measure ingredients most of the time (except for the crazy pastry people) and run mostly on taste testing. I don't even know the recipe for the chicken fajita marinade I'm making on Friday; I just mess with it until it tastes "right" and then I dump the chicken in. What's in it? I dunno, I worked on it until it tasted right.

    It's similar to how people who don't program have really weird ideas about how people program, ideas that usually have nothing in common with how people who program actually do it.

    In "food network" terms they're saying don't learn to cook by watching Giada, because she's smoking hot but you won't learn anything and just get distracted. Instead of watching TV shows you should (and insert here a long journal article that summarizes to watching the "Good Eats" Alton Brown ... TV show).

    I guess the point is that people who can't cook will admit it, people who can't statistic will become published academics anyway, at least in the soft sciences. And the mental model of good cooks works out pretty well for the mental model of good statisticians. And just like you cannot become Alton Brown merely by slavishly copying his recipe really carefully, you can't do statistics the right way by randomly obeying some statistician's rules some of the time.

    • (Score: -1, Offtopic) by Anonymous Coward on Wednesday June 29 2016, @02:08PM

      by Anonymous Coward on Wednesday June 29 2016, @02:08PM (#367581)

      What happened to Giada's pasta sauce? It's good stuff but I can't find it at Target anymore.

      • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @11:09PM

        by Anonymous Coward on Wednesday June 29 2016, @11:09PM (#367778)

        Giada's pasta sauce

        Before I got to the end of the sentence, I thought this was a euphemism for something else.

    • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @03:23PM

      by Anonymous Coward on Wednesday June 29 2016, @03:23PM (#367607)

      sudo make me-fajitas && git stash of-margaritas

      • (Score: 0) by Anonymous Coward on Thursday June 30 2016, @03:12AM

        by Anonymous Coward on Thursday June 30 2016, @03:12AM (#367846)

        No, no, it's all about Rust now. cargo new --bin --vcs git me-fajitas, you mean. Rust has no support for x: &'a Margarita, because they're not web-scale.

    • (Score: 1, Informative) by Anonymous Coward on Wednesday June 29 2016, @03:57PM

      by Anonymous Coward on Wednesday June 29 2016, @03:57PM (#367620)

      (except for the crazy pastry people)

      I'm not crazy, my Mom got me tested!

      Joke aside, yes, many pastry recipes require you to be very precise, but there are still a lot of cases, including most of the beginners' ones, where you can just use your eyes, hands and tongue to get the right taste, appearance and consistency (think crepe batter or bread dough).

      • (Score: 3, Interesting) by VLM on Wednesday June 29 2016, @04:36PM

        by VLM (445) Subscriber Badge on Wednesday June 29 2016, @04:36PM (#367631)

        many pastry recipes require you to be very precise

        Yeah, some of that might be self-aggrandizement from the chefs themselves: "Oh no, if I dramatically stir that batter one stroke too many the gluten will fire off and the brownies will turn into cake, whoa, that was a reality-TV-style close call, luckily I saved the entire planet with my ninja-like self control" and so on.

        Crazy more as in crazy hard-core, using scales to measure fractions of a gram of ingredients, rather than crazy as in nuts. Although there are crazy obsessive-compulsive ones who rearrange chocolate chips in cookie dough using chopsticks to get them just right; yeah, she was a weird girlfriend.

        I thought of an example a few minutes ago of a very precise cooking task that I do that isn't pastry, which is canning food. You don't want to die a horrible death of food poisoning because you forgot to acidify your applesauce sufficiently or you misread the recipe in the Ball book. (Note to non-canners: never buy the paperback, get the spiral-bound Ball book; I screwed up there.)

        I have a hobby of turning all kinds of apples into applesauce to see how they taste. They're surprisingly different in texture and flavor. It's very hard to buy commercial food-store applesauce that's not made with gag-level quantities of corn syrup, there is no single-variety sauce on the market other than Granny Smith, and the commercial stuff has bug legs and bits of peel in it, which I find un-a-PEEL-ing (oh, the pun).

        Another thing I like to can that I can't buy in stores is fruit packed in liquors. Food stores don't want to put up with over 21 age checking and limited alcohol sales hours for mere canned groceries, so you basically can't buy peaches packed in rum or apples packed in brandy. But I can can them. And they taste freaking great.

        I went through a phase of experimentally grinding blueberries to different texture levels for jams, but thankfully I am cured of that weird phase. Or bored of it. Maybe some day I will backslide into differential grinding of strawberries into different levels of jam smoothness. It seems if you leave fruit in a processor for thirty minutes it turns almost jelly-like but not clear, which can be disconcerting. Also the motor starts smoking. And the food gets warm, which I guess makes sense if you're dumping a couple hundred watts of rotating motion into it. It was a fun phase, but ultimately pointless.

        I'm well aware they're non-paleo little sugar bombs of unhealthiness, but if I'm going to eat junk food it's going to be awesome, off-the-charts-good junk food, and making it myself means my annual consumption is at least fixed at a low level.

        Because of the strong texture I feel like grilling some pork chops right now and slopping some homemade Ida Red applesauce on them.

        Canning is like playing the chemistry lab except you're supposed to eat the results. Also I can make stuff I can't buy. Now I wanna can something this weekend. And I'm hungry.

  • (Score: 3, Insightful) by shrewdsheep on Wednesday June 29 2016, @01:39PM

    by shrewdsheep (5215) on Wednesday June 29 2016, @01:39PM (#367570)

    "treat statistics as a science, not a recipe."

    While this is the closing sentence of the article, it contradicts the rest of the article, which is exactly a recipe. If you want to treat statistics as a science, learn the math behind it. The rest of the article is a 101 of statistical practice, which, I think, is what it is meant to be. My own experience with statistical consultation (being a trained statistician) is that the non-statistician needs the recipes, namely what to pay attention to and how to interpret results. Over time, deeper insight will develop, occasionally even touching the mathematics. No ten simple rules are going to change the need for this process. Learning to analyze real data takes experience (also for the trained statistician). Unfortunately, the article does not seem to stress this (I haven't read it carefully, though).

    • (Score: 3, Informative) by JoeMerchant on Wednesday June 29 2016, @03:27PM

      by JoeMerchant (3937) on Wednesday June 29 2016, @03:27PM (#367609)

      In my practical use of statistics, statistics are nothing but recipes for generating "recognized measures" of validity, proof, etc.

      The thing that requires more thought in (my practical use of) statistics is the definition of what counts as valid input data. Anybody can crank out a standard deviation or a 95/5 confidence interval, but do they know what constitutes a meaningful sample set to perform that computation on?

      Actually, in my experience, I more often face engineers attempting to "prove their process" by measuring 5 sample outputs against a pass/fail criterion, and if they get 5/5 they want to claim the process is "good" - which by our procedures would require a 95/5 CI, which I have to remind them is nowhere near met by a sample set of 5 pass/fail tests. They were in the same on-site training classes as me; what happened?
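
      A back-of-the-envelope sketch of why 5/5 doesn't cut it (plain Python; I'm reading "95/5" here as "demonstrate at least a 95% pass rate with 95% confidence", which may not be exactly how your procedure words it):

      import math

      def lower_bound_all_pass(n, confidence=0.95):
          # One-sided exact (Clopper-Pearson) lower bound on the pass rate
          # when all n samples pass: solve p_low ** n = 1 - confidence.
          alpha = 1.0 - confidence
          return alpha ** (1.0 / n)

      def samples_needed(target_rate=0.95, confidence=0.95):
          # Smallest all-pass sample size that supports "pass rate >= target_rate"
          # at the given confidence: need target_rate ** n <= 1 - confidence.
          alpha = 1.0 - confidence
          return math.ceil(math.log(alpha) / math.log(target_rate))

      print(lower_bound_all_pass(5))   # about 0.55: 5/5 passes only demonstrates ~55%
      print(samples_needed())          # 59 straight passes for the 95%-at-95%-confidence claim

      So 5/5 passes supports a pass rate of roughly 55% at 95% confidence; getting to a 95% claim that way takes 59 straight passes.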

      --
      🌻🌻 [google.com]
  • (Score: 1, Informative) by Anonymous Coward on Wednesday June 29 2016, @01:48PM

    by Anonymous Coward on Wednesday June 29 2016, @01:48PM (#367572)

    The underlying math is intricate and delicate, building on a pile of subtle assumptions that must be corroborated afterward. If it was easy, we wouldn't have named it "stat", would we? :)

    The unfortunate thing is, stats is the main "weapon" of social scientists, the ones least equipped to deal with the math subtleties.

    • (Score: 3, Insightful) by SubiculumHammer on Wednesday June 29 2016, @04:17PM

      by SubiculumHammer (5191) on Wednesday June 29 2016, @04:17PM (#367628)

      Statistics is challenging to the social scientist not because they are incapable or ignorant, but because most need advanced statistical models to account for complex questions, and because they are already asked to be experts in many other domains simultaneously (e.g. psychological/cognitive theory, MR imaging analysis and interpretation, experimental design) and to be proficient in others (e.g. Python scripting, bash, technical writing, human subjects IRB regulations, teaching), and so on.

      We all accept that a C++ programmer might not be the best resource for database programming, or that your Ruby developer should not also be the accountant. Why do we expect non-statistics PhDs to be experts in generalized mixed models, structural equation modeling, etc.?
      Statistician collaboration should be commonplace, but it is not.

      • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @04:44PM

        by Anonymous Coward on Wednesday June 29 2016, @04:44PM (#367632)

        ... because they [social scientists] are already asked to be experts in many other domains simultaneously (e.g. psychological/cognitive theory, MR imaging analysis and interpretation, experimental design) and to be proficient in others (e.g. Python scripting, bash, technical writing, human subjects IRB regulations, teaching), and so on.

        Nobody asked them to be experts on sundry side endeavors - they claim to be proficient in them in order to make more obtuse claims.

        I would throw medical researchers in there as well, but theirs is more convoluted due to all the money sloshing about in that world.

        • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @07:14PM

          by Anonymous Coward on Wednesday June 29 2016, @07:14PM (#367692)

          As a medical researcher....

          (a) That's how sausage is made. Don't look if you don't want to know.

          (b) Statistician collaborators are a pain in the ass. If you consult before you start your project, they'll ask you a few unanswerable questions (and God forbid if you aren't doing linear regression with Gaussian statistics!): what result do you expect, what noise do you expect, what is the model (expressed as a linear equation, obviously)? If you consult after you start your project, you'll find you've already broken so many laws of statistics, you can't look at your data until it's all collected, your research question is invalid, etc.

          The way I see it now, statisticians are the guys to bring in on like the 2nd or 4th round of a project. You already know the idea works, you've got it out in the world, now you need a thin veneer of polish on it to fill in the unread sections of your grants.

    • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @08:15PM

      by Anonymous Coward on Wednesday June 29 2016, @08:15PM (#367713)

      stats is the main "weapon" of social scientists

      Stats and fear... fear and stats... Our two weapons are fear and stats... and ruthless efficiency! Our three weapons are fear, and stats, and ruthless efficiency... and an almost fanatical devotion to Liberalism... Our four... no... Amongst our weapons... Hmf... Amongst our weaponry... are such elements as fear, stats... I'll come in again.

  • (Score: 3, Funny) by Dunbal on Wednesday June 29 2016, @01:54PM

    by Dunbal (3515) on Wednesday June 29 2016, @01:54PM (#367576)

    I give them an 83% chance of failing.

    • (Score: 3, Insightful) by choose another one on Wednesday June 29 2016, @02:39PM

      by choose another one (515) Subscriber Badge on Wednesday June 29 2016, @02:39PM (#367593)

      I give them an 83% chance of failing.

      Yeah, but if they keep trying they'll have a >50% chance of success with p < 0.05

      So the only problem was they published too soon...
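
      For anyone who wants to check the arithmetic, a minimal sketch (plain Python; the 17% per-try success rate is the parent's number, the 5% is the usual alpha if every attempt is testing pure noise):

      def attempts_for_majority(p_success, target=0.5):
          # Independent tries until the chance of at least one "success"
          # exceeds target, i.e. 1 - (1 - p_success) ** n > target.
          n, p_none = 0, 1.0
          while 1.0 - p_none <= target:
              n += 1
              p_none *= 1.0 - p_success
          return n

      print(attempts_for_majority(0.17))  # 4 tries at a 17% per-attempt success rate
      print(attempts_for_majority(0.05))  # 14 tries even when there is nothing there at all

      Either way, "keep trying and publish the one that works" gets past 50% pretty quickly.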

      • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @03:28PM

        by Anonymous Coward on Wednesday June 29 2016, @03:28PM (#367610)


        10 PRINT RESULT$
        20 GOTO 10
        Eureka!

    • (Score: 3, Interesting) by theluggage on Wednesday June 29 2016, @03:24PM

      by theluggage (1797) on Wednesday June 29 2016, @03:24PM (#367608)

      p = 0.051337

      I lost my faith in stats when I needed to look up the code to translate chi-square and t-test results into probabilities... (You know, that bit in the stats book where, even after the math of the test is explained in detail, you end up looking up the result in this mysterious table that you're just supposed to trust.) Ye gods, this much spurious precision when I'm only fairly sure that my distributions were normal-ish, but I'm pretty confident that the person who tried to explain 1-tailed vs. 2-tailed didn't understand it themselves?

      TFA missed rule zero: the probability of a bogus result arising from the sort of 'luck of the draw' fluke that a statistical test will detect is far less than the chance of it arising from some other bias that the test won't pick up.
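
      These days the mysterious table is just a library call; a rough sketch, assuming SciPy is available (the statistics and degrees of freedom below are made-up numbers, purely for illustration):

      from scipy import stats

      chi2_stat, chi2_df = 11.07, 5      # whatever your chi-square test produced
      t_stat, t_df = 2.1, 18             # whatever your t-test produced

      p_chi2 = stats.chi2.sf(chi2_stat, chi2_df)           # survival function, i.e. 1 - CDF
      p_t_two_tailed = 2 * stats.t.sf(abs(t_stat), t_df)   # two-tailed t-test p-value
      p_t_one_tailed = stats.t.sf(t_stat, t_df)            # one-tailed version

      print(p_chi2, p_t_two_tailed, p_t_one_tailed)

      That replaces the table lookup, but of course it does nothing about the normal-ish distributions or the 1-tailed vs. 2-tailed confusion.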

      • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @07:19PM

        by Anonymous Coward on Wednesday June 29 2016, @07:19PM (#367694)

        I much prefer calling functions on my computer "that are supposed to be correct". Numeric tables are just soooooo risky.

      • (Score: 0) by Anonymous Coward on Thursday June 30 2016, @01:53AM

        by Anonymous Coward on Thursday June 30 2016, @01:53AM (#367821)

        TFA missed rule zero: the probability of a bogus result arising from the sort of 'luck of the draw' fluke that a statistical test will detect is far less than the chance of it arising from some other bias that the test won't pick up.

        The null hypothesis is hardly ever disproved because other potentially influential factors are not ruled out.

        TL;DR: Correlation isn't Causation.

        • (Score: 2) by theluggage on Thursday June 30 2016, @12:34PM

          by theluggage (1797) on Thursday June 30 2016, @12:34PM (#367961)

          TL;DR: Correlation isn't Causation.

          More to the point, Correlation Coefficient != Causation Coefficient

  • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @02:07PM

    by Anonymous Coward on Wednesday June 29 2016, @02:07PM (#367580)

    Leading Scientists would Establish Steps to Convey Science as a Science, not a Recipe

  • (Score: 3, Insightful) by opinionated_science on Wednesday June 29 2016, @02:13PM

    by opinionated_science (4031) on Wednesday June 29 2016, @02:13PM (#367584)

    But this article was written for plumbers?

    Seriously, I was expecting some "name and shame" with maths to back it up...

    Anyone else?

    • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @07:26PM

      by Anonymous Coward on Wednesday June 29 2016, @07:26PM (#367699)

      I was expecting yet another pompous list of 10 things that p-values are not.

  • (Score: -1, Offtopic) by Anonymous Coward on Wednesday June 29 2016, @02:31PM

    by Anonymous Coward on Wednesday June 29 2016, @02:31PM (#367590)

    Introduction

    Have you ever wondered how to make a comment on Soylent News? The following ten rules should help you.

    Rule 1: Make sure you are actually on Soylent News.

    Being actually on Soylent News is the first step to successfully commenting there. Therefore you should make sure that you are actually on that site, not on some other site. In particular, there are some sites that look similar to Soylent News, like Pipedot or The Other Site™.

    While it is technically possible that a site that is not SoylentNews provides a way to submit a comment to Soylent News, it is highly improbable that any site will do so (unless the site owner has read this list and intentionally tries to prove me wrong — but by adding this exception, I pre-emptively foiled their plan anyway).

    Rule 2: Select a story or journal entry with enabled commenting.

    Comments can only be added to stories or journal entries. Moreover, if a story gets old, it gets archived and commenting is no longer possible. Also, a user can explicitly disable comments on his journal entries.

    Note that the story or journal entry you choose will heavily influence who and how many people will read your comment.

    Rule 3: Find something to comment on.

    Of course you can always comment on the story itself, but you can also comment on a comment on that story, or on a comment on a comment on that story, and so on.

    Like with rule 2, your decision will influence the readership of your comment.

    Rule 4: Find the corresponding Comment button.

    Usually there are a lot of comment buttons on a page. Find the right one. A comment's reply button immediately follows that comment, while comment buttons for the story are found both between the story and the comments and after the last comment. Be careful: the reply button of the last comment and the final comment button for the story are easy to confuse.

    Rule 5: Have something to say.

    Comments without content are not possible; comments without real content will likely get moderated down quickly, and therefore will be rendered invisible for the majority of users.

    Rule 6: Make sure you've got a title.

    When replying to a comment, the title is automatically provided for you, but when replying directly to the story, you have to write a title yourself. Even if you are provided with a title, consider changing it if the existing title does not match your comment.

    Rule 7: Preview.

    OK, I'm joking here. Nobody actually previews. But pressing the Preview button is mandatory for ACs.

    Rule 8: Submit.

    If you don't submit, your comment will never be seen by anybody.

    Rule 9: Check for errors.

    Sometimes instead of actually submitting, an error will turn up (usually telling you that you have to wait a bit before posting again, but other errors are possible). If you don't check for this and correct any error (or, in the case of a "Slow Down" error, simply wait a bit before trying again) before trying to submit again, your comment will be lost.

    Rule 10: There is no rule 10.

    OK, you need to have ten rules in order to have ten rules. But what to do if you lack a rule? Well, make one up. Such as this one.

    Summary:

    Congratulations. You've reached the end of this comment.

    • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @05:42PM

      by Anonymous Coward on Wednesday June 29 2016, @05:42PM (#367650)

      tl;dr

  • (Score: 2, Touché) by NickFortune on Wednesday June 29 2016, @02:44PM

    by NickFortune (3267) on Wednesday June 29 2016, @02:44PM (#367595)

    So that would be what? Hypothesis, Experiment, Observation, right?

    Hypothesis: come up with some numbers that sound about right.

    Experiment: Show them to some people.

    Observation: Do the people react in the desired way? If not, repeat with a new set of hypothetical numbers.

    Of course, it can't be considered Good Science until other scientists have produced the desired result with the same numbers.

    • (Score: 2, Insightful) by Anonymous Coward on Wednesday June 29 2016, @03:57PM

      by Anonymous Coward on Wednesday June 29 2016, @03:57PM (#367621)

      I wish we could put this stupidity to bed.

      Yes, in some empirical sciences, that sometimes works as an approach, but the world, even the world of science, has more nooks and crannies than that.

      In the analytical sciences (such as mathematics and computer science and statistics) you can formulate all the hypotheses you want, but they are not empirical in nature. You want deductive proofs.

      In the social sciences (you can stop sneering any time now, physicists, you're only proving how short-sighted you are) you need to contend with the complexities of human nature and experience. That makes it harder, not easier, to come up with solid conclusions but it doesn't render them meaningless or useless - just difficult. However, the usual empirical cycle often doesn't apply, or is impractical.

      Then we have epistemological problems around mental models, how accurate they can or need to be... you know what, screw this. Let's require all science graduates to pass a course in the philosophy of science. Otherwise they don't even know what they're doing... we can even tell them they don't need to wear black turtlenecks to do philosophy.

      Never mind. Stupid idea. Carry on as you were.

      • (Score: -1, Troll) by Anonymous Coward on Wednesday June 29 2016, @04:54PM

        by Anonymous Coward on Wednesday June 29 2016, @04:54PM (#367638)

        Another butthurt loser. BTW, if we knew what we were doing, it wouldn't be called research, would it? Dumb-dumb.

  • (Score: 0) by Anonymous Coward on Wednesday June 29 2016, @04:44PM

    by Anonymous Coward on Wednesday June 29 2016, @04:44PM (#367633)

    I didn't see any citation of Meehl 1967; without doing more than skimming, I can tell this will just be adding more noise.
    http://www.fisme.science.uu.nl/staff/christianb/downloads/meehl1967.pdf [science.uu.nl]

    The main problem is:
    The Null Hypothesis is Not the Research Hypothesis

    No amount of math or careful behavior will ever fix this problem. As long as it is there, the entire statistics thing is nothing but a pointless ritual. At least the machine learning community has pretty much dropped this whole "null hypothesis testing" idea and gone after out-of-sample predictive skill instead, so there is still hope for people to make progress.
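
    A minimal sketch of what "out-of-sample predictive skill" means in practice, assuming scikit-learn is available; the synthetic data and the Ridge model here are illustrative placeholders, not anyone's actual analysis:

    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Synthetic data stands in for whatever was actually measured.
    X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

    # The held-out split is the whole point: the model is judged on rows it never saw.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = Ridge().fit(X_train, y_train)
    print("out-of-sample R^2:", model.score(X_test, y_test))

    No null hypothesis anywhere; the model either predicts unseen data or it doesn't.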

    • (Score: 1) by khallow on Thursday June 30 2016, @12:01AM

      by khallow (3766) Subscriber Badge on Thursday June 30 2016, @12:01AM (#367792) Journal

      The main problem is:
      The Null Hypothesis is Not the Research Hypothesis

      No, something has to be a problem first in order to be a main problem. Your statement isn't even something that can be logically expressed as a problem.

      • (Score: 0) by Anonymous Coward on Thursday June 30 2016, @01:33AM

        by Anonymous Coward on Thursday June 30 2016, @01:33AM (#367814)

        I love it now that I don't have to deal with this all the time. It is exactly like telling someone who has never met an atheist that god may not exist: "WTF are you talking about, it doesn't even make sense to talk about God existing or not!"

        Once you buy into the idiotic idea that testing some other null hypothesis can tell you about your research hypothesis, you are pretty much doomed (at least as a productive scientist). It is possible though, you can get out of it, but that is a realization you need to come to yourself. And it will hurt, a lot.

        • (Score: 1) by khallow on Thursday June 30 2016, @02:17AM

          by khallow (3766) Subscriber Badge on Thursday June 30 2016, @02:17AM (#367832) Journal

          I notice that you don't rebut my observation. My view is that you aren't even wrong here.

          Once you buy into the idiotic idea that testing some other null hypothesis can tell you about your research hypothesis, you are pretty much doomed (at least as a productive scientist).

          There's some more logical fail here. For an analogy, you don't walk a doorway. You walk through a doorway from one region to another. Some things are dependent on connections or correlations between things.

          Similarly, you don't test a single hypothesis since there is no meaning in isolation of such a test, you test between two or more rival hypotheses whether explicitly or not. In other words, testing is relative to two or more ideas and has no meaning when you don't have the ability to distinguish between such ideas. The "null hypothesis" model is based on the assumption that there is some natural default in the absence of an effect or correlation. When there isn't, then the idea breaks down. But you're not talking about a real problem like that.

          It is exactly like telling someone who has never met an atheist that god may not exist: "WTF are you talking about, it doesn't even make sense to talk about God existing or not!"

          Which, let us note, is a real problem with talking about God existing or not. I find it remarkable how terrible your example here is. What does it mean to exist in this situation? The common standard in the real world is that something exists if you can observe it. There's no way we can fully observe a being with the extreme attributes of God, and hence, in a very concrete way, God can't exist for us. Among other things, this leads to the fact that there is no test to determine whether, for example, a being is God or merely too powerful, complex, or sublime for us to distinguish from God.

          I would agree with the statement that it doesn't make sense to talk about God existing or not, especially since there's no evidence for a being powerful enough to be confused with God, much less God itself. It's yet another "not even wrong" discussion.

          • (Score: 0) by Anonymous Coward on Thursday June 30 2016, @03:30AM

            by Anonymous Coward on Thursday June 30 2016, @03:30AM (#367853)

            Similarly, you don't test a single hypothesis since there is no meaning in isolation of such a test, you test between two or more rival hypotheses whether explicitly or not.

            Here is how I think about the process, where is the logical error?

            If there is a hypothesis that gravity will bend a beam of light by the angle theta=(4*G*M)/(r*c^2), and the light from stars during an eclipse is not displaced by the amount predicted, then you can reject the conjunction of that hypothesis and any assumptions being made (i.e. something is wrong somewhere; maybe you assumed the telescope was working when it wasn't). Formally, this would be:

            H = Hypothesis
            A = Auxiliary Assumptions
            O = Observation
            -> = Entails
            ~  = Not
             
            1.  (H and A) -> O.
            2.  ~O, therefore
            3.  ~(H and A) = (~H or ~A)

            What you say is only true in the trivial sense that if a theory is wrong then it isn't correct. The latter is not a meaningful "hypothesis", since its "predictions" amount to anything at all besides what was predicted by the hypothesis being tested. This is too vague to be of any use. Now, if you have multiple hypotheses that predict various amounts of light bending, you would use Bayes' rule, which tells you the probability that the hypothesis is correct given the observation. This is relative to the probability of seeing the data given that each other proposed hypothesis is correct:

            Hi = Hypothesis i, where i indexes all proposed hypotheses, going from 0 to N.
            H0 = The hypothesis being tested, one of the Hi
            | = Given
            p(Hi) = Probability of Hi independent of the observation
            p(Hi|O) = Probability of Hi given the observation O was made
             
            p(H0|O) = p(H0)p(O|H0) / sum( p(Hi)p(O|Hi) )

            In practice the sum in the denominator only needs to include the top few hypotheses, as those with small values of either p(Hi) or p(O|Hi) will have little effect on the outcome of the calculation.
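
            A minimal sketch of that calculation in plain Python, with made-up priors and likelihoods purely to show the mechanics of the formula above:

            def posterior(priors, likelihoods, i=0):
                # p(Hi|O) = p(Hi) p(O|Hi) / sum over j of p(Hj) p(O|Hj)
                evidence = sum(p * l for p, l in zip(priors, likelihoods))
                return priors[i] * likelihoods[i] / evidence

            # H0: relativistic bending, H1: Newtonian bending (numbers are illustrative only)
            priors = [0.5, 0.5]           # p(Hi) before the observation
            likelihoods = [0.6, 0.1]      # p(O|Hi) for the measured deflection

            print(posterior(priors, likelihoods, 0))   # p(H0|O), about 0.86 with these numbers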

            • (Score: 1) by khallow on Thursday June 30 2016, @04:20AM

              by khallow (3766) Subscriber Badge on Thursday June 30 2016, @04:20AM (#367861) Journal

              If there is a hypothesis that gravity will bend a beam of light by the angle theta=(4*G*M)/(r*c^2), and the light from stars during an eclipse is not displaced by the amount predicted

              What do you mean by "predicted"? If you get different numbers, why isn't that predicted too? This is the implicit comparison of hypotheses I spoke of.

              What you say is only true in the trivial sense that if a theory is wrong then it isn't correct. The latter is not a meaningful "hypothesis", since its "predictions" amount to anything at all besides what was predicted by the hypothesis being tested.

              Except that it means that you have the most important observations of a hypothesis - the ways it doesn't work.

              • (Score: 0) by Anonymous Coward on Thursday June 30 2016, @12:57PM

                by Anonymous Coward on Thursday June 30 2016, @12:57PM (#367968)

                What do you mean by "predicted"?

                I mean there was a hypothesis that the light should be displaced by the angle theta, which can be calculated using that equation. For a given set of G, M, r, and c, the prediction is that the position of the star will appear to have changed by exactly that amount.

                If you get different numbers, why isn't that predicted too? This is the implicit comparison of hypotheses I spoke of.

                Predicted by whom? There needs to be some equation, etc. written down beforehand to have a prediction. Once again, if you are going to say that "I don't know anything, but I know your precise hypothesis is wrong" counts as a hypothesis, whatever. That is a useless hypothesis; it adds nothing.

                Except that it means that you have the most important observations of a hypothesis - the ways it doesn't work.

                Another hypothesis would be that the light is displaced by theta=(2*G*M)/(r*c^2), as deduced from Newtonian mechanics.

                In the notation I used above, we already have a way to talk about incorrect hypotheses:

                H = hypothesis is correct
                ~H = hypothesis is incorrect

                You don't say H0 = hypothesis is correct and H1 = hypothesis is incorrect. This is just nonsense come up with by stats people who have no idea what they are talking about. You can even read Ronald Fisher's rants about this later in his life:

                "It was only when the relation between a test of significance and its corresponding null hypothesis was confused with an acceptance procedure that it seemed suitable to distinguish errors in which the hypothesis is rejected wrongly, from errors in which it is "accepted wrongly" as the the phrase does."

                Fisher, R. A. (1955). "Statistical Methods and Scientific Induction", Journal of the Royal Statistical Society, Series B (Methodological), Vol. 17, No. 1, 69-78. http://www.phil.vt.edu/dmayo/PhilStatistics/Triad/Fisher%201955.pdf [vt.edu]

                "We are quite in danger of sending highly trained and highly intelligent young men out into the world with tables of erroneous numbers under their arms, and with a dense fog in the place where their brains ought to be. In this century, of course, they will be working on guided missiles and advising the medical profession on the control of disease, and there is no limit to the extent to which they could impede every sort of national effort."

                Fisher, R. A. (1958). "The Nature of Probability", Centennial Review 2: 261–274. http://www.york.ac.uk/depts/maths/histstat/fisher272.pdf [york.ac.uk]

                • (Score: 1) by khallow on Thursday June 30 2016, @01:23PM

                  by khallow (3766) Subscriber Badge on Thursday June 30 2016, @01:23PM (#367974) Journal

                  I mean there was a hypothesis that the light should be displaced by the angle theta, which can be calculated using that equation. For a given set of G, M, r, and c, the prediction is that the position of the star will appear to have changed by exactly that amount

                  If no other hypothesis is possible, then this is just a measure of your ability to measure and says nothing about the hypothesis.

                  Once again, if you are going to say that "I don't know anything, but I know your precise hypothesis is wrong" counts as a hypothesis, whatever. That is a useless hypothesis; it adds nothing.

                  Why do you think it adds nothing? That's a peculiar thing to say given the obvious relevance of determining that a hypothesis is wrong.

                  You don't say H0 = hypothesis is correct and H1 = hypothesis is incorrect. This is just nonsense come up with by stats people who have no idea what they are talking about.

                  Yet we can do it and it's a valid thing to do (though normally we use language that is more neutral and acknowledging of error, such as "reject" and "confirm"). Sometimes the above test is in error, but so what? No test is perfect.

                  • (Score: 0) by Anonymous Coward on Thursday June 30 2016, @01:31PM

                    by Anonymous Coward on Thursday June 30 2016, @01:31PM (#367976)

                    If no other hypothesis is possible, then this is just a measure of your ability to measure and says nothing about the hypothesis.

                    1) What makes you think this?
                    2) What examples of "no other hypothesis is possible" can you point out?

                    • (Score: 1) by khallow on Thursday June 30 2016, @07:36PM

                      by khallow (3766) Subscriber Badge on Thursday June 30 2016, @07:36PM (#368092) Journal

                      1) What makes you think this?
                      2) What examples of "no other hypothesis is possible" can you point out?

                      Perpetual motion machines are a classic example. Or the historical, bizarre infatuation with squaring the circle (using ruler and compass in the traditional way to draw a square with the same area as a given circle - this has been shown to be mathematically impossible). If someone claims to have done either, it reduces to a matter of determining where their error in logic or procedure happened.

                      • (Score: 0) by Anonymous Coward on Friday July 01 2016, @06:10AM

                        by Anonymous Coward on Friday July 01 2016, @06:10AM (#368297)

                        determining where their error in logic or procedure happened.

                        There would be multiple ways to go wrong; I don't see how either is an example of only one hypothesis being possible. Also, if you had bothered to read the links I provided, you would see this type of argument was already pointed out as wrong by Fisher with regard to this very issue, when he discusses:
                        https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems [wikipedia.org]

                        I planned on going on to your next points; let me know if you are still interested in feedback.

                        • (Score: 1) by khallow on Saturday July 02 2016, @12:10AM

                          by khallow (3766) Subscriber Badge on Saturday July 02 2016, @12:10AM (#368709) Journal

                          There would be multiple ways to go wrong

                          Which is irrelevant. When certain claims are made, you know they went wrong somehow even if you can't be bothered to distinguish how.

  • (Score: -1, Flamebait) by Anonymous Coward on Wednesday June 29 2016, @05:05PM

    by Anonymous Coward on Wednesday June 29 2016, @05:05PM (#367643)

    There are three types of liars:
    1) Liars
    2) Damn Liars
    3) Statisticians

    It is important to remember that while looking at *ANY* statistical work, because with the right filter or starting point, *ANY* data set can be made to show a positive or negative correlation. It is why "recipe" learning may be better than "science" learning: following a "recipe", the same answer should come out each and every time. It may be wrong, but it is repeatable by all.

    One other important fact on statistics: 87% of all statistics are made up on the spot!
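
    If you want to see the "right filter or starting point" effect for yourself, here is a minimal sketch, assuming NumPy and SciPy are available; the data is pure noise, yet fishing through enough columns reliably turns up a few "significant" correlations:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    y = rng.normal(size=50)              # the "outcome": pure noise

    hits = 0
    for _ in range(100):                 # 100 unrelated "predictors", also pure noise
        x = rng.normal(size=50)
        r, p = stats.pearsonr(x, y)
        if p < 0.05:
            hits += 1

    print(hits, "of 100 noise predictors look 'significant' at p < 0.05")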

  • (Score: 1) by Ken on Wednesday June 29 2016, @07:52PM

    by Ken (5985) on Wednesday June 29 2016, @07:52PM (#367704)

    It appears that more than 50% of commenters didn't read the article in its entirety.

    Greater than 80% don't care about the preceding stat.