
posted by janrinok on Wednesday July 09 2014, @11:13AM   Printer-friendly
from the rise-of-clueless dept.

Over the past few years, social psychologists have come under fire for publishing work based on falsified and non-reproducible evidence. And now one social psychologist has published an awe-inspiringly clueless rant about this situation that will leave you smashing your face into your desk.

At issue in this essay by Harvard's Jason Mitchell is the specific accusation, leveled against many social psychologists, that their results cannot be reproduced. The idea of reproducibility is essential to the scientific process; indeed, some would argue it is the very definition of science [PDF]. Mitchell believes that the emphasis on reproducibility is nothing more than "hand-wringing" ( http://io9.com/the-rise-of-the-evolutionary-psychology-douchebag-757550990 ).

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: -1, Flamebait) by Anonymous Coward on Wednesday July 09 2014, @11:37AM

    by Anonymous Coward on Wednesday July 09 2014, @11:37AM (#66453)

    No seriously who gives a fuck. Science is all fraud anyway. Scientists don't care about science. They only care about prestige, wallpapering their offices with their "research" papers, and sleeping with their nubile students.

    • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @11:49AM

      by Anonymous Coward on Wednesday July 09 2014, @11:49AM (#66455)

      "Science is all fraud anyway."
      LOL, only if by "fraud" you mean completely verifiable and reproducible, which I think is kind of the opposite of fraud.

      I'm sorry, that's probably insensitive; were you raped by a textbook in your high school science course?

      • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @11:53AM

        by Anonymous Coward on Wednesday July 09 2014, @11:53AM (#66456)

        I was raped by my high school science teacher, you insensitive fraud!

        • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @12:08PM

          by Anonymous Coward on Wednesday July 09 2014, @12:08PM (#66461)

          At least my insensitivity isn't verifiable or reproducible.

    • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @02:09PM

      by Anonymous Coward on Wednesday July 09 2014, @02:09PM (#66530)

      Just stick to "Fuck Beta!" next time, you're much better at it.

    • (Score: 2) by Tork on Wednesday July 09 2014, @09:25PM

      by Tork (3914) Subscriber Badge on Wednesday July 09 2014, @09:25PM (#66744)
      Heh. Yeah, hence the stereotype of the jet-setting labcoat.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @09:49PM

        by Anonymous Coward on Wednesday July 09 2014, @09:49PM (#66754)

        Leela: "Professor?! What did you do to the Planet Express ship? And why are you wearing a leather lab coat?"
        Farnsworth: "Because unlike you, I'm cool. I drive fast late at night when I should be sleeping. But you wouldn't understand."

  • (Score: 5, Insightful) by geb on Wednesday July 09 2014, @12:13PM

    by geb (529) on Wednesday July 09 2014, @12:13PM (#66462)

    This isn't a cause for instant rage. It didn't make me smash my face into the desk.

    There are plenty of points I disagree with in the essay, but honestly it came across more as the work of a frustrated researcher than an incompetent one. It sounds like he puts a great deal of effort into trying to produce good-quality studies (whether successfully or not) and then gets annoyed when others fail to replicate them in followup studies that don't use exactly the same method. Maybe the other studies are better; I can't judge that.

    I do disagree strongly with some points he made, such as the distinction between negative and positive evidence. This is where statistics and modelling are supposed to come in. Negative results are still evidence, and you can calculate how strongly you should weight them as evidence.

    For example, he's quite right in saying "all swans so far have been white" is relatively weak evidence for the claim "all swans everywhere are white". However, if you say "there is an elephant in my house" then there are only a certain number of places it could be hiding, and if you search room by room finding no elephants, those negative results are very strong evidence of the negative conclusion.
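    The elephant example can be made quantitative with a toy Bayes'-rule calculation. This is a minimal sketch; the prior of 0.5, the six-room house, and the 90% chance of spotting an elephant in a room you actually search are all illustrative assumptions, not numbers from the comment:

    ```python
    def posterior_after_searches(prior, rooms_total, rooms_searched, p_detect=0.9):
        """P(elephant present | rooms_searched rooms checked, nothing found)."""
        # If present, the elephant hides in one of rooms_total rooms uniformly.
        # We find nothing if it's in an unsearched room, or in a searched room
        # but missed (probability 1 - p_detect).
        p_miss_given_present = (
            (rooms_total - rooms_searched) / rooms_total
            + (rooms_searched / rooms_total) * (1 - p_detect)
        )
        # Total probability of finding nothing (it's certain if no elephant).
        p_miss = p_miss_given_present * prior + 1.0 * (1 - prior)
        return p_miss_given_present * prior / p_miss

    for k in (0, 2, 4, 6):
        print(f"rooms searched: {k}, P(elephant): {posterior_after_searches(0.5, 6, k):.3f}")
    ```

    Each empty room drives the posterior down, and after searching the whole house it collapses to near zero: the negative results are strong, calculable evidence for the negative conclusion, just as each white swan is weak, calculable evidence in the swan case.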

    Overall though, I'm more annoyed at the sensationalist headline than the content of the essay.

    • (Score: 2, Insightful) by No.Limit on Wednesday July 09 2014, @12:26PM

      by No.Limit (1965) on Wednesday July 09 2014, @12:26PM (#66465)

      Have to agree with being more annoyed at the headline than the essay.

      Please SN don't try to bait people into reading articles by using such headlines.

      Better keep the headline informative and then the reader can decide for themselves whether they're interested in reading the article.

      (Btw. this headline is really just the exception, let's keep it that way)

      • (Score: 2) by Thexalon on Wednesday July 09 2014, @01:47PM

        by Thexalon (636) on Wednesday July 09 2014, @01:47PM (#66516)

        Please SN don't try to bait people into reading articles

        Yup, they definitely shouldn't do that. Commenting without reading the article (like I'm doing right now) is a long and glorious tradition!

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
        • (Score: 2) by nitehawk214 on Wednesday July 09 2014, @02:04PM

          by nitehawk214 (1304) on Wednesday July 09 2014, @02:04PM (#66526)

          Well, the article is on io9; I would say not reading it is best.

          Actually at some point my computer became self-aware and the Chrome browser tab simply crashes when any Gawker owned site is loaded. Deadspin, Gizmodo, io9, lifehacker... Obviously a defense mechanism by the computer.

          --
          "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
          • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @04:28PM

            by Anonymous Coward on Wednesday July 09 2014, @04:28PM (#66597)

            For a few years gawker sites were 100% dependent on javascript, you couldn't see anything beyond the site's identifying banner if you had javascript blocked. That made it really easy to avoid reading anything they put out.

            Sadly, they seem to have hired a competent webmaster in the last year or so and now gawker doesn't automatically protect me from their dumb anymore.

      • (Score: 2) by nitehawk214 on Wednesday July 09 2014, @04:19PM

        by nitehawk214 (1304) on Wednesday July 09 2014, @04:19PM (#66594)

        I would say that Soylent editors listen to the community at an unprecedented level. Occasionally bad summaries like this one slip through; it's part of being a community-supported content site. I doubt the editor or submitter intended for it to sound the way it does.

        The green site just ignores arguments of "slashvertisement", which is certainly not the worst way to deal with criticism. I was banned from Ars today because I complained that an article sounded somewhat like a paid advertisement. Well that tells me all I need to know about Ars' so-called journalistic integrity. I won't let the door hit me on the way out, and I certainly will never visit that site again.

        --
        "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
        • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @04:30PM

          by Anonymous Coward on Wednesday July 09 2014, @04:30PM (#66599)

          > I was banned from Ars today because I complained that an article sounded somewhat like a paid advertisement.

          Without links we can't tell if Ars was the arsehole or if you were.

          • (Score: 2) by nitehawk214 on Wednesday July 09 2014, @06:24PM

            by nitehawk214 (1304) on Wednesday July 09 2014, @06:24PM (#66649)

            It was both. But they were the bigger asshole for simply banning without any explanation of why. So I must assume it's because I am right.

            --
            "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
        • (Score: 1) by No.Limit on Wednesday July 09 2014, @04:38PM

          by No.Limit (1965) on Wednesday July 09 2014, @04:38PM (#66602)

          I would say that Soylent editors listen to the community at an unprecedented level. Occasionally bad summaries like this one slip through

          I absolutely agree with that. It was more about giving feedback that such headlines aren't desirable.

        • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @08:11AM

          by Anonymous Coward on Thursday July 10 2014, @08:11AM (#66965)

          If they were listening they'd find some nerds to be editors, or at least proof-readers. There was no reason to subject us to this crap. A mean-spirited attack using straw men and implied authority, and telling us what to think about it too. Except none of it is true, and there is no objective reason to feel the way they instruct.

      • (Score: 1, Interesting) by Anonymous Coward on Wednesday July 09 2014, @04:55PM

        by Anonymous Coward on Wednesday July 09 2014, @04:55PM (#66609)

        I was more annoyed with io9 design than either the essay or TFS. They've earned the top slot on my mental list of sites never to visit.

        That said, the actual essay seems to center on whether we should intrinsically trust the people who find an effect or intrinsically trust the people who fail to replicate it. That is, he argues that people who try to replicate a study start with the expectation that the original study was bullshit, and thus don't "really" try to reproduce the results. Meanwhile, he ignores that the original researchers were strongly motivated to publish something, and thus didn't really try to question the first bit of statistical significance. He seems to assume that experimenter error will only produce an incorrect lack of effect, and never incorrect effects.

        Meanwhile, out here in the real world, I'm thinking that, if an effect is so subtle you can only measure it under conditions so restrictive that not even an ordinary scientist in your discipline can replicate them, then your effect is not really important, and we'd all do better to just move on.

    • (Score: 5, Insightful) by TGV on Wednesday July 09 2014, @01:21PM

      by TGV (2838) on Wednesday July 09 2014, @01:21PM (#66497)

      I disagree. It is a cause to be upset. For some time now, experimenters and statisticians from other areas in psychology have been fighting with people like Mitchell. There is so much nonsense out there: you do more X when you take a warm shower, your intuition is better than your rational judgement, you're better at thinking out of the box when there is a box on the table, and the list goes on. They make so many errors in their assumptions, experiments and analysis, that it is quite appalling to get another "you can't reproduce it properly and we're not going to acknowledge our errors. Nanananana."

    • (Score: 5, Insightful) by kebes on Wednesday July 09 2014, @02:25PM

      by kebes (1505) on Wednesday July 09 2014, @02:25PM (#66539)
      After reading his essay, I agree with you. Which is to say: the essay is wrong, but it's a matter of a subtly misguided scientist, not a complete idiot. He is trying to make a point about how experiments are conducted, and about how to weigh competing evidence. It's a subtle question, actually. Unfortunately, his arguments are flawed. He is mostly using common sense to guide his discussion, rather than thinking in terms of the harsh logic and probabilities that are central to science. (So, yes, in this sense he's missing the point of how science is supposed to be done.)

      1. One of his central arguments is that a 'failed' experiment (one that finds no effect) is likely due to an experimental mistake, whereas a 'successful' experiment (one that finds an effect) is likely due to the presence of a real effect. However, this isn't at all correct; he's biasing the argument with his wording (fail/success). The 'right way' to do science is to collect the data carefully and just see what it says. The experiment is successful as long as you're relatively confident that you didn't make any mistakes in execution or data processing. This is independent of whether the result is evidence for a correlation (i.e., effect stronger than noise) or evidence for a lack of correlation (i.e., effect non-existent or weaker than noise). Note that even in the second case we have a positive conclusion: we can set a rigorous bound on the strength of the correlation/effect.
      2. He tries to heuristically argue that there is a huge difference between positive and negative claims. The problem is that he uses a binary example, whereas in science (especially social science), we are usually talking about a matter of degree. He says:

      for example, I claim that some non-white swans exist, and you claim that none do (i.e., that no swans exist that are any color other than white). Whatever our a priori beliefs about the phenomenon, from an inductive standpoint, your negative claim (of nonexistence) is infinitely more tenuous than mine. A single positive example is sufficient to falsify the assertion that something does not exist; one colorful swan is all it takes to rule out the impossibility that swans come in more than one color. In contrast, negative examples can never establish the nonexistence of a phenomenon, because the next instance might always turn up a counterexample.

      This is fair enough, from a pure logic standpoint. Except that he then tries to conclude: "Thus, negative findings--such as failed replications--cannot bear against positive evidence for a phenomenon." This is wrong. In most experiments of the kind he's actually talking about, we are dealing not with binary classification but with some strength of correlation (with some error bar). The better example would be to talk about the percentage of swans that are black vs. white. If you do a small study and find that 10% of them are black, that's fine. If we then replicate your study many times and find that overall it's more like 1% of swans that are black, that doesn't mean our followup studies are wrong and your original study (with the bigger effect) was right.
      3. The other problem with the swan analogy is that we implicitly assume that determining the color of a swan has zero error. But the effects social scientists are studying are much harder to measure, and hence have large error bars. If determining swan color were very difficult (e.g. we lived in a world without light, and could only infer color based on detailed chemical analysis of feathers), then a single positive claim of a black swan would, actually, not strongly counter the "all swans are white" claim. It could just be an experimental error; we would want to replicate the study many times before we were properly convinced. Throughout his essay, he continually assumes that if you observe a positive effect in a study, then it must be due to the actual presence of an effect. But when using error-prone measurements, observing a positive effect is frequently just due to statistical noise. His essay goes to great lengths to argue that 'failed' experiments are usually due to an experimental mistake, but never acknowledges that many 'successful' experiments are in fact due to experimental mistakes.
      4. Similarly, he talks about the inherent bias of replicators: that they are usually trying to disprove the original finding, and so they bias their results. This is likely true. But he does not acknowledge the similar bias in positive findings: that if you are searching for a given effect, you will tend to keep searching until you find the effect, and then publish the result. These biases are hard to remove, though much of science is precisely about instituting mechanisms to overcome these biases. And, in fact, independent replication is one such mechanism.
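      Points 3 and 4 can be illustrated with a quick simulation: even when the true effect is exactly zero, measurement noise alone hands a fixed fraction of studies a "successful" result, and those spurious positives all overstate the effect. This is a hypothetical sketch; the sample size, noise level, and significance cutoff are illustrative assumptions, not figures from the essay.

      ```python
      import random
      import statistics

      random.seed(0)

      def run_study(true_effect=0.0, n=30, noise=1.0):
          """Observed mean effect of one study with Gaussian measurement noise."""
          return statistics.mean(true_effect + random.gauss(0, noise) for _ in range(n))

      # "Significant" = observed effect more than ~2 standard errors from zero.
      threshold = 2 * 1.0 / (30 ** 0.5)

      results = [run_study() for _ in range(10_000)]
      positives = [r for r in results if abs(r) > threshold]

      print(f"'successful' studies despite a zero true effect: {len(positives) / len(results):.1%}")
      print(f"mean |effect| among those false positives: {statistics.mean(abs(r) for r in positives):.2f}")
      ```

      Roughly one study in twenty clears the bar by chance, and every one of them reports an effect larger than the cutoff, so a literature selected for positive findings systematically overstates effects that unbiased replications then fail to find.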

      To conclude: He's not an idiot. He's legitimately trying to argue logically against replication efforts. But his arguments are fundamentally flawed. He's arguing emotionally, and not fairly comparing the two sides of the issue. In short, he's not thinking dispassionately (scientifically!) about how science works.

      • (Score: 2) by Thexalon on Wednesday July 09 2014, @03:31PM

        by Thexalon (636) on Wednesday July 09 2014, @03:31PM (#66568)

        Or, in short: If you expect something to exist, you'll be more likely to find it even if it isn't there. If you expect something to not exist, you'll be less likely to find it, even if it is there.

        I'm guessing this guy just had his pet theory disproved by somebody.

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by TGV on Wednesday July 09 2014, @05:34PM

        by TGV (2838) on Wednesday July 09 2014, @05:34PM (#66620)

        If you're right that he is not an idiot, then he is a bit fishy. Why would he (and his colleagues, like Gilbert) argue like this? It certainly sounds like they're trying to save their skin by proxy of their citation scores.

      • (Score: 2) by HiThere on Wednesday July 09 2014, @07:18PM

        by HiThere (866) Subscriber Badge on Wednesday July 09 2014, @07:18PM (#66683) Journal

        Sorry, but data analysis isn't science. Science involves making predictions and then verifying that they are correct. This is why the many-worlds interpretation of quantum physics isn't science. Yes, it matches all the data, but it didn't make predictions that were then found to be correct. The data came first, and there are multiple interpretations that fit all the data.

        If you don't have the theory guiding the research, then you aren't doing science. If the theory can't be invalidated, then you aren't doing science. If the theory doesn't make predictions, then you aren't doing science. There are lots of different areas of science, but they MUST meet those criteria, or they aren't science. They may be math, or philosophy, or something else. Technology often only builds the theory after the results, but it isn't science when it does so. (Note that this doesn't make it invalid.)

        Psychology is in a real predicament here. Their theories are usually not framed tightly enough to be falsified, so they aren't science. But they aren't willing to be philosophy, either. Many of them are actually a sort of metaphysics, but none of them will admit that. Do note that there are things that impinge on psychology that qualify as science. Just the other day there was a prediction that if you stimulate a particular area of the brain with a certain frequency and voltage of current, then consciousness will disappear. That *IS* testable. Not easily, because the number of people who have cause to have that particular area of the brain open for testing is quite small. But is it psychology or physiology?

        OTOH, psychology is making theories and predictions about actions of complex entities in a very noisy domain. It's not really surprising that they haven't made more progress there. Occasionally I run across a testable theory that seems interesting and useful. (Well, not as much recently, but they were always infrequent, and I suspect that as the population has aged, the percentage of people interested in experimenting with themselves has declined. Also there aren't nearly as many book stores around as there were a few years ago, and I don't run into these "black swans" while browsing on the web. And when I do I'm much more skeptical about them.) Some of them were useful techniques, but none of them were framed in a way that admits scientific investigation.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 1) by wantkitteh on Wednesday July 09 2014, @02:35PM

      by wantkitteh (3362) on Wednesday July 09 2014, @02:35PM (#66547) Homepage Journal

      I will admit the headline did make me check the one below it to see if it was a weird trick to lose weight or a grandmother 3D printing herself a facelift.

      I will also admit to having assumed the pose of a certain starship captain in a certain popular meme several times while reading the original essay.

  • (Score: 3) by Geezer on Wednesday July 09 2014, @12:15PM

    by Geezer (511) on Wednesday July 09 2014, @12:15PM (#66463)

    Given that social "science" isn't really science at all, it seems reasonable for related research standards to be less than rigorous.

    • (Score: 4, Insightful) by Taibhsear on Wednesday July 09 2014, @01:24PM

      by Taibhsear (1464) on Wednesday July 09 2014, @01:24PM (#66501)

      Yeah there's a difference between hard science (physics) and soft science (sociology). Neurology is science. Psychology is on par with alchemy. Even philosophy frequently has a logical mathematical framework to work with. This doesn't get me as riled up as scientists blatantly falsifying data or using snake oil tactics and having financial incentive to mislead the public.

      • (Score: 4, Insightful) by Reziac on Wednesday July 09 2014, @04:19PM

        by Reziac (2489) on Wednesday July 09 2014, @04:19PM (#66592) Homepage

        "Psychology is on par with alchemy."

        I agree, and until they start focusing first and foremost on the biochemistry of the brain and stop relying on the bastard descendants of Freud and Jung, it's going to continue to be alchemy.

        It's 'amazing' how many psychological issues become more reliably treatable as soon as the underlying biochemical defect(s) are identified.

        --
        And there is no Alkibiades to come back and save us from ourselves.
        • (Score: 2) by tibman on Wednesday July 09 2014, @05:30PM

          by tibman (134) Subscriber Badge on Wednesday July 09 2014, @05:30PM (#66616)

          I think that would work for biological issues but not social/experience ones. If work is stressing you out, that is not something that should cause any modification of your brain chemistry. Just need a coping mechanism or something.

          --
          SN won't survive on lurkers alone. Write comments.
          • (Score: 2) by Reziac on Wednesday July 09 2014, @06:16PM

            by Reziac (2489) on Wednesday July 09 2014, @06:16PM (#66645) Homepage

            Well, sure. But stress factors (ignoring the question of how much control one may have) kinda fall under "Doctor, it hurts when I do this!" "Well then, don't do that." Brain chemistry errors aren't amenable to being told "don't do that", but that's pretty much how the therapy industry has handled them.

            "Stress: what the body feels when the brain overrides its perfectly reasonable desire to choke the living shit out of some idiot who desperately deserves it."

            --
            And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 2) by opinionated_science on Wednesday July 09 2014, @02:16PM

      by opinionated_science (4031) on Wednesday July 09 2014, @02:16PM (#66535)

      glad you said it.....(ducks for cover...)

  • (Score: 3, Insightful) by Sir Garlon on Wednesday July 09 2014, @12:26PM

    by Sir Garlon (1264) on Wednesday July 09 2014, @12:26PM (#66466)

    Mitchell isn't claiming that reproducing scientific results is unimportant. I'll spare you his whole patronizing homily, but it starts with

    This is a barren defense. I have a particular cookbook that I love, and even though I follow the recipes as closely as I can, the food somehow never quite looks as good as it does in the photos.

    and ends with

    Someone without full possession of such know-how -- perhaps because he is globally incompetent, or new to science, or even just new to neuroimaging specifically -- could well be expected to bungle one or more of these important, yet unstated, experimental details.

    What he's saying is, in fact, that anyone who fails to reproduce his results is failing because of incompetence.

    Speaking of "barren defense," I'll concede that if one scientist cannot reproduce the results of another's experiment, that may be because one of the scientists is incompetent. That says nothing about *which one* is incompetent -- the one who initially faked^H^H^H^H discovered the result, or the one who couldn't reproduce it.

    --
    [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
    • (Score: 3, Insightful) by Dunbal on Wednesday July 09 2014, @12:57PM

      by Dunbal (3515) on Wednesday July 09 2014, @12:57PM (#66479)

      "What he's saying is, in fact, that anyone who fails to reproduce his results is failing because of incompetence."

      Yes, and he is going to have to do a hell of a lot better than that. The thing is, competence is not supposed to be a variable in ANY experimental design; otherwise, how could it be objective? Therefore I would suggest that his entire argument is based on a false premise if he states that no one else is competent enough. It's up to him to figure out how to teach us, the supposedly ignorant. It's not up to us to take what he says on faith.

      • (Score: 3, Informative) by geb on Wednesday July 09 2014, @01:15PM

        by geb (529) on Wednesday July 09 2014, @01:15PM (#66495)

        The essay does make a few points about how to raise the quality of studies. It's not an in-depth, step-by-step plan for reforming science, but it is addressed.

        • (Score: 2) by hubie on Wednesday July 09 2014, @02:31PM

          by hubie (1068) Subscriber Badge on Wednesday July 09 2014, @02:31PM (#66543) Journal

          Of course, if he is frustrated because people consistently seem incapable of reproducing his results, perhaps there is another place he needs to look first. There is an old academe joke about obliviousness to poor teaching: a professor laments to a colleague, "Students today are not very bright. They seem incapable of picking up the concept no matter how many times I repeat the same lecture to them."

    • (Score: 1) by darnkitten on Wednesday July 09 2014, @05:51PM

      by darnkitten (1912) on Wednesday July 09 2014, @05:51PM (#66629)

      I know next to nothing about social psychology, but I can tell him why "the food [in his favorite cookbook] somehow never quite looks as good as it does in the photos:" food photography. [amazon.com] The images in the previews look color-enhanced and are certainly prepped and arranged to best advantage.

      That being said, I'd wager that if he put in as much time learning the preparation and presentation of that particular cuisine as does, say, an apprentice chef, or as he himself did in learning his own particular discipline, he would "share [the] tacit knowledge of...culinary conventions and techniques" that he currently lacks, and would be able to reproduce the results (recipes) consistently and within the tolerances of that profession.

      Which, of course, only highlights the problem of "failed replications in social psychology" he is dismissing in his article.

      • (Score: 2) by HiThere on Wednesday July 09 2014, @07:26PM

        by HiThere (866) Subscriber Badge on Wednesday July 09 2014, @07:26PM (#66686) Journal

        I tend to include that when I say "The theories of psychology aren't framed precisely enough.", though I mean more than just that. But I'm a programmer, and to me a precise specification looks a lot like source code. Even so, do notice that if you don't have the proper libraries installed then you can't compile, so the specification needs to include those, too. And the operating system(s). Etc.

        I can sort of forgive psychology for not making more advances, considering the difficulty of specifying things precisely enough to ensure reproducible results. But on the other hand I'm reluctant to consider them a science, because they DON'T specify their theories precisely enough to allow reproducible results. And saying "you don't understand things properly" rather than "you didn't follow the given instructions" seems to shift it towards either politics or theology.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 5, Informative) by Per on Wednesday July 09 2014, @12:28PM

    by Per (2249) on Wednesday July 09 2014, @12:28PM (#66467)

    Dr Jason Mitchell argues that attempts to reproduce scientific results may fail for a number of reasons, and that such a failed attempt is not sufficient evidence to insinuate that the original study was fraudulent. Blogger Annalee Newitz sees this as an attack on the scientific method. It is an interesting and important subject. Too bad neither of the participants manages to make a good argument.

    • (Score: 1) by oldmac31310 on Wednesday July 09 2014, @06:33PM

      by oldmac31310 (4521) on Wednesday July 09 2014, @06:33PM (#66654)

      I wish they wouldn't link to stuff like this Anally Newitz thing. She is really not qualified to comment as far as I recall. And I know from personal experience that she cannot take criticism or a joke.

  • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @12:34PM

    by Anonymous Coward on Wednesday July 09 2014, @12:34PM (#66468)

    It's a roundabout way to admit that "social psychology" is not a science. So next time these clowns go on about their "science," ignore them and move on.

    • (Score: 2) by c0lo on Wednesday July 09 2014, @12:42PM

      by c0lo (156) Subscriber Badge on Wednesday July 09 2014, @12:42PM (#66472) Journal
      You're kidding, right? Haven't you read the case studies? [wikipedia.org]
      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by choose another one on Wednesday July 09 2014, @12:53PM

      by choose another one (515) Subscriber Badge on Wednesday July 09 2014, @12:53PM (#66474)

      Careful - if you RTFA you'll note that they have much better ways to detect implicit prejudice these days....

  • (Score: 2) by Dunbal on Wednesday July 09 2014, @12:54PM

    by Dunbal (3515) on Wednesday July 09 2014, @12:54PM (#66475)

    Well there's nothing stopping Mr. Mitchell from going out and founding his own church, I guess. Stranger things have happened.

    The GOOD thing about science is that no one speaks for a scientist. Oh, plenty of people try to: politicians, other scientists, etc. But you see, to the true scientist, the only thing that has a voice is fact backed by reproducible evidence.

  • (Score: 5, Insightful) by deimios on Wednesday July 09 2014, @01:05PM

    by deimios (201) Subscriber Badge on Wednesday July 09 2014, @01:05PM (#66484) Journal

    You're doing them wrong. There is a disturbing trend of clickbait/sensationalist headlines that has sadly reached Soylent too. The current headline tells us nothing.

    How about "Social psychologist lashes out against critics over non-reproducible studies"? A headline that actually tells us something about the story.

    • (Score: 1) by Meepy on Wednesday July 09 2014, @03:14PM

      by Meepy (2099) on Wednesday July 09 2014, @03:14PM (#66560)

      If I could mod you up to 6, I would.

      • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @04:42PM

        by Anonymous Coward on Wednesday July 09 2014, @04:42PM (#66604)

        > If I could mod you up to 6, I would.

        If I were logged in, I would mod him down.

        Complaining about clickbait headlines is the lowest form of criticism. Seems to me that the solution to clickbait headlines lies within the self: learn to accept that, due to their enforced brevity, headlines will always be pushed toward extreme over-simplification. Take headlines for what they are - a colorful hook - and turn down your own desire to take them at face value. While it is not a black-and-white issue, I'd prefer to see the site err on the side of more entertaining headlines rather than duller ones.

        If experience is any guide, any responses to this position will be nit-pickers.

        • (Score: 1) by Meepy on Wednesday July 09 2014, @09:14PM

          by Meepy (2099) on Wednesday July 09 2014, @09:14PM (#66735)

          It's not about entertaining headlines, it's about "Upworthy-style" empty, contentless ones that go beyond simplification. Perhaps you're just unaware of the trend... http://www.upworthygenerator.com/ [upworthygenerator.com]

          Anyway, I don't agree with the fatalist viewpoint and I think it's worth fighting against on a news site I want to read.

          • (Score: 1, Interesting) by Anonymous Coward on Thursday July 10 2014, @02:37AM

            by Anonymous Coward on Thursday July 10 2014, @02:37AM (#66857)

            The reason those kinds of headlines aren't such a problem here is that we include summaries. Those clickbait headlines are anti-informative when they stand alone. But when there are a couple of paragraphs filling in the context right next to the headline, it really mitigates any harm they might do.

    • (Score: 2) by HiThere on Wednesday July 09 2014, @07:34PM

      by HiThere (866) Subscriber Badge on Wednesday July 09 2014, @07:34PM (#66691) Journal

      Actually, he was claiming that if they had been competent they WOULD have reproduced his results. It's difficult to tell whether he was sloppy or they were, or whether the experiments were so loosely described that both sides performed them faithfully according to the description. He seems to be claiming that the other people should have known what his normal practice was. I have a hard time accepting this as a defense, even though I know that it's common in many areas to refrain from specifying what one considers "normal practice among those skilled in the art". To me that sounds like "This is art, not science. You're supposed to have the same cultural background as I do," but I could be misreading him.

      It's fairly clear that he honestly thinks he's doing science properly, and I doubt that he was being fraudulent. But I also doubt that those who failed to reproduce his results were.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 0) by Anonymous Coward on Wednesday July 09 2014, @03:23PM

    by Anonymous Coward on Wednesday July 09 2014, @03:23PM (#66564)

    The explicit link about "hand-wringing" points to "The Rise of the Evolutionary Psychology Douchebag" by Annalee Newitz, which speaks neither about "hand-wringing" nor about Mitchell. I guess the article linked there should be the previously linked "If You Love Science, This Will Make You Lose Your Sh*t" from the same author, which both speaks of Jason Mitchell and links to his essay, quoting it: "Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value." - Ignacio Agulló

  • (Score: 2) by lubricus on Wednesday July 09 2014, @09:14PM

    by lubricus (232) on Wednesday July 09 2014, @09:14PM (#66734)

    Two points:

    1. Science is not only about discovery, but also about effectively communicating findings. If many researchers cannot replicate his experiments, perhaps he should aspire to communicate his methods more clearly, rather than issue blanket accusations of incompetence.

    2. The best scientists I know are humble. They worry when others cannot replicate their experiments. In fact, the first thing they do is go over their notes to check for a mistake... this after the innumerable checks for mistakes before publication. They also realize that analysis and interpretation can vary widely, and are wary of unconscious bias. I cannot imagine any of my "hard-science" friends (which means, basically, all my friends... yes, I have friends!) being cocksure enough to write something like this.

    It's shameful, really.

    --
    ... sorry about the typos