A study of pull requests made by nearly 1.4 million users of GitHub has found that code changes made by women were more likely to be accepted, unless their gender was easily identifiable. The study is still awaiting peer review, so keep that in mind:
The researchers, from the computer science departments at Cal Poly and North Carolina State University, looked at around four million people who logged on to GitHub on a single day - 1 April 2015. GitHub is an enormous developer community which does not request gender information from its 12 million users. However, the team was able to identify roughly 1.4 million of them as male or female - either because it was clear from the users' profiles or because their email addresses could be matched with the Google+ social network. The researchers acknowledged that this posed a privacy risk, but said they did not intend to publish the raw data.
The team found that 78.6% of pull requests made by women were accepted, compared with 74.6% of those made by men. The researchers considered various factors - whether women were more likely to be responding to known issues, whether their contributions were shorter and so easier to appraise, and which programming language they were using - but none of these factors accounted for the gap.
However, among users who were not well known within the community, those whose profiles made it clear that they were women had a much lower acceptance rate than those whose gender was not obvious. "For outsiders, we see evidence for gender bias: women's acceptance rates are 71.8% when they use gender neutral profiles, but drop to 62.5% when their gender is identifiable. There is a similar drop for men, but the effect is not as strong," the paper noted.
"Women have a higher acceptance rate of pull requests overall, but when they're outsiders and their gender is identifiable, they have a lower acceptance rate than men. Our results suggest that although women on GitHub may be more competent overall, bias against them exists nonetheless," the researchers concluded.
[Continues...]
The excellent Slate Star Codex has analysed this data.
I would highly recommend reading Scott Alexander's full analysis, but here's his summation...
So, let’s review. A non-peer-reviewed paper shows that women get more requests accepted than men. In one subgroup, unblinding gender gives women a bigger advantage; in another subgroup, unblinding gender gives men a bigger advantage. When gender is unblinded, both men and women do worse; it’s unclear if there are statistically significant differences in this regard. Only one of the study’s subgroups showed lower acceptance for women than men, and the size of the difference was 63% vs. 64%, which may or may not be statistically significant. This may or may not be related to the fact, demonstrated in the study, that women propose bigger and less useful changes on average; no attempt was made to control for this. This tiny amount of discrimination against women seems to be mostly from other women, not from men.
The media uses this to conclude that “a vile male hive mind is running an assault mission against women in tech.”
Every time I say I’m nervous about the institutionalized social justice movement, people tell me that I’m crazy, that I’m just sexist and privileged, and that feminism is merely the belief that women are people so any discomfort with it is totally beyond the pale. I would nevertheless like to re-emphasize my concerns at this point.
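The "may or may not be statistically significant" question above is answerable in principle: given the raw acceptance counts (which the paper does not break out here), a standard two-proportion z-test would settle whether 63% vs. 64% is distinguishable from noise. A minimal sketch, using made-up sample sizes purely for illustration (the real subgroup counts are not in the summary):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test on acceptance rates.

    Returns (z statistic, p-value) under the pooled-proportion
    normal approximation.
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pool the two samples to estimate the common proportion under H0
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 63% of 1,000 requests vs. 64% of 1,000 requests.
# These sample sizes are invented; only the rates come from the article.
z, p = two_proportion_z(630, 1000, 640, 1000)
```

With samples of this hypothetical size, a one-point gap is nowhere near the conventional p < 0.05 threshold; whether the study's actual subgroup sizes change that is exactly the detail peer review would need to check.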
(Score: 0) by Anonymous Coward on Monday February 15 2016, @09:36AM
screaming "flamebait!!!!111" on a story being reported by BBC, Ars, The Register, CNN, The Guardian, CBS, MIT Technology Review, and others.
Wow, media outlets that are quickly becoming irrelevant because of dying business models are reporting on a story that is guaranteed to get them a lot of clicks. Who'd have thunk?!
(Score: 2) by takyon on Monday February 15 2016, @09:54AM
It's still not flamebait because of the content, which is a widely-reported study. It's entirely possible to discuss the validity of the study without flaming.
Let's say the study survives peer review and gets reported on again, or the results are replicated by other researchers. Will you still consider it clickbait or flamebait?
(Score: 0) by Anonymous Coward on Monday February 15 2016, @10:25AM
yes. it is what it is. and its brainwashing propaganda. the answer is yes. and im with the people above, when they redo the study please dont post it.
(Score: -1, Troll) by Anonymous Coward on Monday February 15 2016, @10:43AM
I did my own study. I found that systemd really is an excellent piece of software, but only because Lennart Poettering has a vagina *and* testicles! Another test I did which you might find enlightening... I also did a DNA test on him and found out not only is he half man half woman he's half black, half mexican, half native american, half muslim, half calf, half vente and half a sack of shit! His code makes even the most anal retentive Social Justice Warrior ponder 24/7/365 about what they could possibly bitch about.
(Score: 3, Insightful) by Anal Pumpernickel on Monday February 15 2016, @11:16AM
The problem is that fools start citing these studies as a sort of proof even if the quality of the research is shoddy (as you can expect from the social sciences), it hasn't been peer-reviewed, and it hasn't been replicated.
I don't think it's flamebait, but I also don't think it's very smart for idiotic reporters to instantly cite studies simply because they exist and are new, no matter what conclusions they reach. It's especially bad in situations where they use the studies to justify changes to government policy.