Timofey Pnin linked to an Alice Eagly article that mentioned these two meta-analyses:

  • van Dijk et al. 2012 "Defying Conventional Wisdom: A Meta-Analytical Examination of the Differences between Demographic and Job-Related Diversity Relationships with Performance"
  • Post and Byron 2015 "Women on Boards and Firm Financial Performance: A Meta-Analysis"

I wanted to check for funnel plot asymmetry in the set of studies in these meta-analyses, so I emailed coauthors of the articles. Hans van Dijk and Kris Byron were kind enough to send data.

The first funnel plot below shows the 612 effect sizes in the van Dijk et al. 2012 meta-analysis; the second funnel plot is a close-up of the bottom of the funnel, limited to studies with fewer than 600 teams. The plot is remarkably symmetric.

FP1

FP2

The next two funnel plots are for the Post and Byron 2015 meta-analysis, with the full set of studies in the top plot and, below it, a close-up of the studies with a standard error less than 0.4. This plot is reasonably symmetric.

FP3

FP4
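For readers who want to run a similar check on other datasets, below is a minimal sketch of a funnel plot plus Egger's regression test for asymmetry. The data in the sketch are simulated for illustration, not the meta-analytic data plotted above, and precision-on-the-y-axis is only one common plotting convention.

```python
# Minimal sketch of a funnel plot and Egger's regression test for asymmetry.
# The data below are simulated and symmetric by construction; substitute the
# effect sizes and standard errors from an actual meta-analysis to use it.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
ses = rng.uniform(0.02, 0.30, size=200)   # per-study standard errors
effects = rng.normal(0.0, ses)            # effects scattered around zero

# Funnel plot: effect size against precision (1/SE), so large studies sit
# at the top and small studies fan out symmetrically at the bottom.
plt.scatter(effects, 1 / ses, s=10)
plt.axvline(np.average(effects, weights=1 / ses**2), linestyle="--")
plt.xlabel("Effect size")
plt.ylabel("Precision (1/SE)")
plt.show()

# Egger's test: regress the standardized effect (effect/SE) on precision;
# an intercept reliably different from zero suggests funnel asymmetry.
res = stats.linregress(1 / ses, effects / ses)
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(ses) - 2)
print(f"Egger intercept = {res.intercept:.3f}, p = {p_value:.3f}")
```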

UPDATE (Apr 13, 2016):

More funnel plots from van Dijk et al. 2012.

Sample restricted to age diversity (DIV TYPE=1):

vDe - Age Diversity (1)

Sample restricted to race and ethnic diversity (DIV TYPE=2):

vDe - Race Ethnic Diversity (2)

Sample restricted to sex diversity (DIV TYPE=5):

vDe - Sex Diversity (5)

Sample restricted to education diversity (DIV TYPE=6):

vDe - Education Diversity (6)


The Washington Post police shootings database indicated that on-duty police officers in the United States shot dead 91 unarmed persons in 2015: 31 whites, 37 blacks, 18 Hispanics, and 5 persons of another race or ethnicity. The database updates over time; the screenshot below preserves the data as of January 4, 2016.

WaPo UM

The New York Times search engine restricted to dates in 2015 returned 1,281 hits for "unarmed black", 4 hits for "unarmed white", 0 hits for "unarmed Hispanic", and 0 hits for "unarmed Asian":

nytimesUnarmedBlack

nytimesUnarmedWhite

nytimesUnarmedHispanic

nytimesUnarmedAsian


There is a common practice of discussing inequality in the United States without reference to Asian Americans, which permits the suggestion that the inequality is due to race or racial bias. Here's a recent example:

The graph reported results for Hispanics disaggregated into Cubans, Puerto Ricans, Mexicans, and other Hispanics, but the graph omitted results for Asians and Pacific Islanders, even though the note for the graph indicated that Asians/Pacific Islanders were included in the model. Here are data on Asian American poverty rates (source):

ACS

The omission of Asian Americans from discussions of inequality is a common enough practice [1, 2, 3, 4, 5] that it deserves a name. The Asian American Exclusion is as good as any.


Here is a passage from Pigliucci 2013.

Steele and Aronson (1995), among others, looked at IQ tests and at ETS tests (e.g. SATs, GREs, etc.) to see whether human intellectual performance can be manipulated with simple psychological tricks priming negative stereotypes about a group that the subjects self-identify with. Notoriously, the trick worked, and as a result we can explain almost all of the gap between whites and blacks on intelligence tests as an artifact of stereotype threat, a previously unknown testing situation bias.

Racial gaps are a common and perennial concern in public education, but this passage suggests that such gaps are an artifact. However, when I looked up Steele and Aronson (1995) to find the evidence for this result, I discovered that the black participants and the white participants in the study were all Stanford undergraduates and that the students' test performances were statistically adjusted for the students' prior SAT scores, so the analysis estimated the gap among students of similar prior ability rather than the raw population gap. Given that the analysis involved both a selected sample and statistical control, it does not seem reasonable to draw an inference about the general population from it. This error in reporting the results of Steele and Aronson (1995) is apparently common enough to deserve its own article.
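To illustrate why that combination matters, here is a hypothetical simulation (all numbers invented): when a later test score is driven largely by the variable used for both selection and adjustment, the raw population gap can be large while the adjusted gap in the selected sample is near zero.

```python
# Hypothetical simulation of the point above: in a population with a true
# score gap, selecting a high-SAT sample and adjusting for SAT can shrink
# the estimated gap toward zero. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)               # 0 or 1, two hypothetical groups
sat = rng.normal(1000 + 100 * group, 150)   # SAT correlated with group
test = sat / 10 + rng.normal(0, 10, n)      # later test driven largely by SAT

# Raw population gap on the later test:
raw_gap = test[group == 1].mean() - test[group == 0].mean()

# Selected sample (e.g., SAT above 1200) with SAT statistically controlled
# via ordinary least squares:
sel = sat > 1200
X = np.column_stack([np.ones(sel.sum()), group[sel], sat[sel]])
beta = np.linalg.lstsq(X, test[sel], rcond=None)[0]

print(f"raw gap = {raw_gap:.1f}, adjusted gap in selected sample = {beta[1]:.1f}")
```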

---

Here's a related passage from Brian at Dynamic Ecology:

A neat example on the importance of nomination criteria for gender equity is buried in this post about winning Jeopardy (an American television quiz show). For a long time only 1/3 of the winners were women. This might lead Larry Summers to conclude men are just better at recalling facts (or clicking the button to answer faster). But a natural experiment (scroll down to the middle of the post to The Challenger Pool Has Gotten Bigger) shows that nomination criteria were the real problem. In 2006 Jeopardy changed how they selected the contestants. Before 2006 you had to self-fund a trip to Los Angeles to participate in try-outs to get on the show. This required a certain chutzpah/cockiness to lay out several hundred dollars with no guarantee of even being selected. And 2/3 of the winners were male because more males were making the choice to take this risk. Then they switched to an online test. And suddenly more participants were female and suddenly half the winners were female. [emphasis added]

I looked up the 538 post linked to in the passage, which reported: "Almost half of returning champions this season have been women. In the year before Jennings's streak, fewer than 1 in 3 winners were female." That passage provides two data points: this season appears to be 2015 (the year of the 538 post), and the year before Jennings's streak appears to be 2003 (the 538 post noted that Jennings's streak occurred in 2004). The 538 post reported that the rule change for the online test occurred in 2006.

So here's the relevant information from the 538 post:

  • In 2003, fewer than 1 in 3 Jeopardy winners were women.
  • In 2006, the selection process was changed to an online test.
  • Presumably in 2015, through early May, almost half of Jeopardy winners have been women.

It does not seem that comparison of a data point from 2003 to a partial data point from 2015 permits use of the descriptive term "suddenly."

It's entirely possible -- and perhaps probable -- that the switch to an online test for qualification reduced gender inequality in Jeopardy winners. But that inference needs more support than the minimal data reported in the 538 post.


Here's a tweet that I happened upon:

The graph is available here. The idea of the graph appears to be that the average 2012 science scores on the PISA test were similar for boys and girls, so the percentage of women should be similar to the percentage of men among university science graduates in 2010.

The graph would be more compelling if STEM workers were drawn equally from the left half and the right half of the bell curve of science and math ability. But that's probably not what happens: it is more likely that college graduates who work in STEM fields have more science and math ability than the typical person. If that's true, then comparing average PISA scores for boys and girls is not a good idea in this case; it would be better to compare PISA scores for boys and girls in the right tail of science and math ability, because that is where the bulk of STEM workers likely come from.
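To make the tail point concrete, here is a small illustration with made-up parameters; these are not PISA estimates, only an assumption of a small male advantage in mean and spread.

```python
# Illustration of how near-equal means can coexist with a lopsided right
# tail. The parameters below are hypothetical, not PISA estimates.
from scipy import stats

mean_m, sd_m = 0.05, 1.05   # hypothetical male score distribution (standardized)
mean_f, sd_f = 0.00, 1.00   # hypothetical female score distribution

# Use the female distribution's 95th percentile as the cutoff for "top scorers".
cutoff = stats.norm.ppf(0.95, loc=mean_f, scale=sd_f)

share_m = stats.norm.sf(cutoff, loc=mean_m, scale=sd_m)  # share of boys above cutoff
share_f = stats.norm.sf(cutoff, loc=mean_f, scale=sd_f)  # share of girls above cutoff
print(f"male:female ratio above the cutoff = {share_m / share_f:.2f}")
# A mean gap of only 0.05 SD plus slightly larger male variance already
# yields a ratio near 1.3:1 in the top 5%, despite nearly identical means.
```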

Stoet and Geary 2013 reported on sex distributions in the right tail of math ability on the PISA:

For the 33 countries that participated in all four of the PISA assessments (i.e., 2000, 2003, 2006, and 2009), a ratio of 1.7–1.9:1 [in mathematics performance] was found for students achieving above the 95th percentile, and a 2.3–2.7:1 ratio for students scoring above the 99th percentile.

So there is a substantial sex difference in mathematics scores to the advantage of boys in the PISA data. There is also a substantial sex difference in reading scores to the advantage of girls in the PISA data, but reading ability is less useful than math ability for success in most or all STEM fields.

There is a smaller advantage for boys over girls in the right tail of science scores on the 2012 PISA, according to this report:

Across OECD countries, 9.3% of boys are top performers in science (performing at Level 5 or 6), but only 7.4% of girls are.

I'm not sure what percentile a Level 5 or 6 score is equivalent to. I'm also not sure whether math scores or science scores are more predictive of future science careers. But I am sure that, for understanding representation in STEM, it is better to examine the right tail of the distribution than the mean.


From the abstract of Bucolo and Cohn 2010 (gated, ungated):

'Playing the race card' reduced White juror racial bias as White jurors' ratings of guilt for Black defendants were significantly lower when the defence attorney's statements included racially salient statements. White juror ratings of guilt for White defendants and Black defendants were not significantly different when race was not made salient.

The second sentence reports that white mock juror ratings of guilt were not significantly different for black defendants and white defendants when race was not made salient, but the first sentence claims that "playing the race card" reduced white juror racial bias. If the data cannot support the inference that there is bias without the race card ("not significantly different"), then how can the data support the inference that "playing the race card" reduced bias?

For the answer, let's look at the Results section (p. 298). Guilt ratings were reported on a scale from -5 (definitely not guilty) to +5 (definitely guilty):

A post hoc t test (t(75) = .24, p = .81) revealed that ratings of guilt for a Black defendant (M = 1.10, SD = 2.63) were not significantly different than ratings of guilt for a White defendant (M = .95, SD = 2.92) when race was not made salient. When race was made salient, a post hoc t test (t(72) = 3.57, p =.001) revealed that ratings of guilt were significantly lower for a Black defendant (M = -1.32, SD = 2.91) than a White defendant (M = 1.31, SD = 2.96).

More simply, when race was not made salient, white mock jurors rated the black defendant roughly 5% of a standard deviation more guilty than the white defendant, which is a difference that would often fall within the noise created by sampling error (p=0.81). However, when race was made salient by playing the race card, white mock jurors rated the black defendant roughly 90% of a standard deviation less guilty than the white defendant, which is a difference that would often not fall within the noise created by sampling error (p=0.001).
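Those standardized differences can be checked directly from the reported means and standard deviations; the sketch below assumes a simple pooled SD with roughly equal group sizes.

```python
# Check of the standardized differences quoted above, using the means and
# SDs reported in Bucolo and Cohn 2010; an equal-n pooled SD is assumed.
import math

def standardized_diff(m1, sd1, m2, sd2):
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (m1 - m2) / pooled_sd

# Race not salient: Black defendant M=1.10, SD=2.63; White M=0.95, SD=2.92.
print(round(standardized_diff(1.10, 2.63, 0.95, 2.92), 2))   # 0.05 -> ~5% of an SD

# Race salient: Black defendant M=-1.32, SD=2.91; White M=1.31, SD=2.96.
print(round(standardized_diff(-1.32, 2.91, 1.31, 2.96), 2))  # -0.9 -> ~90% of an SD
```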

---

Here is how Bucolo and Cohn 2010 was described in a 2013 statement from the Peace Psychology division of the American Psychological Association:

Ignoring race often harms people of color, primarily because biases and stereotypes go unexamined. A study by Donald Bucolo and Ellen Cohn at the University of New Hampshire found that the introduction of race by the defense attorney of a hypothetical Black client reduced the effects of racial bias compared to when race was not mentioned (Bucolo & Cohn, 2010). One error in the state's approach in the George Zimmerman murder trial may have been the decision to ignore issues of race and racism.

But a change from 5% of a standard deviation bias against black defendants to 90% of a standard deviation bias against white defendants is not a reduction in the effects of racial bias.

---

Note that the point of this post is not to present Bucolo and Cohn 2010 as representative of racial bias in the criminal justice system. There are many reasons to be skeptical of the generalizability of experimental research on undergraduate students acting as mock jurors at a university with few black students. Rather, the point of the post is to identify another example of selective concern in social science.


Looks like #addmaleauthorgate is winding down. I tried throughout the episode to better understand when, if ever, gender diversity is a good idea. I posted and tweeted and commented because I perceived a tension between (1) the belief that gender diversity produces benefits, and (2) the belief that it was sexist for a peer reviewer to suggest that gender diversity might produce benefits for a particular manuscript on gender bias.

---

I posted a few comments at Dynamic Ecology as I was starting to think about #addmaleauthorgate. The commenters there were nice, but I did not get much insight about how to resolve the conflict that I perceived.

I posted my first blog post on the topic, which WT excerpted here in a comment. JJ, Ph.D posted a reply comment here that made me think, but on reflection I concluded that the comment rested on an unnecessary assumption. One of the comments at that blog post did lead to my second #addmaleauthorgate blog post.

---

I received a comment on my first blog post from Marta, specifying her view of the sexism in the review:

Suggesting getting male input to fix the bias is sexist - the reviewer implies that the authors would not have come to the same conclusions if a male had read the paper.

That's a perfectly defensible idea, but its generalization has implications: for example, it would be sexist to suggest that a woman be placed on a team investigating gender bias, because suggesting gender diversity in that case would imply that an all-male team is unable to draft a report on gender bias without help from a woman.

---

The most dramatic interaction occurred on Twitter. After that, I figured that it was a good time to stop asking questions. However, I subsequently received two additional substantive responses. First, Zuleyka Zevallos posted a comment at Michael Eisen's blog that began:

Gender diversity is a term that has a specific meaning in gender studies – it comes out of intersectional feminist writing that demonstrates how cis-gender men, especially White men, are given special privileges by society and that the views, experiences and interests of women and minorities should be better represented.

Later that day, Karen James tweeted:

...diversity & inclusion are about including traditionally oppressed or marginalized groups. Men are not one of those groups.

Both comments invoke the asymmetry-in-treatment explanation that I mentioned in note 4 of my first #addmaleauthorgate post. That is certainly a way to reconcile the two beliefs that I mentioned at the top of this post.

---

Some more housekeeping. My comments here and here and here did not attract many responses that disagreed with me. I followed up on a tweet characterizing the "whole review" by asking for the whole review to be made public, but that went nowhere; it seems suboptimal that there is so much commentary about a peer review that has been only selectively excerpted.

A writer for Science Insider wrote an article indicating that Science Insider had access to the whole review. I asked the writer to post the whole review, but the writer tweeted that I should contact the authors for this particular newsworthy item. I don't think that is how journalism is supposed to work.

I replied to a post on the topic on Facebook and might have posted comments elsewhere online. I make no claim about the exhaustiveness of the above links, and the links are not in chronological order.

---

One more, larger point: it seems that much of the negative commentary on this peer review mischaracterizes it. Such mischaracterization is another way to make it easier to dismiss ideas that one would rather not consider.

Here is a description of the peer review:

...that someone would think it was OK to submit a formal review of a paper that said "get a male co-author"

That is a very strange use of quotation marks, given that the quoted passage did not appear in the public part of the review. Notice also the generalization to "paper" instead of "paper on gender bias," and the substitution of the forceful "get" for the review's "It would probably also be beneficial."

Here is more coverage of the peer review:

A scientific journal sparked a Twitter firestorm when it rejected two female scientists' work partly because the paper they submitted did not have male co-authors.

If there is any evidence that the same manuscript would not have been rejected or would have had a lesser chance of being rejected if the manuscript had male co-authors, please let me know.

One more example, from a radio station:

This week the dishonour was given to academic journal PLos One for rejecting a paper written by two female researchers on the basis that they needed to add a male co-author to legitimize their work.

I would be interested in understanding which part of the review could be characterized with the words "needed" and "legitimize." Yes, it would be terribly sexist if the reviewer had written that the female researchers "needed to add a male co-author to legitimize their work"; however, that did not happen.
