Vox has a post about racial bias and police shootings. The story, by Vox writer Jenée Desmond-Harris, included quotes from Joshua Correll, who investigated racial bias in police shootings with a shooter game in his co-authored 2007 study, "Across the Thin Blue Line: Police Officers and Racial Bias in the Decision to Shoot" (gated, ungated).

Desmond-Harris emphasized the Correll et al. 2007 finding about decision time:

When Correll performed his experiment specifically on law enforcement officers, he found that expert training significantly reduced their fatal mistakes overall, but no matter what training they had, most participants were quicker to shoot at a black target.

For readers who only skim the Vox story, this next sentence appears in larger blue font:

No matter what training they had, most participants were quicker to shoot at a black target.

That finding, about the speed of the response, is fairly characterized as racial bias. But maybe you're wondering whether the law enforcement officers in the study were more likely to incorrectly shoot the black targets than the white targets. That's sort of important, right? Well, Desmond-Harris does not tell you that. But you can open the link to the Correll et al. 2007 study and turn to page 1020, where you will find this passage:

For officers (and, temporarily, for trained undergraduates), however, the stereotypic interference ended with reaction times. The bias evident in their latencies did not translate to the decisions they ultimately made.

I wonder why the Vox writer did not mention that research finding.

---

I doubt that the aggregate level of racial bias in the decision of police officers to shoot is exactly zero, and it is certainly possible that other research has found or will find such a nonzero bias. Let me know if you are aware of any such studies.

---

There has recently been much commentary on the peer review received by female researchers regarding their manuscript about gender bias in academic biology (see here, here, and here). The resulting Twitter hashtag #addmaleauthorgate indicates the basis for the charge of sexism. Here is the relevant part of the peer review:

It would probably also be beneficial to find one or two male biologists to work with (or at least obtain internal peer review from, but better yet as active co-authors), in order to serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence into ideologically based assumptions.

I am interested in an explanation of what was sexist about this suggestion. At a certain level of abstraction, the peer reviewer suggested that a manuscript on gender bias written solely by authors of one sex might be improved by having authors of another sex read or contribute to the manuscript in order to provide a different perspective.

The part of the peer review that is public did not suggest that the female authors consult male authors to improve the manuscript's writing or to improve the manuscript's statistics; the part of the peer review that is public did not suggest consultation with male authors on a manuscript that had nothing to do with sex. It would be sexist to suggest that persons of one sex consult persons of another sex to help with statistics or to help interpret results from a chemical reaction. But that did not happen here: the suggestion was only that members of one sex consult members of the other sex in the particular context of helping to improve the *interpretation of data* in a manuscript *about gender bias.*

Consider this hypothetical. The main professional organization in biology decides to conduct research and draft a statement on gender bias in biology. The team selected to perform this task includes only men. The peer reviewer from this episode suggests that including women on the team would help "serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence into ideologically based assumptions." Is that sexism, too? If not, why not? If so, then when ‒ if ever ‒ is it not sexist to suggest that gender diversity might be beneficial?

---

Six notes:

1. I am not endorsing the peer review. I think that the peer review should have instead suggested having someone read the manuscript who would be expected to provide help thinking of and addressing alternate explanations; there is no reason to expect a man to necessarily provide such assistance.

2. The peer review mentioned particular sex differences as possible alternate explanations for the data. Maybe suggesting those alternate explanations reflects sexism, but I think that hypotheses should be characterized in terms such as substantiated or unsubstantiated instead of in terms such as sexist or inappropriate.

3. It is possible that the peer reviewer would not have suggested, in an equivalent case, that male authors consult female authors; that asymmetry would fairly be characterized as sexism. But there is, as far as I know, no evidence about this counterfactual. Moreover, what the peer reviewer would have done in an equivalent case bears on the sexism of the peer reviewer, not the sexism of the peer review.

4. I have no doubt that women in academia face bias in certain situations, and I can appreciate why this episode might be interpreted as additional evidence of gender bias. If the argument is that there is an asymmetry that makes it inappropriate to think about this episode in general terms, I can understand that position. But I would appreciate guidance about the nature and extent of this asymmetry.

5. Maybe writing a manuscript is an intimate endeavor, such that suggesting new coauthors is offensive in a way that suggesting new coauthors for a study by a professional organization is not. But that's an awfully nuanced position that would have been better articulated in an #addauthorgate hashtag.

6. Maybe the problem is that gender diversity works only or best in a large group. But that seems backwards, given that the expectation would be that a lone female student would have more of a positive influence in a class of 50 male students than in a class of 2 male students.

---

UPDATE (May 4, 2015)

Good response here by JJ, Ph.D. to my hypothetical.

---

The American National Election Studies 2008 Time Series Study included an Affect Misattribution Procedure (AMP) that measured implicit attitudes. The 2008 ANES User's Guide, located here, noted that, "[d]uring this module, respondents attributed a 'pleasant' or 'unpleasant' characteristic to Chinese-character graphic images, each of which was displayed to the respondent following a briefly flashed photo image of a young male."

Here are the photos of the young males, from Appendix A:

[Image: ANES AMP faces]

As you can see, this procedure measured implicit attitudes about mustaches.

---

Here is Adam Davidson in the New York Times Magazine:

And yet the economic benefits of immigration may be the most settled fact in economics. A recent University of Chicago poll of leading economists could not find a single one who rejected the proposition.

For some reason, the New York Times online article did not link to that poll, so readers who do not trust the New York Times -- or readers who might be interested in characteristics of the poll, such as sample size, representativeness, and question wording -- must track down the poll themselves.

It appears that the poll cited by Adam Davidson is here and is limited to the aggregate effect of high-skilled immigrants:

The average US citizen would be better off if a larger number of highly educated foreign workers were legally allowed to immigrate to the US each year.

But concern about immigration is not limited to high-skilled immigrants and is not limited to the aggregate effect: a major concern is that low-skilled immigrants will have a negative effect on the poorest and most vulnerable Americans. There was a recent University of Chicago poll of leading economists on that concern, and that poll found more than a single economist who agreed with that proposition; fifty percent, actually:

[Image: ImmigrationLowB poll results]

---

Related: Here's what the New York Times did not mention about teacher grading bias

Related: Here's what the New York Times did not mention about the bus bias study

My comment at the New York Times summarizing this post, available after nine hours in moderation.

---

A New York Times op-ed by Ian Ayres describes an experiment:

With more than 1,500 observations, the study uncovered substantial, statistically significant race discrimination. Bus drivers were twice as willing to let white testers ride free as black testers (72 percent versus 36 percent of the time). Bus drivers showed some relative favoritism toward testers who shared their own race, but even black drivers still favored white testers over black testers (allowing free rides 83 percent versus 68 percent of the time).

The title of Ayres' op-ed was: "When Whites Get a Free Pass: Research Shows White Privilege Is Real."

The op-ed linked to this study, by Redzo Mujcic and Paul Frijters, which summarized some of the study's results in the figure below:

[Figure: Mujcic and Frijters, results by race]

The experiment involved members of four races, but the op-ed ignored results for Asians and Indians. I can't think of a good reason to ignore results for Asians and Indians, but it does make it easier for Ayres to claim that:

A field experiment about who gets free bus rides in Brisbane, a city on the eastern coast of Australia, shows that even today, whites get special privileges, particularly when other people aren't around to notice.

It would be nice if the blue, red, green, and orange bars in the figure were all the same height. But it would also be nice if the New York Times would at least acknowledge that there were four bars.

---

H/T Claire Lehmann

Related: Here's what the New York Times did not mention about teacher grading bias

---

You might have seen a Tweet or Facebook post on a recent study about sex bias in teacher grading:

Here is the relevant section from Claire Cain Miller's Upshot article in the New York Times describing the study's research design:

Beginning in 2002, the researchers studied three groups of Israeli students from sixth grade through the end of high school. The students were given two exams, one graded by outsiders who did not know their identities and another by teachers who knew their names.

In math, the girls outscored the boys in the exam graded anonymously, but the boys outscored the girls when graded by teachers who knew their names. The effect was not the same for tests on other subjects, like English and Hebrew. The researchers concluded that in math and science, the teachers overestimated the boys' abilities and underestimated the girls', and that this had long-term effects on students' attitudes toward the subjects.

The Upshot article does not mention that the study's first author had previously published another study using the same methodology, but that earlier study found a teacher grading bias against boys:

The evidence presented in this study confirms that the previous belief that schoolteachers have a grading bias against female students may indeed be incorrect. On the contrary: on the basis of a natural experiment that compared two evaluations of student performance–a blind score and a non-blind score–the difference estimated strongly suggests a bias against boys. The direction of the bias was replicated in all nine subjects of study, in humanities and science subjects alike, at various level of curriculum of study, among underperforming and best-performing students, in schools where girls outperform boys on average, and in schools where boys outperform girls on average (p. 2103).

This earlier study does not appear to have ever been mentioned in the New York Times. The Upshot article appeared in the print version of the newspaper, so Dr. Lavy, the studies' first author, has in effect conducted a natural experiment in media bias: publish two studies with the same methodology but opposite conclusions, and test whether the New York Times will report on only the study that agrees with liberal sensibilities. That hypothesis has been confirmed.

---

Social science correlations over 0.90 are relatively rare, at least for correlations of items that aren't trying to measure the same thing, so I thought I'd post about the 0.92 correlation that I came across in the data from the Leslie et al. 2015 Science article. Leslie et al. co-author Andrei Cimpian emailed me the data in Excel form, which made the analysis a lot easier.

Leslie et al. asked faculty, postdoctoral fellows, and graduate students in a given discipline to respond to this item: "Even though it's not politically correct to say it, men are often more suited than women to do high‐level work in [discipline]." Responses were made on a scale from 1 (strongly disagree) to 7 (strongly agree). Responses to that suitability stereotype item correlated at -0.19 (p=0.44, n=19) with the mean GRE verbal reasoning score for a discipline and at 0.92 (p<0.0001, n=19) with the mean GRE quantitative reasoning score for a discipline [source].

[Figure: suitability stereotype vs. mean GRE quantitative score, by discipline]
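For readers who want to see what a correlation like this looks like mechanically, here is a minimal sketch of a Pearson correlation computed from scratch. The numbers below are made up for illustration only; they are not the Leslie et al. data (the real analysis used n=19 disciplines), and the variable names are my own.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-discipline values, for illustration only -- NOT the
# actual Leslie et al. data.
stereotype = [2.1, 2.4, 2.9, 3.3, 3.8]   # mean agreement, 1-7 scale
gre_quant = [640, 652, 671, 690, 705]    # mean GRE quantitative score

print(round(pearson_r(stereotype, gre_quant), 2))
```

A correlation near 0.92 on real data would look much like this: discipline-level stereotype endorsement rising almost in lockstep with the discipline's mean GRE quantitative score.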
