Discussions of inequality in the United States commonly omit Asian Americans, and that omission permits the suggestion that the inequality is due to race or racial bias. Here's a recent example:

The graph reported results for Hispanics disaggregated into Cubans, Puerto Ricans, Mexicans, and other Hispanics, but the graph omitted results for Asians and Pacific Islanders, even though the note for the graph indicates that Asians/Pacific Islanders were included in the model. Here are data on Asian American poverty rates (source):

[Figure: Asian American poverty rates, from American Community Survey (ACS) data]

The omission of Asian Americans from discussions of inequality is a common enough practice [1, 2, 3, 4, 5] that it deserves a name. The Asian American Exclusion is as good as any.

---

Vox has a post about racial bias and police shootings. The story, by Vox writer Jenée Desmond-Harris, included quotes from Joshua Correll, who investigated racial bias in the decision to shoot using a shooter video game in his co-authored 2007 study, "Across the Thin Blue Line: Police Officers and Racial Bias in the Decision to Shoot" (gated, ungated).

Desmond-Harris emphasized the Correll et al. 2007 finding about decision time:

When Correll performed his experiment specifically on law enforcement officers, he found that expert training significantly reduced their fatal mistakes overall, but no matter what training they had, most participants were quicker to shoot at a black target.

For readers who only skim the Vox story, this next sentence appears in larger blue font:

No matter what training they had, most participants were quicker to shoot at a black target.

That finding, about the speed of the response, is fairly characterized as racial bias. But maybe you're wondering whether the law enforcement officers in the study were more likely to incorrectly shoot the black targets than the white targets. That's sort of important, right? Well, Desmond-Harris does not tell you that. But you can open the link to the Correll et al. 2007 study and turn to page 1020, where you will find this passage:

For officers (and, temporarily, for trained undergraduates), however, the stereotypic interference ended with reaction times. The bias evident in their latencies did not translate to the decisions they ultimately made.

I wonder why the Vox writer did not mention that research finding.

---

I doubt that the aggregate level of racial bias in the decision of police officers to shoot is exactly zero, and it is certainly possible that other research has found or will find such a nonzero bias. Let me know if you are aware of any such studies.

---

Here is Adam Davidson in the New York Times Magazine:

And yet the economic benefits of immigration may be the most settled fact in economics. A recent University of Chicago poll of leading economists could not find a single one who rejected the proposition.

For some reason, the New York Times online article did not link to that poll, so readers who do not trust the New York Times -- or readers who might be interested in characteristics of the poll, such as sample size, representativeness, and question wording -- must track down the poll themselves.

It appears that the poll cited by Adam Davidson is here and is limited to the aggregate effect of high-skilled immigrants:

The average US citizen would be better off if a larger number of highly educated foreign workers were legally allowed to immigrate to the US each year.

But concern about immigration is not limited to high-skilled immigrants and is not limited to the aggregate effect: a major concern is that low-skilled immigrants will have a negative effect on the poorest and most vulnerable Americans. A recent University of Chicago poll of leading economists addressed that concern, and that poll found far more than a single economist agreeing with the proposition -- fifty percent of respondents, actually:

[Figure: poll results on low-skilled immigration]

---

Related: Here's what the New York Times did not mention about teacher grading bias

Related: Here's what the New York Times did not mention about the bus bias study

My comment at the New York Times summarizing this post became available after nine hours in moderation.

---

An op-ed by Ian Ayres in the New York Times describes an experiment:

With more than 1,500 observations, the study uncovered substantial, statistically significant race discrimination. Bus drivers were twice as willing to let white testers ride free as black testers (72 percent versus 36 percent of the time). Bus drivers showed some relative favoritism toward testers who shared their own race, but even black drivers still favored white testers over black testers (allowing free rides 83 percent versus 68 percent of the time).

The title of Ayres' op-ed was: "When Whites Get a Free Pass: Research Shows White Privilege Is Real."

The op-ed linked to this study by Redzo Mujcic and Paul Frijters; the figure below summarizes some of the study's results:

[Figure: results from Mujcic and Frijters]

The experiment involved members of four races, but the op-ed ignored results for Asians and Indians. I can't think of a good reason to ignore results for Asians and Indians, but it does make it easier for Ayres to claim that:

A field experiment about who gets free bus rides in Brisbane, a city on the eastern coast of Australia, shows that even today, whites get special privileges, particularly when other people aren't around to notice.

It would be nice if the blue, red, green, and orange bars in the figure were all the same height. But it would also be nice if the New York Times would at least acknowledge that there were four bars.

--

H/T Claire Lehmann

Related: Here's what the New York Times did not mention about teacher grading bias

---

You might have seen a tweet or Facebook post about a recent study of sex bias in teacher grading.

Here is the relevant section from Claire Cain Miller's Upshot article in the New York Times describing the study's research design:

Beginning in 2002, the researchers studied three groups of Israeli students from sixth grade through the end of high school. The students were given two exams, one graded by outsiders who did not know their identities and another by teachers who knew their names.

In math, the girls outscored the boys in the exam graded anonymously, but the boys outscored the girls when graded by teachers who knew their names. The effect was not the same for tests on other subjects, like English and Hebrew. The researchers concluded that in math and science, the teachers overestimated the boys' abilities and underestimated the girls', and that this had long-term effects on students' attitudes toward the subjects.

The Upshot article does not mention that the study's first author had previously published another study using the same methodology -- a study that found a teacher grading bias against boys:

The evidence presented in this study confirms that the previous belief that schoolteachers have a grading bias against female students may indeed be incorrect. On the contrary: on the basis of a natural experiment that compared two evaluations of student performance–a blind score and a non-blind score–the difference estimated strongly suggests a bias against boys. The direction of the bias was replicated in all nine subjects of study, in humanities and science subjects alike, at various level of curriculum of study, among underperforming and best-performing students, in schools where girls outperform boys on average, and in schools where boys outperform girls on average (p. 2103).

This earlier study was not mentioned in the Upshot article and does not appear to have ever been mentioned in the New York Times. The Upshot article appeared in the print version of the New York Times, so it appears that Dr. Lavy has also conducted a natural experiment in media bias: report two studies with the same methodology but opposite conclusions, to test whether the New York Times will report on only the study that agrees with liberal sensibilities. That hypothesis has been confirmed.

---

Here's the abstract of a PLoS One article, "Racial Bias in Perceptions of Others' Pain":

The present work provides evidence that people assume a priori that Blacks feel less pain than do Whites. It also demonstrates that this bias is rooted in perceptions of status and the privilege (or hardship) status confers, not race per se. Archival data from the National Football League injury reports reveal that, relative to injured White players, injured Black players are deemed more likely to play in a subsequent game, possibly because people assume they feel less pain. Experiments 1–4 show that White and Black Americans–including registered nurses and nursing students–assume that Black people feel less pain than do White people. Finally, Experiments 5 and 6 provide evidence that this bias is rooted in perceptions of status, not race per se. Taken together, these data have important implications for understanding race-related biases and healthcare disparities.

Here are descriptions of the samples for each experiment, after exclusions of respondents who did not meet criteria for inclusion:

  • Experiment 1: 240 whites from the University of Virginia psychology pool or MTurk
  • Experiment 2: 35 blacks from the University of Virginia psychology pool or MTurk
  • Experiment 3: 43 registered nurses or nursing students
  • Experiment 4: 60 persons from MTurk
  • Experiment 5: 104 persons from MTurk
  • Experiment 6: 245 persons from MTurk

Not the most representative samples, of course. If you're thinking that it would be interesting to see whether the results hold in a large, nationally representative sample, well, that was tried, with a survey experiment conducted through Time-sharing Experiments for the Social Sciences (TESS). Here's the description of the results listed on the TESS site for the study:

Analyses yielded mixed evidence. Planned comparison were often marginal or non-significant. As predicted, White participants made (marginally) lower pain ratings for Black vs. White targets, but only when self-ratings came before target ratings. When target ratings came before self-ratings, White participants made (marginally) lower pain ratings for White vs. Black targets. Follow-up analyses suggest that White participants may have been reactant. White participants reported that they were most similar to the Black target and least similar to the White target, contrary to prediction and previous work both in our lab and others' lab. Moreover, White participants reported that Blacks were most privileged and White participants least privileged, again contrary to prediction and previous work both in our lab and others' lab.

The results of this TESS study do not invalidate the results of the six experiments and one archival study reported in the PLoS One article, but the non-reporting of the TESS study does raise questions about whether there were other unreported experiments and archival studies.

The TESS study had an unusually large and diverse sample: 586 non-Hispanic whites, 526 non-Hispanic blacks, 520 non-Hispanic Asians, and 528 Hispanics. It's too bad that these data were placed into a file drawer.

---

Christopher D. DeSante published an article in the American Journal of Political Science titled, "Working Twice as Hard to Get Half as Far: Race, Work Ethic, and America’s Deserving Poor" (57: 342-356, April 2013). The title refers to survey evidence that DeSante reported indicating that, compared to hypothetical white applicants for state assistance, hypothetical black applicants for state assistance received less reward for hard work and more punishment for laziness.

The study had a clever research design: respondents were shown two applications for state assistance, and each applicant was said to need $900, but there was variation in the names of the applicants (Emily, Laurie, Keisha, Latoya, or no name provided) and in the Worker Quality Assessment of the applicant (poor, excellent, or no assessment section provided); respondents were then asked to divide $1500 between the applicants or to use some or all of the $1500 to offset the state budget deficit.

Table 1 below indicates the characteristics of the conditions and the mean allocations made to each alternative. In condition 5, for example, 64 respondents were asked to divide $1500 between hardworking Laurie, lazy Emily, and offsetting the state budget deficit: hardworking Laurie received a mean allocation of $682, lazy Emily received a mean allocation of $566, and the mean allocation to offset the state budget deficit was $250.

[Table 1: experimental conditions and mean allocations]
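As a quick sanity check on how the design works, here is a minimal sketch of the condition 5 row as a data structure, using the numbers from the text; the only assumption is that the three mean allocations should sum to the $1,500 total up to rounding:

```python
# Minimal sketch of one row of Table 1 (condition 5), using the means quoted
# in the text above; n and the means come from the text, nothing else is real data.
condition_5 = {
    "applicant_1": ("Laurie", "excellent work assessment"),  # hardworking Laurie
    "applicant_2": ("Emily", "poor work assessment"),        # lazy Emily
    "n": 64,
    "mean_allocation": {"Laurie": 682, "Emily": 566, "deficit": 250},
}

# 682 + 566 + 250 = 1498, within rounding error of the $1,500 respondents divided.
print(sum(condition_5["mean_allocation"].values()))
```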

---

I'm going to quote DeSante (2013: 343) and intersperse comments about the claims. For the purpose of this analysis, let's presume that respondents interpreted Emily and Laurie as white applicants and Keisha and Latoya as black applicants. Reported p-values for my analysis below are two-tailed. Here's the first part of the DeSante (2013: 343) quote:

Through a nationally representative survey experiment in which respondents were asked to make recommendations regarding who should receive government assistance, I find that American “principles” of individualism, hard work, and equal treatment serve to uniquely benefit whites in two distinct ways. First, the results show that compared to African Americans, whites are not automatically perceived as more deserving of government assistance.

Condition 7 paired Laurie with Keisha, neither of whom had a Worker Quality Assessment. Laurie received a mean allocation of $556, and Keisha received a mean allocation of $600. Keisha received $44 more than Laurie, a $44 difference that is statistically significant at p<0.01. So DeSante is technically correct that "whites are not automatically perceived as more deserving of government assistance," but this claim overlooks evidence from condition 7 that a white applicant was given LESS government assistance than an equivalent black applicant.
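Here is an illustrative sketch of that within-condition comparison, using simulated allocations in place of the actual replication data; because each respondent in condition 7 allocated money to both Laurie and Keisha, a paired t-test is one natural choice. The simulated numbers below are placeholders, so the output will not match the reported test.

```python
# Illustrative within-condition comparison for condition 7 (Laurie vs. Keisha).
# The allocations below are simulated placeholders, not DeSante's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 64                               # hypothetical condition size
keisha = rng.normal(600, 150, n)     # simulated allocations to Keisha
laurie = rng.normal(556, 150, n)     # simulated allocations to Laurie

# Paired test: both allocations come from the same respondent.
t, p = stats.ttest_rel(keisha, laurie)
print(f"mean difference = {np.mean(keisha - laurie):.0f}, p = {p:.3f}")
```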

Instead of reporting these straightforward results from condition 7, how did DeSante compare allocations to black and white applicants? Below is an image from Table 2 of DeSante (2013), which reported results from eleven t-tests. Tests 3 and 4 provided the evidence for DeSante's claim that, "compared to African Americans, whites are not automatically perceived as more deserving of government assistance."

[Table 2 from DeSante (2013): eleven t-tests]

Here's what DeSante did in test 3: DeSante took the $556 allocated to Laurie in condition 7 when Laurie was paired with Keisha and compared that to the $546 allocated to Latoya in condition 10 when Latoya was paired with Keisha; that $9 advantage (bear with the rounding error) for Laurie over Latoya (when both applicants were paired with Keisha and neither had a Worker Quality Assessment) did not reach conventional levels of statistical significance.

Here's what DeSante did in test 4: DeSante took the $587 allocated to Emily in condition 4 when Emily was paired with Laurie and compared that to the $600 allocated to Keisha in condition 7 when Keisha was paired with Laurie; that $12 advantage for Keisha over Emily (when both applicants were paired with Laurie and neither had a Worker Quality Assessment) did not reach conventional levels of statistical significance.
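For contrast, here is the same kind of sketch for an across-condition comparison like test 3, again with simulated placeholder data: allocations to Laurie in condition 7 and to Latoya in condition 10 come from different respondents, so an independent-samples t-test applies.

```python
# Illustrative across-condition comparison (test 3): Laurie in condition 7
# vs. Latoya in condition 10. Simulated placeholder data, not DeSante's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
laurie_c7 = rng.normal(556, 150, 64)    # simulated: Laurie, paired with Keisha
latoya_c10 = rng.normal(546, 150, 64)   # simulated: Latoya, paired with Keisha

t, p = stats.ttest_ind(laurie_c7, latoya_c10)  # independent samples
print(f"difference = {np.mean(laurie_c7) - np.mean(latoya_c10):.0f}, p = {p:.3f}")
```

An across-condition comparison involves the sampling error of two separate condition means, which is part of why it tends to have a larger standard error than a within-condition comparison.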

So which of these three tests is the best test? My test had more observations, compared within instead of across conditions, and had a lower standard error. But DeSante's tests are not wrong or meaningless: the problem is that tests 3 and 4 provide incomplete information for the purposes of testing for racial bias against applicants with no reported Worker Quality Assessment.

---

Here's the next part of that quote from DeSante (2013: 343):

Instead, the way hard work and "laziness" are treated is conditioned by race: whites gain more for the same level of effort, and blacks are punished more severely for the same level of "laziness."

Here's what DeSante did to produce this inference. Emily received a mean allocation of $587 in condition 4 when paired with Laurie and neither applicant had a Worker Quality Assessment; but hard-working Emily received $711 in condition 6 when paired with lazy Laurie. This $123 difference can be interpreted as a reward for Emily's hard work, at least in relation to Laurie's laziness.

Now we do the same thing for Keisha paired with Laurie: Keisha received a mean allocation of $600 in condition 7 when paired with Laurie and neither applicant had a Worker Quality Assessment; but hard-working Keisha received $607 in condition 9 when paired with lazy Laurie. This $7 difference can be interpreted as a reward for Keisha's hard work, at least in relation to Laurie's laziness.

Test 7 indicates that the $123 reward to Emily for her hard work was larger than the $7 reward to Keisha for her hard work (p=0.03).

But notice that DeSante could have conducted another set of comparisons:

Laurie received a mean allocation of $556 in condition 7 when paired with Keisha and neither applicant had a Worker Quality Assessment; but hard-working Laurie received $620 in condition 8 when paired with lazy Keisha. This $64 difference can be interpreted as a reward for Laurie's hard work, at least in relation to Keisha's laziness.

Now we do the same thing for Latoya paired with Keisha: Latoya received a mean allocation of $546 in condition 10 when paired with Keisha and neither applicant had a Worker Quality Assessment; but hard-working Latoya received $627 in condition 11 when paired with lazy Keisha. This $81 difference can be interpreted as a reward for Latoya's hard work, at least in relation to Keisha's laziness.

The $16 difference between Laurie's $64 reward for hard work and Latoya's $81 reward for hard work (rounding error, again) is not statistically significant at conventional levels (p=0.76). The combined effect of the DeSante test and my alternate test is not statistically significant at conventional levels (effect of $49, p=0.20), so -- in this dataset -- there is a lack of evidence at a statistically significant level for the claim that "whites gain more for the same level of effort."
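For readers checking the arithmetic of the combined effect, here is how the two contrasts combine, using the point estimates from the text; the standard errors below are placeholders, not values from the article or from my analysis:

```python
# Combining the two difference-in-differences contrasts described above.
# Point estimates come from the text; the standard errors are placeholders.
import math

d1 = 123 - 7     # DeSante's contrast: Emily's reward minus Keisha's reward
d2 = 64 - 81     # alternate contrast: Laurie's reward minus Latoya's reward

se1, se2 = 53.0, 56.0                 # hypothetical placeholder standard errors

combined = (d1 + d2) / 2              # simple average of the two contrasts
se = math.sqrt(se1**2 + se2**2) / 2   # SE of the average, assuming independence

# (116 - 17) / 2 = 49.5, consistent with the $49 combined effect in the text.
print(f"combined effect = {combined:.1f}, z = {combined / se:.2f}")
```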

I conducted a similar set of alternate tests for the inference that "blacks are punished more severely for the same level of 'laziness'"; the effect size was smaller in my test than in DeSante's test, but evidence for the combined effect was believable: a $74 effect, with p=0.06.

---

Here's the next part of that quote from DeSante (2013: 343):

Second, and consistent with those who take the "principled ideology" approach to the new racism measures, the racial resentment scale is shown to predict a desire for smaller government and less government spending. However, in direct opposition to this ideology-based argument, this effect is conditional upon the race of the persons placing demands on the government: the effect of racial resentment on a desire for a smaller government greatly wanes when the beneficiaries of that government spending are white as opposed to black. This represents strong evidence that racial resentment is more racial animus than ideology.

DeSante based this inference on results reported in Table 3, reproduced below:

[Table 3 from DeSante (2013)]

Notice the note at the bottom: "White respondents only." DeSante reported results in Table 3 based on responses only from respondents coded as white, but reported results in Table 2 based on responses from respondents coded as white, black, Asian, Native American, mixed race, or Other. Maybe there's a good theoretical reason for changing the sample. DeSante's data and code are posted here if you are interested in what happens to p-values when Table 2 results are restricted to whites and Table 3 results include all respondents.

But let's focus on the bold RRxWW line in Table 3. RR is racial resentment, and WW is a dichotomous variable for the conditions in which both applicants were white. Model 3 includes categories for WW (two white applicants paired together), BB (two black applicants paired together), and WB (one white applicant paired with one black applicant); this is very important, because these included terms must be interpreted in relation to the omitted category that I will call NN (two unnamed applicants paired together). Therefore, the -337.92 coefficient on the RRxWW variable in model 3 indicates that -- all other model variables held constant -- white respondents allocated $337.92 less to offset the state budget deficit when both applicants were white compared to when both applicants were unnamed.

The -196.43 coefficient for the RRxBB variable in model 3 indicates that -- all other model variables held constant -- white respondents allocated $196.43 less to offset the state budget deficit when both applicants were black compared to when both applicants were unnamed. This -$196.43 coefficient did not reach statistical significance, but the coefficient is important because the bias in favor of the two white applicants relative to the two black applicants is only -$337.92 minus -$196.43; so whites allocated $141.49 less to offset the state budget deficit when both applicants were white compared to when both applicants were black, but the p-value for this difference was 0.41.
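Because the two interaction coefficients come from the same model, the white-versus-black contrast can be tested directly as a linear combination of coefficients. Here is a sketch of that test using statsmodels, assuming the posted data are loaded into a pandas DataFrame; the file name and variable names are illustrative, not the names in DeSante's replication files:

```python
# Sketch of testing beta(RR x WW) - beta(RR x BB) = 0 in a model like model 3.
# File and variable names are illustrative; the real names are in the posted files.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_stata("desante_2013.dta")  # hypothetical file name

# RR = racial resentment; WW, BB, WB = condition dummies (NN is the omitted category).
model = smf.ols("deficit ~ RR * WW + RR * BB + RR * WB", data=df).fit()

# The contrast of interest: -337.92 - (-196.43) = -141.49 in the article's model 3.
print(model.t_test("RR:WW - RR:BB = 0"))
```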

---

Here are a few takeaways from the above analysis:

1. The limited choice of statistical tests reported in DeSante (2013) produced inferences that overestimated the extent of bias against black applicants and missed evidence of bias against white applicants.

2. Takeaway 1 depends on the names reflecting only the race of the applicant. But the names might have reflected something other than race; for instance, in condition 10, Keisha received a mean allocation $21 higher than the mean allocation to Latoya (p=0.03): such a difference is not expected if Keisha and Latoya were "all else equal."

3. Takeaway 1 would likely not have been uncovered had the AJPS not required the posting of data and replication files from its published articles.

4. Pre-registration would help eliminate suspicion about research design decisions, such as the decision to restrict only some analyses to whites and the decision to report some comparisons but not others.

---

In case you are interested in reproducing the results that I discussed, the data are here, code is here, and the working paper is here. Comments are welcome.

---

UPDATE (Nov 2, 2014)

I recently received a rejection for the manuscript describing the results reported above. The second reviewer suggested portraying the raw data table as a graph; I couldn't figure out an efficient way to do that, but the suggestion did get me to realize a way to present the main point of the manuscript more clearly with visuals.

The figure below illustrates the pattern of comparison for DeSante 2013 tests 1 and 2: solid lines represent comparisons reported in DeSante 2013 and dashed lines represent unreported equivalent or relevant comparisons; numbers in square brackets respectively indicate the applicant and the condition, so that [1/2] indicates applicant 1 in condition 2.

[Figure: Tests 1 and 2]

---

The figure below indicates the pattern of reported and unreported comparisons for black applicants and white applicants with no Worker Quality Assessment: the article reported two small non-statistically significant differences when comparing applicants across conditions, but the article did not report the larger statistically significant difference favoring the black applicant when a black applicant and a white applicant were compared within conditions.

[Figure: Tests 3 and 4]

---

The figure below indicates the pattern of reported and unreported comparisons for the main takeaway of the article. The left side of the figure indicates that one of the black applicants received a lesser reward for an excellent Worker Quality Assessment and received a larger penalty for a poor Worker Quality Assessment, compared to the reward and penalty for the corresponding white applicant; however, neither the lesser reward for an excellent Worker Quality Assessment nor the larger penalty for a poor Worker Quality Assessment was present at a statistically significant level in the comparisons on the right, which were not reported in the article (p=0.76 and 0.31, respectively).

[Figure: remaining tests]

---

Data for the reproduction are here. Reproduction code is here.

---

UPDATE (Mar 8, 2015)

The above analysis has been published by Research & Politics here.
