One notable finding in the racial discrimination literature is the boomerang/backlash effect reported in Peffley and Hurwitz 2007:

"...whereas 36% of whites strongly favor the death penalty in the baseline condition, 52% strongly favor it when presented with the argument that the policy is racially unfair" (p. 1001).

The racially-unfair argument shown to participants was: "[Some people say/FBI statistics show] that the death penalty is unfair because most of the people who are executed are African Americans" (p. 1002). Statistics reported in Peffley and Hurwitz 2007 Table 1 indicate that responses differed at p<=0.05 for Whites in the baseline no-argument condition compared to Whites in the argument condition.

However, the boomerang/backlash effect did not appear at p<=0.05 in large-N MTurk direct and conceptual replication attempts reported in Butler et al. 2017. The effect also did not appear in my analysis of a nearly-direct replication attempt that used a large-N sample of non-Hispanic Whites in a TESS study by Spencer Piston and Ashley Jardina, with data collection by GfK; a similar racial-bias-argument experiment regarding three strikes laws produced a similar null result.

For the weighted TESS data, on a scale from 0 for strongly oppose to 1 for strongly favor, support for the death penalty for persons convicted of murder was 0.015 units lower (p=0.313, n=2018) in the condition in which participants were told "Some people say that the death penalty is unfair because most of the people who are executed are black", compared to the condition in which participants did not receive that statement, with controls for the main experimental conditions for the TESS study, which appeared earlier in the survey. This lack of statistical significance remained when the weighted sample was limited to liberals and extreme liberals; slight liberals, liberals, and extreme liberals; conservatives and extreme conservatives; and slight conservatives, conservatives, and extreme conservatives. There was also no statistically-significant difference between conditions in my analysis of the unweighted data. Regarding missing data, 7 of 1,034 participants in the control condition and 9 of 1,000 participants in the experimental condition did not provide a response.
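To make the quantity being reported concrete, here is a minimal sketch of a survey-weighted mean difference between conditions on a 0-to-1 scale. The responses and weights below are hypothetical, not the TESS data:

```python
# Minimal sketch (hypothetical numbers, not the TESS data): the reported
# estimate is a difference in survey-weighted mean support, on a 0-to-1
# scale, between the racial-unfairness-argument condition and the
# no-argument condition.

def weighted_mean(values, weights):
    """Survey-weighted mean: sum(w * y) / sum(w)."""
    return sum(w * v for v, w in zip(values, weights)) / sum(weights)

# Hypothetical responses (0 = strongly oppose ... 1 = strongly favor)
# and hypothetical post-stratification weights.
control_y, control_w = [1.0, 0.5, 0.5, 0.0], [1.0, 1.0, 1.0, 1.0]
argument_y, argument_w = [0.75, 0.5, 0.25, 0.0], [2.0, 1.0, 1.0, 1.0]

effect = weighted_mean(argument_y, argument_w) - weighted_mean(control_y, control_w)
print(round(effect, 3))  # negative value = lower support in the argument condition
```

The actual analysis also conditions on the study's main experimental conditions; this sketch shows only the weighted difference in means.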

Moreover, in the prior item on the survey, on a 0-to-1 scale, responses were 0.013 units higher (p=0.403, n=2025) for favoring three strikes laws in the condition in which participants were told that "...critics argue that these laws are unfair because they are especially likely to affect black people", compared to the condition in which participants did not receive that statement, with controls for the main experimental conditions for the TESS study, which appeared earlier in the survey. This lack of statistical significance remained when the weighted sample was limited to liberals and extreme liberals; slight liberals, liberals, and extreme liberals; conservatives and extreme conservatives; and slight conservatives, conservatives, and extreme conservatives. There was also no statistically-significant difference between conditions in my analysis of the unweighted data. Regarding missing data, 6 of 986 participants in the control condition and 3 of 1,048 participants in the experimental condition did not provide a response.

Null results might be attributable to participants not paying attention, so it is worth noting that the main treatment in the TESS experiment was that participants in one of the three conditions were given a passage to read entitled "Genes May Cause Racial Difference in Heart Disease" and participants in another of the three conditions were given a passage to read entitled "Social Conditions May Cause Racial Difference in Heart Disease". There was a statistically significant difference between these conditions in responses to an item about whether there are biological differences between blacks and whites (p=0.008, n=2,006), with responses in the Genes condition indicating greater estimates of biological differences between blacks and whites.

---

NOTE:

Data for the TESS study are available here. My Stata code is available here.


I recently blogged about the Betus, Lemieux, and Kearns Monkey Cage post (based on this Kearns et al. working paper) that claimed that "U.S. media outlets disproportionately emphasize the smaller number of terrorist attacks by Muslims".

I asked Kearns and Lemieux to share their data (I could not find an email for Betus). My request was denied until the paper was published. I tweeted a few questions to the coauthors about their data, but these tweets have not yet received a reply. Later, I realized that it would be possible to recreate or at least approximate their dataset because Kearns et al. included their outcome variable coding in the appendix of their working paper. I built a dataset based on [A] their outcome variable, [B] the Global Terrorism Database that they used, and [C] my coding of whether a given perpetrator was Muslim.

My analysis indicated that these data do not appear to support the claim of disproportionate media coverage of terror attacks by Muslims. In models with no control variables, terror attacks by Muslim perpetrators were estimated to receive 5.0 times as much media coverage as other terror attacks (p=0.008), but, controlling for the number of fatalities, this effect size drops to 1.53 times as much media coverage (p=0.480), which further drops to 1.30 times as much media coverage (p=0.622) after adding a control for attacks by unknown perpetrators, so that terror attacks by Muslim perpetrators are compared to terror attacks by known perpetrators who are not Muslim. See the Stata output below, in which "noa" is the number of articles and coefficients represent incident rate ratios:
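For readers unfamiliar with incident rate ratios: in a Poisson-type count model with a single binary predictor and no controls, the IRR for that predictor is exp(coefficient), which equals the ratio of the groups' mean counts. A minimal sketch with hypothetical article counts, not the Kearns et al. data:

```python
# Sketch (hypothetical counts, not the Kearns et al. data): in a count
# model with a log link and one binary predictor, the incident rate ratio
# (IRR) is exp(coefficient), which equals the ratio of group mean counts.

import math

# Hypothetical article counts ("noa") per attack.
muslim_perp = [20, 35, 60, 5]
other_perp = [4, 2, 10, 8]

mean_muslim = sum(muslim_perp) / len(muslim_perp)  # 30.0
mean_other = sum(other_perp) / len(other_perp)     # 6.0

irr = mean_muslim / mean_other   # ratio of mean counts
coef = math.log(irr)             # the model coefficient on the log scale

print(round(irr, 2))             # IRR: coverage multiple for Muslim-perpetrator attacks
print(round(math.exp(coef), 2))  # exp(coef) recovers the IRR
```

Once controls such as the number of fatalities enter the model, the IRR is no longer a simple ratio of raw means, which is why the estimate changes so much across the specifications above.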

[Figure: Stata output for the reanalysis of the Kearns et al. data]

My code contains descriptions of corrections and coding decisions that I made. Data from the Global Terrorism Database are not permitted to be posted online without permission, so the code is the only information about the dataset that I am posting for now. However, the code describes how you can build your own dataset in Stata.

Below is the message that I sent to Kearns and Lemieux on March 17. Question 2 refers to the possibility that the Kearns et al. outcome variable includes news articles published before the identities of the Boston Marathon bombers were known; that lack of knowledge of who the perpetrators were makes it difficult to assign that early media coverage to the Muslim identity of the perpetrators. Question 3 refers to the fact that the coefficient on the Muslim perpetrator predictor is larger as the number of fatalities in that attack is smaller; the Global Terrorism Database lists four rows of data for the Tsarnaev case, the first of which has only one fatality, so I wanted to check to make sure that there is no error about this in the Kearns et al. data.

Hi Erin,

I created a dataset from the Global Terrorism Database and the data in the appendix of your SSRN paper. I messaged the Monkey Cage about writing a response to your post, and I received the suggestion to communicate with you about the planned response post.

For now, I have three requests:

  1. Can you report the number of articles in your dataset for Bobby Joe Rogers [id 201201010020] and Ray Lazier Lengend? The appendix of your paper has perpetrator Ray Lazier Lengend associated with the id for Bobby Joe Rogers.
  2. Can you report the earliest published date and the latest published date among the 474 articles in your dataset for the Tsarnaev case?
  3. Can you report the number killed in your dataset for the Tsarnaev case?

I have attached a do file that can be used to construct my dataset and run my analyses in Stata. Let me know if you have any questions, see any errors, or have any suggestions.

Thanks,

L.J

I have not yet received a reply to this message.

I pitched a response post to the Monkey Cage regarding my analysis, but the pitch was not accepted, at least while the Kearns et al. paper is unpublished.

---

NOTES:

[1] Data from the Global Terrorism Database have this citation: National Consortium for the Study of Terrorism and Responses to Terrorism (START). (2016). Global Terrorism Database [Data file]. Retrieved from https://www.start.umd.edu/gtd.

[2] The method for eliminating news articles in the Kearns et al. working paper included this choice:

"We removed the following types of articles most frequently: lists of every attack of a given type, political or policy-focused articles where the attack or perpetrators were an anecdote to a larger debate, such as abortion or gun control, and discussion of vigils held in other locations."

It is worth assessing the degree to which this choice disproportionately reduces the count of articles for the Dylann Roof terror attack, which served as a background for many news articles about the display of the Confederate flag. It's not entirely clear why these types of articles should not be considered when assessing whether terror attacks by Muslims receive disproportionate media coverage.

[3] Controlling for attacks by unknown perpetrators, controlling for fatalities, and removing the Tsarnaev case drops the point estimate for the incident rate ratio to 0.89 (p=0.823).


Here's part of the abstract from Rios Morrison and Chung 2011, published in the Journal of Experimental Social Psychology:

In both studies, nonminority participants were randomly assigned to mark their race/ethnicity as either "White" or "European American" on a demographic survey, before answering questions about their interethnic attitudes. Results demonstrated that nonminorities primed to think of themselves as White (versus European American) were subsequently less supportive of multiculturalism and more racially prejudiced, due to decreases in identification with ethnic minorities.

So asking white respondents to select their race/ethnicity as "European American" instead of "White" influenced whites' attitudes toward and about ethnic minorities. The final sample for study 1 was a convenience sample of 77 self-identified whites and 52 non-whites, and the final sample for study 2 was 111 white undergraduates.

Like I wrote before, if you're thinking that it would be interesting to see whether these results hold in a nationally representative sample with a large sample size, well, that was tried, with a survey experiment conducted as part of the Time-sharing Experiments for the Social Sciences. Here are the results:

[Figure: reanalysis results for the TESS "European American" versus "White" experiment]

I'm mentioning these results again because in October 2014 the journal that published Rios Morrison and Chung 2011 desk rejected the manuscript that I submitted describing these results. So you can read in the Journal of Experimental Social Psychology about results for the low-powered test on convenience samples for the "European American" versus "White" self-identification hypothesis, but you won't be able to read in the JESP about results when that hypothesis was tested with a higher-powered test on a nationally-representative sample with data collected by a disinterested third party.

I submitted a revision of the manuscript to Social Psychological and Personality Science, which extended a revise-and-resubmit offer conditional on inclusion of a replication of the TESS experiment. I planned to conduct an experiment with an MTurk sample, but I eventually declined the revise-and-resubmit opportunity for various reasons.

The most recent version of the manuscript is here. Links to data and code.


In the Political Behavior article, "The Public's Anger: White Racial Attitudes and Opinions Toward Health Care Reform", Antoine J. Banks presented evidence that "anger uniquely pushes racial conservatives to be more opposing of health care reform while it triggers more support among racial liberals" (p. 493). Here is how the outcome variable was measured in the article's reported analysis (p. 511):

Health Care Reform is a dummy variable recoded 0-1 with 1 equals opposition to reform. The specific item is "As of right now, do you favor or oppose Barack Obama and the Democrats' Health Care reform bill". The response options were yes = I favor the health care bill or no = I oppose the health care bill.

However, the questionnaire for the study indicates that there were multiple items used to measure opinions of health care reform:

W2_1. Do you approve or disapprove of the way Barack Obama is handling Health Care? Please indicate whether you approve strongly, approve somewhat, neither approve nor disapprove, disapprove somewhat, or disapprove strongly.

W2_2. As of right now, do you favor or oppose Barack Obama and the Democrats' Health Care reform bill?

[if "favor" on W2_2] W2_2a. Do you favor Barack Obama and the Democrats' Health Care reform bill very strongly, or not so strongly?

[if "oppose" on W2_2] W2_2b. Do you oppose Barack Obama and the Democrats' Health Care reform bill very strongly, or not so strongly?

The bolded item above (W2_2) is the only item reported on as an outcome variable in the article. The reported analysis omitted results for one outcome variable (W2_1) and reported dichotomous results for the other outcome variable (W2_2), for which the apparent intention was to construct a four-point outcome variable ranging from oppose strongly to favor strongly.

---

Here is the manuscript that I submitted to Political Behavior in March 2015 describing the results using the presumed intended outcome variables and a straightforward research design (e.g., no political discussion control, no exclusion of cases, cases from all conditions analyzed at the same time). Here's the main part of the main figure:

[Figure: reproduction of the main Banks 2014 results]

The takeaway is that, with regard to opposition to health care reform, the effect of symbolic racism in the fear condition differed at a statistically significant level from its effect in the baseline relaxed condition; however, contra Banks 2014, the effect of symbolic racism in the anger condition did not differ at a statistically significant level from its effect in the relaxed condition. Symbolic racism had a positive effect in the anger condition, but anger was not a unique influence.

The submission to Political Behavior was rejected after peer review. Comments suggested analyzing the presumed intended outcome variables while using the research design choices in Banks 2014. Using the model in Table 2 column 1 of Banks 2014, the fear interaction term and the fear condition term are statistically significant at p<0.05 for predicting the two previously-unreported non-dichotomous outcome variables and for predicting the scale of these two variables; the anger interaction term and the anger condition term are statistically significant at p<0.05 for predicting two of these three outcome variables, with p-values for the residual "Obama handling" outcome variable at roughly 0.10. The revised manuscript describing these results is here.
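For readers unfamiliar with how such interaction terms work: in a model of the form y = b0 + b1*condition + b2*racism + b3*(condition*racism), the interaction coefficient b3 equals the difference between the symbolic-racism slope in the treatment condition and the slope in the baseline condition. A minimal sketch with hypothetical data, not the Banks 2014 data:

```python
# Sketch (hypothetical data, not the Banks 2014 data): the interaction
# coefficient in y = b0 + b1*anger + b2*racism + b3*(anger*racism) equals
# the symbolic-racism slope in the anger condition minus the slope in the
# baseline (relaxed) condition.

def slope(xs, ys):
    """OLS slope of y on x within one condition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical symbolic racism scores (0-1) and opposition to reform (0-1).
racism_base, y_base = [0.2, 0.4, 0.6, 0.8], [0.30, 0.40, 0.50, 0.60]
racism_anger, y_anger = [0.2, 0.4, 0.6, 0.8], [0.20, 0.45, 0.70, 0.95]

b_base = slope(racism_base, y_base)      # baseline slope
b_anger = slope(racism_anger, y_anger)   # anger-condition slope
interaction = b_anger - b_base           # what the interaction term estimates
print(round(interaction, 2))
```

A statistically significant interaction term thus indicates that the symbolic-racism slope in that emotion condition differs from the baseline slope, which is the "unique influence" claim being tested.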

---

Data are here, and code for the initial submission is here.

---

Antoine Banks has published several studies on anger and racial politics (here, for example) that should be considered when making inferences about the substance of the effect of anger on racial attitudes. Banks had a similar article published in the AJPS, with Nicholas Valentino. Data for that article are here. I did not see any problems with that analysis, but I didn't look very hard, because the posted data were not the raw data: the posted data that I checked omitted, for example, the variables used to construct the outcome variable.


This periodically-updated page acknowledges researchers who have shared data or code or have answered questions about their research. I have tried to acknowledge everyone who provided data, code, or information, but let me know if I missed anyone who should be on the list. The list is chronological, based on the date that I first received data, code, or information.

Aneeta Rattan for answering questions about and providing data used in "Race and the Fragility of the Legal Distinction between Juveniles and Adults" by Aneeta Rattan, Cynthia S. Levine, Carol S. Dweck, and Jennifer L. Eberhardt.

Maureen Craig for code for "More Diverse Yet Less Tolerant? How the Increasingly Diverse Racial Landscape Affects White Americans' Racial Attitudes" and for "On the Precipice of a 'Majority-Minority' America", both by Maureen A. Craig and Jennifer A. Richeson.

Michael Bailey for answering questions about his ideal point estimates.

Jeremy Freese for answering questions and conducting research about past studies of the Time-sharing Experiments for the Social Sciences program.

Antoine Banks and AJPS editor William Jacoby for posting data for "Emotional Substrates of White Racial Attitudes" by Antoine J. Banks and Nicholas A. Valentino.

Gábor Simonovits for data for "Publication Bias in the Social Sciences: Unlocking the File Drawer" by Annie Franco, Neil Malhotra, and Gábor Simonovits.

Ryan Powers for posting and sending data and code for "The Gender Citation Gap in International Relations" by Daniel Maliniak, Ryan Powers, and Barbara F. Walter. Thanks also to Daniel Maliniak for answering questions about the analysis.

Maya Sen for data and code for "How Judicial Qualification Ratings May Disadvantage Minority and Female Candidates" by Maya Sen.

Antoine Banks for data and code for "The Public's Anger: White Racial Attitudes and Opinions Toward Health Care Reform" by Antoine J. Banks.

Travis L. Dixon for the codebook for and for answering questions about "The Changing Misrepresentation of Race and Crime on Network and Cable News" by Travis L. Dixon and Charlotte L. Williams.

Adam Driscoll for providing summary statistics for "What's in a Name: Exposing Gender Bias in Student Ratings of Teaching" by Lillian MacNell, Adam Driscoll, and Andrea N. Hunt.

Andrei Cimpian for answering questions and providing more detailed data than available online for "Expectations of Brilliance Underlie Gender Distributions across Academic Disciplines" by Sarah-Jane Leslie, Andrei Cimpian, Meredith Meyer, and Edward Freeland.

Vicki L. Claypool Hesli for providing data and the questionnaire for "Predicting Rank Attainment in Political Science" by Vicki L. Hesli, Jae Mook Lee, and Sara McLaughlin Mitchell.

Jo Phelan for directing me to data for "The Genomic Revolution and Beliefs about Essential Racial Differences: A Backdoor to Eugenics?" by Jo C. Phelan, Bruce G. Link, and Naumi M. Feldman.

Spencer Piston for answering questions about "Accentuating the Negative: Candidate Race and Campaign Strategy" by Yanna Krupnikov and Spencer Piston.

Amanda Koch for answering questions and providing information about "A Meta-Analysis of Gender Stereotypes and Bias in Experimental Simulations of Employment Decision Making" by Amanda J. Koch, Susan D. D'Mello, and Paul R. Sackett.

Kevin Wallsten and Tatishe M. Nteta for answering questions about "Racial Prejudice Is Driving Opposition to Paying College Athletes. Here's the Evidence" by Kevin Wallsten, Tatishe M. Nteta, and Lauren A. McCarthy.

Hannah-Hanh D. Nguyen for answering questions and providing data for "Does Stereotype Threat Affect Test Performance of Minorities and Women? A Meta-Analysis of Experimental Evidence" by Hannah-Hanh D. Nguyen and Ann Marie Ryan.

Solomon Messing for posting data and code for "Bias in the Flesh: Skin Complexion and Stereotype Consistency in Political Campaigns" by Solomon Messing, Maria Jabon, and Ethan Plaut.

Sean J. Westwood for data and code for "Fear and Loathing across Party Lines: New Evidence on Group Polarization" by Sean J. Westwood and Shanto Iyengar.

Charlotte Cavaillé for code and for answering questions for the Monkey Cage post "No, Trump won't win votes from disaffected Democrats in the fall" by Charlotte Cavaillé.

Kris Byron for data for "Women on Boards and Firm Financial Performance: A Meta-Analysis" by Corrine Post and Kris Byron.

Hans van Dijk for data for "Defying Conventional Wisdom: A Meta-Analytical Examination of the Differences between Demographic and Job-Related Diversity Relationships with Performance" by Hans van Dijk, Marloes L. van Engen, and Daan van Knippenberg.

Alexandra Filindra for answering questions about "Racial Resentment and Whites' Gun Policy Preferences in Contemporary America" by Alexandra Filindra and Noah J. Kaplan.
