I left this as a comment here.

For what it's worth, here are questions that I ask when evaluating research:

1. Did the researchers preregister their research design choices so that we can be sure that those choices were not made based on the data? If not, are the research design choices consistent with choices that the researchers have previously made in other research?

2. Have the researchers publicly posted documentation and all the data that were collected, so that other researchers can check the analysis for errors and assess the robustness of the reported results?

3. Did the researchers declare that there are no unreported file drawer studies, unreported manipulations, and unreported variables that were measured?

4. Were the data collected by an independent third party?

5. Is the sample representative of the population of interest?

---

Here are posts on R graphs, some of which are for barplots. I recently graphed a barplot with labels on the bars, so I'll post the code here. I cropped the y-axis numbers for the printed figure; from what I can tell, it takes a little more code and a lot more trouble (for me, at least) to code a plot without the left set of numbers.

[Image: the finished Heritage barplot]

---

This loads the Hmisc package, which has the mgp.axis command that will be used later:

require(Hmisc)

---

This command tells R to make a set of graphs with 1 row and 1 column (mfrow), to set margins of 6, 6, 2, and 1 lines for the bottom, left, top, and right sides (mar), and to place the axis titles, tick-mark labels, and tick marks at 0, 4, and 0 lines from the plot region (mgp):

par(mfrow=c(1,1), mar=c(6, 6, 2, 1), mgp=c(0,4,0))

---

This command enters the data for the barplot:

heritage <- c(66, 0, 71, 10, 49, 36)

---

This command enters the colors for the barplot bars:

colors <- c("royalblue4", "royalblue4", "cornflowerblue", "cornflowerblue", "navyblue", "navyblue")

---

This command enters the labels for the barplot bars, with \n indicating a new line:

names <- c("Flag reminds of\nSouthern heritage\nmore than\nwhite supremacy", "Flag reminds of\nwhite supremacy\nmore than\nSouthern heritage", "Proud of what\nthe Confederacy\nstood for", "Not proud of what\nthe Confederacy\nstood for", "What happens to\nSoutherners\naffects my life\na lot", "What happens to\nSoutherners\naffects my life\nnot very much")

---

This command plots a barplot of heritage with the indicated main title, no y-axis label, a y-axis running from 0 to 90, horizontal axis labels (las=1), colors from the "colors" vector, and bar names from the "names" vector:

bp <- barplot(heritage, main="Percentage who Preferred the Georgia Flag with the Confederate Battle Emblem", ylab=NA, ylim=c(0,90), las=1, col=colors, names.arg=names)

---

This command draws a y-axis on the left side (2), with tick marks at the indicated values and with the tick-mark labels horizontal (las=2):

mgp.axis(2, at=c(0, 20, 40, 60, 80), las=2)

The above code is for the rightmost set of y-axis labels in the figure.
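---

As an aside, if you would rather drop the default set of axis numbers entirely, one option is to suppress barplot's default y-axis with axes=FALSE and then draw only the custom axis. This is a minimal sketch, not the code used for the printed figure:

# A minimal sketch, not the code used for the printed figure:
# axes=FALSE suppresses barplot's default y-axis, so only the
# mgp.axis() labels appear.
bp <- barplot(heritage, main="Percentage who Preferred the Georgia Flag with the Confederate Battle Emblem", ylab=NA, ylim=c(0,90), las=1, col=colors, names.arg=names, axes=FALSE)
mgp.axis(2, at=c(0, 20, 40, 60, 80), las=2)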

---

This command enters the text labels (the percentage and sample size) for the barplot bars:

labels <- c("66%\n(n=301)", "0%\n(n=131)", "71%\n(n=160)", "10%\n(n=104)", "49%\n(n=142)", "36%\n(n=122)")

---

This command plots the labels at the coordinates (bar midpoint from bp, heritage value + 2), with pos=3 placing the text above those points:

text(bp, heritage+2, labels, cex=1, pos=3)

---

Full code is here.

---

This periodically updated page acknowledges researchers who have shared data or code or have answered questions about their research. I tried to acknowledge everyone who provided data, code, or information, but let me know if I missed anyone who should be on the list. The list is chronological, based on the date that I first received data, code, or information.

Aneeta Rattan for answering questions about and providing data used in "Race and the Fragility of the Legal Distinction between Juveniles and Adults" by Aneeta Rattan, Cynthia S. Levine, Carol S. Dweck, and Jennifer L. Eberhardt.

Maureen Craig for code for "More Diverse Yet Less Tolerant? How the Increasingly Diverse Racial Landscape Affects White Americans' Racial Attitudes" and for "On the Precipice of a 'Majority-Minority' America", both by Maureen A. Craig and Jennifer A. Richeson.

Michael Bailey for answering questions about his ideal point estimates.

Jeremy Freese for answering questions and conducting research about past studies of the Time-sharing Experiments for the Social Sciences program.

Antoine Banks and AJPS editor William Jacoby for posting data for "Emotional Substrates of White Racial Attitudes" by Antoine J. Banks and Nicholas A. Valentino.

Gábor Simonovits for data for "Publication Bias in the Social Sciences: Unlocking the File Drawer" by Annie Franco, Neil Malhotra, and Gábor Simonovits.

Ryan Powers for posting and sending data and code for "The Gender Citation Gap in International Relations" by Daniel Maliniak, Ryan Powers, and Barbara F. Walter. Thanks also to Daniel Maliniak for answering questions about the analysis.

Maya Sen for data and code for "How Judicial Qualification Ratings May Disadvantage Minority and Female Candidates" by Maya Sen.

Antoine Banks for data and code for "The Public's Anger: White Racial Attitudes and Opinions Toward Health Care Reform" by Antoine J. Banks.

Travis L. Dixon for the codebook for and for answering questions about "The Changing Misrepresentation of Race and Crime on Network and Cable News" by Travis L. Dixon and Charlotte L. Williams.

Adam Driscoll for providing summary statistics for "What's in a Name: Exposing Gender Bias in Student Ratings of Teaching" by Lillian MacNell, Adam Driscoll, and Andrea N. Hunt.

Andrei Cimpian for answering questions and providing more detailed data than available online for "Expectations of Brilliance Underlie Gender Distributions across Academic Disciplines" by Sarah-Jane Leslie, Andrei Cimpian, Meredith Meyer, and Edward Freeland.

Vicki L. Claypool Hesli for providing data and the questionnaire for "Predicting Rank Attainment in Political Science" by Vicki L. Hesli, Jae Mook Lee, and Sara McLaughlin Mitchell.

Jo Phelan for directing me to data for "The Genomic Revolution and Beliefs about Essential Racial Differences: A Backdoor to Eugenics?" by Jo C. Phelan, Bruce G. Link, and Naumi M. Feldman.

Spencer Piston for answering questions about "Accentuating the Negative: Candidate Race and Campaign Strategy" by Yanna Krupnikov and Spencer Piston.

Amanda Koch for answering questions and providing information about "A Meta-Analysis of Gender Stereotypes and Bias in Experimental Simulations of Employment Decision Making" by Amanda J. Koch, Susan D. D'Mello, and Paul R. Sackett.

Kevin Wallsten and Tatishe M. Nteta for answering questions about "Racial Prejudice Is Driving Opposition to Paying College Athletes. Here's the Evidence" by Kevin Wallsten, Tatishe M. Nteta, and Lauren A. McCarthy.

Hannah-Hanh D. Nguyen for answering questions and providing data for "Does Stereotype Threat Affect Test Performance of Minorities and Women? A Meta-Analysis of Experimental Evidence" by Hannah-Hanh D. Nguyen and Ann Marie Ryan.

Solomon Messing for posting data and code for "Bias in the Flesh: Skin Complexion and Stereotype Consistency in Political Campaigns" by Solomon Messing, Maria Jabon, and Ethan Plaut.

Sean J. Westwood for data and code for "Fear and Loathing across Party Lines: New Evidence on Group Polarization" by Sean J. Westwood and Shanto Iyengar.

Charlotte Cavaillé for code and for answering questions for the Monkey Cage post "No, Trump won't win votes from disaffected Democrats in the fall" by Charlotte Cavaillé.

Kris Byron for data for "Women on Boards and Firm Financial Performance: A Meta-Analysis" by Corrine Post and Kris Byron.

Hans van Dijk for data for "Defying Conventional Wisdom: A Meta-Analytical Examination of the Differences between Demographic and Job-Related Diversity Relationships with Performance" by Hans van Dijk, Marloes L. van Engen, and Daan van Knippenberg.

Alexandra Filindra for answering questions about "Racial Resentment and Whites' Gun Policy Preferences in Contemporary America" by Alexandra Filindra and Noah J. Kaplan.

---

Here are four items typically used to measure symbolic racism, in which respondents are asked to indicate their level of agreement with the statements:

1. Irish, Italians, Jewish and many other minorities overcame prejudice and worked their way up. Blacks should do the same without any special favors.

2. Generations of slavery and discrimination have created conditions that make it difficult for blacks to work their way out of the lower class.

3. Over the past few years, blacks have gotten less than they deserve.

4. It's really a matter of some people not trying hard enough; if blacks would only try harder they could be just as well off as whites.
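
For illustration, responses to items like these are typically combined into a single scale by reverse-coding the items on which agreement indicates sympathy toward blacks (items 2 and 3 above) and then averaging. Here is a minimal sketch, assuming a five-point agreement scale and using made-up responses:

# A minimal sketch with made-up data: responses coded 1 (strongly
# disagree) to 5 (strongly agree) for three hypothetical respondents.
item1 <- c(5, 2, 4)
item2 <- c(1, 4, 2)
item3 <- c(2, 5, 3)
item4 <- c(4, 1, 5)

# Reverse-code items 2 and 3, on which agreement indicates sympathy:
item2r <- 6 - item2
item3r <- 6 - item3

# Average the four items; higher scores indicate more symbolic racism:
(item1 + item2r + item3r + item4) / 4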

These four items are designed such that an antiblack racist would tend to respond the same way as a non-racist principled conservative. Many researchers recognize this conflation problem and make an effort to account for it. For example, here is an excerpt from Rabinowitz, Sears, Sidanius, and Krosnick 2010, explaining how responses to symbolic racism items might be influenced in part by non-racial values:

Adherence to traditional values—without concomitant racial prejudice—could drive Whites' responses to SR [symbolic racism] measures and their opinions on racial policy issues. For example, Whites' devotion to true equality may lead them to oppose what they might view as inherently inequitable policies, such as affirmative action, because it provides advantages for some social groups and not others. Similarly affirmative action may be perceived to violate the traditional principle of judging people on their merits, not their skin color. Consequently, opposition to such policies may result from their perceived violation of widely and closely held principles rather than racism.

However, this nuance is sometimes lost. Here is an excerpt from the Pasek, Krosnick, and Tompson 2012 manuscript that was discussed by the Associated Press shortly before the 2012 presidential election:

Explicit racial attitudes were gauged using questions designed to measure "Symbolic Racism" (Henry & Sears, 2002).

...

The proportion of Americans expressing explicit anti-Black attitudes held steady between 47.6% in 2008 and 47.3% in 2010, and increased slightly and significantly to 50.9% in 2012.

---

See here and here for a discussion of the Pasek et al. 2012 manuscript.

---

From the abstract of Bucolo and Cohn 2010 (gated, ungated):

'Playing the race card' reduced White juror racial bias as White jurors' ratings of guilt for Black defendants were significantly lower when the defence attorney's statements included racially salient statements. White juror ratings of guilt for White defendants and Black defendants were not significantly different when race was not made salient.

The second sentence reports that white mock juror ratings of guilt were not significantly different for black defendants and white defendants when race was not made salient, but the first sentence claims that "playing the race card" reduced white juror racial bias. But if the data can't support the inference that there is bias without the race card ("not significantly different"), then how can the data support the inference that "playing the race card" reduced bias?

For the answer, let's look at the Results section (p. 298). Guilt ratings were reported on a scale from -5 (definitely not guilty) to +5 (definitely guilty):

A post hoc t test (t(75) = .24, p = .81) revealed that ratings of guilt for a Black defendant (M = 1.10, SD = 2.63) were not significantly different than ratings of guilt for a White defendant (M = .95, SD = 2.92) when race was not made salient. When race was made salient, a post hoc t test (t(72) = 3.57, p =.001) revealed that ratings of guilt were significantly lower for a Black defendant (M = -1.32, SD = 2.91) than a White defendant (M = 1.31, SD = 2.96).

More simply, when race was not made salient, white mock jurors rated the black defendant roughly 5% of a standard deviation more guilty than the white defendant, which is a difference that would often fall within the noise created by sampling error (p=0.81). However, when race was made salient by playing the race card, white mock jurors rated the black defendant roughly 90% of a standard deviation less guilty than the white defendant, which is a difference that would often not fall within the noise created by sampling error (p=0.001).
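
Here is the back-of-the-envelope calculation behind those percentages, using a simple average of the two group standard deviations (the exact pooled SD would require the group sizes, but the average is close enough for this purpose):

# Standardized differences from the reported means and SDs,
# averaging the two group SDs for simplicity:
(1.10 - 0.95) / mean(c(2.63, 2.92))      # race not salient: about 0.05 SD
(1.31 - (-1.32)) / mean(c(2.91, 2.96))   # race salient: about 0.90 SD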

---

Here is how Bucolo and Cohn 2010 was described in a 2013 statement from the Peace Psychology division of the American Psychological Association:

Ignoring race often harms people of color, primarily because biases and stereotypes go unexamined. A study by Donald Bucolo and Ellen Cohn at the University of New Hampshire found that the introduction of race by the defense attorney of a hypothetical Black client reduced the effects of racial bias compared to when race was not mentioned (Bucolo & Cohn, 2010). One error in the state's approach in the George Zimmerman murder trial may have been the decision to ignore issues of race and racism.

But a change from 5% of a standard deviation bias against black defendants to 90% of a standard deviation bias against white defendants is not a reduction in the effects of racial bias.

---

Note that the point of this post is not to present Bucolo and Cohn 2010 as representative of racial bias in the criminal justice system. There are many reasons to be skeptical of the generalizability of experimental research on undergraduate students acting as mock jurors at a university with few black students. Rather, the point of the post is to identify another example of selective concern in social science.

---

Jeffrey A. Segal and Albert D. Cover developed the Segal-Cover scores that are widely used to proxy the political ideology of Supreme Court nominees. Segal-Cover scores are described here (gated) and here (ungated). The scores are based on the coding of newspaper editorials, with each paragraph in the editorial coded as liberal, conservative, moderate, or not applicable (p. 559).
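If I understand the construction correctly, each nominee's score is the number of liberal paragraphs minus the number of conservative paragraphs, divided by the total number of paragraphs coded liberal, moderate, or conservative. A minimal sketch under that assumption, with made-up paragraph counts:

# A minimal sketch under the assumption described above; the
# paragraph counts are made up for illustration.
segal.cover <- function(lib, mod, con) (lib - con) / (lib + mod + con)
segal.cover(lib=12, mod=3, con=5)  # ranges from -1 (all conservative) to +1 (all liberal)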

Segal and Cover helpfully provided examples of passages that would cause a paragraph to be coded as liberal, conservative, or moderate. Here is Segal and Cover's first example of a passage that would cause a paragraph to be coded liberal:

Scarcely more defensible were the numerous questions about Judge Harlan's affiliation with the Atlantic Union. The country would have a sorry judiciary indeed, if appointees were to be barred for belonging to progressive and respectable organizations.

Here is Segal and Cover's first example of a passage that would cause a paragraph to be coded conservative:

Judge Carswell himself admits to some amazement now at what he said in that 1948 speech. He should, for his were the words of pure and simple racism.

I can't think of a better example of conservatism than that.

---

The American National Election Studies 2008 Time Series Study included an Affect Misattribution Procedure (AMP) that measured implicit attitudes. The 2008 ANES User's Guide, located here, noted that, "[d]uring this module, respondents attributed a 'pleasant' or 'unpleasant' characteristic to Chinese-character graphic images, each of which was displayed to the respondent following a briefly flashed photo image of a young male."
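For readers unfamiliar with the procedure, the implicit attitude measure in an AMP is typically scored as the difference in the proportion of "pleasant" responses following one type of prime versus the other. A minimal sketch with made-up data (the variable names are mine, not from the ANES files):

# A minimal sketch of typical AMP scoring, with made-up data.
# pleasant: 1 if the character was judged pleasant, 0 if unpleasant
# prime: race of the briefly flashed photo before each character
pleasant <- c(1, 0, 1, 1, 0, 1, 0, 0)
prime    <- c("black", "black", "black", "black", "white", "white", "white", "white")

# Implicit attitude: pleasant rate after white primes minus after
# black primes; positive values indicate more positive responses
# following white faces.
mean(pleasant[prime == "white"]) - mean(pleasant[prime == "black"])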

Here are the photos of the young males, from Appendix A:

[Image: ANES AMP Faces]

As you can see, this procedure measured implicit attitudes about mustaches.
