In March 2017, Erin Cassese and Tiffany D. Barnes posted a preregistration on the Open Science Framework for the Election Research Preacceptance Competition, describing a planned analysis of data from the 2016 American National Election Studies Time Series Study. The preregistration was titled "Unpacking White Women's Political Loyalties".

The Cassese and Barnes 2019 Political Behavior article "Reconciling Sexism and Women's Support for Republican Candidates: A Look at Gender, Class, and Whiteness in the 2012 and 2016 Presidential Races" reported results from analyses of data from the 2016 American National Election Studies Time Series Study that addressed content similar to that of the aforementioned preregistration: 2016 presidential vote choice, responses on a scale measuring sexism, a comparison of how vote choice associates with sexism among men and among women, perceived discrimination against women, and a comparison of 2016 patterns to 2012 patterns.

But, from what I can tell, the analysis in the Political Behavior article did not follow the preregistered plan, and the article did not even reference the preregistration.

---

Moreover, some of the hypotheses in the preregistration appear to differ from hypotheses in the article. For example, the preregistration did not expect vote choice to associate with sexism differently in 2012 compared to 2016, but the article did. From the preregistration (emphasis added):

H5: When comparing the effects of education and modern sexism on abstentions, candidate evaluations, and vote choice in the 2012 and 2016 ANES data, we expect comparable patterns and effect sizes to emerge. (This is a non-directional hypothesis; essentially, we expect to see no difference and conclude that these relationships are relatively stable across election years. The alternative is that the direction and significance of the estimated effects in these models varies across the two election years.)

From the article (emphasis added):

To evaluate our expectations, we compare analysis of the 2012 and 2016 ANES surveys, with the expectation that hostile sexism and perceptions of discrimination had a larger impact on voters in 2016 due to the salience of sexism in the campaign (Hypothesis 5).

I don't think that the distinction between modern sexism and hostile sexism in the above passages matters, because the two documents classified the same items differently: for example, the preregistration placed the "women complain" item in a modern sexism measure, but the article placed the same item in a hostile sexism measure.

---

Another instance, from the preregistration (emphasis added):

H3: The effect of modern sexism differs for white men and women. (This is a non-directional hypothesis. For women, modern sexism is an ingroup orientation, pertaining to women’s own group or self-interest, while for men it is an outgroup orientation. For this reason, the connection between modern sexism, candidate evaluations, and vote choice may vary, but we do not have strong a priori assumptions about the direction of the difference.)

From the article (emphasis added):

Drawing on the whiteness and system justification literatures, we expect these beliefs about gender will influence vote choice in a similar fashion for both white men and women (Hypothesis 4).

---

I think that readers of the Political Behavior article should be informed of the preregistration because preregistration, as I understand it, is intended to remove flexibility in research design, and preregistration won't be effective at removing research design flexibility if researchers retain the flexibility to not inform readers of the preregistration. I can imagine a circumstance in which the analyses reported in a publication do not need to follow the associated preregistration, but I can't think of a good justification for Cassese and Barnes 2019 readers not being informed of the Cassese and Barnes 2017 preregistration.

---

NOTES

1. Cassese and Barnes 2019 indicated that (footnote omitted and emphasis added):

To evaluate our hypothesis that disadvantaged white women will be most likely to endorse hostile sexist beliefs and more reluctant to attribute gender-based inequality to discrimination, we rely on the hostile sexism scale (Glick and Fiske 1996). The ANES included two questions from this scale: (1) Do women demanding equality seek special favors? and (2) Do women complaining about discrimination cause more problems than they solve? Items were combined to form a mean-centered scale. We also rely on a single survey item asking respondents how much discrimination women face in the United States. Responses were given on a 5-point Likert scale ranging from none to a great deal. This item taps modern sexism (Cassese et al. 2015). Whereas both surveys contain other items gauging gender attitudes (e.g., the 2016 survey contains a long battery of hostile sexism items), the items we use here are the only ones found in both surveys and thus facilitate direct comparisons, with accurate significance tests, between 2012 and 2016.

However, from what I can tell, the ANES 2012 Time Series Codebook and the ANES 2016 Time Series Codebook both contain a modern sexism item about media attention (modsex_media in 2012, MODSEXM_MEDIAATT in 2016) and a gender attitudes item about bonding (women_bond in 2012, WOMEN_WKMOTH in 2016). The media attention item is listed in the Cassese and Barnes 2017 preregistration as part of the modern sexism dependent variable / mediating variable, and the preregistration indicates that:

We have already completed the analysis of the 2012 ANES data and found support for hypotheses H1-H4a in the pre-Trump era. The analysis plan presented here is informed by that analysis.

2. Some of the content from this post is from a "Six Things Peer Reviewers Can Do To Improve Political Science" manuscript. In June 2018, I emailed the Cassese and Barnes 2019 corresponding author a draft of the manuscript, in which I had redacted criticism of work by other authors whom I had not yet informed of my criticism. For another example from the "Six Things" manuscript, click here.


The Chudy 2021 Journal of Politics article "Racial Sympathy and Its Political Consequences" concerns White racial sympathy for Blacks.

More than a decade ago, Hutchings 2009 reported evidence about White racial sympathy for Blacks. Below is a table from Hutchings 2009 indicating that, among White liberals and among White conservatives, sympathy for Blacks predicted (at p<0.05) support for government policies explicitly intended to benefit Blacks, such as government aid to Blacks, controlling for factors such as anti-Black stereotypes:

Chudy 2021 thanked Vincent Hutchings in the acknowledgments, and Vincent Hutchings is listed as co-chair of Jennifer Chudy's "Racial Sympathy in American Politics" dissertation. But see whether you can find in the Chudy 2021 JOP article an indication that Hutchings 2009 had reported evidence that White racial sympathy for Blacks predicted support for government policies explicitly intended to benefit Blacks.

Here is a passage from Chudy 2021 referencing Hutchings 2009:

I start by examining white support for "government aid to blacks," a broad policy area that has appeared on the ANES since the 1970s. The question asks respondents to place themselves on a 7-point scale that ranges from "Blacks Should Help Themselves" to "Government Should Help Blacks." Previous research on this question has found that racial animus leads some whites to oppose government aid to African Americans (Hutchings 2009). This analysis examines whether racial sympathy leads some white Americans to offer support for this contentious policy area.

I think that the above passage can reasonably be read as suggesting the incorrect claim that the Hutchings 2009 "previous research on this question" did not examine "whether racial sympathy leads some white Americans to offer support for this contentious policy area [of government aid to African Americans]".

---

NOTES:

1. Chudy 2021 reported results from an experiment that varied the race of a target culprit and asked participants to recommend a punishment. Chudy 2021 Figure 2 plotted estimates of recommended punishments at different levels of racial sympathy.

The Chudy 2021 analysis used a linear regression, which produced an estimated difference by race, on a 0-to-100 scale, of -22 at the lowest level of racial sympathy and of 41 at the highest level of racial sympathy. These differences can be seen in the left plot below, with the racial sympathy index coded from 0 through 16.

However, the presumption of a linear relationship might not be correct. The right plot reports estimates calculated separately at each level of the racial sympathy index, so that the estimate at the highest level of racial sympathy is not influenced by cases at other levels of racial sympathy.
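The contrast between the two approaches can be sketched with simulated data. This is only an illustration of the general point, not a reanalysis of Chudy's data: the variable names and the data-generating process below are my own assumptions, chosen so that the true race gap is nonlinear in sympathy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (hypothetical) data: a 0-100 punishment outcome, a binary
# white-culprit indicator, and a 0-16 racial sympathy index.
n = 4000
sympathy = rng.integers(0, 17, n)
white = rng.integers(0, 2, n)
# Nonlinear true race gap: about -5 at low sympathy, about 40 at high sympathy.
true_gap = np.where(sympathy >= 12, 40.0, -5.0)
punish = 50 + white * true_gap + rng.normal(0, 10, n)

# (1) Linear-interaction model: punish ~ white + sympathy + white:sympathy.
X = np.column_stack([np.ones(n), white, sympathy, white * sympathy])
beta, *_ = np.linalg.lstsq(X, punish, rcond=None)
linear_gap = beta[1] + beta[3] * np.arange(17)  # implied gap at each level

# (2) Per-level estimates: difference in means within each sympathy level,
# so the estimate at one level is not influenced by cases at other levels.
perlevel_gap = np.array([
    punish[(sympathy == s) & (white == 1)].mean()
    - punish[(sympathy == s) & (white == 0)].mean()
    for s in range(17)
])

print("linear model gap at levels 0 and 16:", np.round(linear_gap[[0, 16]], 1))
print("per-level gap at levels 0 and 16:  ", np.round(perlevel_gap[[0, 16]], 1))
```

Because the true gap here is a step function, the linear-interaction model misstates the gap at the extremes of the index while the per-level estimates recover it; the point is only that the linearity assumption does real work in a plot of estimates across levels of a moderator.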

2. Chudy 2021 Figure 2 plots results from Chudy 2021 Table 5, but using a reversed outcome variable for some reason.

3. Chudy 2021 used the term "predicted probability" to discuss the Figure 2 / Table 5 results, but these results are predicted levels of an outcome variable that had eight levels, from "0-10 hours" to "over 70 hours" (see the bottom of the final page in the Chudy 2021 supplemental web appendix).

4. The bias detected in this experiment across all levels of racial sympathy was 13 units on a 0-to-100 scale, disfavoring the White culprit relative to the Black culprit (p=0.01) [svy: reg commservice whiteblackculprit].

5. Code for my analyses.
