Is there a citation bias against research findings that undercut liberal assumptions or policy preferences? [UPDATE: Credit to Lee Jussim]

[Please see the March 13, 2019 update below]

Studies have indicated that there are more liberals than conservatives in the social sciences (e.g., Rothman et al. 2005, Gross and Simmons 2007). If social scientists are, on average, more likely to cite publications that support rather than undercut their assumptions about the world or their policy preferences, then it is reasonable to expect that, all else equal, publications reporting findings that support liberal assumptions or policy preferences will receive more citations than publications reporting findings that undercut those assumptions or preferences.

---

Here is a sort-of natural experiment to assess this potential ideological citation bias. From an April 2015 Scott Alexander post at Slate Star Codex (paragraph breaks omitted):

Williams and Ceci just released National Hiring Experiments Reveal 2:1 Faculty Preference For Women On STEM Tenure Track, showing a strong bias in favor of women in STEM hiring...Two years ago Moss-Racusin et al released Science Faculty's Subtle Gender Biases Favor Male Students, showing a strong bias in favor of men in STEM hiring. The methodology was almost identical to this current study, but it returned the opposite result. Now everyone gets to cite whichever study accords with their pre-existing beliefs.

It has been more than three years since that Slate Star Codex post, so let's compare the number of citations received by the article whose finding supports liberal assumptions or policy preferences (Moss-Racusin et al. 2012) to the number received by the article whose finding undercuts them (Williams and Ceci 2015). Both articles were published in the same journal, and both have a mixed-sex authorship team with a woman as first author; these similarities help eliminate a few alternate explanations for any difference in citation counts.

Based on Web of Science data collected August 24, 2018, here are the yearly citation counts for Moss-Racusin et al. 2012, with years since the article's publication year in square brackets:

  • 5 in 2012 [0]
  • 39 in 2013 [1]
  • 74 in 2014 [2]
  • 109 in 2015 [3]
  • 111 in 2016 [4]
  • 131 in 2017 [5]
  • 105 in 2018 to date [6]

Based on the same Web of Science data, here are the yearly citation counts for Williams and Ceci 2015, again with years since the article's publication year in square brackets:

  • 4 in 2015 [0]
  • 21 in 2016 [1]
  • 27 in 2017 [2]
  • 15 in 2018 to date [3]

So, in the second year from the article's publication year, Williams and Ceci 2015 was cited 27 times, and Moss-Racusin et al. 2012 was cited 74 times. Over the first three years, Williams and Ceci 2015 was cited 52 times, and Moss-Racusin et al. 2012 was cited 118 times.
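For anyone who wants to check the arithmetic, here is a minimal sketch (in Python; the counts are copied from the Web of Science lists above) that keys each article's yearly citations by years from publication and reproduces the totals:

    # Yearly citation counts keyed by years from the publication year,
    # copied from the Web of Science data listed above.
    mr2012 = {0: 5, 1: 39, 2: 74, 3: 109, 4: 111, 5: 131, 6: 105}  # Moss-Racusin et al. 2012
    wc2015 = {0: 4, 1: 21, 2: 27, 3: 15}                           # Williams and Ceci 2015

    # Cumulative citations over the first three years (offsets 0 through 2)
    mr_first_three = sum(mr2012[k] for k in range(3))  # 5 + 39 + 74 = 118
    wc_first_three = sum(wc2015[k] for k in range(3))  # 4 + 21 + 27 = 52
    print(mr_first_three, wc_first_three)              # 118 52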

---

The potential citation bias against research findings that undercut liberal assumptions or policy preferences might be something that tenure-and-promotion committees should be aware of. Such a citation bias would also be relevant for assessing the status of the journal that research is published in and whether research is even published. Suppose that a journal editor were given a choice of publishing either Moss-Racusin et al. 2012 or Williams and Ceci 2015. Based on the above data, an editor publishing Williams and Ceci 2015 instead of Moss-Racusin et al. 2012 would, three years in, be forfeiting roughly 66 citations to an article in their journal (118 minus 52). Editors who prefer higher impact factors for their journal might therefore prefer to publish a manuscript with research findings that support liberal assumptions or policy preferences, compared to an equivalent manuscript with research findings that undercut liberal assumptions or policy preferences.
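To put the impact-factor point in rough numbers: under the standard two-year impact factor window, a given article's citations count toward the journal's impact factor only in the two years after the publication year (offsets 1 and 2 above). A small sketch, assuming that window and using the same counts:

    # Citations at offsets 1 and 2 from the publication year are the ones
    # that count toward the journal's two-year impact factors.
    mr2012 = {0: 5, 1: 39, 2: 74}  # Moss-Racusin et al. 2012
    wc2015 = {0: 4, 1: 21, 2: 27}  # Williams and Ceci 2015

    mr_if = mr2012[1] + mr2012[2]  # 113 citations toward the 2013 and 2014 impact factors
    wc_if = wc2015[1] + wc2015[2]  # 48 citations toward the 2016 and 2017 impact factors
    print(mr_if, wc_if)            # 113 48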

---

NOTES

1. Williams and Ceci 2015 was first published earlier in its calendar year (April 8, 2015) than Moss-Racusin et al. 2012 was in its calendar year (September 17, 2012). Williams and Ceci 2015 therefore had more time to accumulate citations in its publication year, which should bias its citation counts upward relative to Moss-Racusin et al. 2012 at each year offset, not downward.

2. There might be non-ideological reasons for Moss-Racusin et al. 2012 to enjoy a roughly 2:1 citation advantage over Williams and Ceci 2015, so comments are open for ideas about any such reasons and for other ideas on this topic. The articles differ in the number of authors (2 for Williams and Ceci 2015, 5 for Moss-Racusin et al. 2012), but that difference seems unlikely to me to be responsible for the entire citation gap.

3. Some of my publications might be considered to fall into the category of research findings that undercut liberal assumptions or policy preferences.

---

UPDATE (Nov 30, 2018)

Here is another potential article pair:

The 1996 study, about items measuring sexism against women, was published earlier and in a higher-ranked journal than the 1999 study, about items measuring sexism against men; but the 1996 study has to date an excess of 1,238 citations, which I suspect cannot be completely attributed to the extra three years in circulation and the journal ranking.

---

UPDATE (Mar 13, 2019)

Lee Jussim noted, before I did, that Moss-Racusin et al. (2012) has been cited much more often than Williams and Ceci (2015); note also the difference in inferences between the articles. Lee's tweet below is from May 28, 2018:

https://twitter.com/PsychRabble/status/1001250104676929542

---

COMMENTS

  1. Could the different timings of the publications make a difference? Sometimes being the first to report a finding means more attention - and when someone comes later to refute the claim, the "consensus" has already been reached. No idea how to control for this.

    • Hi Joel,

      Thanks for the comment: that's a great point. I think that the earlier publication date for MR2012 is a problem for inference to the extent that citations to only MR2012 are citations to something about MR2012 that is shared by WC2015, so that there is no need to cite both articles; in that case, the preference for the earlier publication would be reasonable and unrelated to ideology. I'm thinking of something (hypothetically) along the lines of "...some research has assessed sex discrimination in STEM fields (e.g., Moss-Racusin et al. 2012, ...)"; in that case, WC2015 would be a redundant citation alongside MR2012.

      But I think that the earlier publication date for MR2012 would not be a problem for articles that use MR2012 to support a claim about sex discrimination and then don't discuss evidence that contradicts this claim. For example, this publication [https://rethinkingecology.pensoft.net/articles.php?id=24333] cited MR2012 but not WC2015: "...there remains ample gender-bias across scientific disciplines. Experimental evidence shows that scholars tend to rate men-authored writings higher (Knobloch-Westerwick et al. 2013), and that academic scientists tend to favour men for lab-manager positions (Moss-Racusin et al. 2012)". In that case, citing WC2015 would complicate the claim of sex discrimination that MR2012 is cited to support, so the citation of WC2015 would not be redundant with the citation of MR2012.

      Since and including 2016 (up to Aug 24, 2018), there have been 347 citations of MR2012, so it might be useful to review these citations to assess the percentage of citations to only MR2012 in which WC2015 would have been redundant, so that citing only MR2012 might be attributable to the earlier publication of MR2012.

      Another reason for the earlier publication of MR2012 to matter is, as you indicated, that the earlier study might reasonably get more attention for being first. The WC2015 results were discussed at Slate, CNN.com, Inside Higher Ed, and the Nature website, among other outlets, so I'd guess that a lot of the researchers who would be in a position to cite a study on sex discrimination in STEM might be aware of WC2015. MR2012 was discussed in Inside Higher Ed, but I don't think that MR2012 received as much prominent media attention as WC2015 did.

      But, yes, I think that it's correct that the earlier publication of MR2012 is important to consider and that it would plausibly bias upward the number of citations to MR2012 relative to WC2015, to the extent that MR2012 is cited instead of WC2015 because MR2012 was published first.

      • Another confounder might be general paper appeal, in terms of readability, tone, and content. I haven't read the papers but it seems plausible that this might affect the number of citations.

        The only way I can think of to overcome these confounders is to increase the sample size beyond N=1 pairs of contradicting papers. Neither of the discussed issues should be more prominent on either side of the political spectrum, so they should cancel out.

        Fortunately, social science is hard and the data is inherently noisy, so it should be easy to find more such pairs of contradicting papers.

        What do you think?

        • Yes, more pairs that are plausible equivalents would be helpful. The sex discrimination literature might be a good literature to check for equivalent pairs because I sense that there is more of a balance in the direction of results in that literature than in the racial discrimination literature. Audit studies might be good, as well, because there tends to be a relatively standard research design for audit studies. Maybe ideal would be a novel topic in which there were early published results in different directions.

          One problem with finding equivalent pairs of already-published articles is that the decision of what is considered equivalent might be influenced by what is known about the citation patterns to date; the nice thing about the Slate Star Codex post is that it paired WC2015 and MR2012 soon after WC2015 was published, when it was not known whether WC2015 would outperform or underperform MR2012 in citations.

          One concern with a larger-sample statistical analysis, or even with an analysis of pairs, is that ideological asymmetry in published results might mean that the expectation would not necessarily be equal citations for all articles. For example, if the racial discrimination literature has 10 equivalent articles that report evidence of discrimination against Blacks and only 1 equivalent article that reports evidence of discrimination against Whites, then it might be reasonable that, instead of citing all 11 articles each time, researchers cite one anti-Black discrimination article and one anti-White discrimination article to illustrate the mixture of findings in the literature; if that were to happen, then the article showing discrimination against Whites would be cited more often due to the novelty of its results rather than their ideological valence.
