According to a 2018-06-18 "survey roundup" blog post by Karthick Ramakrishnan and Janelle Wong (with a link to the blog post tweeted by Jennifer Lee):

Regardless of the question wording, a majority of Asian American respondents express support for affirmative action, including when it is applied specifically to the context of higher education.

However, a majority of Asian American respondents did not express support for affirmative action in data from the National Asian American Survey 2016 Post-Election Survey [data here, dataset citation: Karthick Ramakrishnan, Jennifer Lee, Taeku Lee, and Janelle Wong. National Asian American Survey (NAAS) 2016 Post-Election Survey. Riverside, CA: National Asian American Survey. 2018-03-03.]

Tables below contain item text from the questionnaire. My analysis sample was limited to participants coded 1 for "Asian American" in the dataset's race variable. The three numeric columns for each item report, respectively: [1] unweighted data; [2] data with the nweightnativity weight applied, described in the dataset as "weighted by race/ethnicity and state, nativity, gender, education (raking method"; and [3] data with the pidadjweight weight applied, described in the dataset as "adjusted for partyID variation by ethnicity in re-interview cooperation rate for". See slides 4 and 14 here for more details on the study methodology.
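As a sketch of what the three columns involve, here is a toy calculation of unweighted and weight-adjusted percent support. The data and the variable names below are hypothetical placeholders, not the actual NAAS variables or weights:

```python
import numpy as np

# Hypothetical toy responses: 1 = respondent supports the policy, 0 = does not.
support = np.array([1, 0, 0, 1, 0])
# Hypothetical survey weights, standing in for nweightnativity or pidadjweight.
weights = np.array([0.8, 1.2, 1.0, 0.9, 1.1])

# Column [1]: unweighted percent support (simple mean of the 0/1 indicator).
unweighted_pct = 100 * support.mean()                       # 40.0

# Columns [2]/[3]: weighted percent support (weighted mean of the indicator).
weighted_pct = 100 * np.average(support, weights=weights)   # 34.0
```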

The table below reports on results for items about opinions of particular racial preferences in hiring and promotion. A majority of Asian American respondents did not support these race-based affirmative action policies:

[Table image: NAAS-Post3]

The next table reports on results for items about opinions of particular uses of race in university admissions decisions. A majority of Asian American respondents did not support these race-based affirmative action policies:

[Table image: NAAS-Post4]

I'm not sure why these post-election data were not included in the 2018-06-18 blog post survey roundup or mentioned in this set of slides. I'm also not sure why the manipulations for the university admissions items include only treatments in which the text suggests that Asian American applicants are advantaged by the consideration of race, rather than also (or instead) including treatments in which the text suggests that Asian American applicants are disadvantaged by the consideration of race, which would arguably have been at least as plausible.

---

Notes:

1. Code to reproduce my analyses is here. Including Pacific Islanders and restricting the Asian American sample to U.S. citizens did not produce majority support for any affirmative action item reported on above or for the sex-based affirmative action item (Q7.2).

2. The survey had a sex-based affirmative action item (Q7.2) and had items about whether the participant, a close relative of the participant, or a close personal friend of the participant was advantaged or was disadvantaged by affirmative action (Q7.8 to Q7.11). For the Asian American sample, support for preferential hiring and promotion of women in Q7.2 was at 46% unweighted and at 44% when either weighting variable was applied.

3. This NAAS webpage indicates a 2017-12-05 date for the pre-election survey dataset, and on 2017-12-06 the @naasurvey account tweeted a blurb about these data being available for download. However, that same NAAS webpage lists a 2018-03-03 date for the post-election survey dataset; I did not see an @naasurvey tweet for that release, and the webpage did not have a link to the post-election data at least as late as 2018-08-16. I tweeted a question about the availability of the post-election data on 2018-08-31, then sent an email, and later found the data available at the webpage. I think that this might be the NSF grant for the post-election survey, which indicated that the data were to be publicly released through ICPSR in June 2017.


[Please see the March 13, 2019 update below]

Studies have indicated that there are more liberals than conservatives in the social sciences (e.g., Rothman et al. 2005, Gross and Simmons 2007). If social scientists on average are more likely to cite publications that support rather than undercut their assumptions about the world and/or are more likely to cite publications that support rather than undercut their policy preferences, then it is reasonable to expect that, all else equal, publications reporting findings that support liberal assumptions or policy preferences will receive a higher number of citations than publications reporting findings that undercut liberal assumptions or policy preferences.

---

Here is a sort-of natural experiment to assess this potential ideological citation bias. From an April 2015 Scott Alexander post at Slate Star Codex (paragraph breaks omitted):

Williams and Ceci just released National Hiring Experiments Reveal 2:1 Faculty Preference For Women On STEM Tenure Track, showing a strong bias in favor of women in STEM hiring...Two years ago Moss-Racusin et al released Science Faculty's Subtle Gender Biases Favor Male Students, showing a strong bias in favor of men in STEM hiring. The methodology was almost identical to this current study, but it returned the opposite result. Now everyone gets to cite whichever study accords with their pre-existing beliefs.

It has been more than three years since that Slate Star Codex post, so let's compare the number of citations received by the article whose finding supports liberal assumptions or policy preferences (Moss-Racusin et al. 2012) to the number received by the article whose finding undercuts them (Williams and Ceci 2015). Both articles were published in the same journal, and both have mixed-sex authorship teams with a woman as first author; these similarities help rule out a few alternative explanations for any difference in citation counts.

Based on Web of Science data collected August 24, 2018, Moss-Racusin et al. 2012 has been cited these numbers of times in the given year, with the number of years from the article's publication year in square brackets:

  • 5 in 2012 [0]
  • 39 in 2013 [1]
  • 74 in 2014 [2]
  • 109 in 2015 [3]
  • 111 in 2016 [4]
  • 131 in 2017 [5]
  • 105 in 2018 to date [6]

Based on Web of Science data collected August 24, 2018, Williams and Ceci 2015 has been cited these numbers of times in the given year, with the number of years from the article's publication year in square brackets:

  • 4 in 2015 [0]
  • 21 in 2016 [1]
  • 27 in 2017 [2]
  • 15 in 2018 to date [3]

So, in the second year from the article's publication year, Williams and Ceci 2015 was cited 27 times, and Moss-Racusin et al. 2012 was cited 74 times. Over the first three years, Williams and Ceci 2015 was cited 52 times, and Moss-Racusin et al. 2012 was cited 118 times.
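The year-offset comparison above can be reproduced with a short script, using the citation counts transcribed from the lists above (the 2018 counts are partial-year):

```python
# Web of Science citation counts indexed by years since publication
# (offset 0 = the article's publication year).
moss_racusin_2012 = [5, 39, 74, 109, 111, 131, 105]  # 2012 through 2018 (partial)
williams_ceci_2015 = [4, 21, 27, 15]                 # 2015 through 2018 (partial)

# Citations in the second year from publication (offset 2): 74 vs. 27.
offset2 = (moss_racusin_2012[2], williams_ceci_2015[2])

# Cumulative citations over the first three years (offsets 0 through 2):
mr_three_years = sum(moss_racusin_2012[:3])   # 118
wc_three_years = sum(williams_ceci_2015[:3])  # 52
citation_gap = mr_three_years - wc_three_years  # 66
```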

---

The potential citation bias against research findings that undercut liberal assumptions or policy preferences might be something that tenure-and-promotion committees should be aware of. Such a citation bias would also be relevant for assessing the status of the journal that research is published in and whether research is even published. Suppose that a journal editor were given a choice of publishing either Moss-Racusin et al. 2012 or Williams and Ceci 2015. Based on the above data, an editor publishing Williams and Ceci 2015 instead of Moss-Racusin et al. 2012 would, three years in, be forfeiting roughly 66 citations to an article in their journal (118 minus 52). Editors who prefer higher impact factors for their journal might therefore prefer to publish a manuscript with research findings that support liberal assumptions or policy preferences, compared to an equivalent manuscript with research findings that undercut liberal assumptions or policy preferences.

---

NOTES

1. Williams and Ceci 2015 was first published earlier in its publication year (April 8, 2015) than Moss-Racusin et al. 2012 was in its publication year (September 17, 2012). Williams and Ceci 2015 therefore had more time to accumulate citations within its publication year, which should bias its citation counts upward relative to Moss-Racusin et al. 2012 in the publication year and at each given year offset.

2. There might be non-ideological reasons for Moss-Racusin et al. 2012 to be enjoying a 2:1 citation advantage over Williams and Ceci 2015, so comments are open for ideas about any such reasons and for other ideas on this topic. The articles have variation in the number of authors—2 for Williams and Ceci 2015, and 5 for Moss-Racusin et al. 2012—but that seems unlikely to me to be responsible for the entire citation difference.

3. Some of my publications might be considered to fall into the category of research findings that undercut liberal assumptions or policy preferences.

---

UPDATE (Nov 30, 2018)

Here is another potential article pair:

The 1996 study about items measuring sexism against women was published earlier and in a higher-ranked journal than the 1999 study about items measuring sexism against men, but there is, to date, an excess of 1,238 citations for the 1996 study, which I suspect cannot be completely attributed to the extra three years in circulation and the journal ranking.

---

UPDATE (Mar 13, 2019)

Lee Jussim noted, before I did, that Moss-Racusin et al. (2012) has been cited much more often than Williams and Ceci (2015) has been (and note the differences between the articles in the inferences drawn). Lee's tweet below is from May 28, 2018:

https://twitter.com/PsychRabble/status/1001250104676929542
