The Monkey Cage published a post by Dawn Langan Teele and Kathleen Thelen: "Some of the top political science journals are biased against women. Here's the evidence." The evidence presented for the claim of bias appears to be that women represent a larger percentage of the political science discipline than of authors in top political science journals. But that gap alone does not establish that the journals are biased against women, and the available data that I am aware of also do not indicate such a bias:

1. Discussing data from World Politics (1999-2004), International Organization (2002), and Comparative Political Studies and International Studies Quarterly (three undisclosed years), Breuning and Sanders 2007 reported that "women fare comparatively well and appear in each journal at somewhat higher rates than their proportion among submitting authors" (p. 350).

2. Data for the American Journal of Political Science reported by Rick Wilson here indicated that 32% of submissions from 2010 to 2013 had at least one female author and 35% of accepted articles had at least one female author.

3. Based on data from 1983 to 2008 in the Journal of Peace Research, Østby et al. 2013 reported that: "If anything, female authors are more likely to be selected for publication [in JPR]".

4. Data below from Ishiyama 2017 for the American Political Science Review from 2012 to 2016 indicate that women served as first author for 27% of submitted manuscripts and 25% of accepted manuscripts.

[Table from Ishiyama 2017: APSR submitted and accepted manuscripts by gender of first author, 2012-2016]

---

In this naive analysis, the data across the four points above do not indicate that these journals or their peer reviewers are biased against women. Of course, causal identification of bias would require a more representative sample than the largely volunteered data above. It would also require, for claims of bias among peer reviewers, statistical control for the quality of submissions and, for claims of bias at the editor level, statistical control for peer reviewer recommendations. Analyses would get even more complicated once we account for the possibility that editor bias can influence the selection of peer reviewers, which can make the review process easier or harder for a given manuscript than it would be under unbiased reviewer assignment.

Please let me know if you are aware of any other relevant data for political science journals.

---

NOTE

1. The authors of the Monkey Cage post have an article that cites Breuning and Sanders 2007 and Østby et al. 2013, but these data were not mentioned in the Monkey Cage post.


Based on a sample of undergraduate students at a university in Texas, Anderson et al. 2009 reported (p. 216) that:

Contrary to popular beliefs, feminists reported lower levels of hostility toward men than did nonfeminists.

But this stereotype-inconsistent pattern was based on a coding of "feminist" that reflected whether a participant had defined "feminist" "in a way consistent with our operational definition of feminism" (p. 220), not on whether the participant self-identified as a feminist, a self-identification for which the researchers had data.

---

I assessed claims about self-identified feminists' views of men using data from the ANES 2016 Time Series Study national sample. My first predictor was a dichotomous measure of sex, coded 1 for female and 0 for male. My second predictor was a dichotomous measure of feminist self-identification, coded 1 for a participant who identified as a feminist or strong feminist in variable V161345.

The best available measures in the dataset for constructing an indicator of negative attitudes toward men were the items on perceived levels of discrimination against men and against women in the United States (V162363 and V162362, respectively). I coded participants as 1 on a dichotomous variable if the participant reported "none at all" for the amount of discrimination against men in the United States but reported a nonzero level of discrimination against women in the United States. Denial of discrimination is a plausible measure of negative attitudes toward a group that faces discrimination, and there is statistical evidence that men in the United States face discrimination in areas such as criminal sentencing (e.g., Doerner 2012 and Starr 2015); moreover, men are formally excluded from certain opportunities, such as participation in the NSF-funded Visions in Methodology conference.
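
A minimal Stata sketch of this variable construction is below. It is an illustration rather than a reproduction of my posted code (linked in the notes): the dataset file name, the gender and weight variables, and all value codes are assumptions that should be checked against the ANES 2016 codebook.

    * Illustrative variable construction (file name, gender item, and value codes
    * are assumptions; verify against the ANES 2016 codebook)
    use anes_timeseries_2016.dta, clear

    * Predictor 1: sex, coded 1 for female and 0 for male
    * (V161342 assumed to be the self-reported gender item, with 1 = male, 2 = female)
    gen female = .
    replace female = 1 if V161342 == 2
    replace female = 0 if V161342 == 1

    * Predictor 2: self-identified feminist from V161345
    * (assumed codes: 1 = strong feminist, 2 = feminist, 3 = not a feminist)
    gen feminist = .
    replace feminist = 1 if inlist(V161345, 1, 2)
    replace feminist = 0 if V161345 == 3

    * Outcome: denial of discrimination against men, from V162363 (men) and V162362 (women)
    * (assumed 5-point scale with 1 = "a great deal" ... 5 = "none at all")
    gen denialDM = 0 if inrange(V162363, 1, 5) & inrange(V162362, 1, 5)
    replace denialDM = 1 if V162363 == 5 & inrange(V162362, 1, 4)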

---

In weighted regressions, 37% of nonfeminist women reported no discrimination against men alongside a nonzero level of discrimination against women, compared to 46% of feminist women, a 9 percentage-point difference (p=0.002). The corresponding gap between feminist and nonfeminist men was 20 percentage points: 28% of nonfeminist men reported this pattern, compared to 48% of feminist men (p<0.001). Feminist identification was thus associated with an 11 percentage-point larger difference in anti-male attitudes among men than among women (p=0.012 for the difference).

Output for the interaction model is below:

[Output for the weighted interaction model with outcome variable denialDM]
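
For readers who want to see the model structure, here is a rough Stata sketch of a weighted interaction model of this kind, using the illustrative variables constructed above; the weight variable (V160102, assumed to be the post-election survey weight) and the robust variance option are assumptions rather than details taken from my posted code.

    * Weighted linear probability model with a female x feminist interaction
    * (V160102 assumed to be the post-election survey weight)
    regress denialDM i.female##i.feminist [pweight=V160102], vce(robust)

    * Weighted percentages of the outcome for the four groups
    margins female#feminist

    * Feminist vs. nonfeminist gap among men (female = 0 is the reference category)
    lincom 1.feminist

    * Feminist vs. nonfeminist gap among women
    lincom 1.feminist + 1.female#1.feminist

    * Difference between the two gaps (the interaction term)
    lincom 1.female#1.feminist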

---

NOTES

1. My Stata code is here. ANES 2016 Time Series Study data is available here.

2. The denialDM outcome variable is dichotomous, but estimates and inferences do not change if logit is used instead of linear regression.

3. The dataset has another question (V161346) that asked participants how well "feminist" described them, on a 5-point scale (extremely well, very well, somewhat well, not very well, and not at all); inferences are the same using that measure. Inferences are also the same using V161345 to make a 3-part feminist measure coded from non-feminist to strong feminist. See the Stata code.

4. Hat tip to Nathaniel Bechhofer, who retweeted this tweet, which led to this post.


I had a recent Twitter exchange about a Monkey Cage post.

Below, I use statistical power calculations to explain why the Ahlquist et al. paper, or at least the list experiment analysis cited in the Monkey Cage post, is not compelling.

---

Discussing the paper (published version here), Henry Farrell wrote:

So in short, this research provides exactly as much evidence supporting the claim that millions of people are being kidnapped by space aliens to conduct personally invasive experiments on, as it does to support Trump's claim that millions of people are engaging in voter fraud.

However, a survey with a sample size of three would also not be able to differentiate the percentage of U.S. residents who commit vote fraud from the percentage of U.S. residents abducted by aliens. For studies that produce a null result, it is necessary to assess the ability of the study to detect an effect of a particular size, to get a sense of how informative that null result is.

The Ahlquist et al. paper has a footnote [31] that can be used to estimate the statistical power for their list experiments: more than 260,000 total participants would be needed for a list experiment to have 80% power to detect a 1 percentage point difference between treatment and control groups, using an alpha of 0.05. The power calculator here indicates that the corresponding estimated standard deviation is at least 0.91 [see note 1 below].
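
Stata's power command can reproduce this footnote-based calculation; the sketch below uses the same inputs (a 1 percentage-point difference, a standard deviation of 0.91, and an alpha of 0.05) and returns roughly 130,000 participants per group, or about 260,000 in total.

    * Sample size needed for 80% power to detect a 1 percentage-point difference
    * between list-experiment treatment and control means (sd = 0.91, alpha = 0.05)
    power twomeans 0 0.01, sd(0.91) power(0.8) alpha(0.05)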

So let's assume that list experiment participants are truthful and that we combine the 1,000 participants from the first Ahlquist et al. list experiment with the 3,000 participants from the second Ahlquist et al. list experiment, so that we'd have 2,000 participants in the control sample and 2,000 participants in the treatment sample. Statistical power calculations using an alpha of 0.05 and a standard deviation of 0.91 (sketched in code after the list below) indicate that there is:

  • a 5 percent chance of detecting a 1% rate of vote fraud.
  • an 18 percent chance of detecting a 3% rate of vote fraud.
  • a 41 percent chance of detecting a 5% rate of vote fraud.
  • a 79 percent chance of detecting an 8% rate of vote fraud.
  • a 94 percent chance of detecting a 10% rate of vote fraud.
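
The power values in the list above can be approximated with the same command, holding the total sample size at 4,000 (2,000 per group) and varying the assumed rate of vote fraud:

    * Power to detect various true rates of vote fraud with 2,000 participants
    * per group (total n = 4,000), sd = 0.91, alpha = 0.05
    foreach delta in 0.01 0.03 0.05 0.08 0.10 {
        power twomeans 0 `delta', sd(0.91) alpha(0.05) n(4000)
    }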

---

Let's return to the claim that millions of U.S. residents committed vote fraud and use 5 million for the number of adult U.S. residents who committed vote fraud in the 2016 election, eliding the difference between illegal votes and illegal voters. There are roughly 234 million adult U.S. residents (reference), so 5 million vote fraudsters would be 2.1% of the adult population, and a 4,000-participant list experiment would have about an 11 percent chance of detecting that 2.1% rate of vote fraud.
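
The same approach covers this scenario: 5 million of roughly 234 million adults is about 2.1%, and the power to detect a difference of that size with 4,000 participants comes out near 11 percent.

    * 5 million presumed vote fraudsters among roughly 234 million adult U.S. residents
    display 5/234                 // about 0.021, i.e., 2.1%

    * Power to detect a 2.1 percentage-point difference with 4,000 total participants
    power twomeans 0 0.021, sd(0.91) alpha(0.05) n(4000)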

Therefore, if 5 million adult U.S. residents really did commit vote fraud, a list experiment with the sample size of the pooled Ahlquist et al. 2014 list experiments would produce a statistically-significant detection of vote fraud about 1 of every 9 times the list experiment was conducted. The fact that Ahlquist et al. 2014 didn't detect voter impersonation at a statistically-significant level doesn't appear to compel any particular belief about whether the rate of voter impersonation in the United States is large enough to influence the outcome of presidential elections.

---

NOTES

1. Enter 0.00 for mu1, 0.01 for mu2, 0.91 for sigma, 0.05 for alpha, and a 130,000 sample size for each sample; then hit Calculate. The power will be 0.80.

2. I previously discussed the Ahlquist et al. list experiments here and here. The second link indicates that an Ahlquist et al. 2014 list experiment did detect evidence of attempted vote buying.
