The Peterson et al. 2019 PLOS ONE article "Mitigating gender bias in student evaluations of teaching" reported on an experiment conducted with students across four Spring 2018 courses: an introduction to biology course taught by a female instructor, an introduction to biology course taught by a male instructor, an introduction to American politics course taught by a female instructor, and an introduction to American politics course taught by a male instructor. Students completing evaluations of these teachers were randomly assigned to receive or to not receive a statement about how student evaluations of teachers are often biased against women and instructors of color.

The results clearly indicated that "this intervention improved the SET scores for the female faculty" (p. 8). But that finding alone does not address the mitigation of bias referenced in the article's title because, as the article indicates, "It is also possible that the students with female instructors who received the anti-bias language overcompensated their evaluations for the cues they are given" (p. 8).

---

For the sake of illustration, let's assume that the two American politics teachers were equal to each other and that the two biology teachers were equal to each other; if so, data from the Peterson et al. 2019 experiment for the v19 overall evaluation of teaching item illustrate how the treatment can both mitigate and exacerbate gender bias in student evaluations.

Here are the mean student ratings on v19 for the American politics instructors:

4.65     Male American politics teacher CONTROL

4.17     Female American politics teacher CONTROL

4.58     Male American politics teacher TREATMENT

4.53     Female American politics teacher TREATMENT

So, for the American politics teachers, the control had a 0.49 disadvantage for the female teacher (p=0.02), but the treatment had only a 0.05 disadvantage for the female teacher (p=0.79). But here are the means for the biology teachers:

3.72     Male biology teacher CONTROL

4.02     Female biology teacher CONTROL

3.73     Male biology teacher TREATMENT

4.44     Female biology teacher TREATMENT

So, for the biology teachers, the control had a 0.29 disadvantage for the male teacher (p=0.25), and the treatment had a 0.71 disadvantage for the male teacher (p<0.01).
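For what it's worth, the change in the female-male gap between the control and treatment conditions can be estimated directly with an interaction model instead of with the separate comparisons above. Below is a minimal R sketch of that model, using the replication-data variable names from the Stata code in Note 5; the data file name is hypothetical, and the interaction coefficient estimates only how much the treatment changed the observed gap, not whether that change mitigated or exacerbated true bias.

library(haven)  # assuming the replication data are distributed as a Stata .dta file
dat <- read_dta("peterson_et_al_2019.dta")  # hypothetical file name

# The coefficient on female:treatment estimates the change in the female-male
# gap on v19 between the control and treatment conditions
summary(lm(v19 ~ female * treatment, data = subset(dat, bio == 0)))  # American politics
summary(lm(v19 ~ female * treatment, data = subset(dat, bio == 1)))  # biology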

---

I did not see any data reported on in the PLOS ONE article that can resolve whether the treatment mitigated or exacerbated or did not affect gender bias in the student evaluations of the biology teachers or the American politics teachers. The article's claim about addressing the mitigation of bias is, by my read of the article, rooted in the "decidedly mixed" (p. 2) literature and, in particular, in their reference 5, MacNell et al. 2015. For example, from Peterson et al. 2019:

These effects [from the PLOS ONE experiment] were substantial in magnitude; as much as half a point on a five-point scale. This effect is comparable with the effect size due to gender bias found in the literature [5].

The MacNell et al. 2015 sample was students evaluating assistant instructors for an online course, with sample sizes for the four cells (actual instructor gender X perceived instructor gender) of 8, 12, 12, and 11. That's the basis for "the effect size due to gender bias found in the literature": a non-trivially underpowered experiment with 43 students across four cells evaluating *assistant* instructors in an *online* course.

It seems reasonable that, before college or university departments use the Peterson et al. 2019 treatment, there should be more research to assess whether the treatment mitigates, exacerbates, or does not change gender bias in student evaluations in the situations in which the treatment is used. For what it's worth, based on a million or so Rate My Professors evaluations, the gender difference has been reported to be about 0.13 on a five-point scale, a difference illustrated as the equivalent of 168 additional steps in a 5,117-step day. If the true gender bias in student evaluations were 0.13 units against women, the roughly 0.4-unit or 0.5-unit Peterson et al. 2019 treatment effect would have more than offset that bias and left a larger imbalance against men, thus exacerbating gender bias in student evaluations of teaching.

---

NOTES:

1. Thanks to Dave Peterson for comments.

2. From what I can tell, if the treatment truly mitigated gender bias among students evaluating the biology teachers, that would mean that the male biology teacher truly did a worse job teaching than the female biology teacher did.

3. I created an index combining the v19, v20, and v23 items, which are, respectively, the overall evaluation of teaching, a rating of teaching effectiveness, and the overall evaluation of the course. Here are the mean student ratings on the index for the American politics instructors:

4.56     Male American politics teacher CONTROL

4.21     Female American politics teacher CONTROL

4.36     Male American politics teacher TREATMENT

4.46     Female American politics teacher TREATMENT

So, for the American politics teachers, the control had a 0.35 disadvantage for the female teacher (p=0.07), but the treatment had a 0.10 advantage for the female teacher (p=0.59). But here are the means for the biology teachers:

3.67     Male biology teacher CONTROL

3.90     Female biology teacher CONTROL

3.64     Male biology teacher TREATMENT

4.39     Female biology teacher TREATMENT

So, for the biology teachers, the control had a 0.23 disadvantage for the male teacher (p=0.35), and the treatment had a 0.75 disadvantage for the male teacher (p<0.01).

4. Regarding MacNell et al. 2015 being underpowered, if we use the bottom right cell of MacNell et al. 2015 Table 2 to produce a gender bias estimate of 0.50 standard deviations, the statistical power was 36% for an experiment with 20 student evaluations of instructors who were a woman or a man pretending to be a woman and 23 student evaluations of instructors who were a man or a woman pretending to be a man. If the true effect of gender bias in student evaluations is, say, 0.25 standard deviations, then the MacNell et al. study had a 13% chance of detecting that effect.

R code:

library(pwr)

# Power with the MacNell et al. cell sizes (20 and 23) to detect d = 0.50: about 36%
pwr.t2n.test(n1=20, n2=23, d=0.50, sig.level=0.05)

# Power if the true gender bias effect were d = 0.25: about 13%
pwr.t2n.test(n1=20, n2=23, d=0.25, sig.level=0.05)

5. Stata code:

* Overall evaluation of teaching

ttest v19 if bio==0 & treatment==0, by(female)

ttest v19 if bio==0 & treatment==1, by(female)

ttest v19 if bio==1 & treatment==0, by(female)

ttest v19 if bio==1 & treatment==1, by(female)

* Teaching effectiveness:

ttest v20 if bio==0 & treatment==0, by(female)

ttest v20 if bio==0 & treatment==1, by(female)

ttest v20 if bio==1 & treatment==0, by(female)

ttest v20 if bio==1 & treatment==1, by(female)

* Overall evaluation of the course

ttest v23 if bio==0 & treatment==0, by(female)

ttest v23 if bio==0 & treatment==1, by(female)

ttest v23 if bio==1 & treatment==0, by(female)

ttest v23 if bio==1 & treatment==1, by(female)

 

* Create and check an index of v19, v20, and v23

sum v19 v20 v23

pwcorr v19 v20 v23

factor v19 v20 v23, pcf

gen index = (v19 + v20 + v23)/3

sum index v19 v20 v23

 

ttest index if bio==0 & treatment==0, by(female)

ttest index if bio==0 & treatment==1, by(female)

ttest index if bio==1 & treatment==0, by(female)

ttest index if bio==1 & treatment==1, by(female)


In the 2019 PS: Political Science & Politics article "How Many Citations to Women Is 'Enough'? Estimates of Gender Representation in Political Science", Michelle L. Dion and Sara McLaughlin Mitchell address a question about "the normative standard for the amount women should be cited" (p. 1).

The first proposed Dion and Mitchell 2019 measure is the proportion of female members of the American Political Science Association (APSA) by section and primary field, using data from 2018. According to Dion and Mitchell 2019: "When political scientists compose course syllabi, graduate reading lists, and research bibliographies, these membership data provide guidance about the minimum representation of scholarship by women that should be included to be representative by gender" (p. 3).

But is APSA section membership in 2018 a reasonable benchmark for gender representation in course syllabi that include readings from throughout history?

Hardt et al. 2019 reported on data for readings assigned in the training of political science graduate students. Below are percentages of graduate student readings in these data that had a female first author:

Time Period       Female First Author %

Before 1970       3.5%
1970 to 1979      6.7%
1980 to 1989      11.3%
1990 to 1999      15.7%
2000 to 2009      21.0%
2010 to 2018      24.6%

So the pattern is increasing representation of women over time. If this pattern reflects increasing representation of women over time in APSA section membership or increasing representation of women among the set of researchers whose research interests include the topic of a particular section, then APSA section membership data from 2018 will overstate the percentage of women needed to ensure fair gender representation on syllabi or research bibliographies. For illustrative purposes, if a section had 20% women across the 1990s, 30% women across the 2000s, and 40% women across the 2010s, a fair "section membership" benchmark for gender representation on syllabi would not be 40%; rather, a fair "section membership" benchmark for gender representation on syllabi would be something like 20% women for syllabi readings across the 1990s, 30% women for syllabi readings across the 2000s, and 40% women for syllabi readings across the 2010s.
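To make the point concrete, here is a minimal R sketch of a decade-matched benchmark using the hypothetical membership shares from the paragraph above; the distribution of syllabus readings across decades is also hypothetical and is only for illustration.

# Hypothetical section membership shares of women, by decade
membership_share <- c("1990s" = 0.20, "2000s" = 0.30, "2010s" = 0.40)

# Hypothetical share of syllabus readings drawn from each decade
reading_weight <- c("1990s" = 0.40, "2000s" = 0.35, "2010s" = 0.25)

# Decade-matched benchmark: weight each decade's membership share by the share
# of readings drawn from that decade, instead of applying the latest snapshot
sum(membership_share * reading_weight)  # 0.285, well below the 0.40 snapshot benchmark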

---

Dion and Mitchell 2019 propose another measure that is biased in the same direction and for the same reason: gender distribution of authors by journal from 2007 to 2016 inclusive for available years.

About 68% of readings in the Hardt et al. 2019 graduate training readings data were published prior to 2007: 15% of these pre-2007 readings had a female first author, but 24% of the 2007-2016 readings in the data had a female first author.

Older readings appear in the Hardt et al. 2019 readings data with decent frequency: 42% of readings that had the gender of the first author coded were published before 2000. However, the Dion and Mitchell 2019 measure of journal representation from 2007 to 2016 ignores these older readings, which produces a measure biased in favor of women if fair representation means representation that matches the relevant pool of syllabi-worthy journal articles.
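A quick back-of-the-envelope calculation in R illustrates the direction of the bias, under the simplifying assumption that the remaining roughly 32% of readings have the 24% female-first-author share reported for the 2007-2016 readings:

# Blended female-first-author share of the full reading pool
0.68 * 0.15 + 0.32 * 0.24  # about 0.18, below the 0.24 implied by a 2007-2016 journal benchmark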

---

In a sense, this bias in the Dion and Mitchell 2019 measures might not matter much if the measures are used in the biased manner that Dion and Mitchell 2019 proposed (p. 6):

We remedy this gap by explicitly providing conservative estimates of gender diversity based on organization membership and journal article authorship for evaluating gender representation. Instructors, researchers, and editors who want to ensure that references are representative can reference these as floors (rather than ceilings) for minimally representative citations.

The Dion and Mitchell 2019 suggestion above is that instructors, researchers, and editors who want to ensure that references are representative use a conservative estimate as a floor. Both the conservative nature of the estimate and its use as a floor would produce a bias favoring women, so I'm not sure how that is helpful for instructors, researchers, and editors who want to ensure that references are representative.

---

NOTE:

1. Stata code for the analysis of the Hardt et al. 2019 data:

* Female first author share by publication period

tab female1 if year<1970

tab female1 if year>=1970 & year<1980

tab female1 if year>=1980 & year<1990

tab female1 if year>=1990 & year<2000

tab female1 if year>=2000 & year<2010

tab female1 if year>=2010 & year<2019

 

tab female1

tab female1 if year<2000

* About 0.42: share of readings with a gender-coded first author that were published before 2000
di 36791/87398


"The Gender Readings Gap in Political Science Graduate Training" by Heidi Hardt, Amy Erica Smith, Hannah June Kim, and Philippe Meister was recently published in the Journal of Politics and featured in a Monkey Cage blog post. The Kim Yi Dionne header for the Monkey Cage post indicated that:

Throughout academia, including in political science, women haven't achieved parity with men. As this series explores, implicit bias holds women back at every stage, from the readings professors assign to the student evaluations that influence promotions and pay, from journal publications to book awards.

The abstract to the JOP article indicates that "Introducing a unique data set of 88,673 citations from 905 PhD syllabi and reading lists, we find that only 19% of assigned readings have female first authors". This 19% for assigned readings is lower than the 21.5% of publications in the top three political science journals between 2000 and 2015 (bottom of page 2 of the JOP article). However, the 19% is based on assigned readings published at any time in history, including authors such as Plato and Sun Tzu. My analysis of the data for the article indicated that 22% of assigned readings have female first authors when the assigned readings are limited to those published between 2000 and 2015 inclusive. The top three publications benchmark therefore produces an estimate of the gender readings gap in political science graduate training for 2000-2015 publications that is less than one percentage point and that trivially advantages women.

Figure 1 in the Hardt et al. JOP article reports percentages by subfield, with benchmarks for published top works, which I think are articles in top 10 journals; the first and third numeric columns in the table below are data reported in Figure 1. Using the benchmark for published top works, my analysis limiting the assigned readings to those published between 2000 and 2015 inclusive (the middle numeric column) produced a difference greater than 1 percentage point that disadvantaged female first authors for only one of the five subfields with benchmark data (comparative politics):

Topic                 % Female 1st Author     % Female 1st Author     % Female 1st Author
                      Readings (All Time)     Readings (2000-2015)    Top Pubs (2000-2015)

Methodology           11.57                   13.64                   11.36
Political Economy     16.75                   18.03                   NA
American              15.66                   18.46                   19.07
Comparative           20.55                   23.26                   28.76
IR                    19.96                   23.41                   22.42
Theory                25.05                   31.58                   29.39

For an example topic most relevant to my work, the Hardt et al. Figure 1 gender gap for American politics is 3.41 percentage points (15.66 compared to 19.07), but falls to 0.61 percentage points (18.46 compared to 19.07) when the time frame of the assigned readings is set to the 2000-2015 time frame of the top publications benchmark. Invocation of an implicit bias that holds back women might be premature if the data indicate a gap of less than 1 percentage point in an analysis that does not include relevant control variables such as any gender gap in how "syllabus-worthy" the publications within the set of top publications are. The 5.50 percentage point gender gap for comparative politics might be large enough to consider implicit bias in that subfield, but that's a localized concern.
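For readers who want to check the subfield comparisons, the gaps can be recomputed in R from the table values above (Political Economy is omitted because no benchmark is reported):

# Top-publications benchmark minus 2000-2015 readings share, in percentage points
readings <- c(Methodology = 13.64, American = 18.46, Comparative = 23.26, IR = 23.41, Theory = 31.58)
top_pubs <- c(Methodology = 11.36, American = 19.07, Comparative = 28.76, IR = 22.42, Theory = 29.39)
round(top_pubs - readings, 2)
# Comparative is the only subfield with a gap above 1 point disadvantaging female
# first authors (5.50); Methodology, IR, and Theory readings exceed the benchmark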

---

NOTES

1. [*] The post title alludes to this tweet.

2. The only first authors coded female before 1776 are Titus Livy and Sun Tzu (tab surname1 if female1==1 & year<1776).

3. Code below:

* Insert this command into the Hardt et al. do file after Line 11 ("use 'Hardt et al. JOP_Replication data.dta', clear"):
keep if year>=2000 & year<=2015

* Insert these commands into the Hardt et al. do file after new Line 124 ("tab1 gender1 if gender1 < 3 [aweight=wt] // THE TOPLINE RESULTS WE REPORT EXCLUDE THOSE 304 OBSERVATIONS"):
tab1 gender1 if gender1 < 3 [aweight=wt] // This should report 21.86%
tab1 gender1 if gender1 < 3 // This should report 22.20%

* Insert this command into the Hardt et al. do file before new Line 184 ("restore"):
tab topic mn

* Run the Hardt+et+al.+JOP_Replication+code-1.do file until and including new Line 126 ("tab1 gender1 if gender1 < 3 // This should report 22.20%"). These data indicate that, of first authors coded male or female, about 22% were female.

* Run new Line 127 to new Line 184 ("tab topic mn"). Line 184 should output data for the middle column in the table in this post. See the "benchmark_teelethelen" lines for data for the right column in the table.


I drafted a manuscript entitled "Six Things Peer Reviewers Can Do To Improve Political Science". It was rejected once in peer review, so I'll post at least some of the ideas to my blog. This first blog post is about comments on the Valentino et al. 2018 "Mobilizing Sexism" Public Opinion Quarterly article. I sent this draft of the manuscript to Valentino et al. on June 11, 2018, limited to the introduction and parts that focus on Valentino et al. 2018; the authors emailed me back comments on June 12, 2018, which Dr. Valentino asked me to post and that I will post after my discussion.

1. Unreported tests for claims about group differences

Valentino et al. (2018) report four hypotheses, the second of which is:

Second, compared to recent elections, the impact of sexism should be larger in 2016 because an outwardly feminist, female candidate was running against a male who had espoused disdain for women and the feminist project (pp. 219-220).

Here is the discussion of their Study 2 results in relation to that expectation:

The pattern of results is consistent with expectations, as displayed in table 2. Controlling for the same set of predispositions and demographic variables as in the June 2016 online study, sexism was significantly associated with voting for the Republican candidate only in 2016 (b = 1.69, p < .05) (p.225).

However, as Gelman and Stern 2006 observed, "comparisons of the sort, 'X is statistically significant but Y is not,' can be misleading" (p. 331). In Table 2 of Valentino et al. 2018, the sexism predictor in the 2016 model had a logit coefficient of 1.69 and a standard error of 0.81, and the p-value under .05 for this sexism predictor provides information about only whether the 2016 sexism coefficient differs from zero; this p-value under .05 does not indicate whether, at p<.05, the 2016 sexism coefficient differs from the imprecisely estimated sexism coefficients of 0.23, 0.94, and 0.34 for 2012, 2008, and 2004. That difference in coefficients between sexism in 2016 and sexism in the other years is what would be needed to test the second hypothesis about the impact of sexism being larger in 2016.
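For what it's worth, a test of that difference is straightforward when the yearly samples are independent, as with separate ANES cross-sections. Below is a minimal R sketch, not a reanalysis of the Valentino et al. data: the 2016 coefficient and standard error are reported above, but the standard errors for the earlier years are placeholders because they are not reported in this post.

# Approximate two-sided test of the difference between two coefficients from
# independent samples: z = (b1 - b2) / sqrt(se1^2 + se2^2)
coef_diff_p <- function(b1, se1, b2, se2) {
  z <- (b1 - b2) / sqrt(se1^2 + se2^2)
  2 * pnorm(-abs(z))
}

coef_diff_p(b1 = 1.69, se1 = 0.81, b2 = 0.23, se2 = NA)  # replace NA with the 2012 standard error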

2. No summary statistics reported for a regression-based inference about groups

Valentino et al. 2018 Table 2 indicates that, compared to lower levels of participant modern sexism, higher levels of participant modern sexism associate with a greater probability of a participant's reported vote for Donald Trump in 2016. But the article does not report the absolute mean levels of modern sexism among Trump voters or Clinton voters. These absolute mean levels are in the figure below, limited to participants in face-to-face interviews (per Valentino et al. 2018 footnote 8):

Results in the above image indicate that the mean response across Trump voters represented beliefs:

  • that the news media should pay the same amount of attention to discrimination against women that they have been paying lately;
  • that, when women complain about discrimination, they cause more problems than they solve less than half the time;
  • and that, when women demand equality these days, less than half of the time they are actually seeking special favors.

These don't appear to be obviously sexist beliefs, in the sense that I am not aware of evidence that the beliefs incorrectly or unfairly disadvantage or disparage women or men, but comments are open below if you know of evidence or have an argument that the mean Trump voter response is sexist for any of these three items. Moreover, it's not clear to me that sexism can be inferred based on measures about only one sex; if, for instance, a participant believes that, when women complain about discrimination, they cause more problems than they solve, and the participant also believes that, when men complain about discrimination, they cause more problems than they solve, then it does not seem reasonable to code that person as a sexist, without more information.

---

Response from Valentino et al.

Here is the response that I received from Valentino et al.

1) Your first concern was that we did not discuss one of the conditions in our MTurk study, focusing on disgust. The TESS reference is indeed the same study. However, we did not report results from the disgust condition because we did not theorize about disgust in this paper. Our theory focuses on the differential effects of fear vs. anger. We are in fact quite transparent throughout, indicating where predicted effects are non-significant. We also include a lengthy appendix with several robustness checks, etc. 

2) We never claim all Trump voters are sexist. We do claim that in 2016 gender attitudes are a powerful force, and more conservative scores on these measures significantly increase the likelihood of voting for Trump. The evidence from our work and several other studies supports this simple claim handsomely. Here is a sample of other work that replicates the basic finding in regarding the power of sexism in the 2016 election. Many of these studies use ANES data, as we do, but there are also several independent replications using different datasets. You might want to reference them in your paper. 

Blair, K. L. (2017). Did Secretary Clinton lose to a 'basket of deplorables'? An examination of Islamophobia, homophobia, sexism and conservative ideology in the 2016 US presidential election. Psychology & Sexuality, 8(4), 334-355.

Bock, J., Byrd-Craven, J., & Burkley, M. (2017). The role of sexism in voting in the 2016 presidential election. Personality and Individual Differences, 119, 189-193.

Bracic, A., Israel-Trummel, M., & Shortle, A. F. (2018). Is sexism for white people? Gender stereotypes, race, and the 2016 presidential election. Political Behavior, 1-27.

Cassese, E. C., & Barnes, T. D. (2018). Reconciling Sexism and Women's Support for Republican Candidates: A Look at Gender, Class, and Whiteness in the 2012 and 2016 Presidential Races. Political Behavior, 1-24.

Cassese, E., & Holman, M. R. Playing the woman card: Ambivalent sexism in the 2016 US presidential race. Political Psychology.

Frasure-Yokley, L. (2018). Choosing the Velvet Glove: Women Voters, Ambivalent Sexism, and Vote Choice in 2016. Journal of Race, Ethnicity and Politics, 3(1), 3-25.

Ratliff, K. A., Redford, L., Conway, J., & Smith, C. T. (2017). Engendering support: Hostile sexism predicts voting for Donald Trump over Hillary Clinton in the 2016 US presidential election. Group Processes & Intergroup Relations, 1368430217741203.

Schaffner, B. F., MacWilliams, M., & Nteta, T. (2018). Understanding white polarization in the 2016 vote for president: The sobering role of racism and sexism. Political Science Quarterly, 133(1), 9-34.

3) We do not statistically compare the coefficients across years, but neither do we claim to do so. We claim the following:

"Controlling for the same set of predispositions and demographic variables as in the June 2016 online study, sexism was significantly associated with voting for the Republican candidate only in 2016 (b = 1.69, p < .05). ...In conclusion, evidence from two nationally representative surveys demonstrates sexism to be powerfully associated with the vote in the 2016 election, for the first time in at least several elections, above and beyond the impact of other typically influential political predispositions and demographic characteristics."

Therefore, we predict (and show) sexism was a strong predictor in 2016 but not in other years. Our test is also quite conservative, since we include in these models all manner of predispositions that are known to be correlated with sexism. In Table 2, the confidence interval around our 2016 estimate for sexism in these most conservative models contains the estimate for 2008 in that analysis, and is borderline for 2004 and 2012, where the impact of sexism was very close to zero. However, the bivariate logit relationships between sexism and Trump voting are much more distinct, with 2016 demonstrating a significantly larger effect than the other years. These results are easy to produce with ANES data.

---

Regarding the response from Valentino et al.:

1. My concern is that the decision about what to focus on in a paper is influenced by the results of the study. If a study has a disgust condition, then a description of the results of that disgust condition should be reported when results of that study are reported; otherwise, selective reporting of conditions could bias the literature.

2. I'm not sure that anything in their point 2 addresses anything in my manuscript.

3. I realize that Valentino et al. 2018 did not report or claim to report results for a statistical test comparing the sexism coefficient in 2016 to sexism coefficients in prior years. But that reflects my criticism: that, for the hypothesis that "compared to recent elections, the impact of sexism should be larger in 2016…" (Valentino et al. 2018: 219-220), the article should have reported a statistical test to assess the evidence that the sexism coefficient in 2016 was different from the sexism coefficients in prior recent elections.

---

NOTE

Code for the figure.


The Monkey Cage published a post by Dawn Langan Teele and Kathleen Thelen: "Some of the top political science journals are biased against women. Here's the evidence." The evidence presented for the claim of bias appears to be that women represent a larger percentage of the political science discipline than of authors in top political science journals. But that doesn't mean that the journals are biased against women, and the available data that I am aware of also do not indicate that the journals are biased against women:

1. Discussing data from World Politics (1999-2004), International Organization (2002), and Comparative Political Studies and International Studies Quarterly (three undisclosed years), Breuning and Sanders 2007 reported that "women fare comparatively well and appear in each journal at somewhat higher rates than their proportion among submitting authors" (p. 350).

2. Data for the American Journal of Political Science reported by Rick Wilson here indicated that 32% of submissions from 2010 to 2013 had at least one female author and 35% of accepted articles had at least one female author.

3. Based on data from 1983 to 2008 in the Journal of Peace Research, Østby et al. 2013 reported that: "If anything, female authors are more likely to be selected for publication [in JPR]".

4. Data below from Ishiyama 2017 for the American Political Science Review from 2012 to 2016 indicate that women served as first author for 27% of submitted manuscripts and 25% of accepted manuscripts.

[Figure: APSR submission and acceptance data from Ishiyama 2017]

---

In this naive analysis, the data across the four points above do not indicate that these journals or the corresponding peer reviewers are biased against women. Of course, causal identification of bias would require a more representative sample beyond the largely volunteered data above and would require, for claims of bias among peer reviewers, statistical control for the quality of submissions and, for claims of bias at the editor level, statistical control for peer reviewer recommendations; analyses would get even more complicated when accounting for the possibility that editor bias can influence peer reviewer selection, which can make the process easier or more difficult for a submission than would occur with unbiased assignment to peer reviewers.

Please let me know if you are aware of any other relevant data for political science journals.

---

NOTE

1. The authors of the Monkey Cage post have an article that cites Breuning and Sanders 2007 and Østby et al. 2013, but these data were not mentioned in the Monkey Cage post.


Based on a sample of undergraduate students at a university in Texas, Anderson et al. 2009 reported (p. 216) that:

Contrary to popular beliefs, feminists reported lower levels of hostility toward men than did nonfeminists.

But this stereotype-inconsistent pattern was based on a coding of "feminist" that reflected whether a participant had defined "feminist" "in a way consistent with our operational definition of feminism" (p. 220) and not on whether the participant self-identified as a feminist, a self-identification for which the researchers had data.

---

I assessed claims about self-identified feminists' views of men using data from the ANES 2016 Time Series Study national sample. My first predictor was a dichotomous measure of sex, coded 1 for female and 0 for male. My second predictor was feminist identification, coded 1 for a participant who identified as a feminist or strong feminist in variable V161345.

The best available measures in the dataset for constructing a measure of negative attitudes toward men were the perceived levels of discrimination against men and against women in the United States (V162363 and V162362, respectively). I coded participants as 1 on a dichotomous variable if the participant indicated "none at all" for the amount of discrimination against men in the United States but indicated a nonzero level of discrimination against women in the United States. Denial of discrimination is a plausible measure of negative attitudes toward a group that faces discrimination, and there is statistical evidence that men in the United States face discrimination in areas such as criminal sentencing (e.g., Doerner 2012 and Starr 2015); moreover, men are formally excluded from certain opportunities, such as opportunities at the NSF-funded Visions in Methodology conference.

---

In weighted regressions, 37% of nonfeminist women reported no discrimination against men and a nonzero level of discrimination against women, compared to 46% of feminist women, with a p-value of 0.002 for the 9 percentage-point difference. However, the gap between feminist men and nonfeminist men was 20 percentage points, with 28% of nonfeminist men reporting no discrimination against men and a nonzero level of discrimination against women, compared to 48% of feminist men, with a p-value less than 0.001 for the difference. Feminist identification was thus associated with an 11 percentage-point larger difference in anti-male attitudes for men than for women, with a p-value of 0.012 for the difference.

Output for the interaction model is below:

[Figure: regression output for the denialDM interaction model]
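Below is a minimal R sketch of how the denialDM measure and the interaction model described above might be constructed; it is not the posted Stata code. The variable names V162363, V162362, and V161345 come from the text, but the value codes (including which code means "none at all"), the handling of missing-data codes, and the sex and weight variable names are assumptions.

library(haven)
anes <- read_dta("anes_timeseries_2016.dta")  # hypothetical file name

# denialDM = 1 if "none at all" discrimination against men but a nonzero level
# of discrimination against women (assumed coding: 5 = none at all, 1-4 = some)
anes$denialDM <- with(anes, ifelse(V162363 == 5 & V162362 %in% 1:4, 1, 0))

# feminist = 1 for identification as a feminist or strong feminist (assumed value codes)
anes$feminist <- with(anes, ifelse(V161345 %in% c(1, 2), 1, 0))

# 'female' and 'svyweight' are placeholders for the dataset's sex and weight variables
summary(lm(denialDM ~ feminist * female, data = anes, weights = svyweight))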

---

NOTES

1. My Stata code is here. ANES 2016 Time Series Study data is available here.

2. The denialDM output variable is dichotomous, but estimates and inferences do not change if logit is used instead of linear regression.

3. The dataset has another question (V161346) that asked participants how well "feminist" described them, on a 5-point scale (extremely well, very well, somewhat well, not very well, and not at all); inferences are the same using that measure. Inferences are also the same using V161345 to make a 3-part feminist measure coded from non-feminist to strong feminist. See the Stata code.

4. Hat tip to Nathaniel Bechhofer, who retweeted this tweet, which led to this post.


According to its website, Visions in Methodology "is designed to address the broad goal of supporting women who study political methodology" and "serves to connect women in a field where they are under-represented." The Call for Proposals for the 2017 VIM conference indicates that submissions were restricted to women:

We invite submissions from female graduate students and faculty that address questions of measurement, causal inference, the application of advanced statistical methods to substantive research questions, as well as the use of experimental approaches (including incentivized experiments)...Please consider applying, or send this along to women you believe may benefit from participating in VIM!

Here is the program for the 2016 VIM conference, which lists activities restricted to women, lists conference participants (which appear to be only women), and has a photo that appears to be from the conference (which appears to have only women in the photo).

The 2017 VIM conference webpage indicates that the conference is sponsored by several sources such as the National Science Foundation and the Stony Brook University Graduate School. But page 118 of the NSF's Proposal & Award Policies & Procedures Guide (PAPPG) of January 2017 states:

Subject to certain exceptions regarding admission policies at certain religious and military organizations, Title IX of the Education Amendments of 1972 (20 USC §§ 1681-1686) prohibits the exclusion of persons on the basis of sex from any education program or activity receiving Federal financial assistance.  All NSF grantees must comply with Title IX.

The VIM conference appears to be an education program or activity receiving Federal financial assistance and, as such, submissions and conference participation should not be restricted by sex.

---

NOTES:

1. This Title IX Legal Manual discusses what constitutes an education program or activity:

While Title IX's antidiscrimination protections, unlike Title VI's, are limited in coverage to "education" programs or activities, the determination as to what constitutes an "education program" must be made as broadly as possible in order to effectuate the purposes of both Title IX and the CRRA. Both of these statutes were designed to eradicate sex-based discrimination in education programs operated by recipients of federal financial assistance, and all determinations as to the scope of coverage under these statutes must be made in a manner consistent with this important congressional mandate.

2. I think that the relevant NSF award is SES 1324159, which states that part of the project will "continue a series of small meetings for women methodologists that deliberately mix senior leaders in the subfield with young, emerging scholars who can benefit substantially from such close personal interaction." This page indicates that the 2014 VIM conference received support from NSF grant SES 1120976.

---

UPDATE [June 20, 2019]

I learned from a National Science Foundation representative of a statute (42 U.S. Code § 1885a) that permits the National Science Foundation to fund women-only activities listed in the statute. However, the Visions in Methodology conference has been funded by host organizations such as Stony Brook University, and I have not yet uncovered any reason why host institutions covered by Title IX would not be in violation of Title IX in funding single-sex educational opportunities.
