I posted to OSF data, code, and a report for my unpublished "Public perceptions of human evolution as explanations for racial group differences" [sic] project, based on a survey that YouGov ran for me in 2017 using funds from Illinois State University New Faculty Start-up Support and the Illinois State University College of Arts and Sciences. The report describes results from preregistered analyses, but below I'll highlight key results.

---

The key item asked participants whether God's design and/or evolution, or neither, helped cause a particular racial difference:

Some racial groups have [...] compared to other racial groups. Select ALL of the reasons below that you think help cause this difference:
□ Differences in how God designed these racial groups
□ Genetic differences that evolved between these racial groups
○ None of the above

Participants were randomly assigned to receive one racial difference in the part of the item marked [...] above. Below are the racial differences asked about, along with the percentage of participants assigned to that version who selected only the "evolved" response option:

70% a greater risk for certain diseases
55% darker skin on average
54% more Olympic-level runners
49% different skull shapes on average
26% higher violent crime rates on average
24% higher math test scores on average
21% lower math test scores on average
18% lower violent crime rates on average

---

Another item on the survey (discussed at this post) asked about evolution. The reports that I posted for these items omit most or all of the discussion and citation of literature from the manuscripts that I had submitted to journals but that were rejected, in case I can use that material for a later manuscript.

---

Social Forces published Wetts and Willer 2018 "Privilege on the Precipice: Perceived Racial Status Threats Lead White Americans to Oppose Welfare Programs", which indicated that:

Descriptive statistics suggest that whites' racial resentment rose beginning in 2008 and continued rising in 2012 (figure 2)...This pattern is consistent with our reasoning that 2008 marked the beginning of a period of increased racial status threat among white Americans that prompted greater resentment of minorities.

Wetts and Willer 2018 had analyzed data from the American National Election Studies, so I was curious about the extent to which the rise in Whites' racial resentment might be due to differences in survey mode, given evidence from the Abrajano and Alvarez 2019 study of ANES data that:

We find that respondents tend to underreport their racial animosity in interview-administered versus online surveys.

---

I didn't find a way to reproduce the exact results from Wetts and Willer 2018 Supplementary Table 1 for the rise in Whites' racial resentment, but, as in that table, my analysis controlled for gender, age, education, employment status, marital status, class identification, income, and political ideology.

Using the ANES Time Series Cumulative Data File with weights for the full samples, my analysis detected p<0.05 evidence of a rise in Whites' mean racial resentment from 2008 to 2012, which matches Wetts and Willer 2018; this held both net of controls and without controls. But the p-values were around p=0.22 for the change from 2004 to 2008.

But using weights for the full samples compares 2004 and 2008 respondents, all of whom were interviewed face-to-face, with 2012 respondents, some of whom were interviewed face-to-face and some of whom completed the survey in the internet mode.

Using weights only for the face-to-face mode, no p-value fell under p=0.25 for the change in Whites' mean racial resentment from 2004 to 2008 or from 2008 to 2012, whether net of controls or without controls. The point estimates for the 2008-to-2012 change were negative, indicating, if anything, a drop in Whites' mean racial resentment.
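
Here is a minimal sketch of that face-to-face-only comparison; all variable names below (and the controls macro) are placeholders, not the actual ANES cumulative-file names:

* Sketch only: psu, strata, wt_ftf, resent, white, and $controls are placeholders.
svyset psu [pweight=wt_ftf], strata(strata)
svy, subpop(white): regress resent ib2004.year $controls
lincom 2012.year - 2008.year   // the 2008-to-2012 change in mean resentment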

---

NOTES

1. For what it's worth, the weighted analyses indicated that Whites' mean racial resentment wasn't higher in 2008, 2012, or 2016, relative to 2004, and there was evidence at p<0.05 that Whites' mean racial resentment was lower in 2016 than in 2004.

2. Abrajano and Alvarez 2019, discussing their Table 2 results for feeling thermometer ratings about groups, indicated that (p. 263):

It is also worth noting that the magnitude of survey mode effects is greater than those of political ideology and gender, and nearly the same as partisanship.

I was a bit skeptical that the difference in ratings about groups such as Blacks and illegal immigrants would be larger by survey mode than by political ideology, so I checked Table 2.

The feeling thermometer that Abrajano and Alvarez 2019 discussed immediately before the sentence quoted above involved illegal immigrants; that analysis had a coefficient of -2.610 for internet survey mode, but coefficients of 6.613 for Liberal, -1.709 for Conservative, 6.405 for Democrat, and -8.247 for Republican. So the liberal/conservative difference is 8.322 and the Democrat/Republican difference is 14.652, compared to a survey mode coefficient of -2.610.
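
A quick Stata check of that arithmetic:

display "liberal-vs.-conservative gap: " 6.613 - (-1.709)
display "Democrat-vs.-Republican gap:  " 6.405 - (-8.247)
display "internet-mode coefficient:    " -2.610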

3. Dataset: American National Election Studies. 2021. ANES Time Series Cumulative Data File [dataset and documentation]. November 18, 2021 version. www.electionstudies.org

4. Data, code, and output for my analysis.

---

I posted to OSF data, code, and a report for my unpublished "Public Perceptions of the Potential for Human Evolution" project, based on a survey that YouGov ran for me in 2017 using funds from Illinois State University New Faculty Start-up Support and the Illinois State University College of Arts and Sciences. The report describes results from preregistered analyses, but below I'll highlight key results.

---

"Textbook" evolution

About half of participants received an item that asked about what I think might be reasonably described as a textbook description of evolution, in which one group is more reproductively successful than another group. The experimental manipulations involved whether the more successful group had high intelligence or low intelligence and whether the response options mentioned or did not mention "evolved".

Here is the "high intelligence" item, with square brackets indicating the "evolved" manipulation:

If, in the future, over thousands of years, people with high intelligence have more children and grandchildren than people with low intelligence have, which of the following would be most likely to happen?

  • The average intelligence of humans would [increase/evolve to be higher].
  • The average intelligence of humans would [remain the same/not evolve to be higher or lower].
  • The average intelligence of humans would [decrease/evolve to be lower].

Percentages from analyses weighted to reflect U.S. population percentages were 55% for the "increase" option (N=245) and 49% for the "evolve to be higher" option (N=260), with the residual category including other responses and non-responses. So about half of participants selected what I think is the intuitive response.

Here is the "low intelligence" item:

If, in the future, over thousands of years, people with low intelligence have more children and grandchildren than people with high intelligence have, which of the following would be most likely to happen?

  • The average intelligence of humans would [increase/evolve to be higher].
  • The average intelligence of humans would [remain the same/not evolve to be higher or lower].
  • The average intelligence of humans would [decrease/evolve to be lower].

Percentages from analyses weighted to reflect U.S. population percentages were 41% for the "decrease" option (N=244) and 35% for the "evolve to be lower" option (N=244), with the residual category including other responses and non-responses.

So, compared to the "high intelligence" item, participants were less likely (p<0.05) to select what I think is the intuitive response for the "low intelligence" item.
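
For what it's worth, here is a minimal sketch of how such a weighted comparison can be run, pooling the "evolved" wording variants; the variable names are illustrative, not the dataset's actual names:

* Sketch only: variable names are illustrative.
* intuitive = 1 if the participant selected the intuitive response, 0 otherwise
* hi_item = 1 for the "high intelligence" item, 0 for the "low intelligence" item
svyset [pweight=weight]
svy: logit intuitive i.hi_item   // tests the difference across item versions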

---

Evolution due to separation into different environments

Participants not assigned to the aforementioned item received an item asking whether they would expect differences to arise between groups separated into different environments, but the item did not indicate any particular differences between the environments. The experimental manipulations were whether the item asked about intelligence or height and whether the response options mentioned or did not mention "evolved".

Here is the intelligence item, with square brackets indicating the "evolved" manipulation:

Imagine two groups of people. Each group has some people with high intelligence and some people with low intelligence, but the two groups have the same average intelligence as each other. If these two groups were separated from each other into different environments for tens of thousands of years and had no contact with any other people, which of the following would be more likely to happen?

  • After tens of thousands of years, the two groups would still have the same average intelligence as each other.
  • After tens of thousands of years, one group would [be/have evolved to be] more intelligent on average than the other group.

Percentages from analyses weighted to reflect U.S. population percentages were 32% for the "be more intelligent" option (N=260) and 29% for the "evolved to be more intelligent" option (N=236), with the residual category including other responses and non-responses.

Here is the height item:

Imagine two groups of people. Each group has some short people and some tall people, but the two groups have the same average height as each other. If these two groups were separated from each other into different environments for tens of thousands of years and had no contact with any other people, which of the following would be more likely to happen?

  • After tens of thousands of years, the two groups would still have the same average height as each other.
  • After tens of thousands of years, one group would [be/have evolved to be] taller on average than the other group.

Percentages from analyses weighted to reflect U.S. population percentages were 32% for the "be taller" option (N=240) and 32% for the "evolved to be taller" option (N=271), with the residual category including other responses and non-responses.

So there was not much variation in these percentages between the intelligence version and the height version of the item. And only about one-third of participants indicated an expectation of intelligence or height differences arising between groups separated from each other into different environments for tens of thousands of years.

---

Another item on the survey (eventually discussed at this post) asked about evolution and racial differences. The reports that I posted for these items omit most or all of the discussion and citation of literature from the manuscripts that I had submitted to journals but that were rejected, in case I can use that material for a later manuscript.

---

Criminology recently published Schutten et al 2021 "Are guns the new dog whistle? Gun control, racial resentment, and vote choice".

---

I'll focus on experimental results from Schutten et al 2021 Figure 1. Estimates for respondents low in racial resentment indicated a higher probability of voting for a hypothetical candidate:

[1] when the candidate was described as Democrat, compared to when the candidate was described as a Republican,

[2] when the candidate was described as supporting gun control, compared to when the candidate was described as having a policy stance on a different issue, and

[3] when the candidate was described as not being funded by the NRA, compared to when the candidate was described as being funded by the NRA.

Patterns were reversed for respondents high in racial resentment. The relevant 95% confidence intervals did not overlap for five of the six patterns, with the exception being the NRA funding manipulation among respondents high in racial resentment; eyeballing the figure, the p-value does not appear to be under p=0.05 for that estimated difference.

---

For the estimate that participants low in racial resentment were less likely to vote for a hypothetical candidate described as being funded by the NRA than for a hypothetical candidate described as not being funded by the NRA, Schutten et al 2021 suggested that this might reflect a backlash against "the use of gun rights rhetoric to court prejudiced voters" (p. 20). But, presuming that the content of the signal provided by the mention of NRA funding is largely or completely racial, the "backlash" pattern is also consistent with a backlash against support of a constitutional right that many participants low in racial resentment might perceive to be disproportionately used by Whites and/or rural Whites.

Schutten et al 2021 conceptualized participants low in racial resentment as "nonracists" (p. 3) and noted that "recent evidence suggests that those who score low on the racial resentment scale 'favor' Blacks (Agadjanian et al., 2021)" (p. 21). But I don't know why the quotation marks around "favor" are necessary, given that there is good reason to characterize a nontrivial percentage of participants low in racial resentment as biased against Whites. For example, my analysis of data from the ANES 2020 Time Series Study indicated that, among Whites (and among the general population), about 40% to 45% of those who fell at least one standard deviation under the mean level of racial resentment rated Whites lower on the 0-to-100 feeling thermometers than they rated Blacks, and Hispanics, and Asians/Asian-Americans. (This is not merely rating Whites lower on average than these groups, but rating Whites lower than each of the three groups.)
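
Here is a rough sketch of that calculation; the variable names are placeholders for the actual ANES 2020 names, and a real analysis would handle missing thermometer values explicitly:

* Sketch only: all variable names are placeholders.
svyset psu [pweight=weight], strata(strata)
generate lower_than_each = ft_white < ft_black & ft_white < ft_hisp & ft_white < ft_asian
summarize resent
generate low_rr = resent < r(mean) - r(sd)   // at least 1 SD under the mean
svy, subpop(low_rr): mean lower_than_each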

Schutten et al 2021 indicated that (p. 4):

Importantly, dog whistling is not an attempt to generate racial prejudice among the public but to arouse and harness latent resentments already present in many Americans (Mendelberg, 2001).

Presumably, this dog whistling can activate the racial prejudice against Whites that many participants low in racial resentment have been comfortable expressing on feeling thermometers.

---

NOTES

1. Schutten et al 2021 claimed that (p. 8):

If racial resentment is primarily principled conservatism, its effect on support for government spending should not depend on the race of the recipient.

But if racial resentment were, say, 70% principled ideology and 30% racial prejudice, racial resentment should still associate with racial discrimination, due to the 30% prejudice component.
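
A toy simulation illustrates the point, with all numbers invented for illustration:

* Toy simulation: resentment is 70% principled ideology and 30% prejudice,
* and discrimination is driven only by the prejudice component.
clear
set obs 10000
set seed 123
generate ideology = rnormal()
generate prejudice = rnormal()
generate resentment = 0.7*ideology + 0.3*prejudice
generate discrimination = prejudice + rnormal()
regress discrimination resentment   // resentment still predicts discrimination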

And I think that it's worth considering whether racial resentment should also be described as being influenced by progressive ideology. If principled conservatism can cause participants to oppose special favors for Blacks, presumably a principled progressivism can cause participants to support special favors for Blacks. If so, it seems reasonable to also conceptualize racial resentment as the merging of principled progressivism and prejudice against Whites, given that both could presumably cause support for special favors for Blacks.

2. Schutten et al 2021 claimed that (p. 16):

The main concern about racial resentment is that it is a problematic measure of racial prejudice among conservatives but a suitable measure among nonconservatives (Feldman & Huddy, 2005).

But I think that major concerns about racial resentment are present even among nonconservatives. As I indicated in a prior blog post, I think that the best case against racial resentment has two parts. First, racial resentment captures racial attitudes in a way that is difficult if not impossible to disentangle from nonracial attitudes; that concern remains among nonconservatives, such as the possibility that a nonconservative would oppose special favors for Blacks because of a nonracial opposition to special favors.

Second, many persons at low racial resentment have a bias against Whites, and limiting the sample to nonconservatives if anything makes it more likely that the estimated effect of racial resentment is capturing the effect of bias against Whites.

3. Figure 1 would have provided stronger evidence about p<0.05 differences between estimates if it had plotted 83.4% confidence intervals.
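
The logic: for two independent estimates with similar standard errors, a p<0.05 test of the difference requires the estimates to differ by about 1.96*sqrt(2) standard errors, while non-overlap of two intervals of half-width z*se requires a difference of 2*z standard errors, so the matching z is 1.96/sqrt(2):

display "z* = " 1.96/sqrt(2)                        // about 1.39
display "coverage = " 2*normal(1.96/sqrt(2)) - 1    // about 83.4%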

4. [I deleted this comment because Justin Pickett (co-author on Schutten et al 2021) noted in review of a draft version of this post that this comment suggested an analysis that was reported in Schutten et al 2021, that an analysis be limited to participants low in racial resentment and an analysis be limited to participants high in racial resentment. Thanks to Justin for catching that.]

5. Data source for my analysis: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.

---

The Journal of Race, Ethnicity, and Politics published Nelson 2021 "You seem like a great candidate, but…: Race and gender attitudes and the 2020 Democratic primary".

Nelson 2021 is an analysis of racial attitudes and gender attitudes that makes inferences about the effect of "gender attitudes" using measures that ask only about women, without any appreciation of the need to assess whether the effect of gender attitudes about women is offset by the effect of gender attitudes about men.

But Nelson 2021 has another element that I thought worth blogging about. From pages 656 and 657:

Importantly, though, I hypothesized that the respondent's race will be consequential for whether these race and gender attitudes matter—specifically, that I expect it is white respondents who are driving these relationships. To test this hypothesis, I reran all 16 logit models from above with some minor adjustments. First, I replaced the IVs "Black" and "Latina/o/x" with the dichotomous variable "white." This variable is coded 1 for those respondents who identify as white and 0 otherwise. I also added interaction terms between the key variables of interest—hostile sexism, modern sexism, and racial resentment—and "white." These interactions will help assess whether white respondents display different patterns than respondents of color...

This seems like a good research design: if, for instance, the p-value is less than p=0.05 for the "Racial resentment X White" interaction term, then we can infer that, net of controls, racial resentment associated with the outcome differently among White respondents than among respondents of color.
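
A minimal sketch of that model, with hypothetical variable names:

* Sketch only: variable names are hypothetical; white = 1 for White respondents.
logit chose_harris i.white##(c.resentment c.hostile_sexism c.modern_sexism) $controls
* The z-test on the resentment-by-white interaction term directly tests whether
* the racial resentment association differs by respondent race.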

---

But, instead of reporting the p-value for the interaction terms, Nelson 2021 compared the statistical significance for an estimate among White respondents to the statistical significance for the corresponding estimate among respondents of color, such as:

In seven out of eight cases where racial resentment predicts the likelihood of choosing Biden or Harris, the average marginal effect for white respondents is statistically significant. In those same seven cases, the average marginal effect for respondents of color on the likelihood of choosing Biden or Harris is insignificant...

But the problem with comparing statistical significance for estimates is that a difference in statistical significance doesn't permit an inference that the estimates differ.

For example, Nelson 2021 Table A5 indicates that, for the association of racial resentment and the outcome of Kamala Harris's perceived electability, the 95% confidence interval among White respondents is [-.01, -.001]; this 95% confidence interval doesn't include zero, so that's a statistically significant estimate. The corresponding 95% confidence interval among respondents of color is [-.01, .002]; this 95% confidence interval includes zero, so that's not a statistically significant estimate.

But the corresponding point estimates are reported as -0.01 among White respondents and -0.01 among respondents of color, so there doesn't seem to be sufficient evidence to claim that these estimates differ from each other. Nonetheless, Nelson 2021 counts this as one of the seven cases referenced in the aforementioned passage.
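
One way to see this is to back out approximate standard errors from the reported 95% confidence intervals and test the difference between the point estimates (the reported rounding limits precision here):

display "se, White respondents:    " (-0.001 - (-0.01))/(2*1.96)   // about 0.0023
display "se, respondents of color: " (0.002 - (-0.01))/(2*1.96)    // about 0.0031
display "z for the difference:     " (-0.01 - (-0.01))/sqrt(0.0023^2 + 0.0031^2)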

Nelson 2021 Table 1 indicates that the sample had 906 White respondents and 466 respondents of color. The larger sample of White respondents biases the analysis toward a better chance of detecting statistical significance among White respondents than among respondents of color.

---

Table A5 provides sufficient evidence that some interaction terms had a p-value less than p=0.05, such as for the policy outcome for Joe Biden, with non-overlapping 95% confidence intervals for hostile sexism of [-.02, .0004] for respondents of color and [.002, .02] for White respondents.

But I'm not sure how much this matters, without evidence about how well hostile sexism measured gender attitudes among White respondents, compared to how well hostile sexism measured gender attitudes among respondents of color.

---

PLOS ONE recently published Gillooly et al 2021 "Having female role models correlates with PhD students' attitudes toward their own academic success".

Colleen Flaherty at Inside Higher Ed quoted Gillooly et al 2021 co-author Amy Erica Smith discussing results from the article. From the Flaherty story, with "she" being Amy Erica Smith:

"When we showed students a syllabus with a low percentage of women authors, men expressed greater confidence than women in their ability to do well in the class" she said. "When we showed students syllabi with more equal gender representation, men's self-confidence declined, but women and men still expressed equal confidence in their ability to do well. So making the curriculum more fair doesn't actually hurt men relative to women."

Figure 1 of Gillooly et al 2021 presented evidence of this male student backlash, with the figure note indicating that the analysis controlled for "orientations toward quantitative and qualitative methods". Gillooly et al 2021 indicated that these "orientation" measures incorporate respondent ratings of their interest and ability in quantitative methods and qualitative methods.

But the "Grad_Experiences_Final Qualtrics Survey" file indicates that these "orientation" measures appeared on the survey after respondents received the treatment. And controlling for such post-treatment "orientation" measures is a bad idea, as discussed in Montgomery et al 2018 "How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It".

The "orientation" items were located on the same Qualtrics block as the treatment and the self-confidence/self-efficacy item, so it seems possible that these "orientation" items might have been intended as outcomes and not as controls. I didn't find any preregistration that indicates the Gillooly et al plan for the analysis.

---

I used the Gillooly et al 2021 data to assess whether there is sufficient evidence that this "male backlash" effect occurs in straightforward analyses that omit the post-treatment controls. The p-value is about p=0.20 for the command...

ologit q14recode treatment2 if female==0, robust

...which tests the null hypothesis that male students' course-related self-confidence/self-efficacy, as measured on the five-point scale, did not differ by the percentage of women authors on the syllabus.

See the output file below for more analysis. For what it's worth, the data provided sufficient evidence at p<0.05 that, among men students, the treatment affected responses to three of the four items that Gillooly et al 2021 used to construct the "orientation" controls.
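
A check along those lines can be run with a loop such as the one below, in which the orientation item names are placeholders for the actual variable names in the dataset:

* Sketch only: orient1 to orient4 are placeholders for the four "orientation" items.
foreach v of varlist orient1 orient2 orient3 orient4 {
    ologit `v' treatment2 if female==0, robust
}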

---

NOTES

1. Data. Stata code. Output file.

2. Prior post discussing a biased benchmark in research by two of the Gillooly et al 2021 co-authors.

3. Figure 1 of Gillooly et al 2021 reports 76% confidence intervals to help assess a p<0.10 difference between estimates, and Figure 2 of Gillooly et al 2021 reports 84% confidence intervals to help assess a p<0.05 difference between estimates. I would be amazed if this p=0.05 / p=0.10 variation was planned before Gillooly et al analyzed the data.

---

PS: Political Science & Politics published Utych 2020 "Powerless Conservatives or Powerless Findings?", which responded to arguments in my 2019 "Left Unchecked" PS symposium entry. From Utych 2020:

Zigerell (2019) presented arguments that research supporting a conservative ideology is less likely to be published than research supporting a liberal ideology, focusing on the most serious accusations of ideological bias and research malfeasance. This article considers another less sinister explanation—that research about issues such as anti-man bias may not be published because it is difficult to show conclusive evidence that it exists or has an effect on the political world.

I wasn't aware of the Utych 2020 PS article until I saw a tweet that it was published, but the PS editors kindly permitted me to publish a reply, which discussed evidence that anti-man bias exists and has an effect on the political world.

---

One of the pieces of evidence for anti-man bias mentioned in my PS reply was the Schwarz and Coppock meta-analysis of candidate choice experiments involving male candidates and female candidates. This meta-analysis was accepted at the Journal of Politics, and Steve Utych indicated on Twitter that it was a "great article" and that he was a reviewer of the article. The meta-analysis detected a bias favoring female candidates over male candidates, so I asked Steve Utych whether it is reasonable to characterize the results from the meta-analysis as reasonably good evidence that anti-man bias exists and has an effect in the political realm.

I thought that the exchange that I had with Steve Utych was worth saving (archived: https://archive.is/xFQvh). According to Steve Utych, this great meta-analysis of candidate choice experiments "doesn't present information about discrimination or biases". In the thread, Steve Utych wouldn't describe what he would accept as evidence of anti-man bias in the political realm, but he was willing to equate anti-man bias with alien abduction.

---

Suzanne Schwarz, who is the lead author of the Schwarz and Coppock meta-analysis, issued a series of tweets (archived: https://archive.is/pFSJ0). The thread was locked before I could respond, so I thought that I would blog about my comments on her points, which she labeled "first" through "third".

Her first point, about majority preference, doesn't seem relevant to whether anti-man bias exists and has an effect in the political realm.

For her second point, that voting in candidate choice experiments might differ from voting in real elections, I think that it's within reason to dismiss results from survey experiments, and I think that it's within reason to interpret results from survey experiments as offering evidence about the real world. But I think that each person should hold no more than one of those positions at a given time.

So if Suzanne Schwarz doesn't think that the meta-analysis provides evidence about voter behavior in real elections, there might still be time for her and her co-author to remove language from their JOP article that suggests that results from the meta-analysis provide evidence about voter behavior in real elections, such as:

Overall, our findings offer evidence against demand-side explanations of the gender gap in politics. Rather than discriminating against women who run for office, voters on average appear to reward women.

And instead of starting the article with "Do voters discriminate against women running for office?", maybe the article could instead start by quoting language from Suzanne Schwarz's tweets. Something such as:

Do "voters support women more in experiments that simulate hypothetical elections with hypothetical candidates"? And should anyone care, given that this "does not necessarily mean that those voters would support female politicians in real elections that involve real candidates and real stakes"?

I think that Suzanne Schwarz's third point is that a person's preference for A relative to B cannot be interpreted as an "anti" bias against B, without information about that person's attitudinal bias, stereotypes, or animus regarding B.

Suzanne Schwarz claimed that we would not interpret a preference for orange packaging over green packaging as evidence of an "anti-green" bias, but let's use a hypothetical involving people: an employer who always hires White applicants over equally qualified Black applicants. I think that it would be at least as reasonable to describe that employer as having an anti-Black bias as it would be to apply the Schwarz and Coppock language quoted above and describe that employer as "appear[ing] to reward" White applicants.

---

The Schwarz and Coppock meta-analysis of 67 survey experiments seems like it took a lot of work, was published in one of the top political science journals, and, according to its abstract, was based on an experimental methodology that "[has] become a standard part of the political science toolkit for understanding the effects of candidate characteristics on vote choice", with results that add to the evidence that "voter preferences are not a major factor explaining the persistently low rates of women in elected office".

So it's interesting to see the "doesn't present information about discrimination or biases" and "does not necessarily mean that those voters would support female politicians in real elections that involve real candidates and real stakes" reactions on Twitter archived above, respectively from a peer reviewer who described the work as "great" and from one of the co-authors.

---

NOTES

1. Zach Goldberg and I have a manuscript presenting evidence that anti-man bias exists and has a political effect, based on participant feeling thermometer ratings about men and about women in data from the 2019 wave of the Democracy Fund Voter Study Group VOTER survey. Zach tweeted about a prior version of the manuscript. The idea for the manuscript goes back at least to a Twitter exchange from March 2020 (Zach, me).

Steve Utych reported on the 2019 wave of this VOTER survey in his 2021 Electoral Studies article about sexism against women, but neither his 2021 Electoral Studies article nor his PS article questioning the idea of anti-man bias reported results from the feeling thermometer ratings about men and about women.
