The Journal of Race, Ethnicity, and Politics published Nelson 2021 "You seem like a great candidate, but…: Race and gender attitudes and the 2020 Democratic primary".

Nelson 2021 is an analysis of racial attitudes and gender attitudes that makes inferences about the effect of "gender attitudes" using measures that ask only about women, without assessing whether the effect of gender attitudes about women is offset by the effect of gender attitudes about men.

But Nelson 2021 has another element that I thought worth blogging about. From pages 656 and 657:

Importantly, though, I hypothesized that the respondent's race will be consequential for whether these race and gender attitudes matter—specifically, that I expect it is white respondents who are driving these relationships. To test this hypothesis, I reran all 16 logit models from above with some minor adjustments. First, I replaced the IVs "Black" and "Latina/o/x" with the dichotomous variable "white." This variable is coded 1 for those respondents who identify as white and 0 otherwise. I also added interaction terms between the key variables of interest—hostile sexism, modern sexism, and racial resentment—and "white." These interactions will help assess whether white respondents display different patterns than respondents of color...

This seems like a good research design: if, for instance, the p-value is less than 0.05 for the "Racial resentment X White" interaction term, then we can infer that, net of controls, racial resentment is associated with the outcome differently among White respondents than among respondents of color.
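To illustrate the logic, here is a minimal R sketch of that kind of test, with hypothetical variable names standing in for Nelson 2021's measures and controls:

```r
# Minimal sketch, assuming a data frame 'd' with a 0/1 outcome
# 'chose_biden', a racial resentment measure 'resentment', and the
# 0/1 indicator 'white'; the actual models include more controls.
m <- glm(chose_biden ~ resentment * white + hostile_sexism + modern_sexism,
         family = binomial(link = "logit"),
         data = d)

# The p-value on the interaction row tests whether the association of
# racial resentment with the outcome differs by respondent race:
summary(m)$coefficients["resentment:white", ]
```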

---

But, instead of reporting the p-value for the interaction terms, Nelson 2021 compared the statistical significance of an estimate among White respondents to the statistical significance of the corresponding estimate among respondents of color, such as:

In seven out of eight cases where racial resentment predicts the likelihood of choosing Biden or Harris, the average marginal effect for white respondents is statistically significant. In those same seven cases, the average marginal effect for respondents of color on the likelihood of choosing Biden or Harris is insignificant...

But the problem with comparing statistical significance across estimates is that a difference in statistical significance doesn't permit an inference that the estimates differ from each other: an estimate can be statistically significant while a similar estimate, with slightly more uncertainty, is not.

For example, Nelson 2021 Table A5 indicates that, for the association of racial resentment and the outcome of Kamala Harris's perceived electability, the 95% confidence interval among White respondents is [-.01, -.001]; this 95% confidence interval doesn't include zero, so that's a statistically significant estimate. The corresponding 95% confidence interval among respondents of color is [-.01, .002]; this 95% confidence interval includes zero, so that's not a statistically significant estimate.

But the corresponding point estimates are reported as -0.01 among White respondents and -0.01 among respondents of color, so there doesn't seem to be sufficient evidence to claim that these estimates differ from each other. Nonetheless, Nelson 2021 counts this as one of the seven cases referenced in the aforementioned passage.
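The rounded numbers in Table A5 permit a back-of-the-envelope check. The sketch below reconstructs approximate standard errors from the reported 95% confidence intervals and tests the difference between the two average marginal effects, treating the estimates as independent, which is a simplification:

```r
# Approximate standard errors recovered from the rounded 95% CIs in
# Table A5; the inputs are published rounded values, so this is only
# an approximation.
ci_white <- c(-0.010, -0.001)  # White respondents
ci_poc   <- c(-0.010,  0.002)  # respondents of color

est_white <- mean(ci_white); se_white <- diff(ci_white) / (2 * 1.96)
est_poc   <- mean(ci_poc);   se_poc   <- diff(ci_poc)   / (2 * 1.96)

# z-test for the difference, assuming independent estimates:
z <- (est_white - est_poc) / sqrt(se_white^2 + se_poc^2)
2 * pnorm(-abs(z))  # roughly 0.7: no evidence that the estimates differ
```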

Nelson 2021 Table 1 indicates that the sample had 906 White respondents and 466 respondents of color. All else equal, the larger sample of White respondents gives the analysis a better chance of detecting statistical significance among White respondents than among respondents of color.

---

Table A5 provides sufficient evidence that some interaction terms had a p-value less than p=0.05, such as for the policy outcome for Joe Biden, for which the 95% confidence intervals for hostile sexism do not overlap: [-.02, .0004] for respondents of color and [.002, .02] for White respondents. Non-overlapping 95% confidence intervals are sufficient, though not necessary, evidence of a difference at p<0.05.

But I'm not sure how much this matters, without evidence about how well hostile sexism measured gender attitudes among White respondents, compared to how well hostile sexism measured gender attitudes among respondents of color.


My new publication is a technical comment on the Schneider and Gonzalez 2021 article "Racial resentment predicts eugenics support more robustly than genetic attributions".

The experience with the journal Personality and Individual Differences was great. The journal has a correspondence section that publishes technical comments and other types of correspondence, which seems like a great way to publicly discuss research and hopefully to improve it. The authors of the article that I commented on were also great.

---

My comment highlighted a few things about the article, and I think that two of the comments are particularly generalizable. One comment, which I discussed in prior blog posts [1, 2], concerns the practice of comparing the predictive power of factors that are not, or might not be, equally well measured. I don't think that is a good idea, because measurement error can bias estimates: a factor measured with more error will tend to appear less predictive than it truly is, as in the simulation sketch below.
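Here is a minimal simulation of that point, using made-up data: the two predictors have identical true effects, but the predictor observed with error appears weaker:

```r
# Minimal sketch: x1 and x2 have identical true effects on y, but x1
# is observed with error, so x1 looks like the weaker predictor.
set.seed(123)
n  <- 100000
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- x1 + x2 + rnorm(n)
x1_observed <- x1 + rnorm(n)  # classical measurement error, reliability 0.5

coef(lm(y ~ x1_observed + x2))  # x1's coefficient attenuates toward 0.5
```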

The other comment, which I discussed in prior blog posts [1, 2], concerns analyses that model an association as constant. I think that it is more informative to not model key associations as constant, and Figure 1 of the comment illustrates an example of how this can provide useful information.
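For illustration, below is one generic way in R to relax the constant-association assumption, with hypothetical variable names; this is not necessarily the method used in Figure 1 of the comment:

```r
# Sketch, assuming a data frame 'd' with outcome 'y', key predictor
# 'x', and control 'z'. Replacing the linear term with quartile
# indicators lets the association vary across the range of x.
m_linear <- lm(y ~ x + z, data = d)
m_binned <- lm(y ~ cut(x, breaks = quantile(x, probs = 0:4 / 4),
                       include.lowest = TRUE) + z, data = d)

# If the binned coefficients don't increase roughly linearly, the
# constant-association model is hiding useful information.
coef(m_binned)
```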

There is more in the comment. Here is a 50-day share link for the comment.


The American Political Science Review recently published Mason et al. 2021 "Activating Animus: The Uniquely Social Roots of Trump Support".

Mason et al. 2021 measured "animus" based on respondents' feeling thermometer ratings about groups. Mason et al. 2021 reported results for a linear measure of animus, but seemed to indicate an awareness that a linear measure might not be ideal: "...it may be that positivity toward Trump stems from animus toward Democratic groups more than negativity toward Trump stems from warmth toward Democratic groups, or vice versa" (p. 7).

Mason et al. 2021 addressed this by using a quadratic term for animus. But this retains the problem that estimates for respondents at a high level of animus against a group are influenced by responses from respondents who reported less animus toward the group and from respondents who favored the group.

I think that a better strategy to measure animus is to instead compare negativity toward the groups (i.e., ratings below the midpoint of the thermometer, or at a low level) to indifference (i.e., a rating at the midpoint of the thermometer). I'll provide an example below, with another example here.
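Here is a rough R sketch of that comparison, assuming a hypothetical 0-to-100 thermometer variable 'therm' and an outcome 'outcome'; the real analysis would use the survey's variables and weights:

```r
# Ratings below the midpoint count as negativity, the midpoint counts
# as indifference, and ratings above the midpoint are set aside.
d$animus <- cut(d$therm, breaks = c(-1, 49, 50, 100),
                labels = c("negative", "indifferent", "positive"))

# Compare outcome means for negativity versus indifference only:
with(subset(d, animus != "positive"),
     tapply(outcome, droplevels(animus), mean, na.rm = TRUE))
```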

---

The Mason et al. 2021 analysis used thermometer ratings of groups measured in the 2011 wave of a survey to predict outcomes measured years later. For example, one of the regressions used feeling thermometer ratings about Democratic-aligned groups as measured in 2011 to predict favorability toward Trump as measured in 2018, controlling for variables measured in 2011 such as gender, race, education, and partisanship.
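A rough R sketch of that type of regression, with hypothetical panel variable names:

```r
# Sketch, assuming a panel data frame with 2011 predictors and a
# 2018 outcome; all variable names are placeholders.
m <- lm(trump_favorability_2018 ~ dem_group_therm_2011 + gender_2011 +
          race_2011 + education_2011 + party_id_2011,
        data = panel)
summary(m)
```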

That research design might be useful for assessing change net of controls between 2011 and 2018, but it's not useful for understanding animus in 2021, an inference that I think some readers might draw from the "motivating the left" tweet from the first author of Mason et al. 2021:

And it's not happening for anyone on the Democratic side. Hating Christians and White people doesn't predict favorability toward any Democratic figures or the Democratic Party. So it isn't "anti-White racism" (whatever that means) motivating the left. It's not "both sides."

The 2019 wave of the survey used in Mason et al. 2021 has feeling thermometer ratings about White Christians, and, sure enough, the mean favorability rating about Hillary Clinton in 2019 differed between respondents who rated White Christians at or near the midpoint and respondents who rated White Christians under or well under the midpoint.

Even if the "motivating the left" tweet is interpreted to refer only to the post-2011 change controlling for partisanship, ideology, and other factors, it's not clear why that restricted analysis would be important for understanding what is motivating the left. It's not like the left started to get motivated only in or after 2011.

---

NOTES

1. I think that Mason et al. 2021 used "warmth" at least once when discussing results from the linear measure of animus in which "animus" or "animosity" could have been used instead, as in the passage below from page 4, with emphasis added:

Rather, Trump support is uniquely predicted by animosity toward marginalized groups in the United States, who also happen to fall outside of the Republican Party's rank-and-file membership. For comparison, when we analyze warmth for whites and Christians, we find that it predicts support for Trump, the Republican Party, and other elites at similar levels.

It would be another flaw of a linear measure of animus if the same association could be described as predicted by animosity or, equally, by warmth (e.g., the quoted finding could also be described as animosity toward Whites and Christians predicting lower levels of support for Trump, the Republican Party, and other elites at similar levels).

2. Stata code. Dataset. R plot: data and code.


I received a few questions and comments about my use of 83.4% confidence intervals on the plot in my prior post, so I thought I would post an explanation that I can refer to later.

---

Often, political scientists use p=0.05 as a threshold for sufficient evidence of an association, such that only p-values under 0.05 indicate sufficient evidence. Plotting 95% confidence intervals can help readers assess whether the evidence indicates that a given estimate differs from a given value.

For example, in unweighted data from the ANES 2020 Time Series Study, the 95% confidence interval for Black respondents' mean rating about Whites is [63.0, 67.0]. The number 62 falls outside the 95% confidence interval, so that indicates that there is sufficient evidence at p=0.05 that Black respondents' mean rating about Whites is not 62. However, the number 64 falls inside the 95% confidence interval, so that indicates that there is not sufficient evidence at p=0.05 that the mean rating about Whites among Black respondents is not 64.
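Here is an illustration of that check in R, with simulated ratings standing in for the ANES variable, so the interval below won't match [63.0, 67.0]:

```r
# Simulated stand-in for the thermometer ratings:
set.seed(1)
rating <- rnorm(1200, mean = 65, sd = 25)

ci <- t.test(rating)$conf.int  # 95% confidence interval by default
ci

# A candidate value such as 62 is rejected at p=0.05 only if it falls
# outside the interval:
62 < ci[1] | 62 > ci[2]
```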

---

But suppose that we wanted to assess whether two estimates differ *from each other*. Below is a plot of 95% confidence intervals for Black respondents' mean rating about Whites and about Asians, in unweighted data. For a test of the null hypothesis that the estimates do not differ from each other, the p-value is p=0.04, indicating sufficient evidence of a difference. However, the 95% confidence intervals overlap quite a bit.

The 95% confidence intervals in this case don't do a good job of permitting readers to assess differences between estimates at the p=0.05 level.

But below is a plot that instead uses 83.4% confidence intervals. The ends of the 83.4% confidence intervals come close to each other but do not overlap. Using confidence interval overlap as an approximate p=0.05 test of a difference, that closeness without overlap is what we would expect given a p-value of p=0.04.

Based on whether 83.4% confidence intervals overlap, readers can often get a good sense of whether estimates differ at p=0.05. So my current practice is to plot 95% confidence intervals when the comparison of interest is of an estimate to a given number and to plot 83.4% confidence intervals when the comparison of interest is of one estimate to another estimate.
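The 83.4% figure isn't arbitrary: for two independent estimates with roughly equal standard errors, just-touching intervals place the estimates about 2×z standard errors apart, and the difference reaches p=0.05 when the estimates are about 1.96×√2 standard errors apart. A quick R verification, with hypothetical values for the second computation:

```r
# Solving 2*z = 1.96*sqrt(2) for the implied confidence level:
z <- 1.96 * sqrt(2) / 2
2 * pnorm(z) - 1  # about 0.834

# Computing an 83.4% confidence interval from an estimate and a
# standard error (hypothetical values):
est <- 65; se <- 1
est + c(-1, 1) * qnorm(1 - (1 - 0.834) / 2) * se
```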

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

2. R code for the plots.


This plot reports disaggregated results from the American National Election Studies 2020 Time Series Study pre-election survey item:

On another topic: How much do you feel it is justified for people to use violence to pursue their political goals in this country?

Not shown is that 83% of White Democrats and 92% of White Republicans selected "Not at all" for this item.

Regression output controlling for party identification, gender, and race is in the Stata output file, along with uncertainty estimates for the plot percentages.
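For readers who prefer R, here is a hypothetical sketch of that kind of model; the variable names are placeholders, and the post's actual output is from Stata:

```r
# Logit for selecting "Not at all" on the violence item, controlling
# for party identification, gender, and race; all names are made up.
m <- glm(I(violence_item == "Not at all") ~ party_id + gender + race,
         family = binomial, data = anes2020)
summary(m)
```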

---

NOTES

1. Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Pre-Election Data [dataset and documentation]. February 11, 2021 version. www.electionstudies.org.

2. Stata code for the analysis and R code for the plot. Dataset for the R plot.
