The American Political Science Review recently published Mason et al. 2021 "Activating Animus: The Uniquely Social Roots of Trump Support".

Mason et al. 2021 measured "animus" based on respondents' feeling thermometer ratings about groups. Mason et al. 2021 reported results for a linear measure of animus, but seemed to indicate an awareness that a linear measure might not be ideal: "...it may be that positivity toward Trump stems from animus toward Democratic groups more than negativity toward Trump stems from warmth toward Democratic groups, or vice versa" (p. 7).

Mason et al. 2021 addressed this by using a quadratic term for animus. But this retains the problem that estimates for respondents at a high level of animus against a group are influenced by responses from respondents who reported less animus toward the group and from respondents who favored the group.

I think that a better strategy to measure animus is to instead compare negativity toward the groups (i.e., ratings below the midpoint on the thermometer or at a low level) to indifference (i.e., a rating at the midpoint on the thermometer). I'll provide an example below, with another example here.
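For a rough sketch of that comparison in R (the data frame and variable names below are hypothetical placeholders, not variables from the Mason et al. 2021 data):

# therm_group is a 0-to-100 feeling thermometer rating about a group;
# trump_fav is a favorability rating about Trump.
library(dplyr)

dat <- dat %>%
  mutate(animus_cat = case_when(
    therm_group <  50 ~ "below midpoint (animus)",
    therm_group == 50 ~ "at midpoint (indifference)",
    therm_group >  50 ~ "above midpoint (warmth)"
  ))

# Compare mean Trump favorability at animus to mean Trump favorability at
# indifference, instead of fitting a linear or quadratic term for the thermometer.
dat %>%
  group_by(animus_cat) %>%
  summarize(mean_trump_fav = mean(trump_fav, na.rm = TRUE), n = n())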

---

The Mason et al. 2021 analysis used thermometer ratings of groups measured in the 2011 wave of a survey to predict outcomes measured years later. For example, one of the regressions used feeling thermometer ratings about Democratic-aligned groups as measured in 2011 to predict favorability toward Trump as measured in 2018, controlling for variables measured in 2011 such as gender, race, education, and partisanship.

That research design might be useful for assessing change net of controls between 2011 and 2018, but it's not useful for understanding animus in 2021, which I think some readers might infer from this "motivating the left" tweet from the first author of Mason et al. 2021:

And it's not happening for anyone on the Democratic side. Hating Christians and White people doesn't predict favorability toward any Democratic figures or the Democratic Party. So it isn't "anti-White racism" (whatever that means) motivating the left. It's not "both sides."

The 2019 wave of the survey used in Mason et al. 2021 has feeling thermometer ratings about White Christians, and, sure enough, the mean favorability rating about Hillary Clinton in 2019 differed between respondents who rated White Christians at or near the midpoint and respondents who rated White Christians under or well under the midpoint:

Even if the "motivating the left" tweet is interpreted to refer only to the post-2011 change controlling for partisanship, ideology, and other factors, it's not clear why that restricted analysis would be important for understanding what is motivating the left. It's not like the left started to get motivated only in or after 2011.

---

NOTES

1. I think that Mason et al. 2021 used "warmth" at least once when discussing results from the linear measure of animus, where "animus" or "animosity" could have been used instead, in the passage below from page 4, with emphasis added:

Rather, Trump support is uniquely predicted by animosity toward marginalized groups in the United States, who also happen to fall outside of the Republican Party's rank-and-file membership. For comparison, when we analyze warmth for whites and Christians, we find that it predicts support for Trump, the Republican Party, and other elites at similar levels.

It would be another flaw of a linear measure of animus if the same association can be described as having been predicted by animosity or by warmth (e.g., animosity toward Whites and Christians predicts lower levels of support for Trump and other Republicans at similar levels).

2. Stata code. Dataset. R plot: data and code.


See here for a discussion of the Rice et al. 2021 mock juror experiment.

My reading of the codebook for the Rice et al. 2021 experiment is that, among other items, the pre-election survey included at least one experiment (UMA303_rand), then a battery of items measuring racism and sexism, and then at least one other experiment. Then, among other items, the post-election survey included the CCES Common Content racial resentment and FIRE items, and then the mock juror experiment.

The pre-election battery of items measuring racism and sexism included three racial resentment items, a sexism battery, three stereotype items about Blacks and Whites (laziness, intelligence, and violence), and 0-to-100 feeling thermometers about Whites and about Blacks. In this post, I'll report some analyses of how well these pre-election measures predicted discrimination in the Rice et al. 2021 mock juror experiment.

---

The first plot reports results among White participants who might be expected to have a pro-Black bias. For example, the first estimate is for White participants who had the lowest level of racial resentment. The dark error bars indicate 83.4% confidence intervals, to help compare estimates to each other. The lighter, longer error bars are 95% confidence intervals, which are more appropriate for comparing an estimate to a given number such as zero.
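For reference, here is a minimal R sketch of the two interval types, using made-up numbers for an estimate and its standard error:

b  <- -0.29   # hypothetical estimate
se <-  0.10   # hypothetical standard error

# 83.4% interval: two such intervals that barely fail to overlap correspond
# roughly to p=0.05 for the difference between the two estimates, when the
# estimates have similar standard errors.
b + c(-1, 1) * qnorm(1 - (1 - 0.834)/2) * se

# 95% interval: appropriate for comparing a single estimate to a fixed value such as zero.
b + c(-1, 1) * qnorm(0.975) * se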

The plotted outcome is whether the participant indicated that the defendant was guilty or not guilty. The -29% for the top estimate indicates that, among White participants who had the lowest level of racial resentment on this index, the percentage that rated the Black defendant guilty was 29 percentage points lower than the percentage that rated the White defendant guilty.

The plot below reports results among White participants who might be expected to have a pro-White bias. The 26% for the top estimate indicates that, among White participants who had the highest level of racial resentment on this index, the percentage that rated the Black defendant guilty was 26 percentage points higher than the percentage that rated the White defendant guilty.

---

The Stata output reports additional results, for the sentence length outcome, and for other predictors: a four-item racial resentment index from the post-election survey, plus individual stereotype items (such as for White participants who rated Blacks higher than Whites on an intelligence scale). Results for the sentence length outcome are reported for all White respondents and, in later analyses, for only those White respondents who indicated that the defendant was guilty.

---

NOTE

1. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. Pro-Black plot: dataset and code. Pro-White plot: dataset and code.


Forthcoming at the Journal of Politics is Rice et al. 2021 "Same As It Ever Was? The Impact of Racial Resentment on White Juror Decision-Making".

---

See the prior post describing the mock juror experiment in Rice et al. 2021.

The Rice et al. 2021 team kindly cited my article questioning racial resentment as a valid measure of racial animus. But Rice et al. 2021 interpreted their results as evidence for the validity of racial resentment:

Our results also suggest that racial resentment is a valid measure of racial animus (Jardina and Piston 2019) as it performs exactly as expected in an experimental setting manipulating the race of the defendant.

However, my analyses of the Rice et al. 2021 data indicated that a measure of sexism sorted White participants by their propensity to discriminate for Bradley Schwartz or Jamal Gaines:

I don't think that the evidence in the above plot indicates that sexism is a valid measure of racial animus, so I'm not sure that racial resentment sorting White participants by their propensity to discriminate for Bradley or Jamal means that racial resentment is a valid measure of racial animus, either.

---

I think that the best two arguments against racial resentment as a measure of anti-Black animus are:

[1] Racial resentment on its face plausibly captures non-racial attitudes, and it is not clear that statistical control permits the residual association of racial resentment with an outcome to be interpreted as anti-Black animus, given that racial resentment net of statistical control often predicts outcomes that are not theoretically linked to racial attitudes.

[2] Persons at low levels of racial resentment often disfavor Whites relative to Blacks (as reported in this post and in the Rice et al. 2021 mock juror experiment), so the estimated effect for racial resentment cannot be interpreted as only the effect of anti-Black animus. Racial resentment in these cases appears to sort enough respondents who dislike Whites in absolute or at least relative terms to low levels of the measure that indifference toward Whites might plausibly be better represented at some location between the ends of the racial resentment measure. But the racial resentment measure does not have a clear indifference point such as 50 on a 0-to-100 feeling thermometer rating, so -- even if argument [1] is addressed so that statistical control isolates the effect of racial attitudes -- it's not clear how racial resentment could be used to accurately estimate the effect of only anti-Black animus.

---

NOTES

1. The sexism measure used responses to the items below, which loaded onto one factor among White participants in the data:

[UMA306bSB] We should do all we can to make sure that women have the same opportunities in society as men.

[UMA306c] We would have fewer problems if we treated men and women more equally.

[UMA306f] Many women are actually seeking special favors, such as hiring policies that favor them over men, under the guise of asking for "equality."

[UMA306g] Women are too easily offended.

[UMA306h] Men are better suited for politics than are women.

[CC18_422c] When women lose to men in a fair competition, they typically complain about being discriminated against.

[CC18_422d] Feminists are making entirely reasonable demands of men.

Responses to these items loaded onto a different factor:

[UMA306d] Women should be cherished and protected by men.

[UMA306e] Many women have a quality of purity that few men possess.
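A minimal R sketch of that kind of factor check, assuming the items are numeric columns in a data frame d restricted to White participants (the data frame name is a placeholder; the column names follow the codebook item IDs above):

items <- c("UMA306bSB", "UMA306c", "UMA306d", "UMA306e", "UMA306f",
           "UMA306g", "UMA306h", "CC18_422c", "CC18_422d")

# Exploratory factor analysis with two factors; inspect which items load together.
fit <- factanal(na.omit(d[, items]), factors = 2, rotation = "varimax")
print(fit$loadings, cutoff = 0.3)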

2. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. Data and code for the sexism plot.

3. I plan a follow-up post about how well different measures predicted racial bias in the experiment.


Forthcoming at the Journal of Politics is Rice et al. 2021 "Same As It Ever Was? The Impact of Racial Resentment on White Juror Decision-Making". In contrast to the forthcoming Peyton and Huber 2021 article at the Journal of Politics that I recently blogged about, Rice et al. 2021 reported evidence that racial resentment predicted discrimination among Whites.

---

Rice et al. 2021 concerned a mock juror experiment regarding an 18-year-old starting point guard on his high school basketball team who was accused of criminal battery. Participants indicated whether the defendant was guilty or not guilty and suggested a prison sentence length from 0 to 60 months for the defendant. The experimental manipulation was that the defendant was randomly assigned to be named Bradley Schwartz or Jamal Gaines.

Section 10 of the Rice et al. 2021 supplementary material has nice plots of the estimated discrimination at given levels of racial resentment, indicating, for the guilty outcome, that White participants at low racial resentment were less likely to indicate that Jamal was guilty compared to Bradley, but that White participants at high racial resentment were more likely to indicate that Jamal was guilty compared to Bradley. Results were similar for the sentence length outcome, but the 95% confidence interval at high racial resentment overlaps zero a bit.

---

The experiment did not detect sufficient evidence of racial bias among White participants as a whole. But what about Black participants? Results indicated a relatively large favoring of Jamal over Bradley among Black participants, in unweighted data (N=41 per condition). For guilt, the bias was 29 percentage points in unweighted analyses, and 33 percentage points in weighted analyses. For sentence length, the bias was 8.7 months in unweighted analyses, and 9.4 months in weighted analyses, relative to an unweighted standard deviation of 16.1 months in sentence length among Black respondents.

Results for the guilty/not guilty outcome:

Results for the mean sentence length outcome:

The p-value was under p=0.05 for my unweighted tests of whether the size of the discrimination among Whites (about 7 percentage points for guilty, about 1.3 months for sentence length) differed from the size of the discrimination among Blacks (about 29 percentage points for guilty, about 8.7 months for sentence length); the inference is the same for weighted analyses. The evidence is even stronger considering that the point estimate of discrimination among Whites was in the pro-Jamal direction and not in the pro-ingroup direction.
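A minimal R sketch of that kind of test, with hypothetical variable names (guilty coded 0/1, jamal indicating assignment to the Jamal Gaines condition, black_resp indicating a Black participant, and sentence for the 0-to-60-month sentence length); a weighted version could instead use the survey package:

# Unweighted linear probability model: the jamal:black_resp coefficient is the
# gap between discrimination among Black participants and discrimination among
# White participants.
summary(lm(guilty ~ jamal * black_resp, data = dat))

# Analogous test for the sentence length outcome.
summary(lm(sentence ~ jamal * black_resp, data = dat))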

---

NOTES

1. Data for Rice et al. 2021 from the JOP Dataverse. Original 2018 CCES data for the UMass-A module, which I used in the aforementioned analyses. Stata code. Stata output. "Guilty" plot: data and R code. "Sentence length" plot: data and R code.

2. I plan to publish a follow-up post about evidence for validity of racial resentment from the Rice et al. 2021 results, plus a follow-up post about how well different measures predicted racial bias in the experiment.


Electoral Studies recently published Jardina and Stephens-Dougan 2021 "The electoral consequences of anti-Muslim prejudice". Jardina and Stephens-Dougan 2021 reported results from the 2004 through 2020 ANES Time Series Studies, estimating the effect of anti-Muslim prejudice on vote choice among White Americans, using feeling thermometer ratings and responses on stereotype scales.

Figure 1 of Jardina and Stephens-Dougan 2021 reports non-Hispanic Whites' mean feeling thermometer ratings about Muslims, Whites, Blacks, Hispanics, and Asians...but not about Christian fundamentalists, even though ANES data for each year in Figure 1 contain feeling thermometer ratings about Christian fundamentalists.

The code for Jardina and Stephens-Dougan 2021 includes a section for "*Robustness for anti christian fundamental affect", indicating an awareness of the thermometer ratings about Christian fundamentalists.

I drafted a quick report about how reported 2020 U.S. presidential vote choice associated with feeling thermometer ratings about Jews, Christians, Muslims, and Christian fundamentalists, using data from the ANES 2020 Time Series Study. Plots are below, with more detailed descriptions in the quick report.

This first plot is of the distributions of feeling thermometer ratings about the religious groups asked about, with categories such as [51/99] indicating the percentage that rated the indicated group at 51 through 99 on the thermometer:

This next plot is of how the ratings about a given religious group associated with 2020 two-party presidential vote choice for Trump, with demographic controls only, and a separate regression for ratings about each religious group:

This next plot added controls for partisanship, political ideology, and racial resentment, and put all ratings of religious groups into the same regression:

The above plot zooms in on y-axis percentages from 20 to 60. The plot in the quick report has a y-axis that runs from 0 to 100.

---

Based on a Google Scholar search, research is available about the political implications of attitudes about Christian fundamentalists, such as Bolce and De Maio 1999. I plan to add a discussion of this research if I convert the quick report into a proper paper.

---

The technique in the quick report hopefully improves on the Jardina and Stephens-Dougan 2021 technique for estimating anti-Muslim prejudice. From Jardina and Stephens-Dougan 2021 (p. 5):

A one-unit change on the anti-Muslim affect measure results in a 16-point colder thermometer evaluation of Kerry in 2004, a 22-point less favorable evaluation of Obama in both 2008 and 2012, and a 17-point lower rating of Biden in 2020.

From what I can tell, this one-unit change is the difference in estimated support for a candidate, net of controls, comparing a rating of 0 about Muslims on the feeling thermometer to a rating of 100 about Muslims, based on a regression in which the "Negative Muslim Affect" predictor was merely the feeling thermometer rating about Muslims reversed and placed on a 0-to-1 scale (presumably (100 - rating)/100).

If so, then the estimated effect size of anti-Muslim affect is identical to the estimated effect size of pro-Muslim affect. Or maybe Jardina and Stephens-Dougan 2021 considers a rating of 100 about Muslims to indicate indifference about Muslims, a 99 to indicate some anti-Muslim affect, a 98 to indicate a bit more anti-Muslim affect, and so on.

It seems more reasonable to me that some people are on net indifferent about Muslims, some people have on net positive absolute views about Muslims, and some people have on net negative absolute views about Muslims. So instead I coded feeling thermometer ratings for each religious group into six categories: zero (the coldest possible rating), 100 (the warmest possible rating), 1 through 49 (residual cold ratings), 50 (indifference), 51 through 99 (residual warm ratings), and non-responses.

The extreme categories of 0 and 100 are to estimate the outcome at the extremes, and the 50 category is to estimate the outcome at indifference. If the number of observations at the extremes is not sufficiently large for some predictors, it might make more sense to collapse the extreme categories into the adjoining values on the same side of 50.
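A minimal R sketch of that coding for one thermometer, with hypothetical variable names (therm_muslims for the 0-to-100 rating and trump_vote for two-party Trump vote):

library(dplyr)

dat <- dat %>%
  mutate(therm_muslims_cat = case_when(
    is.na(therm_muslims) ~ "no response",
    therm_muslims == 0   ~ "0 (coldest)",
    therm_muslims <  50  ~ "1-49 (residual cold)",
    therm_muslims == 50  ~ "50 (indifference)",
    therm_muslims <  100 ~ "51-99 (residual warm)",
    therm_muslims == 100 ~ "100 (warmest)"
  ))

# Entering the categories as a factor with indifference as the reference lets the
# regression estimate the outcome at each category relative to indifference,
# instead of imposing a linear (or quadratic) thermometer effect.
dat$therm_muslims_cat <- relevel(factor(dat$therm_muslims_cat), ref = "50 (indifference)")
summary(lm(trump_vote ~ therm_muslims_cat, data = dat))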

---

NOTES

1. Jardina and Stephens-Dougan 2021 footnote 24 has an unexpected-to-me criticism of Michael Tesler's work.

We note that our findings with respect to 2012 are not consistent with Tesler (2016a), who finds that anti-Muslim attitudes were predictive of voting for Obama in 2012. Tesler, however, does not control for economic evaluations in his vote choice models, despite the fact that attitudes toward the economy are notoriously important predictors of presidential vote choice (Vavreck 2009)...

I don't think that a regression should include a predictor merely because the predictor is known to be a good predictor of the outcome, so it's not clear to me that Tesler or anyone else should include participant economic evaluations when predicting vote choice merely because participant economic evaluations predict vote choice.

It seems plausible that a nontrivial part of participant economic evaluations is downstream from attitudes about the candidates. Tesler's co-authored Identity Crisis book has a plot (p. 208) illustrating the flip-flop by Republicans and Democrats on views of the economy around November 2016, with a note that:

This is another reason to downplay the role of subjective economic dissatisfaction in the election: it was largely a consequence of partisan politics, not a cause of partisans' choices.

2. Jardina and Stephens-Dougan 2021 indicated that (p. 5):

The fact, however, that the effect size of anti-Muslim affect is often on par with the effect size of racial resentment is especially noteworthy, given that the construct is measured far less robustly than the multi-item measure of racial resentment.

The anti-Muslim affect measure is a reversed 0-to-100 feeling thermometer, which has 101 potential levels. Racial resentment is built from four items, with each item having five substantive options, which would permit the creation of a measure that has 17 substantive levels (a sum of four items each scored 1 through 5 runs from 4 to 20), not counting any intermediate levels that might occur for participants with missing data for some but not all of the four items.

I'm not sure why it's particularly noteworthy that the estimated effect for the 101-level measure is on par with the estimated effect for the 17-level measure. From what I can tell, these measures are not easily comparable, unless we know, for example, the percentage of participants that fell into the most extreme levels.

3. Jardina and Stephens-Dougan 2021 reviewed a lot of the research on the political implications of attitudes about Muslims, but did not mention Helbling and Traunmüller 2018, which, based on data from the UK, indicated that:

The results suggest that Muslim immigrants are not per se viewed more negatively than Christian immigrants. Instead, the study finds evidence that citizens' uneasiness with Muslim immigration is first and foremost the result of a rejection of fundamentalist forms of religiosity.

4. I have a prior post about selective reporting in the 2016 JOP article from Stephens-Dougan, the second author of Jardina and Stephens-Dougan 2021.

5. Quick report. Stata code. Stata output.


Forthcoming in the Journal of Politics is Peyton and Huber 2021 "Racial Resentment, Prejudice, and Discrimination". Study 1 estimated discrimination among White MTurk workers playing a game with a White proposer or a Black proposer. The abstract indicated that:

Study 1 used the Ultimatum Game (UG) to obtain a behavioral measure of racial discrimination and found whites engaged in anti-Black discrimination. Explicit prejudice explained which whites discriminated whereas resentment did not.

I didn't see an indication in the paper about a test for whether explicit prejudice predicted discrimination against Blacks better than racial resentment did. I think that the data had 173 workers coded non-White and 20 workers with missing data on the race variable, but Peyton and Huber 2021 reported results for only White workers, so I'll stick with that and limit my analysis to reflect their analysis in Table S1.1, which is labeled in their code as "main analysis".

My analysis indicated that the discrimination against Black proposers was 2.4 percentage points among White workers coded as prejudiced (p=0.004) and 1.3 percentage points among White workers coded as high in racial resentment (p=0.104), with a p-value of p=0.102 for a test of whether these estimates differ from each other.

---

The Peyton and Huber 2021 sorting into a prejudiced group or a not-prejudiced group based on responses to the stereotype scales permits assessment of whether the stereotype scales sorted workers by discrimination propensities. But I was also interested in the extent to which the measure of prejudice detected discrimination because the non-prejudiced comparison category included Whites who reported more negative stereotypes of Whites relative to Blacks, on net. My analysis indicated that the point estimate for discrimination was:

* 2.4 percentage points against Blacks (p=0.001), among White workers who rated Blacks more negatively than Whites on net on the stereotype scales,

* 0.9 percentage points against Blacks (p=0.173), among White workers who rated Blacks equal to Whites on net on the stereotype scales, and

* 1.8 percentage points in favor of Blacks (p=0.147), among White workers who rated Blacks more positively than Whites on net on the stereotype scales.

The p-value for the difference between the 2.4 percentage point estimate and the 0.9 percentage point estimate is p=0.106, and the p-value for the difference between the 0.9 percentage point estimate and the -1.8 percentage point estimate is also p=0.106.
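A minimal R sketch of that subgroup comparison, with hypothetical variable names (ug_outcome for the behavior that codes discrimination, black_proposer for assignment to a Black proposer, and stereo_net for the worker's net stereotype rating, coded as the rating of Blacks minus the rating of Whites):

# Sort White workers into three categories: rated Blacks below, equal to, or
# above Whites on net.
dat$stereo_cat <- cut(dat$stereo_net,
                      breaks = c(-Inf, -0.0001, 0.0001, Inf),
                      labels = c("rated Blacks below Whites",
                                 "rated Blacks equal to Whites",
                                 "rated Blacks above Whites"))

# Estimate discrimination separately within each subgroup.
by(dat, dat$stereo_cat, function(g)
  summary(lm(ug_outcome ~ black_proposer, data = g)))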

---

NOTES

1. I have blogged about measuring "prejudice". The Peyton and Huber 2021 definition of prejudice is not bad:

Prejudice is a negative evaluation of another person based on their group membership, whereas discrimination is a negative behavior toward that person (Dovidio and Gaertner, 1986).

But I don't think that this is how Peyton and Huber 2021 measured prejudice. I think that instead a worker was coded as prejudiced for reporting a more negative evaluation about Blacks relative to Whites, on net for the four traits that workers were asked about. That's a *relatively* more negative perception of a *group*, not a negative evaluation of an individual person based on their group.

2. Peyton and Huber 2021 used an interaction term to compare discrimination among White workers with high racial resentment to discrimination among residual White workers, and used an interaction term to compare discrimination among White workers explicitly prejudiced against Blacks relative to Whites to discrimination among residual White workers.

Line 77 of the Peyton and Huber code tests whether, in a model including both interaction terms for the "Table S1.1, main analysis" section, the estimated discrimination gap differed between the prejudice categories and the racial resentment categories. The p-value was p=0.0798 for that test.
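A minimal R sketch of that kind of test, with hypothetical 0/1 variable names (ug_outcome for the behavior, black_proposer for assignment to a Black proposer, and prejudiced and high_rr for the two worker classifications):

library(car)  # for linearHypothesis

# Model with both interaction terms, so that each interaction coefficient is the
# discrimination gap for that classification.
m <- lm(ug_outcome ~ black_proposer * prejudiced + black_proposer * high_rr, data = dat)

# Wald test of whether the two discrimination gaps differ from each other.
linearHypothesis(m, "black_proposer:prejudiced = black_proposer:high_rr")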

3. Data. Stata code for my analysis. Stata output for my analysis.


1.

Abrajano and Lajevardi 2021 "(Mis)Informed: What Americans Know About Social Groups and Why it Matters for Politics" reported (p. 34) that:

We find that White Americans, men, the racially resentful, Republicans, and those who turn to Fox and Breitbart for news strongly predict misinformation about [socially marginalized] social groups.

But their research design is biased toward many or all of these results, given their selection of items for their 14-item set of misinformation items. I'll focus below on left/right political bias, and then discuss apparent errors in the publication.

---

2.

Item #7 is a true/false item:

Most terrorist incidents on US soil have been conducted by Muslims.

This item will code as misinformed some participants who overestimate the percentage of U.S.-based terror attacks committed by Muslims, but won't code as misinformed any participants who underestimate that percentage.

It seems reasonable to me that persons on the political Left will be more likely than persons on the Right to underestimate the percentage of U.S.-based terror attacks committed by Muslims and that persons on the political Right will be more likely than persons on the Left to overestimate the percentage of U.S.-based terror attacks committed by Muslims, so I'll code this item as favoring the political Left.

---

Four items (#11 to #14) ask about Black/White differences in receipt of federal assistance, but phrased so that Whites are the "primary recipients" of food stamps, welfare, and social security.

But none of these items measured misinformation about receipt of federal assistance as a percentage. So participants who report that the *number* of Blacks who receive food stamps is higher than the number of Whites who receive food stamps get coded as misinformed. But participants who mistakenly think that the *percentage* of Whites who receive food stamps is higher than the percentage of Blacks who receive food stamps do not get coded as misinformed.

Table 2 of this U.S. government report indicates that, in 2018, non-Hispanic Whites were 67% of households, 45% of households receiving SNAP (food stamps), and 70% of households not receiving SNAP. Respective percentages for Blacks were 12%, 27%, and 11% and for Hispanics were 13.5%, 22%, and 12%. So, based on this, it's correct that Whites are the largest racial/ethnic group that receives food stamps on a total population basis...but it's also true that Whites are the largest racial/ethnic group that does NOT receive food stamps on a total population basis.
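A worked example in R of how both statements can hold at once, using the shares cited above; the 12% overall SNAP participation rate below is an assumed number for illustration, not a figure from the report:

snap_rate  <- 0.12   # assumed share of all households receiving SNAP (illustrative)
households <- 100    # any total works; only the shares matter

snap_hh     <- households * snap_rate
non_snap_hh <- households * (1 - snap_rate)

# Whites were 45% of SNAP households and 70% of non-SNAP households, so Whites are
# the largest group among recipients AND the largest group among non-recipients.
c(white_snap = 0.45 * snap_hh, white_non_snap = 0.70 * non_snap_hh)

# But the rate of receipt within each group can still be higher for Blacks: Blacks
# were 12% of all households and 27% of SNAP households, versus 67% and 45% for Whites.
c(white_rate = 0.45 * snap_rate / 0.67, black_rate = 0.27 * snap_rate / 0.12)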

It seems reasonable to me that the omission of percentage versions of these three public assistance items favors the political Left, in the sense that persons on the political Left are more likely to rate Blacks higher than Whites than are persons on the political Right, or, for that matter, Independents and moderates, so that these persons on the Left would presumably be more likely than persons on the Right to prefer (and thus guess) that Whites and not Blacks are the primary recipients of federal assistance. So, by my count, that's at least four items that favor the political Left.

---

As far as I can tell, Abrajano and Lajevardi 2021 didn't provide citations to justify their coding of correct responses. But it seems to me that such citations should be a basic requirement for research that codes responses as correct, except for obvious items such as, say, who the current Vice President is. A potential problem with this lack of citation is that some responses that Abrajano and Lajevardi 2021 coded as correct might not be truly correct, or at least might not be the only responses that should be coded as correct.

Abrajano and Lajevardi 2021 coded "Whites" as the only correct response for the "primary recipients" item about welfare, but this government document indicates that, for 2018, the distribution of TANF recipients was 37.8% Hispanic, 28.9% Black, 27.2% White, 2.1% multi-racial, 1.9% Asian, 1.5% AIAN, and 0.6% NHOPI.

And "about the same" is coded as the only correct response for the item about the "primary recipients" of public housing (item #14), but Table 14 of this CRS Report indicates that, in 2017, 33% of public housing had a non-Hispanic White head of household and 43% had a non-Hispanic Black head of household. This webpage permits searching for "public housing" for different years (screenshot below), which, for 2016, indicates percentages of 45% for non-Hispanic Blacks and 29% for non-Hispanic Whites.

Moreover, it seems suboptimal to have the imprecise "about the same" response be the only correct response. Unless outcomes for Blacks and Whites are exactly the same, presumably selection of one or the other group should count as the correct response.

---

Does a political bias in the Abrajano and Lajevardi 2021 research design matter? I think that the misinformation rates are close enough so that it matters: Figure A2 indicates that the Republican/Democrat misinformation gap is less than a point, with misinformed means of 6.51 for Republicans and 5.83 for Democrats.

Ironically, Abrajano and Lajevardi 2021 Table A1 indicates that their sample was 52% Democrat and 21% Republican, so -- on the "total" basis that Abrajano and Lajevardi 2021 used for the federal assistance items -- Democrats were the "primary" partisan source of misinformation about socially marginalized groups.

---

NOTES

1. Abrajano and Lajevardi 2021 (pp. 24-25) refers to a figure that isn't in the main text, and I'm not sure where it is:

When we compare the misinformation rates across the five social groups, a number of notable patterns emerge (see Figure 2)...At the same time, we recognize that the magnitude of difference between White and Asian American's [sic] average level of misinformation (3.4) is not considerably larger than it is for Blacks (3.2), nor for Muslim American respondents, who report the lowest levels of misinformation.

Table A5 in the appendix indicates that Blacks had a lower misinformation mean than Muslims did, 5.583 compared to 5.914, so I'm not sure what the aforementioned passage refers to. The passage phrasing refers to a "magnitude of difference", but 3.4 doesn't seem to refer to a social group gap or to an absolute score for any of the social groups.

2. Abrajano and Lajevardi 2021 footnote 13 is:

Recall that question #11 is actually four separate questions, which brings us to a total of thirteen questions that comprise this aggregate measure of political misinformation.

Question 11 being four separate questions means that there are 14 questions, and Abrajano and Lajevardi 2021 refers to "fourteen" questions elsewhere (pp. 6, 17).

Abrajano and Lajevardi 2021 indicated that "...we also observe about 11% of individuals who provided inaccurate answers to all or nearly all of the information questions" (p. 24, emphasis in the original), and it seems a bit misleading to italicize "all" if no one provided inaccurate responses to all 14 items.

3. Below, I'll discuss the full set of 14 "misinformation" items. Feel free to disagree with my count, but I would be interested in an argument that the 14 items do not on net bias results toward the Abrajano and Lajevardi 2021 claim that Republicans are more misinformed than Democrats about socially marginalized groups.

For the aforementioned items, I'm coding items #7 (Muslim terror %), #11 (food stamps), #12 (welfare), and #14 (public housing) as biased in favor of the political Left, because I think that these items are phrased so that the items will catch more misinformation among the political Right than among the political Left, even though the items could be phrased to catch more misinformation among the Left than among the Right.

I'm not sure about the item about social security (#13), so I won't code that item as politically biased. So by my count that's 4 in favor of the Left, plus 1 neutral.

Item #5 seems to be a good item, measuring whether participants know that Blacks and Latinos are more likely to live in regions with environmental problems. But it's worth noting that this item is phrased in terms of rates and not, as for the federal assistance items, as the total number of persons by racial/ethnic group. So by my count that's 4 in favor of the Left, plus 2 neutral.

Item #1 is about the number of undocumented immigrants in the United States. I won't code that item as politically biased. So by my count that's 4 in favor of the Left, plus 3 neutral.

The correct response for item #2 is that most immigrants in the United States are here legally. I'll code this item as favoring the political Left for the same reason as the Muslim terror % item: the item catches participants who overestimate the percentage of immigrants here illegally, but the item doesn't catch participants who underestimate that percentage, and I think these errors are more likely on the Right and Left, respectively. So by my count that's 5 in favor of the Left, plus 3 neutral.

Item #6 is about whether *all* (my emphasis) U.S. universities are legally permitted to consider race in admissions. It's not clear to me why it's more important that this item be about *all* U.S. universities instead of about *some* or *most* U.S. universities. I think that it's reasonable to suspect that persons on the political Right will overestimate the prevalence of affirmative action and that persons on the political Left will underestimate the prevalence of affirmative action, so by my count that's 6 in favor of the Left, plus 3 neutral.

I'm not sure that items #9 and #10 have much of a bias (number of Muslims in the United States, and the country that has the largest number of Muslims), other than to potentially favor Muslims, given that the items measure knowledge of neutral facts about Muslims. So by my count that's 6 in favor of the Left, plus 5 neutral.

I'm not sure what "social group" item #8, which asks whether Barack Obama was born in the United States, is supposed to be about. I'm guessing that a good percentage of "misinformed" responses for this item are insincere. Even if it were a good idea to measure insincere responses to test a hypothesis about misinformation, I'm not sure why it would be a good idea to not also include a corresponding item about a false claim that, like the Obama item, is known to be more likely to be accepted among the political Left, such as items about race and killings by police. So I'll up the count to 7 in favor of the Left, plus 5 neutral.

Item #4 might reasonably be described as favoring the political Right, in the sense that I think that persons on the Right would be more likely to prefer that Whites have a lower imprisonment rate than Blacks and Hispanics. But the item has this unusual element of precision ("six times", "more than twice") that isn't present in items about hazardous waste and about federal assistance, so that, even if persons on the Right stereotypically guess correctly that Blacks and Hispanics have higher imprisonment rates than Whites, these persons still might not be sure that the "six times" and "more than twice" are correct.

So even though I think that this item (#4) can reasonably be described as favoring the political Right, I'm not sure that it's as easy for the Right to use political preferences to correctly guess this item as it is for the Left to use political preferences to correctly guess the hazardous waste item and the federal assistance items. But I'll count this item as favoring the Right, so by my count that's 7 in favor of the Left, 1 in favor of the Right, plus 5 neutral.

Item #3 is about whether the U.S. Census Bureau projects ethnic and racial minorities to be a majority in the United States by 2042. I think that it's reasonable that a higher percentage of persons on the political Left than the political Right would prefer this projection to be true, but maybe fear that the projection is true might bias this item in favor of the Right. So let's be conservative and count this item as favoring the Right, so that my coding of the overall distribution for the 14 misinformation items is: seven items favoring the Left, two items favoring the Right, and five politically neutral items.

4. The ANES 2020 Time Series Study has similar biases in its set of misinformation items.
