The American Political Science Review published Bonilla and Tillery Jr 2020 "Which identity frames boost support for and mobilization in the #BlackLivesMatter movement? An experimental test".

---

The Bonilla and Tillery Jr 2020 Discussion and Conclusion indicates that:

Further studies should also focus on determining why African American women are mobilizing more than men in response to every frame that we exposed them to in our survey experiment.

But I don't think that the Bonilla and Tillery Jr 2020 data support the claim that every frame caused more mobilization among African American women than among African American men.

Bonilla and Tillery Jr 2020 has figures that measure support for Black Lives Matter and figures with outcomes about writing to Nancy Pelosi, but Bonilla and Tillery Jr 2020 also combines support and mobilization with the phrasing "mobilizing positive attitudes" (p. 959), so I wanted to check which outcome the above passage referred to. The response that I received suggested that the outcome was writing to Nancy Pelosi. But I don't see any basis for a claim about gender differences for each frame on that outcome in Bonilla and Tillery Jr 2020 Figure 4B, in the text of Bonilla and Tillery Jr 2020, or in my analysis.

---

Another passage from Bonilla and Tillery Jr 2020 (p. 958):

For those not identifying as LGBTQ+, we saw a stronger negative effect in asking for support as a result of the Feminist treatment than LGBTQ+ treatment (βFeminist = -0.08, p = 0.08; βLGBTQ+ = -0.04, p = 0.34).

My test of the null hypothesis that the -0.08 coefficient for the feminist treatment equals the -0.04 coefficient for the LGBTQ+ treatment returned p=0.45. There is thus not sufficient evidence in these data that these coefficients differ from each other, so it's not a good idea to claim that one treatment had a stronger effect than the other.
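For reference, here is a minimal Stata sketch of that kind of coefficient-equality test. The variable names (support for the BLM support outcome, treatNat/treatFem/treatLGBTQ for the frame indicators with the control condition omitted, and lgbtq for LGBTQ+ identification) are hypothetical placeholders rather than the names in the Bonilla and Tillery Jr 2020 replication files:

* Minimal sketch with hypothetical variable names: estimate the frame
* effects among respondents not identifying as LGBTQ+...
reg support treatNat treatFem treatLGBTQ if lgbtq == 0

* ...and run a Wald test of the null hypothesis that the Feminist
* coefficient equals the LGBTQ+ coefficient.
test treatFem = treatLGBTQ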

---

The lead author of Bonilla and Tillery Jr 2020 presented these data to Harvard's Women and Public Policy Program, noting at about 31:40 the evidence that the nationalist frame had a significant effect among women on the "mention police" outcome and noting at about 32:48 that "Black men in general...were much less likely than Black women to talk about the police in general". But my analysis indicated that p=0.35 for a test of the null hypothesis that the effect of the nationalist frame does not differ by gender for the "mention police" outcome.
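That sort of test can be run as a treatment-by-gender interaction. Below is a minimal Stata sketch, with hypothetical variable names (mentionpolice for the outcome, treatNat for the nationalist frame, and female for gender) rather than the names in the replication files:

* Minimal sketch with hypothetical variable names, assuming the sample is
* limited to the control and nationalist-frame conditions:
reg mentionpolice i.treatNat##i.female

* The p-value on the treatNat#female interaction term is the test of the
* null hypothesis that the nationalist-frame effect on the "mention
* police" outcome does not differ by gender.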

---

A similar problem, to which the interaction sketch above also applies, appears in this passage, with its suggestion that results are consistent with "a differential response to the Black feminist treatment by gender" (p. 954, footnote omitted and emphasis added):

For female respondents, we see nonsignificant (positive) effects of the Black nationalist (β = 0.03, p = 0.39) and Black LGBTQ+ treatments (β = 0.03, p = 0.30), and nonsignificant (negative) effects of the Black feminist treatment (β = -0.02, p = 0.46). In contrast, we found that male respondents were much more affected by the intersectional treatments...but both the Black feminist and Black LGBTQ+ treatments decreased Black male approval of BLM (βFeminist = -0.06, p = 0.07; βLGBTQ+ = -0.09, p = 0.008).

---

NOTES

1. Bonilla and Tillery Jr 2020 had a preregistration. Here is hypothesis 3 from the preregistration...

H3: LGBTQ and Intersectional frames of the BLM movement will have no effect (or a demobilizing effect) on the perceived effectiveness of BLM African American subjects.

...and from the article (emphasis added)...

H3: Black LGBTQ+ frames of the BLM movement will have a positive effect on Black LGBTQ+ members, but they will have no effect or a demobilizing effect on Black subjects who do not identify as LGBTQ+.

I don't think that this deviation was super important, but the difference makes me wonder whether the APSR peer reviewers and/or editors bothered to check the preregistration against the article. Even if this check was made, it would be nice if the journal signaled to readers that it was made.

2. Bonilla and Tillery Jr 2020 thanks the prior APSR editors:

Finally, we thank the three anonymous reviewers and the previous APSR editors, Professor Thomas Koenig and Professor Ken Benoit, for pushing us to significantly improve this paper through the review process.

It would also be nice for journal articles to indicate the editorial team responsible for the decision to publish the article and for checking the manuscript for errors and flaws.

3. I was curious to see what subsequent research has discussed about Bonilla and Tillery Jr 2020. Let's start with Anoll et al 2022 from Perspectives on Politics:

This could be of great political consequence considering the importance of parents as socializing agents (Jennings and Niemi 1974; Niemi and Jennings 1991; Oxley 2017) and the necessity of building multiracial support for BLM (Bonilla and Tillery 2020; Corral 2020; Holt and Sweitzer 2020; Merseth 2018).

I'm not sure what about Bonilla and Tillery Jr 2020 supports that citation about "the necessity of building multiracial support for BLM". Let's try Hswen et al 2021 from JAMA Network Open, which cited Bonilla and Tillery Jr 2020 as footnote 14:

Although often used following fatal encounters with law enforcement, #BlackLivesMatter also became an important tool to raise awareness around health inequities in Black communities, such as HIV, adequate access to analgesia, and cancer screening.13,14

I'm not sure what about Bonilla and Tillery Jr 2020 supports that citation about BLM being "an important tool to raise awareness around health inequities in Black communities".

From Jasny and Fisher 2022 from Social Networks (sic for "complimentary"):

Research has also shown that when multiple issues or beliefs are seen as complimentary, a process called "frame alignment," the connection can boost support for social movements and motivate participation (Bonilla and Tillery, 2020, Heaney, 2021; for an overview of "frame alignment," see Snow et al., 1986).

I'm not sure what in Bonilla and Tillery Jr 2020 the claim about support-boosting and/or participation-motivating frame alignment refers to. At least Heaney 2022 from Perspectives on Politics is on target with what Bonilla and Tillery Jr 2020 is about:

If BLM is able to convey an intersectional message effectively to its supporters, then this idea is likely to be widely discussed in movement circles and internalized by participants (Bonilla and Tillery 2020).

But Heaney 2022 doesn't tell readers what information Bonilla and Tillery Jr 2020 provided about intersectional messages. Next are the "charity" citations from Boudreau et al 2022 from Political Research Quarterly, in which Bonilla and Tillery Jr 2020 is absolutely unnecessary to support the claims that the citations are used for:

From the protests against police brutality in the 1960s and 70s to the emergence of the Black Lives Matter movement in recent years, there is a long history of police-inspired political mobilization (Laniyonu 2019; Bonilla and Tillery 2020)...

The symbolic appeal of a movement that served as a focal point and mobilizer of Americans' outrage was manifested in the Black Lives Matter signs posted in windows and scrawled on sidewalks and buildings across the country (Bonilla and Tillery 2020).

I'm not even sure that Bonilla and Tillery 2020 is a good citation for those passages.

And from Krewson et al 2022 from Political Research Quarterly (footnote text omitted):

In May of 2021, we obtained a sample of 2170 high quality respondents from Qualtrics, a widely respected survey firm (Bonilla and Tiller 2020; Friedman 2019; Kane et al. 2021).6

Ah, yes, Bonilla and Tiller [sic] 2020, which, what, provided evidence that Qualtrics is widely respected? That Qualtrics provides high quality respondents? Bonilla and Tillery Jr 2020 used Qualtrics, I guess. The omitted footnote text didn't seem relevant and seems to be incorrect, based on comparing the footnotes to the working paper and based on the content of the footnotes, with, for example, footnote 6 being about the ACBC design but the main text mention of the ACBC design linking to footnote 7.

Here is a prior post about mis-citations. Caveats from that post apply to the above discussion of citations to Bonilla and Tillery 2020: the discussion is not systematic or representative, which prevents any inference stronger than that Bonilla and Tillery 2020 has been miscited more often than it should be.


Political Research Quarterly published Garcia and Sadhwani 2022 "¿Quién importa? State legislators and their responsiveness to undocumented immigrants", about an experiment in which state legislators were sent messages, purportedly from a Latinx person such as Juana Martinez or from an Eastern European person such as Anastasia Popov, with message senders describing themselves as "residents", "citizens", or "undocumented immigrants".

I'm not sure of the extent to which response rates to the purported undocumented immigrants were due to state legislative offices suspecting that this was yet another audit study. Or maybe it's common for state legislators to receive messages from senders who invoke their undocumented status, as in this experiment ("As undocumented immigrants in your area we are hoping you can help us").

But that's not what this post is about.

---

1.

Garcia and Sadhwani 2022 Table 1 Model 2 reports estimates from a logit regression predicting whether a response was received from the state legislator, with predictors such as legislative professionalism. The coefficient was positive for legislative professionalism, indicating that, on average and other model variables held constant, legislators from states with higher levels of legislative professionalism were more likely to respond, compared to legislators from states with lower levels of legislative professionalism.

Another Model 2 predictor was "state", which had a coefficient of 0.007, a standard error of 0.002, and three statistical significance asterisks indicating that, on average and other model variables held constant -- what? -- legislators from states with more "state-ness" were more likely to respond? I'm pretty sure that this "state" predictor was coded with states later in the alphabet such as Wyoming assigned a higher number than states earlier in the alphabet such as Alabama. I don't think that makes any sense as a predictor of response rates, but the predictor was statistically significant, so that's interesting.

The "state" variable was presumably meant to be included as a categorical predictor, based on the Garcia and Sadhwani 2022 text (emphasis added):

For example, we include the Squire index for legislative professionalism (Squire 2007), the chamber in which the legislator serves, and a fixed effects variable for states.

I think this is something that a peer reviewer or editor should catch, especially because Garcia and Sadhwani 2022 doesn't report that many results in tables or figures.
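For what it's worth, here is a minimal Stata sketch of the difference, with hypothetical variable names (response, professionalism, state) rather than the names in the Garcia and Sadhwani 2022 replication files:

* With state coded 1 through 50 alphabetically, this treats "state-ness"
* as a continuous predictor, which appears to be what the Table 1 Model 2
* coefficient of 0.007 reflects:
logit response professionalism state

* Factor-variable notation instead produces the intended state fixed
* effects, with one indicator per state:
logit response professionalism i.state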

---

2.

Garcia and Sadhwani 2022 Table 1 Model 2 omits the sender category of undocumented Latinx, so that results for the five included sender categories can be interpreted relative to the omitted sender category of undocumented Latinx. So far so good.

But then Garcia and Sadhwani 2022 interprets the other predictors as applying to only the omitted sender category of undocumented Latinx, such as (sic for "respond do a request"):

To further examine the potential impact of sentiments toward immigrants and immigration at the state level, we included a variable ("2012 Romney states") to examine if legislators in states that went to Romney in the 2012 presidential election were less likely to respond do a request from an undocumented immigrant. We found no such relationship in the data.

This apparent misinterpretation appears in the abstract (emphasis added):

We found that legislators respond less to undocumented constituents regardless of their ethnicity and are more responsive to both the Latinx and Eastern European-origin citizen treatments, with Republicans being more biased in their responsiveness to undocumented residents.

I'm interpreting that emphasized part ("with Republicans being more biased in their responsiveness to undocumented residents") to mean that the Republican legislator gap in responsiveness to undocumented constituents compared to citizen constituents was larger than the corresponding non-Republican legislator gap. And I don't think that's correct based on the data for Garcia and Sadhwani 2022.

My analysis used an OLS regression to predict whether a legislator responded, with a single predictor, "undocCITIZ", coded 1 for undocumented senders and 0 for citizen senders. Coefficients were -0.07 among Republican legislators and -0.11 among non-Republican legislators, so the undocumented/citizen gap was not larger among Republican legislators than among non-Republican legislators. Percentage responses are in the table below, and a code sketch of the comparison follows the table:

SENDER         GOP NON-GOP 
Citizen EEurop 21  23
Citizen Latina 26  29
Control EEurop 25  33
Control Latina 18  20
Undocum EEurop 18  12
Undocum Latina 15  17
OVERALL        20  22
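Below is a minimal Stata sketch of the comparison described above, with hypothetical variable names (response for whether the legislator responded, undocCITIZ as described above, and gop coded 1 for Republican legislators):

* Split-sample sketch: the undocumented/citizen gap among Republican
* legislators and among non-Republican legislators.
reg response undocCITIZ if gop == 1
reg response undocCITIZ if gop == 0

* A single model with an interaction term directly tests whether the
* undocumented/citizen gap differs by legislator party.
reg response i.undocCITIZ##i.gop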

---

NOTE

1. No response yet to my Nov 17 tweet to a co-author of Garcia and Sadhwani 2022.


Political Research Quarterly published Huber and Gunderson 2022 "Putting a fresh face forward: Does the gender of a police chief affect public perceptions?". Huber and Gunderson 2022 reports on a survey experiment in which, for one of the manipulations, a police chief was described as female (Christine Carlson or Jada Washington) or male (Ethan Carlson or Kareem Washington).

---

Huber and Gunderson 2022 has a section called "Heterogeneous Responses to Treatment" that reports results from dividing the sample into "high sexism" respondents and "low sexism" respondents. For example, the mean overall support for the female police chief was 3.49 among "low sexism" respondents and 3.41 among "high sexism" respondents, with p=0.05 for the difference. Huber and Gunderson 2022 (p. 8) claims that [sic on the absence of a "to"]:

These results indicate that respondents' sexism significantly moderates their support for a female police chief and supports role congruity theory, as individuals that are more sexist should react more negatively [sic] violations of gender roles.

But, for all we know from the results reported in Huber and Gunderson 2022, "high sexism" respondents might merely rate police chiefs lower relative to how "low sexism" respondents rate police chiefs, regardless of the gender of the police chief.

Instead of the method in Huber and Gunderson 2022, a better method to test whether "individuals that are more sexist...react more negatively [to] violations of gender roles" is to estimate the effect of the male/female treatment on ratings about the police chief among "high sexism" respondents. And, to test whether "respondents' sexism significantly moderates their support for a female police chief", we can compare the results of that test to results from a corresponding test among "low sexism" respondents.
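Here is a minimal Stata sketch of that pair of tests, with hypothetical variable names (chiefsupport for the rating of the police chief, femalechief for the male/female treatment indicator, and highsexism for the sexism split) rather than the names in the Huber and Gunderson 2022 replication files:

* Effect of the female-chief treatment among "high sexism" respondents:
reg chiefsupport i.femalechief if highsexism == 1

* Corresponding estimate among "low sexism" respondents, for comparison:
reg chiefsupport i.femalechief if highsexism == 0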

---

Using the data and code for Huber and Gunderson 2022, I ran the code up to the section for Table 4, which is the table about sexism. I then ran my modified version of the Huber and Gunderson 2022 code for Table 4, first among respondents that Huber and Gunderson 2022 labeled "high sexism" (a score above 0.35 on the sexism measure) and then among respondents labeled "low sexism" (a score below 0.35).

Results are below, indicating a lack of p<0.05 evidence for a male/female treatment effect among these "high sexism" respondents, along with a p<0.05 pro-female bias among the "low sexism" respondents on all but one of the Table 4 items.

HIGH SEXISM RESPONDENTS (mean ratings)------------------
                     Female Male
                     Chief  Chief
Domestic Violence    3.23   3.16  p=0.16
Sexual Assault       3.20   3.16  p=0.45
Violent Crime Rate   3.20   3.23  p=0.45
Corruption           3.21   3.18  p=0.40
Police Brutality     3.17   3.17  p=0.94
Community Leaders    3.33   3.31  p=0.49
Police Chief Support 3.41   3.39  p=0.52

LOW SEXISM RESPONDENTS (mean ratings)------------------
                     Female Male
                     Chief  Chief
Domestic Violence    3.40   3.21  p<0.01
Sexual Assault       3.44   3.22  p<0.01
Violent Crime Rate   3.40   3.33  p=0.10
Corruption           3.21   3.07  p=0.01
Police Brutality     3.24   3.11  p=0.01
Community Leaders    3.40   3.32  p=0.02
Police Chief Support 3.49   3.37  p<0.01

---

There might be more of interest, such as calculating a p-value for the difference between the treatment effect among "low sexism" respondents and the treatment effect among "high sexism" respondents, and assessing whether there is stronger evidence of a treatment effect among respondents higher up the sexism scale than the 0.35 threshold used in Huber and Gunderson 2022.
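A minimal Stata sketch of those two follow-ups, again with hypothetical variable names (sexism for the 0-to-1 sexism score), is below:

* The interaction term provides a p-value for the difference between the
* treatment effect in the "high sexism" group and the treatment effect in
* the "low sexism" group:
reg chiefsupport i.femalechief##i.highsexism

* Interacting the treatment with the continuous sexism score, and then
* using margins, shows the estimated treatment effect at points higher up
* the sexism scale than the 0.35 threshold:
reg chiefsupport i.femalechief##c.sexism
margins, dydx(femalechief) at(sexism = (0.35 0.5 0.75 1))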

But I at least wanted to document another example of a pro-female bias among "low sexism" respondents.


In a prior post, I criticized the questionnaire for the ANES 2020 Time Series Study, so I want to use this post to praise the questionnaire for the ANES 2022 Pilot Study, plus add some other comments.

---

1. The pilot questionnaire has items that ask participants to rate men and women on 0-to-100 feeling thermometers, which will permit assessment of negative attitudes about women and about men, presuming that some of the planned 1500 respondents express such negative attitudes.

2. The pilot questionnaire has items in which response options permit underestimation of the frequency of certain types of vote fraud, with a "Never" option for items about how often in the respondent's state [1] a voter casts more than one ballot and [2] votes are cast on behalf of dead people. That happened at least once recently in Arizona (see also https://www.heritage.org/voterfraud), and I suspect that a "Never" response currently reflects a misperception that is more common on the political left.

But it doesn't seem like a good idea to phrase the vote fraud items in terms of the respondent's state, so that coding a response as a misperception requires checking evidence in 50 states. And I don't think there is an obvious threshold for overestimating how often, say, a voter casts more than one ballot. "Rarely" seems like an appropriate response for Arizona residents, but is "Occasionally" incorrect?

3. The pilot questionnaire has an item about the genuineness of emails on Hunter Biden's laptop in which Hunter Biden "contacted representatives of foreign governments about business deals". So I guess that can be a misinformation item that liberals are more likely to be misinformed about.

4. The pilot questionnaire has items about whether being White/Black/Hispanic/Asian "comes with advantages, disadvantages, or doesn't it matter". Based on the follow-up item, these items might not permit respondents to select both "advantages" and "disadvantages", and, if so, it might be better to differentiate respondents who think that, for instance, being White has only advantages from respondents who think that being White has on net more advantages than disadvantages.

5. The pilot questionnaire permits respondents to report the belief that Black and Hispanic Americans have lower socioeconomic status than White Americans because of biological differences, but respondents can't report the belief that particular less positive outcomes for White Americans relative to another group are due to biological differences (e.g., average White American K12 student math performance relative to average Asian American K12 student math performance).

---

Overall, the 2022 pilot seems like an improvement. For one thing, the pilot questionnaire, as is common for the ANES, has feeling thermometers about Whites, Blacks, Hispanics, and Asians, so that it's possible to construct a measure of negative attitudes about each included racial/ethnic group. And the feeling thermometers for men and women permit construction of a measure of negative attitudes about men and women. For another thing, respondents can report misperceptions that are presumably more common among persons on the political left. That's more than what is permitted by a lot of similar surveys.


Political Psychology recently published Chalmers et al 2022 "The rights of man: Libertarian concern for men's, but not women's, reproductive autonomy". The basis for this claim about libertarians' selective concern is indicated in the abstract as:

Libertarianism was associated with opposition to abortion rights and support for men's right both to prevent women from having abortions (male veto) and to withdraw financial support for a child when women refuse to terminate the pregnancy (financial abortion).

The above passage represents a flawed inferential method that I'll explain below.

---

The lead author of Chalmers et al 2022 quickly responded to my request about the availability of data, code, and codebooks, with replication materials now public at the OSF site. I'll use data from Study 2 and run a simple analysis to illustrate the inferential flaw.

The only predictor that I'll use is a 0-to-6 "Libert" variable that I renamed "Libertarianism" and recoded to range from 0 to 1 for responses to the item "To what extent would you describe your political persuasion as libertarian?", with 0 for "Not at all" and 1 for "Very much".

---

In the OLS linear regression below, the abSINGLE outcome variable has eight levels, from 0 for "Not at all" to 1 for "Very much", for an item about whether the respondent thinks that a pregnant woman should be able to obtain a legal abortion if she is single and does not want to marry the man.

The linear regression output below (N=575) indicates that, on average, respondent libertarianism is negatively correlated with support for permitting a woman to have an abortion if she is single and does not want to marry the man.

. reg abSINGLE Libertarianism
---------------------------------
      abSINGLE |  Coef.  p-value
---------------+-----------------
Libertarianism | -0.30   0.000 
     intercept |  0.89   0.000 
---------------------------------

In the OLS linear regression below, the maleVETO outcome variable has six levels, from 0 for "Strongly disagree" to 1 for "Strongly agree", for an item about whether the respondent thinks that a woman should not be allowed to have an abortion if the man involved really wants to keep his unborn child.

The linear regression output below (N=575) indicates that, on average, respondent libertarianism is positively correlated with support for prohibiting a woman from having an abortion if the man involved really wants to keep his unborn child.

. reg maleVETO Libertarianism
--------------------------------
      maleVETO |  Coef. p-value
---------------+----------------
Libertarianism |  0.26  0.000 
     intercept |  0.13  0.000 
--------------------------------

So what's the flaw in combining results from these two regressions to infer that libertarians have a concern for men's reproductive autonomy but not for women's reproductive autonomy?

---

The flaw is that the linear regressions above include data from non-libertarians, and patterns among non-libertarians might account for the change in the sign of the coefficient on Libertarianism.

Note, for example, that, based on the OLS regression output, the predicted support among respondents highest in libertarianism will be 0.89 + -0.30, or 0.59, for women's right to an abortion on the 0-to-1 abSINGLE item, but will be 0.13 + 0.26, or 0.39, for men's right to an abortion veto on the 0-to-1 maleVETO item.

But let's forget these linear regression results, because the appropriate method for assessing whether a group is inconsistent is to analyze data only from that group. So here are respective means, for respondents at 6 on the 0-to-6 "Libert" variable (N=18):

0.45 on abSINGLE

0.49 on maleVETO

And here are respective means, for respondents at 5 or 6 on the 0-to-6 "Libert" variable (N=46):

0.53 on abSINGLE

0.42 on maleVETO
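Those means can be reproduced with a couple of lines of Stata, assuming that the 0-to-6 "Libert" item keeps that name in the replication data and that abSINGLE and maleVETO are the 0-to-1 versions of the outcomes:

* Means among respondents at 6 on the 0-to-6 Libert item:
summarize abSINGLE maleVETO if Libert == 6

* Means among respondents at 5 or 6 on the 0-to-6 Libert item:
summarize abSINGLE maleVETO if inlist(Libert, 5, 6)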

I wouldn't suggest interpreting these results to mean that libertarians are on net consistent about women's reproductive autonomy and men's reproductive autonomy or, for that matter, that libertarians favor women's reproductive autonomy over men's. But I think that the analyses illustrate the flaw in making inferences about a group based on a linear regression involving people who aren't in that group.

The Stata log file has output of my analyses above and additional analyses, but Chalmers et al 2022 had two datasets and multiple measures for key items, so the analyses aren't exhaustive.


Politics & Gender published Deckman and Cassese 2021 "Gendered nationalism and the 2016 US presidential election", which, in 2022, shared an award for the best article published in Politics & Gender the prior year.

---

1.

So what is gendered nationalism? From Deckman and Cassese 2021 (p. 281):

Rather than focus on voters' sense of their own masculinity and femininity, we consider whether voters characterized American society as masculine or feminine and whether this macro-level gendering, or gendered nationalism as we call it, had political implications in the 2016 presidential election.

So how is this characterization of American society as masculine or feminine measured? The Deckman and Cassese 2021 online appendix indicates that gendered nationalism is...

Measured with a single survey item asking whether "Society as a whole has become too soft and feminine." Responses were provided on a four-point Likert scale ranging from strongly disagree to strongly agree.

So the measure of "whether voters characterized American society as masculine or feminine" (p. 281) ranged from the characterization that American society is (too) feminine to the characterization that American society is...not (too) feminine. The "(too)" is because I suspect that respondents might interpret the "too" in "too soft and feminine" as also applying to "feminine", but I'm not sure it matters much.

Regardless, there are at least three potential relevant characterizations: American society is feminine, masculine, or neither feminine nor masculine. It seems like a poor research design to combine two of these characterizations.

---

2.

Deckman and Cassese 2021 also described gendered nationalism as (p. 278):

Our project diverges from this work by focusing on beliefs about the gendered nature of American society as a whole—a sense of whether society is 'appropriately' masculine or has grown too soft and feminine.

But disagreement with the characterization that "Society as a whole has become too soft and feminine" doesn't necessarily indicate a characterization that society is "appropriately" masculine, because a respondent could believe that society is too masculine or that society is neither feminine nor masculine.

Omission of a response option indicating a belief that American society is (too) masculine might have made it easier for Deckman and Cassese 2021 to claim that "we suppose that those who rejected gendered nationalism were likely more inclined to vote for Hillary Clinton" (p. 282), as if only the measured "too soft and feminine" characterization is acceptance of "gendered nationalism" and not the unmeasured characterization that American society is (too) masculine.

---

3.

Regression results in Table 2 of Deckman and Cassese 2021 indicate that gendered nationalism predicts a vote for Trump over Clinton in 2016, net of controls for political party, a single measure of political ideology, and demographics such as class, race, and education.

Gendered nationalism is the only specific belief in the regression, and Deckman and Cassese 2021 reports no evidence about whether "beliefs about the gendered nature of American society as a whole" has any explanatory power above other beliefs about gender, such as gender roles and animus toward particular genders.

---

4.

Deckman and Cassese 2021 reported on four categories of class: lower class, working class, middle class, and upper class. Deckman and Cassese 2021 hypothesis H2 is that:

Gendered nationalism is more common among working-class men and women than among men and women with other socioeconomic class identifications.

For such situations, in which the hypothesis is that one of four categories is distinctive, the most straightforward approach is to omit from the regressions the hypothesized distinctive category, because then the p-values and coefficients for each of the three included categories will provide information about the evidence that that included category differs from the omitted category.
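In Stata, for example, the omitted category can be set with a base-level prefix. The sketch below uses hypothetical names (gennat for the gendered nationalism item, class coded 1 = lower, 2 = working, 3 = middle, 4 = upper, plus placeholder controls), not necessarily the coding in the Deckman and Cassese 2021 data:

* With working class (level 2) as the omitted category, each reported
* class coefficient is a comparison against working class, which is the
* comparison that H2 calls for:
ologit gennat ib2.class female republican ideology

* Alternatively, with middle class (level 3) as the omitted category, a
* direct test of working class versus lower class:
ologit gennat ib3.class female republican ideology
test 1.class = 2.class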

But the regressions in Deckman and Cassese 2021 omitted middle class, and, based on the middle model in Table 1, Deckman and Cassese 2021 concluded that:

Working-class Democrats were significantly more likely to agree that the United States has grown too soft and feminine, consistent with H2.

But the coefficients and standard errors were 0.57 and 0.26 for working class and 0.31 and 0.40 for lower class, so I'm not sure that the analysis in Table 1 contained enough evidence that the 0.57 estimate for working class differs from the 0.31 estimate for lower class.

---

5.

I think that Deckman and Cassese 2021 might have also misdescribed the class results in the Conclusions section, in the passage below, which doesn't seem limited to Democrat participants. From p. 295:

In particular, the finding that working-class voters held distinctive views on gendered nationalism is compelling given that many accounts of voting behavior in 2016 emphasized support for Donald Trump among the (white) working class.

For that "distinctive" claim, Deckman and Cassese 2021 seemed to reference differences in statistical significance (p. 289, footnote omitted):

The upper- and lower-class respondents did not differ from middle-class respondents in their endorsement of gendered nationalism beliefs. However, people who identified as working class were significantly more likely to agree that the United States has grown too soft and feminine, though the effect was marginally significant (p = .09) in a two-tailed test. This finding supports the idea that working-class voters hold a distinctive set of beliefs about gender and responded to the gender dynamics in the campaign with heightened support for Donald Trump’s candidacy, consistent with H2.

In the Table 1 baseline model predicting gendered nationalism without interactions, ologit coefficients are 0.25 for working class and 0.26 for lower class, so I'm not sure that there is sufficient evidence that working class views on gendered nationalism were distinctive from lower class views on gendered nationalism, even though the evidence is stronger that the 0.25 working class coefficient differs from zero than the 0.26 lower class coefficient differs from zero.

Looks like the survey's pre-election wave had at least twice as many working-class respondents as lower-class respondents. If that ratio was similar for the post-election wave, it would explain the difference in statistical significance and why the standard error was smaller for the working class (0.15) than for the lower class (0.23). To check, search for "class" at the PRRI site and use the PRRI/The Atlantic 2016 White Working Class Survey.

---

6.

At least Deckman and Cassese 2021 interpreted the positive coefficient on the interaction of college and Republican as an estimate of how the association of college and the outcome among Republicans differed from the association of college and the outcome among the omitted category.

But I'm not sure of the justification for "largely" in Deckman and Cassese 2021 (p. 293):

Thus, in accordance with our mediation hypothesis (H5), gender differences in beliefs that the United States has grown too soft and feminine largely account for the gender gap in support for Donald Trump in 2016.

Inclusion of the predictor for gendered nationalism pretty much only halves the logit coefficient for "female", from 0.80 to 0.42, and, in Figure 3, the gender gap in predicted probability of a Trump vote is pretty much only cut in half, too. I wouldn't call about half "largely", especially without addressing the obvious confound of attitudes about men and women that have nothing to do with "gendered nationalism".

---

7.

Deckman and Cassese 2021 was selected for a best article award by the editorial board of Politics & Gender. From my prior posts on publications in Politics & Gender: p < .000, misinterpreted interaction terms, and an example of a difference in statistical significance being used to infer a difference in effect.

---

NOTES

1. Prior post mentioning Deckman and Cassese 2021.

2. Prior post on deviations from a preregistration plan, for Cassese and Barnes 2017.

3. "Gendered nationalism" is an example of use of a general term when a better approach would be specificity, such as a measure that separates "masculine nationalism" from "feminine nationalism". Another example is racial resentment, in which a general term is used to describe only the type of racial resentment directed at Blacks. Feel free to read through participant comments in the Kam and Burge survey, in which plenty of comments from respondents who score low on the racial resentment scale indicate resentment directed at Whites.


The Journal of Social and Political Psychology recently published Young et al 2022 "'I feel it in my gut:' Epistemic motivations, political beliefs, and misperceptions of COVID-19 and the 2020 U.S. presidential election", which reported in its abstract that:

Results from a US national survey from Nov-Dec 2020 illustrate that Republicans, conservatives, and those favorable towards President Trump held greater misperceptions about COVID and the 2020 election.

Young et al 2022 contains two shortcomings common to too much social science: bias and error.

---

1.

In Young et al 2022, the selection of items measuring misperceptions is biased toward things that the political right is more likely than the political left to indicate a misperception about, so that the most that we can conclude from Young et al 2022 is that the political right more often reported misperceptions about things that the political right is more likely to report misperceptions about.

Young et al 2022 seems to acknowledge this research design flaw in the paragraph starting with:

Given the political valence of both COVID and election misinformation, these relationships might not apply to belief in liberal-serving misinformation.

But it's not clear to me why some misinformation about covid can't be liberal-serving. At least, there are misperceptions about covid that are presumably more common among the political left than among the political right.

For example, the eight-item Young et al 2022 covid misperceptions battery contains two items that permit respondents to underestimate the seriousness of covid-19: "Coronavirus (COVID-19 is a hoax" [sic for the unmatched parenthesis], and "The flu is more lethal than coronavirus (COVID-19)". But the battery doesn't contain corresponding items that permit respondents to overestimate the seriousness of covid-19.

Presumably, a higher percentage of the political left than the political right overestimated the seriousness of covid-19 at the time of the survey in late 2020, given that, in a different publication, a somewhat different Young et al team indicated that:

Results from a national survey of U.S. adults from Nov-Dec 2020 suggest that Trump favorability was...negatively associated with self-reported mask-wearing.

Another misperception measured in the survey is that "Asian American people are more likely to carry the virus than other people", which was not a true statement at the time. But, from what I can tell, at the time of the survey, covid rates in the United States were higher among Hispanics than among Whites, which presumably means that Hispanic Americans were more likely to carry the virus than White Americans. It's not clear to me why misinformation about the covid rate among Asians should be prioritized over misinformation about the covid rate among Hispanics, although, if someone wanted to bias the research design against the political right, that priority would make sense.

---

There is a similar flaw in the Young et al 2022 election 2020 misperceptions battery, which had an item that permits overestimation of detected voter fraud ("There was widespread voter fraud in the 2020 Presidential election"), but had no item that would permit underestimation of voter fraud in 2020 (e.g., "There was no voter fraud in the 2020 Presidential election"), which is the type of error that the political left would presumably be more likely to make.

For another example, Young et al 2022 had a reverse-coded misperceptions item for "We can never be sure that Biden's win was legitimate", but had no item about whether we can be sure that Trump's 2016 win was legitimate, which would be an obvious item to pair with the Biden item to assess whether the political right and the political left are equally misinformed or at least equally likely to give insincere responses to surveys that have items such as "The coronavirus (COVID-19) vaccine will be used to implant people with microchips".

---

So I think it's less, as Young et al 2022 suggested, that "COVID misinformation and election misinformation both served Republican political goals", and more that the selection of misinformation items in Young et al 2022 was biased toward a liberal-serving conclusion.

Of course, it's entirely possible that the political right is more misinformed than the political left in general or on selected topics. But it's not clear to me how Young et al 2022 can provide a valid inference about that.

---

2.

For error, Young et al 2022 Table 3 has an unstandardized coefficient for Black race, indicating that, in the age 50 and older group, being Black corresponded to higher levels of Republicanism. I'm guessing that this coefficient is missing a negative sign, given that there is a negative sign on the standardized coefficient... The Table 2 income predictor for the age 18-49 group has an unstandardized coefficient of .04 and a standard error of .01, but no statistical significance asterisk, and has a standardized coefficient of .00, which I think might be too low... And the appendix indicates that "The analysis yielded two factors with Eigenvalues < 1.", but I think that should be a greater-than symbol.

None of those potential errors are particularly important, except perhaps for inferences about phenomena such as the rigor of the peer and editorial review that Young et al 2022 went through.

---

NOTES

1. Footnotes 3 and 4 of Young et al 2022 indicate that:

Consistent with Vraga and Bode (2020), misperceptions were operationalized as COVID-related beliefs that contradicted the "best available evidence" and/or "expert consensus" at the time data were gathered.

If the purpose is to assess whether "I feel it in my gut" people are incorrect, then the perceptions should be shown to be incorrect and not merely in contradiction to expert consensus or, for that matter, in contradiction to the best available evidence.

2. The funding statement for Young et al 2022 indicates that the study was funded by the National Institute on Aging.

3. Prior posts on politically biased selection of misinformation items, in Abrajano and Lajevardi 2021 and in the American National Election Studies 2020 Time Series Study.

4. After I started drafting the above post, Social Science Quarterly published Benegal and Motta 2022 "Overconfident, resentful, and misinformed: How racial animus motivates confidence in false beliefs", which used the politically biased ANES misinformation items, in which, for example, respondents who agree that "World temperatures have not risen on average over the last 100 years" get coded as misinformed (an error presumably more common on the political right) but respondents who wildly overestimate the amount of climate change over the past 100 years don't get coded as misinformed (an error presumably more common on the political left).

5. I might be crazy, but I think that research about the correlates of misperceptions should identify respondents who have correct perceptions instead of merely identifying respondents who have particular misperceptions.

And I don't think that researchers should place particular misperceptions into the same category as the correct perception, such as by asking respondents merely whether world temperatures have risen on average over the last 100 years, any more than researchers should ask respondents merely whether world temperatures have risen on average by at least 3 degrees Celsius over the last 100 years, for which agreement would be the misperception.
