Homicide Studies recently published Schildkraut and Turanovic 2022 "A New Wave of Mass Shootings? Exploring the Potential Impact of COVID-19". From the abstract:

Results show that total, private, and public mass shootings increased following the declaration of COVID-19 as a national emergency in March of 2020.

I was curious how Schildkraut and Turanovic 2022 addressed the possible confound of the 25 May 2020 killing of George Floyd.

---

Below is my plot of data used in Schildkraut and Turanovic 2022, for total mass shootings:

My read of the plot is that, until after the killing of George Floyd, there is insufficient evidence that mass shootings were higher in 2020 than in 2019.

Table 1 of Schildkraut and Turanovic 2022 reports an interrupted time series analysis that does not address the killing of George Floyd, with a key estimate of 0.409 and a standard error of 0.072. Schildkraut and Turanovic 2022 reports a separate analysis about George Floyd...

However, since George Floyd's murder occurred after the onset of the COVID-19 declaration, we conducted ITSA using only the post-COVID time period (n = 53 weeks) and used the week of May 25, 2020 as the point of interruption in each time series. These results indicated that George Floyd's murder had no impact on changes in overall mass shootings (b = 0.354, 95% CI [−0.074, 0.781], p = .105) or private mass shootings (b = 0.125, 95% CI [−0.419, 0.669], p = .652), but that Floyd's murder was linked to increases in public mass shootings (b = 0.772, 95% CI [0.062, 1.483], p = .033).

...but Schildkraut and Turanovic 2022 does not report any attempt to assess whether there is sufficient evidence to attribute the increase in mass shootings to covid once the 0.354 estimate for Floyd is addressed. The lack of statistical significance for the 0.354 Floyd estimate can't be used to conclude "no impact", especially given that the analysis for the covid declaration had data for 52 weeks pre-declaration and 53 weeks post-declaration, but the analysis for Floyd had data for only 11 weeks pre-Floyd and 42 weeks post-Floyd.
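For a sense of what such an assessment could look like, here is a minimal sketch in R of a single interrupted time series regression that includes interruption terms for both the covid declaration and the Floyd killing, so that the covid terms are estimated while allowing a separate break at the Floyd week. This is not the Schildkraut and Turanovic 2022 specification: the data frame, variable names, week indexes, and Poisson specification are my assumptions.

# A minimal sketch, not the Schildkraut and Turanovic 2022 specification.
# Assumes a data frame 'shootings' with one row per week, a weekly count 'total',
# and a running week index 'week' in which the covid declaration falls in week 53
# and the Floyd killing falls in week 64 (indexes consistent with the pre/post
# week counts above, but hypothetical).
shootings$post_covid <- as.numeric(shootings$week >= 53)
shootings$post_floyd <- as.numeric(shootings$week >= 64)

# Level and slope changes at both breaks, so the covid terms are estimated net of
# a separate break at the Floyd week.
fit <- glm(total ~ week +
             post_covid + I(post_covid * (week - 53)) +
             post_floyd + I(post_floyd * (week - 64)),
           family = poisson, data = shootings)
summary(fit)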

Schildkraut and Turanovic 2022 also disaggregated mass shootings into public mass shootings and private mass shootings. Corresponding plots by me are below. It doesn't look like the red line for the covid declaration is the break point for the increase in 2020 relative to 2019.

Astral Codex Ten discussed methods used to try to disentangle the effect of covid from the effect of Floyd, such as using prior protests and other countries as reference points.

---

NOTES

1. In the Schildkraut and Turanovic 2022 data, week numbers did not cover the same dates across years: for example, 2019 Week 11 ran from March 11 to March 17, but 2020 Week 11 ran from March 9 to March 15.

2. The 13 March 2020 covid declaration occurred in the middle of Week 11, but the Floyd killing occurred at the start of Week 22, which ran from 25 May 2020 to 31 May 2020.

3. Data. R code for the "total" plot above.


Suppose that Bob at time 1 believes that Jewish people are better than every other group, but Bob at time 2 changes his belief to be that Jewish people are no better or worse than every other group, and Bob at time 3 changes his belief to be that Jewish people are worse than every other group.

Suppose also that these changes in Bob's belief about Jewish people have a causal effect on his vote choices. Bob at time 1 will vote 100% of the time for a Jewish candidate running against a non-Jewish candidate, no matter the relative qualifications of the candidates. At time 2, a candidate's Jewish identity is irrelevant to Bob's vote choice, so that, if given a choice between a Jewish candidate and an all-else-equal non-Jewish candidate, Bob will flip a coin and vote for the Jewish candidate only 50% of the time. Bob at time 3 will vote 0% of the time for a Jewish candidate running against a non-Jewish candidate, no matter the relative qualifications of the candidates.

Based on this setup, what is your estimate of the influence of antisemitism on Bob's voting decisions?

---

I think that the effect of antisemitism is properly understood as the effect of negative attitudes about Jewish people, so that the effect can be estimated in the above setup as the difference between Bob's voting decisions at time 2, when Bob is indifferent to a candidate's Jewish identity, and Bob's voting decisions at time 3, when Bob has negative attitudes about Jewish people. Thus, the effect of antisemitism on Bob's voting decisions is a 50 percentage point decrease, from 50% to 0%.

For the first decrease, from 100% to 50%, neither belief is antisemitic -- not the belief that Jewish people are better than every other group, and not the belief that Jewish people are no better or worse than every other group -- so none of this decrease should be attributed to antisemitism. Generally, I think that this means that respondents who have positive attitudes about a group should not be used to estimate the effect of negative attitudes about that group.

---

So let's discuss the Race and Social Problems article: Sharrow et al 2021 "What's in a Name? Symbolic Racism, Public Opinion, and the Controversy over the NFL's Washington Football Team Name". The key predictor was a measure of resentment against Native Americans, built from responses to the statements below, measured on a 5-point scale from "strongly agree" to "strongly disagree":

Most Native Americans work hard to make a living just like everyone else.

Most Native Americans take unfair advantage of privileges given to them by the government.

My analysis indicates that 39% of the 1500 participants (N=582) provided consistently positive responses about Native Americans on both items, agreeing or strongly agreeing with the first statement and disagreeing or strongly disagreeing with the second statement. I don't see why these 582 respondents should be included in an analysis that attempts to estimate the effect of negative attitudes about Native Americans, if these participants do not fall along the indifferent-to-negative-attitudes continuum about Native Americans.

So let's check what happens after removing these respondents from the analysis.

---

I first conducted an unweighted OLS regression using the full sample and controls to predict the summary Team Name Index outcome, which measured support for the Washington football team's name placed on a 0-to-1 scale. For this regression (N=1024), the measure of resentment against Native Americans ranged from 0 for respondents who selected the most positive responses to both resentment items to 1 for respondents who selected the most negative responses to both resentment items. In this regression, the coefficient was 0.26 (t=6.31) for resentment against Native Americans.

I then removed respondents who provided positive responses about Native Americans for both resentment items. For this next unweighted OLS regression (N=572), the measure of resentment against Native Americans still had a value of 1 for respondents who provided the most negative responses to both resentment items; however, 0 was now for participants who were neutral on one resentment item but provided the most positive response on the other resentment item, such as strongly agreeing that "Most Native Americans work hard to make a living just like everyone else" but neither agreeing nor disagreeing that "Most Native Americans take unfair advantage of privileges given to them by the government". In this regression, the coefficient was 0.12 (t=2.23) for resentment against Native Americans.

The drop is similar when the regressions include only the measure of resentment against Native Americans and no other predictors: the coefficient is 0.44 for the full sample, but is 0.22 after dropping respondents who provided positive responses about Native Americans for both resentment items.
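For readers who want the mechanics of the restriction and rescaling, here is a minimal sketch in R with hypothetical variable names rather than the Sharrow et al 2021 replication materials; the regressions with controls work the same way, with the controls added to the formulas.

# A minimal sketch with hypothetical variable names, not the Sharrow et al 2021 code.
# 'work_hard' and 'unfair_adv' are the two 5-point resentment items, coded 1 to 5 so
# that higher values are more negative about Native Americans; 'team_name_index' is
# the 0-to-1 outcome; 'dat' holds the survey data.

# Full-sample measure: 0 for the most positive responses to both items (sum of 2),
# 1 for the most negative responses to both items (sum of 10).
dat$resent_full <- (dat$work_hard + dat$unfair_adv - 2) / 8

# Drop respondents who provided positive responses on both items (below the
# midpoint of 3 on both items, in this coding).
sub <- subset(dat, !(work_hard < 3 & unfair_adv < 3))

# Restricted-sample measure: 0 for the most positive remaining combination (the most
# positive response on one item and neutral on the other, a sum of 4), 1 for the most
# negative responses to both items.
sub$resent_restricted <- (sub$work_hard + sub$unfair_adv - 4) / 6

# Bivariate versions of the regressions discussed above (controls omitted).
coef(lm(team_name_index ~ resent_full, data = dat))
coef(lm(team_name_index ~ resent_restricted, data = sub))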

---

So I think that Sharrow et al 2021 might report substantial overestimates of the effect of resentment of Native Americans, because the estimates in Sharrow et al 2021 about the effect of negative attitudes about Native Americans included the effect of positive attitudes about Native Americans.

---

NOTES

1. About 20% of the Sharrow et al 2021 sample reported a negative attitude on at least one of the two measures of resentment against Native Americans. About 6% of the sample reported a negative attitude on both measures of resentment against Native Americans.

2. Sharrow et al 2021 indicated that "Our conclusions illustrate that symbolic racism toward Native Americans is central to interpreting the public's resistance toward changing the name, in sharp contrast to Snyder's claim that the name is about 'respect.'" (p. 111).

For what it's worth, the Sharrow et al 2021 data indicate that a nontrivial percentage of respondents with positive views of Native Americans somewhat or strongly disagreed with the claim that the Washington football team name is offensive (in an item that reported the name of the team at the time): 47% of respondents who provided positive responses about Native Americans for both resentment items, 47% of respondents who rated Native Americans at 100 on a 0-to-100 feeling thermometer, 40% of respondents who provided positive responses about Native Americans for both resentment items and rated Native Americans at 100 on a 0-to-100 feeling thermometer, and 32% of respondents who provided the most positive responses about Native Americans for both resentment items and rated Native Americans at 100 on a 0-to-100 feeling thermometer (although this 32% was only 22% in unweighted analyses).

3. Sharrow et al 2021 indicated a module sample of 1,500, but the sample size fell to 1,024 in model 3 of Table 1. My analysis indicates that this is largely due to missing values on the outcome variable (N=1,362), the NFL sophistication index (N=1,364), and the measure of resentment of Native Americans (N=1,329).

4. Data for my analysis. Stata code and output.

5. Social Science Quarterly recently published Levin et al 2022 "Validating and testing a measure of anti-semitism on support for QAnon and vote intention for Trump in 2020", which also estimates the effect of negative attitudes about a target group without excluding participants who favor the target group.


1.

Politics, Groups, and Identities recently published Cravens 2022 "Christian nationalism: A stained-glass ceiling for LGBT candidates?". The key predictor is a Christian nationalism index that ranges from 0 to 1, with a key result that:

In both cases, a one-point increase in the Christian nationalism index is associated with about a 40 percent decrease in support for both lesbian/gay and transgender candidates in this study.

But the 40 percent estimates are based on Christian nationalism coefficients in models in which Christian nationalism is interacted with partisanship, race, and religion, and I don't think that these coefficients can be interpreted as associations across the sample. The estimates across the sample should come from models in which Christian nationalism is not included in an interaction: -0.167 for lesbian and gay political candidates and -0.216 for transgender political candidates. So about half of 40 percent.

Check Cravens 2022 Figure 2, which reports results for support for lesbian and gay candidates: eyeballing from the figure, the drop across the range of Christian nationalism is about 14 percent for Whites, about 18 percent for Blacks, about 9 percent for AAPI, and about 15 percent for persons of another race. No matter how you weight these four categories, the weighted average doesn't get close to 40 percent.

---

2.

And I think that the constitutive terms in the interactions are not always correctly described, either. From Cravens 2022:

As the figure shows, Christian nationalism is negatively associated with support for lesbian and gay candidates across all partisan identities in the sample. Christian nationalist Democrats and Independents are more supportive than Christian nationalist Republicans by about 23 and 17 percent, respectively, but the effects of Christian nationalism on support for lesbian and gay candidates are statistically indistinguishable between Republicans and third-party identifiers.

Table 2 coefficients are 0.231 for Democrats and 0.170 for Independents, with Republicans as the omitted category and with these partisan predictors interacted with Christian nationalism. But I don't think that these coefficients indicate the difference between Christian nationalist Democrats/Independents and Christian nationalist Republicans: when a partisan indicator is interacted with Christian nationalism, the coefficient on the indicator is the estimated gap relative to Republicans at a Christian nationalism index of zero, and the gap at the top of the index is that coefficient plus the corresponding interaction coefficient. In Figure 1, Christian nationalist Democrats are at about 0.90 and Christian nationalist Republicans are at about 0.74, which is less than a 0.231 gap.

---

3.

From Cravens 2022:

Christian nationalism is associated with opposition to LGBT candidates even among the most politically supportive groups (i.e., Democrats).

For support for lesbian and gay candidates and support for transgender candidates, the Democrat predictor interacted with Christian nationalism has a p-value less than p=0.05. But that doesn't indicate whether there is sufficient evidence that the slope for Christian nationalism is non-zero among Democrats. In Figure 1, for example, the point estimate for Democrats at the lowest level of Christian nationalism looks to be within the 95% confidence interval for Democrats at the highest level of Christian nationalism.
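The quantity that speaks to this is the Democrat-specific slope: the Christian nationalism coefficient plus the Christian nationalism-by-Democrat interaction coefficient, with a standard error built from the variance of that sum. Here is a minimal sketch in R with hypothetical variable names ('support' for the outcome, 'cn' for the 0-to-1 Christian nationalism index, and 'dem', 'ind', and 'other' as 0/1 party indicators with Republican omitted); Stata's lincom command does the same thing after the regression.

# A minimal sketch with hypothetical variable names, not the Cravens 2022 code.
fit <- lm(support ~ cn * dem + cn * ind + cn * other, data = dat)
b <- coef(fit)
V <- vcov(fit)

# Christian nationalism slope among Democrats and its standard error.
slope_dem <- b["cn"] + b["cn:dem"]
se_dem <- sqrt(V["cn", "cn"] + V["cn:dem", "cn:dem"] + 2 * V["cn", "cn:dem"])
slope_dem / se_dem  # t-statistic for whether the slope is zero among Democrats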

---

4.

From Cravens 2022:

In other words, a one-point increase in the Christian nationalism index is associated with a 40 percent decrease in support for lesbian and gay candidates. For comparison, an ideologically very progressive respondent is only about four percent more likely to support a lesbian or gay candidate than an ideologically moderate respondent; while, a one-unit increase in church attendance is only associated with a one percent decrease in support for lesbian and gay candidates. Compared to every other measure, Christian nationalism is associated with the largest and most negative change in support for lesbian and gay candidates.

The Christian nationalism index ranges from 0 to 1, so the one-point increase discussed in the passage is the full estimated effect of Christian nationalism. The church attendance predictor runs from 0 to 6, so the one-unit increase in church attendance discussed in the passage is one-sixth the estimated effect of church attendance. The estimated effect of Christian nationalism is still larger than the estimated effect of church attendance when both predictors are put on a 0-to-1 scale, but I don't know of a good reason to compare a one-unit increase on the 0-to-1 Christian nationalism predictor to a one-unit increase on the 0-to-6 church attendance predictor.

The other problem is that the Christian nationalism index combines three five-point items, so it might be a better measure of Christian nationalism than, say, the progressive predictor is a measure of political ideology. This matters because, all else equal, poorer measures of a concept are biased toward zero. Or maybe the ends of the Christian nationalism index represent more distance than the ends of the political ideology measure. Or maybe not. But I think that it's a good idea to discuss these concerns when comparing predictors to each other.

---

5.

Returning to the estimates for Christian nationalism, I'm not even sure that -0.167 for lesbian and gay political candidates and -0.216 for transgender political candidates are good estimates. For one thing, these estimates are extrapolations from linear regression lines, instead of comparisons of observed outcomes at low and high levels of Christian nationalism. For each Christian nationalist statement, the majority of the sample falls on the side of the item opposing the statement, so the estimated effect of Christian nationalism might be more influenced by opponents of Christian nationalism than by supporters of Christian nationalism, and it's not clear that the linear regression line correctly estimates the outcome at high levels of Christian nationalism.

For another thing, I think that the effect of Christian nationalism should be conceptualized as being caused by a change from indifference to Christian nationalism to support for Christian nationalism, which means that including observations from opponents of Christian nationalism might bias the estimated effect of Christian nationalism.

For an analogy, imagine that we are interested in the effect of being a fan of the Beatles. I think that it would be preferable to compare, net of controls, outcomes for fans of the Beatles to outcomes for people indifferent to the Beatles, instead of comparing, net of controls, outcomes for fans of the Beatles to outcomes for people who hate the Beatles. The fan/hate comparison means that the estimated effect of being a fan of the Beatles is *necessarily* the exact same size as the estimated effect of hating the Beatles, but I think that these are different phenomena. Similarly, I think that supporting Christian nationalism is a different phenomenon than opposing Christian nationalism.

---

NOTES

1. Cravens 2022 model 2 regressions in Tables 2 and 3 include controls plus a predictor for Christian nationalism, three partisanship categories plus Republican as the omitted category, three categories of race plus White as the omitted category, and five categories of religion plus Protestant as the omitted category, and interactions of Christian nationalism with the three included partisanship categories, interactions of Christian nationalism with the three included race categories, and interactions of Christian nationalism with the five included religion categories.

It might be tempting to interpret the Christian nationalism coefficient in these regressions as indicating the association of Christian nationalism with the outcome net of controls among the omitted interactions category of White Protestant Republicans, but I don't think that's correct because of the absence of higher-order interactions. Let me discuss a simplified simulation to illustrate this.

The simulation had participants that were either male (male=1) or female (male=0) and participants that were either Republican (gop=1) or Democrat (gop=0). In the simulation, I set the association of a predictor X with the outcome Y to be -1 among female Democrats, to be -3 among male Democrats, to be -6 among female Republicans, and to be -20 among male Republicans. So the association of X with the outcome was negative for all four combinations of gender and partisanship. But the coefficient on X was +2 in a linear regression with predictors only for X, the gender predictor, the partisanship predictor, an interaction of X and the gender predictor, and an interaction of X and the partisanship predictor.
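Here is a minimal sketch of such a simulation in R, assuming equal-sized groups and an X distribution that does not differ across groups (full Stata and R code is linked below):

# A minimal sketch of the simulation described above.
set.seed(123)
n <- 4000
male <- rep(c(0, 1), each = n / 2)
gop  <- rep(c(0, 1), times = n / 2)
x <- runif(n)

# Group-specific slopes: -1 (female Dem), -3 (male Dem), -6 (female Rep), -20 (male Rep).
slope <- ifelse(male == 0 & gop == 0, -1,
         ifelse(male == 1 & gop == 0, -3,
         ifelse(male == 0 & gop == 1, -6, -20)))
y <- slope * x + rnorm(n, sd = 0.1)

# Two-way interactions of x with male and with gop only: the x coefficient comes
# out near +2 even though the x-y association is negative in every group.
coef(lm(y ~ x + male + gop + x:male + x:gop))["x"]

# The full set of interactions recovers the female-Democrat slope of -1.
coef(lm(y ~ x * male * gop))["x"]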

Code for the simulation, in Stata and in R.

2. Cravens 2022 indicated about Table 2 that "Model 2 is estimated with three interaction terms". But I'm not sure that's correct, given the interaction coefficients in the table, and given that the Figure 1 slopes for Republican, Democrat, Independent, and Something Else are all negative and differ from each other and that the Other Christian slope in Figure 3 is positive, which presumably means that there were more than three interaction terms.

3. Appendix C has data that I suspect is incorrectly labeled: 98 percent of atheists agreed or strongly agreed that "The federal government should declare the United States a Christian nation", 94 percent of atheists agreed or strongly agreed that "The federal government should advocate Christian values", and 94 percent of atheists agreed or strongly agreed that "The success of the United States is part of God's plan".

4. I guess that it's not an error per se, but Appendix 2 reports means and standard deviations for nominal variables such as race and party identification, even though these means and standard deviations depend on how the nominal categories are numbered. For example, party identification has a standard deviation of 0.781 when coded from 1 to 4 for Republican, Democrat, Independent, and Other, but the standard deviation would presumably change if the numbers were swapped for Democrat and Republican, and, as far as I can tell, there is no reason to prefer the order of Republican, Democrat, Independent, and Other.


I posted earlier about Filindra et al 2022 "Beyond Performance: Racial Prejudice and Whites' Mistrust of Government". This post discusses part of the code for Filindra et al 2022.

---

Tables in Filindra et al 2022 have a pair of variables called "conservatism (ideology)" and "conservatism not known" and a pair of variables called "income" and "income not known". As an example of what the "not known" variables do: if a respondent in the 2016 data did not provide a substantive response to the ideology item, Filindra et al 2022 coded that respondent as 1 on the dichotomous 0-or-1 "conservatism not known" variable and imputed a value of zero for the seven-level "conservatism (ideology)" variable, with zero indicating "extremely liberal".

I don't recall seeing that method before, so I figured I would post about it. I reproduced the Filindra et al. 2022 Table 1 results for the 2016 data and then changed the imputed value for "conservatism (ideology)" from 0 (extremely liberal) to 1 (extremely conservative). That changed the coefficient and t-statistic for the "conservatism not known" predictor but not the coefficient or t-statistic for the "conservatism (ideology)" predictor or for any other predictor (log of the Stata output).
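Here is why only the "not known" coefficient moves: the imputed ideology column under one choice of imputed value differs from the column under another choice only by a multiple of the "not known" indicator, so the two regressions span the same columns and produce identical coefficients for everything except the "not known" predictor. A minimal sketch with simulated data, not the Filindra et al 2022 variables:

# A minimal sketch with simulated data, not the Filindra et al 2022 variables.
set.seed(42)
n <- 1000
ideology <- sample(0:6, n, replace = TRUE) / 6   # seven-level item rescaled 0-to-1
trust <- 2 - 0.3 * ideology + rnorm(n)           # made-up outcome
ideology[sample(n, 100)] <- NA                   # nonresponse on the ideology item

not_known <- as.numeric(is.na(ideology))
ideo_imp0 <- ifelse(not_known == 1, 0, ideology) # impute "extremely liberal"
ideo_imp1 <- ifelse(not_known == 1, 1, ideology) # impute "extremely conservative"

# The ideology coefficient is identical across the two imputation choices;
# only the 'not known' coefficient changes.
coef(lm(trust ~ ideo_imp0 + not_known))
coef(lm(trust ~ ideo_imp1 + not_known))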

---

I think that it might have been from Schaffner et al 2018 that I picked up the use of categories as a way to not lose observations from an analysis merely because the observation has a missing value for a predictor. For example, if a respondent doesn't indicate their income, then income can be coded as a series of categories with non-response as a category (such as income $20,000 or lower; income $20,001 to $40,000; ...; income $200,001 and higher; and income missing). Thus, in a regression with this categorical predictor for income, observations are not lost merely because of not having a substantive value for income. Another nice feature of this categorical approach is permitting nonuniform associations, in which, for example, the association of income might level off at higher categories.
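A minimal sketch of that categorical approach, with simulated data and hypothetical income brackets:

# A minimal sketch with simulated data and hypothetical income brackets.
set.seed(7)
n <- 1000
income <- sample(c(15, 30, 80, 250, NA), n, replace = TRUE)  # income in $1,000s, with nonresponse
y <- rnorm(n)                                                # made-up outcome

income_cat <- cut(income, breaks = c(0, 20, 40, 200, Inf),
                  labels = c("<=20k", "20k-40k", "40k-200k", ">200k"))
income_cat <- factor(ifelse(is.na(income_cat), "missing", as.character(income_cat)),
                     levels = c("<=20k", "20k-40k", "40k-200k", ">200k", "missing"))

# No observation is dropped merely for missing income, and each bracket gets its
# own coefficient, which permits nonuniform associations across brackets.
coef(lm(y ~ income_cat))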

But dealing with missing values on a control by using categorical predictors can produce long regression output, with, for example, fifteen categories of income, eight categories of ideology, ten categories of age, etc. The Filindra et al 2022 method seems like a reasonable shortcut, as long as it's understood that results for the "not known" predictors depend on the choice of imputed value. But these "not known" predictors aren't common in the research that I read, so maybe there is another flaw in that method that I'm not aware of.

---

NOTE

1. I needed to edit line 1977 in the Filindra et al 2022 code to:

recode V162345 V162346 V162347 V162348 V162349 V162350 V162351 V162352 (-9/-5=.)


Broockman 2013 "Black politicians are more intrinsically motivated to advance Blacks' interests: A field experiment manipulating political incentives" reported results from an experiment in which U.S. state legislators were sent an email from "Tyrone Washington", which is a name that suggests that the email sender is Black. The experimental manipulation was that "Tyrone" indicated that the city that he lived in was a city in the legislator's district or was a well-known city far from the legislator's district.

Based on Table 2 column 2, response percentages were:

  • 56.1% from in-district non-Black legislators
  • 46.4% from in-district Black legislators (= 0.561 - 0.097)
  • 28.6% from out-of-district non-Black legislators (= 0.561 - 0.275)
  • 41.4% from out-of-district Black legislators (= 0.561 - 0.275 + 0.128)

---

Broockman 2013 lacked another emailer to serve as a comparison for response rates to Tyrone, such as an emailer with a stereotypically White name. Broockman 2013 discusses this:

One challenge in designing the experiment was that there were so few black legislators in the United States (as of November 2010) that a set of white letter placebo conditions could not be implemented due to a lack of adequate sample size.

So all emails in the Broockman 2013 experiment were signed "Tyrone Washington".

---

But here is how Broockman 2013 was cited by Rhinehar 2020 in American Politics Research:

A majority of this work has explored legislator responsiveness by varying the race or ethnicity of the email sender (Broockman, 2013;...

---

Costa 2017 in the Journal of Experimental Political Science:

As for variables that do have a statistically significant effect, minority constituents are almost 10 percentage points less likely to receive a response than non-minority constituents (p < 0.05). This is consistent with many individual studies that have shown requests from racial and ethnic minorities are given less attention overall, and particularly when the recipient official does not share their race (Broockman, 2013;...

But Broockman 2013 didn't vary the race of the requester, so I'm not sure of the basis for the suggestion that Broockman 2013 provided evidence that requests from racial and ethnic minorities are given less attention overall.

---

Mendez and Grose 2018 in Legislative Studies Quarterly:

Others argue or show, through experimental audit studies, that political elites have biases toward minority constituents when engaging in nonpolicy representation (e.g.,Broockman 2013...

I'm not sure how Broockman 2013 permits an inference of political elite bias toward minority constituents, when the only constituent was Tyrone.

---

Lajevardi 2018 in Politics, Groups, and Identities:

Audit studies have previously found that public officials are racially biased in whether and how they respond to constituent communications (e.g., Butler and Broockman 2011; Butler, Karpowitz, and Pope 2012; Broockman 2013;...

---

Dinesen et al 2021 in the American Political Science Review:

In the absence of any extrinsic motivations, legislators still favor in-group constituents (Broockman 2013), thereby indicating a role for intrinsic motivations in unequal responsiveness.

Again, Tyrone was the only constituent in Broockman 2013.

---

Hemker and Rink 2017 in the American Journal of Political Science:

White officials in both the United States and South Africa are more likely to respond to requests from putative whites, whereas black politicians favor putative blacks (Broockman 2013, ...

---

McClendon 2016 in the Journal of Experimental Political Science:

Politicians may seek to favor members of their own group and to discriminate against members of out-groups (Broockman, 2013...

---

Gell-Redman et al 2018 in American Politics Research:

Studies that explore other means of citizen and legislator interaction have found more consistent evidence of bias against minority constituents. Notably, Broockman (2013) finds that white legislators are significantly less likely to respond to black constituents when the political benefits of doing so were diminished.

But the only constituent was Tyrone, so you can't properly infer bias against Tyrone or minority constituents more generally, because the experiment didn't indicate whether the out-of-district drop-off for Tyrone differed from the out-of-district drop-off for a putative non-Black emailer.

---

Broockman 2014 in the American Journal of Political Science:

Outright racial favoritism among politicians themselves is no doubt real (e.g., Broockman 2013b;...

But who was Tyrone favored more than or less than?

---

Driscoll et al 2018 in the American Journal of Political Science:

Broockman (2013) finds that African American state legislators expend more effort to improve the welfare of black voters than white state legislators, irrespective of whether said voters reside in their districts.

Even ignoring the added description of the emailer as a "voter", response rates to Tyrone were not "irrespective" of district residence. Broockman 2013 even plotted data for the matched case analysis, in which the bar for in-district Black legislators was not longer than the bar for in-district non-Black legislators:

---

Shoub et al 2020 in the Journal of Race, Ethnicity, and Politics:

Black politicians are more likely to listen and respond to black constituents (Broockman 2013),...

The prior context in Shoub et al 2020 suggests that the "more likely" comparison is to non-Black politicians, but this description loses the complication in which Black legislators were not more likely than non-Black legislators to respond to in-district Tyrone, which is especially important if we reasonably assume that in-district Tyrone was perceived to be a constituent and out-of-district Tyrone wasn't. Same problem with Christiani et al 2021 in Politics, Groups, and Identities:

Black politicians are more likely to listen and respond to black constituents than white politicians (Broockman 2013)...

The similar phrasing for the above two passages might be due to the publications having the same group of authors: Shoub, Epp, Baumgartner, Christiani, and Roach; and Christiani, Shoub, Baumgartner, Epp, and Roach.

---

Gleason and Stout 2014 in the Journal of Black Studies:

Recent experimental studies conducted by Butler and Broockman (2011) and Broockman (2013) confirm these findings. These studies show that Black elected officials are more likely to help co-racial constituents in and outside of their districts gain access to the ballot more than White elected officials.

This passage, from what I can tell, describes both citations incorrectly. In Broockman 2013, Tyrone was asking for help getting unemployment benefits, not help gaining access to the ballot, and I'm not sure what the basis is for the "in...their districts" claim: in-district response rates were 56.1% from non-Black legislators and 46.4% from Black legislators. The Butler and Broockman 2011 appendix reports results such as DeShawn receiving responses from 41.9%, 22.4%, and 44.0% of Black Democrat legislators when DeShawn respectively asked about a primary, a Republican primary, and a Democratic primary and, respectively, from 54.3%, 56.1%, and 62.1% of White Democrat legislators.

But checking citations to Butler and Broockman 2011 would be another post.

---

NOTES

1. The above isn't a systematic analysis of citations of Broockman 2013, so no strong inferences should be made about the percentage of times Broockman 2013 was cited incorrectly, other than maybe too often, especially in these journals.

2. I think that, for the Broockman 2013 experiment, a different email could have been sent from a putative White person, without sample size concerns. Imagine that "Billy Bob" emailed each legislator asking for help with, say, welfare benefits. If, like with Tyrone, Black legislator response rates were similar for in-district Billy Bob and for out-of-district Billy Bob, that would provide a strong signal to not attribute the similar rates to an intrinsic motivation to advance Blacks' interests. But if the out-of-district drop off in Black legislator response rates was much larger for Billy Bob than for Tyrone, that would provide a strong signal to attribute the similar Black legislator response rates for in-district Tyrone and out-of-district Tyrone to an intrinsic motivation to advance Blacks' interests.

3. I think that the error bars in Figure 1 above might be 50% confidence intervals, given that the error bars seem to match the Stata command "reg code_some treat_out treatXblack leg_black [iweight=cem_weights], level(50)" that I ran on the Broockman 2013 data after line 17 in the Stata do file.

4. I shared this post with David Broockman, who provided the following comments:

Hi LJ,

I think you're right that some of these citations are describing my paper incorrectly and probably meant to cite my 2011 paper with Butler. (FWIW, in that study, we find legislators of all races seem to just discriminate in favor of their race, across both parties, so some of the citations don't really capture that either....)

The experiment would definitely be better with a white control, there was just a bias-variance trade-off here -- adding a putative race of constituent factor in the experiment would mean less bias but more variance. I did the power calculations and didn't think the experiment would be well-powered enough if I made the cells that small and were looking for a triple interaction between legislator race X letter writer putative race X in vs. out of district. In the paper I discuss a few alternative explanations that the lack of a white letter introduces and do some tests for them (see the 3 or 4 paragraphs starting with "One challenge..."). Essentially, I didn't see any reason why we should expect black legislators to just be generically less sensitive to whether a person is in their district, especially given in our previous paper we found they reacted pretty strongly to the race of the email sender (so it's not like the black legislators who do respond to emails just don't read emails carefully). Still, I definitely still agree with what I wrote then that this is a weakness of the study. It would be nice for someone to replicate this study, and I like the idea you have in footnote 2 for doing this. Someone should do that study!


Political Behavior recently published Filindra et al 2022 "Beyond Performance: Racial Prejudice and Whites' Mistrust of Government". Hypothesis 1 is the expectation that "...racial prejudice (anti-Black stereotypes) is a negative and significant predictor of trust in government".

Filindra et al 2022 limits the analysis to White respondents and measures anti-Black stereotypes by combining responses to available items in which respondents rate Blacks on seven-point scales, ranging from hardworking to lazy, and/or from peaceful to violent, and/or from intelligent to unintelligent. The data include items about how respondents rate Whites on these scales, but Filindra et al 2022 didn't use these responses to measure anti-Black stereotyping.

But information about how respondents rate Whites is useful for measuring anti-Black stereotyping. For example, a respondent who rates all racial groups at the midpoint of a stereotype scale hasn't indicated an anti-Black stereotype; this respondent's rating about Blacks doesn't differ from the respondent's rating about other racial groups, and it's not clear to me why rating Blacks equal to all other racial groups would be a moderate amount of "prejudice" in this case.

But this respondent who rated all racial groups equally on the stereotype scales nonetheless falls halfway along the Filindra et al 2022 measure of "negative Black stereotypes", in the same location as a respondent who rated Blacks at the midpoint of the scale and rated all other racial groups at the most positive end of the scale.

---

I think that this flawed measurement means that more analyses need to be conducted to know whether the key Filindra et al 2022 finding is merely due to the flawed measure of racial prejudice. Moreover, I think that more analyses need to be conducted to know whether Filindra et al 2022 overlooked evidence of the effect of prejudice against other racial groups.

Filindra et al 2022 didn't indicate whether their results held when using a measure of anti-Black stereotypes that placed respondents who rated all racial groups equally into a different category than respondents who rated Blacks less positively than all other racial groups and a different category than respondents who rated Blacks more positively than all other racial groups. Filindra et al 2022 didn't even report results when their measure of anti-White stereotypes was included in the regressions estimating the effect of anti-Black stereotypes.

A better review process might have produced a Filindra et al 2022 that resolved questions such as: Is the key Filindra et al 2022 finding merely because respondents who don't trust the government rate *all* groups relatively low on stereotype scales? Is the key finding because anti-Black stereotypes and anti-White stereotypes and anti-Hispanic stereotypes and anti-Asian stereotypes *each* reduce trust in government? Or are anti-Black stereotypes the *only* racial stereotypes that reduce trust in government?

Even if anti-Black stereotypes among Whites is the most important combination of racial prejudice and respondent demographics, other combinations of racial stereotype and respondent demographics are important enough to report on and can help readers better understand racial attitudes and their consequences.

---

NOTES

1. Filindra et al 2022 did note that:

Finally, another important consideration is the possibility that other outgroup attitudes or outgroup-related policy preferences may also have an effect on public trust.

That's sort of close to addressing some of the alternate explanations that I suggested, but the Filindra et al 2022 measure for this is a measure about immigration *policy* and not, say, the measures of stereotypes about Hispanics and about Asians that are included in the data.

2. Filindra et al 2022 suggested that:

Future research should focus on the role of attitudes towards immigrants and other racial groups—such as Latinos— and ethnocentrism more broadly in shaping white attitudes toward government.

But it's not clear to me why such analyses aren't included in Filindra et al 2022.

Maybe the expectation is that another publication should report results that include the measures of anti-Hispanic stereotypes and anti-Asian stereotypes in the ANES data. And another publication should report results that include the measures of anti-White stereotypes in the ANES data. And another publication should report results that include or focus on respondents in the ANES data who aren't White. But including all this in Filindra et al 2022 or its supplemental information would be more efficient and could produce a better understanding of political attitudes.

3. Filindra et al 2022 indicated that:

All variables in the models are rescaled on 0–1 scales consistent with the nature of the original variable. This allows us to conceptualize the coefficients as maximum effects and consequently compare the size of coefficients across models.

Scaling all predictors to range from 0 to 1 means that comparison of coefficients likely produces better inferences than if the predictors were on different scales, but differences in 0-to-1 coefficients can also be due to differences in the quality of the measurement of the underlying concept, as discussed in this prior post.

4. Filindra et al 2022 justified not using a differenced stereotype measure, citing evidence such as (from footnote 2):

Factor analysis of the Black and white stereotype items in the ANES confirms that they do not fall on a single dimension.

The reported factor analysis was on ANES 2020 data and included a measure of "lazy" stereotypes about Blacks, a measure of "violent" stereotypes about Blacks, a feeling thermometer about Blacks, a measure of "lazy" stereotypes about Whites, a measure of "violent" stereotypes about Whites, and a feeling thermometer about Whites.[*] But a "differenced" stereotype measure shouldn't be constructed by combining measures like that, as if the measure of "lazy" stereotypes about Blacks is independent of the measure of "lazy" stereotypes about Whites.

A "differenced" stereotype measure could be constructed by, for example, subtracting the "lazy" rating about Whites from the "lazy" rating about Blacks, subtracting the "violent" rating about Whites from the "violent" rating about Blacks, and then summing these two differences. That measure could help address the alternate explanation that the estimated effect for rating Blacks low is because respondents who rate Blacks low also rate all other groups low. That measure could also help address the concern that using only a measure of stereotypes about Blacks underestimates the effect of these stereotypes.

Another potential coding is a categorical measure, coded 1 for rating Blacks lower than Whites on all stereotype measures, 2 for rating Blacks equal to Whites on all stereotype measures, coded 3 for rating Blacks higher than Whites on all stereotype measures, and coded 4 for a residual category. The effect of anti-Black stereotypes could be estimated as the difference net of controls between category 1 and category 2.
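Here is a minimal sketch of these two codings in R, with hypothetical variable names for the stereotype items and for a 0-to-1 trust-in-government outcome, and with the items coded so that higher values are more negative about the rated group:

# A minimal sketch with hypothetical variable names, not the Filindra et al 2022 code.
# lazy_black, violent_black, lazy_white, violent_white: 1-to-7 ratings, higher = more negative.

# Differenced measure: positive values indicate rating Blacks more negatively than Whites.
dat$stereo_diff <- (dat$lazy_black - dat$lazy_white) + (dat$violent_black - dat$violent_white)

# Categorical measure: 1 = Blacks rated more negatively than Whites on both items,
# 2 = Blacks rated equal to Whites on both items, 3 = Blacks rated more positively
# than Whites on both items, 4 = residual category.
dat$stereo_cat <- with(dat,
  ifelse(lazy_black > lazy_white & violent_black > violent_white, 1,
  ifelse(lazy_black == lazy_white & violent_black == violent_white, 2,
  ifelse(lazy_black < lazy_white & violent_black < violent_white, 3, 4))))

# The anti-Black stereotype estimate of interest is category 1 versus category 2
# (controls omitted from this sketch).
coef(lm(trust ~ relevel(factor(stereo_cat), ref = "2"), data = dat))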

Filindra et al 2022 provided justifications other than the factor analysis for not using a differenced stereotype measure, but, even if you agree that stereotype scale ratings about Blacks should not be combined with stereotype scale ratings about Whites, the Filindra et al 2022 arguments don't preclude including their measure of anti-White prejudice as a separate predictor in the analyses.

[*] I'm not sure why the feeling thermometer responses were included in a factor analysis intended to justify not combining stereotype scale responses.

5. I think that labels for the panels of Filindra et al 2022 Figure 1 and the corresponding discussion in the text are backwards: the label for each plot in Figure 1a appears to be "Negative Black Stereotypes", but the Figure 1a label is "Public Trust"; the label for each plot in Figure 1b appears to be "Level of Trust in Govt", but the Figure 1b label is "Anti-Black stereotypes".

My histogram of the Filindra et al 2022 measure of anti-Black stereotypes for the ANES 2020 Time Series Study looks like their 2020 plot in Figure 1a.

6. I'm not sure what the second sentence is supposed to mean, from this part of the Filindra et al 2022 conclusion:

Our results suggest that white Americans' beliefs about the trustworthiness of the federal government have become linked with their racial attitudes. The study shows that even when racial policy preferences are weakly linked to trust in government racial prejudice does not. Analyses of eight surveys...

7. Data source for my analysis: American National Election Studies. 2021. ANES 2020 Time Series Study Full Release [dataset and documentation]. July 19, 2021 version. www.electionstudies.org.


Social Forces published Wetts and Willer 2018 "Privilege on the Precipice: Perceived Racial Status Threats Lead White Americans to Oppose Welfare Programs", which indicated that:

Descriptive statistics suggest that whites' racial resentment rose beginning in 2008 and continued rising in 2012 (figure 2)...This pattern is consistent with our reasoning that 2008 marked the beginning of a period of increased racial status threat among white Americans that prompted greater resentment of minorities.

Wetts and Willer 2018 had analyzed data from the American National Election Studies, so I was curious about the extent to which the rise in Whites' racial resentment might be due to differences in survey mode, given evidence from the Abrajano and Alvarez 2019 study of ANES data that:

We find that respondents tend to underreport their racial animosity in interview-administered versus online surveys.

---

I didn't find a way to reproduce the exact results from Wetts and Willer 2018 Supplementary Table 1 for the rise in Whites' racial resentment, but, like in that table, my analysis controlled for gender, age, education, employment status, marital status, class identification, income, and political ideology.

Using the ANES Time Series Cumulative Data File with weights for the full samples, my analysis detected p<0.05 evidence of a rise in Whites' mean racial resentment from 2008 to 2012, which matches Wetts and Willer 2018; this holds net of controls and without controls. But the p-values were around p=0.22 for the change from 2004 to 2008.

But weights for the full samples compare respondents in 2004 and in 2008, all of whom were in the face-to-face mode, with respondents in 2012, some of whom were in the face-to-face mode and some of whom were in the internet mode.

Using weights only for the face-to-face mode, the p-value was not under p=0.25 for the change in Whites' mean racial resentment from 2004 to 2008 or from 2008 to 2012, net of controls and without controls. The point estimates for the 2008-to-2012 change were negative, indicating, if anything, a drop in Whites' mean racial resentment.
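For the mechanics, here is a minimal sketch of the face-to-face-only comparison using the survey package, with hypothetical variable names rather than the cumulative-file variable names; the actual code is linked in the notes.

# A minimal sketch with hypothetical variable names, not my actual replication code.
# 'anes' holds the cumulative file; 'resent' is the 0-to-1 racial resentment scale,
# 'ftf_weight' is the weight for the face-to-face samples, and 'mode' and 'white'
# are flags (clustering and stratification ignored in this sketch).
library(survey)

des_ftf <- svydesign(ids = ~1, weights = ~ftf_weight,
                     data = subset(anes, white == 1 & mode == "face-to-face"))

# Change in Whites' mean racial resentment across years, face-to-face respondents
# only (controls omitted from this sketch).
summary(svyglm(resent ~ factor(year),
               design = subset(des_ftf, year %in% c(2004, 2008, 2012))))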

---

NOTES

1. For what it's worth, the weighted analyses indicated that Whites' mean racial resentment wasn't higher in 2008, 2012, or 2016, relative to 2004, and there was evidence at p<0.05 that Whites' mean racial resentment was lower in 2016 than in 2004.

2. Abrajano and Alvarez 2019, discussing their Table 2 results for feeling thermometer ratings about groups, indicated that (p. 263):

It is also worth noting that the magnitude of survey mode effects is greater than those of political ideology and gender, and nearly the same as partisanship.

I was a bit skeptical that the difference in ratings about groups such as Blacks and illegal immigrants would be larger by survey mode than by political ideology, so I checked Table 2.

The feeling thermometer that Abrajano and Alvarez 2019 discussed immediately before the sentence quoted above involved illegal immigrants; that analysis had a coefficient of -2.610 for internet survey mode, but coefficients of 6.613 for Liberal, -1.709 for Conservative, 6.405 for Democrat, and -8.247 for Republican. So the liberal/conservative difference is 8.322 and the Democrat/Republican difference is 14.652, compared to a survey mode difference of -2.610.

3. Dataset: American National Election Studies. 2021. ANES Time Series Cumulative Data File [dataset and documentation]. November 18, 2021 version. www.electionstudies.org

4. Data, code, and output for my analysis.
