The Monkey Cage tweeted a link to a post (Gift et al 2022), claiming that "Just seeing a Fox News logo prompts racial bias, new research suggests".

This new research is Bell et al 2022, which reported on an experiment that manipulated the logo on a news story provided to participants (no logo, CNN, and Fox News) and manipulated the name of the U.S. Army Ranger in the news story who was accused of killing a wounded Taliban detainee, with the name signaling race (e.g., no name, Tyrone Washington, Mustafa Husain, Santiago Gonzalez, Todd Becker).

The Appendix to Bell et al 2022 reports some results for all respondents, but Bell et al 2022 indicates (footnotes and citations omitted):

Research on racial attitudes in America largely theorizes about the proclivities and nuances of racial animus harbored by Whites, so we follow conventions in the literature by restricting our analysis to 1149 White respondents.

Prior relevant post.

---

1.

From the Gift et al 2022 Monkey Cage post (emphasis added):

The result wasn't what we necessarily expected. We didn't anticipate that the Fox News logo might negatively affect attitudes toward the Black service member any more than soldiers of other races. So what could explain this outcome?

The regression results reported in Bell et al 2022 have the "no name" condition as the omitted category, so the 0.180 coefficient and 0.0705 standard error for the [Black X Fox News] interaction term for the "convicted" outcome indicates the effect of the Fox News logo in the Black Ranger condition relative to the effect of the Fox News logo in the no-name condition.

But, for assessing anti-Black bias among White participants, it seems preferable to compare the effect of the Fox News logo in the Black Ranger condition to the effect of the Fox News logo in the White Ranger condition. Otherwise, the Black name / no-name comparison might conflate the effect of a Black name for the Ranger with the general effect of naming the Ranger. Moreover, a Black name / White name comparison would better fit the claim about "any more than soldiers of other races".

---

The coefficient and standard error are 0.0917 and 0.0701 for the [White X Fox News] interaction term for the "convicted" outcome, and I don't think that there is sufficient evidence that the 0.180 [Black X Fox News] coefficient differs from the 0.0917 [White X Fox News] coefficient, given that the difference in coefficients for the interaction terms is only 0.09 and the standard errors are about 0.07 for each interaction term.

Similar concern about the "justified" outcome, which had respective coefficients (and standard errors) of −0.142 (0.0693) for [Black X Fox News] and −0.0841 (0.0692) for [White X Fox News]. I didn't see the replication materials for Bell et al 2022 in the journal's Dataverse, or I might have tried to get the p-values.
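
For a rough sense of whether those interaction coefficients differ, below is a minimal R sketch that treats the covariance between the two coefficients as zero, an assumption that the replication materials would be needed to check (a positive covariance, which is plausible given the shared omitted category, would shrink the standard error of the difference):

# Back-of-envelope comparison of [Black X Fox News] to [White X Fox News],
# ASSUMING zero covariance between the coefficients (not verifiable without
# the replication materials)
diff.convicted <- 0.180 - 0.0917
se.convicted   <- sqrt(0.0705^2 + 0.0701^2) # SE of the difference under zero covariance
2*pnorm(-abs(diff.convicted/se.convicted))  # two-tailed p of about 0.37

diff.justified <- -0.142 - (-0.0841)
se.justified   <- sqrt(0.0693^2 + 0.0692^2)
2*pnorm(-abs(diff.justified/se.justified))  # two-tailed p of about 0.55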

---

2.

From the Gift et al 2022 Monkey Cage post:

Of course one study is hardly definitive. Our analysis points to the need for more research into how Fox News and other media may or may not prime racial attitudes across a range of political and social issues.

Yes, one study is not definitive, so it might have been a good idea for the Gift et al 2022 Monkey Cage post to have mentioned the replication attempt *published in Bell et al 2022*, in which the [Black X Fox News] interaction term did not replicate in statistical significance or even in the sign of the coefficients, with a −0.00371 coefficient for the "convicted" outcome and a 0.0199 coefficient for the "justified" outcome.

I can't see a good reason for the Gift et al 2022 Monkey Cage post to not report results for the preregistered replication attempt, or for the Monkey Cage editors to have been unaware of the replication attempt or to have permitted publication of the post without mention of the lack of replication for the [Black X Fox News] interaction term.

The preregistration suggests that the replication attempt was due to the journal (Research & Politics), so it seems that we can thank a peer reviewer or editor for the replication attempt.

---

3.

Below is the first sentence of the preregistration's statement of the main question for Study 2:

White Americans who see a story about a non-white soldier will be more likely to say the soldier should be punished for their alleged crime than either an unnamed soldier or a white soldier.

Bell et al 2022 Appendix Table A2 indicates that means for the "convicted" outcome in Study 2 were, from high to low and by condition:

No logo news source
0.725 White name
0.697 Latin name
0.692 MEast name
0.680 No name 
0.655 Black name

CNN logo
0.705 No name 
0.698 Latin name
0.695 Black name
0.694 White name
0.688 MEast name

Fox News logo
0.730 No name 
0.703 White name
0.702 Black name
0.695 MEast name
0.688 Latin name

So, for the "convicted" outcome, which seems like a better measure of punishment than the "justified" outcome, the highest point estimate for a named Ranger in the Fox News condition of this *preregistered* experiment was for the White Ranger.

The gap between the highest mean "convicted" outcome for a named Ranger (0.703) and the lowest mean "convicted" outcome for a named Ranger (0.688) was 0.015 units on a 0-to-1 scale. That seems small enough to be consistent with random assignment error and to be inconsistent with the title of the Monkey Cage post of "Just seeing a Fox News logo prompts racial bias, new research suggests".
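
As a rough illustration of that claim, the R sketch below simulates the gap between the highest and lowest of five cell means under pure random assignment, assuming a per-cell sample size of about 100 and an outcome standard deviation of 0.25; both values are hypothetical, because neither is reported in this post. Under these assumptions, a gap of at least 0.015 units occurs in the large majority of simulations:

# Hypothetical illustration: the per-cell n of 100 and the outcome SD of 0.25
# are assumptions, not values from Bell et al 2022
set.seed(123)
GAPS <- replicate(100000, {
  CELL.MEANS <- rnorm(5, mean=0.70, sd=0.25/sqrt(100)) # five simulated cell means
  max(CELL.MEANS) - min(CELL.MEANS)                    # gap between highest and lowest
})
mean(GAPS >= 0.015) # share of simulations with a gap of at least 0.015 units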

---

NOTES

1. Tweet question to authors of Bell et al 2022.

2. The constant in the Bell et al 2022 OLS regressions represents the no-name Ranger in the no-logo news story.
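
In these regressions, each predicted cell mean is a sum of coefficients. For example, for the White-name Fox-News-logo condition:

predicted rating = constant + [White coefficient] + [Fox News coefficient] + [White X Fox News coefficient]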

In Study 1, this constant indicates that the Ranger in the no-name no-logo condition was rated on a 0-to-1 scale as 0.627 for the "convicted" outcome and as 0.389 for the "justified" outcome. This balance makes sense: on net, participants in the no-name no-logo condition agreed that the Ranger should be convicted and disagreed that the Ranger's actions were justified. Appendix Table A1 indicates that the mean "convicted" rating was above 0.50 and the mean "justified" rating was below 0.50 for each of the 15 conditions for Study 1.

But the constants in Study 2 were 0.680 for the "convicted" outcome and 0.711 for the "justified" outcome, which means that, on net, participants in the no-name no-logo condition agreed that the Ranger should be convicted and agreed that the Ranger's actions were justified. Appendix Table A2 indicates that the mean for both outcomes was above 0.50 for each of the 15 conditions for Study 2.

3. I think that Bell et al 2022 Appendix A1 might report results for all respondents: the sample size in A1 is N=1554, but the main text Table 2 sample sizes are N=1149 for the "convicted" outcome and N=1140 for the "justified" outcome. Moreover, I think that the main text Figure 2 might plot these A1 results (presumably for all respondents) and not the Table 2 results that were limited to White respondents.

For example, A1 has the mean "convicted" rating as 0.630 for no-name no-logo, 0.590 for no-name CNN logo, and 0.636 for no-name Fox logo, which matches the CNN dip in the leftmost panel of Figure 2 and Fox News being a bit above the no-logo estimate in that panel. But the "convicted" constant in Table 1 is 0.630 (for the no-name no-logo condition), with a −0.0303 coefficient for CNN and a −0.0577 coefficient for Fox News, so based on this I think that the no-name Fox News mean (0.630 − 0.0577 = 0.572) should be lower than the no-name CNN mean (0.630 − 0.0303 = 0.600).

The bumps in Figure 2 better match with Appendix Table A5 estimates, which are for all respondents.

4. This Bell et al 2022 passage about Study 2 seems misleading or at least easy to misinterpret (emphasis in the original, footnote omitted):

If the soldier was White and the media source was unnamed, respondents judged him to be significantly less justified in his actions, but when the same information was presented under the Fox News logo, respondents found him to be significantly more justified in his actions.

As indicated in the coefficients and Figure 3, the "more justified" isn't more justified relative to the no-name no-logo condition, but more justified relative to what the no-logo bias against the White Ranger would predict. Relevant coefficients are −0.131 for "White", which indicates the reduction in the "justified" rating from the no-name no-logo condition to the White-name no-logo condition, and 0.169 for "White X Fox News", which indicates the White-name advantage under the Fox News logo relative to the effect of the Fox News logo in the no-name condition.

So the Fox News bias favoring the White Ranger in the Study 2 "justified" outcome only a little more than offset the bias against the White Ranger in the no-logo condition (a net of −0.131 + 0.169 = 0.038), and I suspect that this net bias might be small enough to be consistent with random assignment error.

---

The Monkey Cage recently published "Nearly all NFL head coaches are White. What are the odds?" [archived], by Bethany Lacina.

Lacina reported analyses that compared observed racial percentages of NFL head coaches to benchmark percentages that are presumably intended to represent what racial percentages of NFL head coaches would occur absent racial bias. For example, Lacina compared the percentage of Whites among NFL head coaches hired since February 2021 (8 of 10, or 80%) to the percentage of Whites among the set of NFL offensive coordinators, defensive coordinators, and recently fired head coaches (which was between 70% and 80% White).

Lacina indicated that:

If the hiring process did not favor White candidates, the chances of hiring eight White people from that pool is only about one in four — or plus-322 in sportsbook terms.

I think that Lacina might have reported the probability that *exactly* eight of the ten recent NFL coach hires were White. But for assessing unfair bias favoring White candidates, it makes more sense to report the probability that *at least* eight of the ten recent NFL coach hires were White: that probability is 38% using a 70% White pool and is 67% using an 80% White pool. See Notes 1 through 3 below.
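
Those figures appear consistent with a simple binomial calculation, which treats each hire as an independent draw; the simulations in Notes 1 through 3 below instead draw without replacement from a finite pool, which produces slightly different values (e.g., 37% rather than 38% for the 70% White pool):

# Binomial approximation for at least 8 White hires among 10,
# treating hires as independent draws
pbinom(7, size=10, prob=0.7, lower.tail=FALSE) # about 0.38 for a 70% White pool
pbinom(7, size=10, prob=0.8, lower.tail=FALSE) # about 0.68 for an 80% White pool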

---

Lacina also conducted an analysis for the one Black NFL head coach among the 14 NFL head coaches in 2021 to 2022 who were young enough to have played in the NCAA between 1999 and 2007, given that demographic data from her source were available starting in 1999. Benchmark percentages were 30% Black from NCAA football players and 44% Black from NCAA Division I football players.

The correctness of Lacina's calculations for this analysis doesn't seem to matter, because the benchmark does not seem to be a reasonable representation of how NFL head coaches are selected. For example, quarterback is the most important player position, and quarterbacks presumably need to know football strategy relatively well compared to players at most or all other positions, so I think that the per capita probability of a college quarterback becoming an NFL head coach is likely nontrivially higher than the corresponding probability for players at other positions; however, Lacina's benchmark doesn't adjust for player position.

---

None of the above analysis should be interpreted to suggest that selection of NFL head coaches has been free from racial bias. But I think that it's reasonable to suggest that the Lacina analysis isn't very informative either way.

---

NOTES

1. Below is R code for a simulation that returns a probability of about 24%, for the probability that *exactly* eight of ten candidates are White, drawn without replacement from a candidate pool of 32 offensive coordinators and 32 defensive coordinators that is overall 70% White:

set.seed(123) # seed for the simulation counts reported in Note 2

SET  <- c(rep_len(1,45),rep_len(0,19)) # pool of 64 candidates: 45 White (1), 19 non-White (0)
LIST <- c()
for (i in 1:100000){
   LIST[i] <- sum(sample(SET,10,replace=FALSE)) # White hires among 10 draws, without replacement
}
table(LIST)
length(LIST[LIST==8])/length(LIST) # probability of *exactly* 8 White hires

The probability is about 32% if the pool of 64 is 80% White. Adding in a few recently fired head coaches doesn't change the percentage much.

2. In reality, 8 White candidates were hired for the 10 NFL head coaching positions. So how do we assess the extent to which this observed result suggests unfair bias in favor of White candidates? Let's first get results from the simulation...

For my 100,000-run simulation using the above code and a random seed of 123, the counts of simulations by number of White head coaches were:

0 White: 0
1 White: 5
2 White: 52
3 White: 461
4 White: 2,654
5 White: 9,255
6 White: 20,987
7 White: 29,307
8 White: 24,246
9 White: 10,978
10 White: 2,055

The simulation indicated that, if candidates were randomly drawn from a 70% White pool, exactly 8 of 10 coaches would be White about 24% of the time (24,246/100,000). This 8-of-10 result represents a selection of candidates from the pool that is perfectly fair with no evidence of bias for *or against* White candidates.

The 8-of-10 result would be the proper focus if our interest were bias for *or against* White candidates. But the Lacina post didn't seem concerned about evidence of bias against White candidates, so the 9-of-10 and 10-of-10 simulation results should be added to the 8-of-10 total, for 37% (24,246 + 10,978 + 2,055 = 37,279 of 100,000): the 9-of-10 and 10-of-10 draws represent simulated outcomes in which White candidates were underrepresented in reality relative to the simulation. So the 8-of-10 draws represent no bias, the 9-of-10 and 10-of-10 draws represent bias against White candidates, and everything else represents bias favoring White candidates.

3. Below is R code for a simulation that returns a probability of about 37%, for the probability that *at least* eight of ten candidates are White, drawn without replacement from a candidate pool of 32 offensive coordinators and 32 defensive coordinators that is overall 70% White:

SET <- c(rep_len(1,45),rep_len(0,19)) # same pool of 64: 45 White (1), 19 non-White (0)
LIST <- c()
for (i in 1:100000){
   LIST[i] <- sum(sample(SET,10,replace=FALSE)) # White hires among 10 draws, without replacement
}
table(LIST)
length(LIST[LIST>=8])/length(LIST) # probability of *at least* 8 White hires
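
Because these draws are without replacement from a fixed pool, both simulations can be checked against exact hypergeometric probabilities; a minimal check using the same 45-White/19-non-White pool:

# Exact hypergeometric counterparts to the Note 1 and Note 3 simulations
dhyper(8, m=45, n=19, k=10)                   # exactly 8 White of 10, about 0.24
phyper(7, m=45, n=19, k=10, lower.tail=FALSE) # at least 8 White of 10, about 0.37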

---

UPDATE

I corrected some misspellings of "Lacinda" to "Lacina" in the post.

---

UPDATE 2 (March 18, 2022)

Bethany Lacina discussed her calculation with me. She indicated that she did calculate at least eight of ten, but she used a joint probability method that I don't think is correct, because random error would bias the inference toward unfair selection of coaches by race. Given the extra information that Bethany provided, here is a revised calculation that produces a probability of about 60%:

# In 2021: 2 non-Whites hired of 6 hires.
# In 2022: 0 non-Whites hired of 4 hires (up to the point of the calculation).
# The simulation below is for the probability that at least 8 of the 10 hires are White.

SET.2021 <- c(rep_len(0,12),rep_len(1,53)) ## 2021 pool of 65: 53 White (1), 12 non-White (0)
SET.2022 <- c(rep_len(0,20),rep_len(1,51)) ## 2022 pool of 71: 51 White (1), 20 non-White (0)
LIST <- c()

for (i in 1:100000){
DRAW.2021 <- sum(sample(SET.2021,6,replace=FALSE)) ## White hires among the 6 hires in 2021
DRAW.2022 <- sum(sample(SET.2022,4,replace=FALSE)) ## White hires among the 4 hires in 2022
LIST[i] <- DRAW.2021 + DRAW.2022
}

table(LIST)
length(LIST[LIST>=8])/length(LIST) ## probability that at least 8 of the 10 hires are White
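
The same probability can be computed without simulation error by convolving the two without-replacement draws; a sketch using the same pool sizes as above:

# Exact version of the simulation: sum the probability over all ways the 2021
# and 2022 draws can combine to at least 8 White hires
P <- 0
for (x in 0:6){   # White hires in 2021: 6 hires from a pool of 53 White, 12 non-White
  for (y in 0:4){ # White hires in 2022: 4 hires from a pool of 51 White, 20 non-White
    if (x + y >= 8) P <- P + dhyper(x, 53, 12, 6)*dhyper(y, 51, 20, 4)
  }
}
P # about 0.60, matching the simulation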
---

The recent Rhodes et al 2022 Monkey Cage post indicated that:

...as [Martin Luther] King [Jr.] would have predicted, those who deny the existence of racial inequality are also those who are most willing to reject the legitimacy of a democratic election and condone serious violations of democratic norms.

Regarding this inference about the legitimacy of a democratic election, Rhodes et al 2022 reported results for an item that measured perceptions about the legitimacy of Joe Biden's election as president in 2020. But a potential confound is that reported perceptions of the legitimacy of the 2020 U.S. presidential election might reflect who won that election rather than attitudes about elections per se. One way to address this confound is to use a measure of reported perceptions of the legitimacy of the U.S. presidential election *in 2016*, which Donald Trump won.

I checked data from the Democracy Fund Voter Study Group VOTER survey for responses to the items below, which can help address this confound:

[from 2016 and 2020] Over the past few years, Blacks have gotten less than they deserve.

[from 2016] How confident are you that the votes in the 2016 election across the country were accurately counted?

[from 2020] How confident are you that votes across the United States were counted as voters intended in the elections this November?

Results are below:

The dark columns are for respondents who strongly disagreed that Blacks have gotten less than they deserve, so that these respondents can plausibly be described as denying the existence of unfair racial inequality. The light columns are for respondents who strongly agreed that Blacks have gotten less than they deserve, so that these respondents can plausibly be described as most strongly asserting the existence of unfair racial inequality.

Comparison of the 2020 column for "strongly disagree" to the 2020 column for "strongly agree" suggests that, as expected based on Rhodes et al 2022, skepticism about votes in 2020 being counted accurately was more common among respondents who most strongly denied the existence of unfair racial inequality than among respondents who most strongly asserted the existence of unfair racial inequality.

But comparison of the 2016 column for "strongly disagree" to the 2016 column for "strongly agree" suggests that the general phrasing of "those who deny the existence of racial inequality are also those who are most willing to reject the legitimacy of a democratic election" does not hold for every election, such as the presidential election immediately prior to the election that was the focus of the relevant item in Rhodes et al 2022.

---

NOTE

1. Data source. Stata do file. Stata output. Code for the R plot.
