Study 1 of Strickler and Lawson 2020 "Racial conservatism, self-monitoring, and perceptions of police violence" in Politics, Groups, and Identities was an experiment in which participants rated how justified a police shooting was. The experiment had a control condition, a "stereotype" condition in which the officer was White and the suspect Black, and a "counterstereotype" condition in which the officer was Black and the suspect White.

The article indicates that:

And while racial resentment did not moderate how whites responded to treatment in the White Officer/Black Victim condition, it did impact response to treatment in the Black Officer/White Victim condition. As Table 3 and Figure 4 demonstrate, for whites, those with higher levels of racial resentment are significantly less likely to view shooting as justified if it involves a black officer and a white victim.

However, the 95% confidence interval in the aforementioned Figure 4 crosses zero at high levels of racial resentment. I emailed lead author Ryan Strickler for the data and code, which he provided.

---

Instead of using a regression to estimate the outcome at higher levels of racial resentment, I'll estimate the outcome for only participants at given ranges of racial resentment (see Hainmueller et al. 2019). This way, inferences about particular groups are based on data for only those groups.
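The logic can be sketched in a few lines (a Python illustration with hypothetical variable names and a normal-approximation confidence interval; my actual analyses used Stata and R, linked in the Notes):

```python
import math

def diff_in_means_ci(y_treat, y_ctrl, z=1.96):
    """Difference in mean outcomes (treatment minus control) with an
    approximate 95% confidence interval, using a normal approximation."""
    n1, n0 = len(y_treat), len(y_ctrl)
    m1, m0 = sum(y_treat) / n1, sum(y_ctrl) / n0
    v1 = sum((y - m1) ** 2 for y in y_treat) / (n1 - 1)
    v0 = sum((y - m0) ** 2 for y in y_ctrl) / (n0 - 1)
    se = math.sqrt(v1 / n1 + v0 / n0)
    d = m1 - m0
    return d, (d - z * se, d + z * se)

def subgroup_estimate(rows, rr_min, rr_max):
    """Compare the two shooting conditions using only respondents whose
    racial resentment score falls in [rr_min, rr_max], so the estimate for
    that range rests only on data from respondents in that range."""
    treat = [r["y"] for r in rows
             if rr_min <= r["rr"] <= rr_max and r["cond"] == "stereotypic"]
    ctrl = [r["y"] for r in rows
            if rr_min <= r["rr"] <= rr_max and r["cond"] == "counterstereotypic"]
    return diff_in_means_ci(treat, ctrl)
```

Calling `subgroup_estimate` for ranges such as RR==17, RR>=16, and so on down to RR>=1 produces one point estimate and interval per range, which corresponds to the stacked estimates in the plots below.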

Plots below report point estimates and 95% confidence intervals from tests comparing the outcome across conditions, at various ranges of racial resentment, among all White respondents or among Whites who responded correctly to manipulation checks about the officer's race and the suspect's race. Racial resentment was coded from 1 through 17.

The outcome for the first four plots was whether the participant indicated that the officer's actions were justified.

In the top left plot, the top estimate is for White participants at the highest observed level of racial resentment. The estimate is +0.06, indicating that high-racial-resentment participants in the stereotypic condition were 6 percentage points more likely to rate the shooting as justified than high-racial-resentment participants in the counterstereotypic condition; however, the 95% confidence interval crosses zero. The next lower estimate compared outcomes for White participants at racial resentment levels of 16 and 17. The bottom estimate (RR>=1) is for all White participants, and its negative point estimate indicates that White participants in the counterstereotypic shooting condition were more likely than White participants in the stereotypic shooting condition to rate the shooting as justified.

The evidence for bias among Whites high in racial resentment is a bit stronger in the right panels, which compare the counterstereotypic condition to the control condition, but the 95% confidence intervals still overlap zero. There is an exception among White participants who scored 14 or higher on the racial resentment scale, when participants who did not pass the post-treatment manipulation check are excluded; but excluding participants based on a post-treatment measure is generally not a good idea.

---

Tables in the main text of Strickler and Lawson 2020 reported results for a dichotomous outcome coded 1 if, on the first item of the branched measure, the respondent indicated that the officer's actions were justified. But tables in the appendix used ratings of the extent to which the shooting was justified, measured using branched items that placed respondents into nine levels, from "a great deal certain" that the shooting was not justified to "a great deal certain" that the shooting was justified.
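As a sketch, the nine ordered levels can be mapped onto a 0-to-1 scale by assigning the lowest level to 0 and the highest to 1 (a Python illustration; the 0-through-8 level coding here is an assumption for illustration, not necessarily the article's exact coding):

```python
def branched_to_unit(level, n_levels=9):
    """Map an ordered response level (0 = most certain the shooting was not
    justified, n_levels - 1 = most certain it was justified) onto [0, 1]."""
    if not 0 <= level < n_levels:
        raise ValueError("level out of range")
    return level / (n_levels - 1)
```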

The plots below report results from tests that compared conditions for this ordinal measure of justification, placed on a 0-to-1 scale. Evidence in the right panel is a bit stronger using this outcome, compared to the dichotomous outcome. As before, the top estimate is for White participants at the highest observed level of racial resentment. Middle estimates (RR>=1 and RR<=17) are for all Whites; below that, estimates are for progressively lower levels of racial resentment, ending with RR==1, for White participants at the lowest observed level of racial resentment.

Results for Whites who passed the manipulation checks are in the output file.

---

NOTES

1. Thanks to Ryan Strickler for sending me data and code for the article.

2. Stata code and R code for my analyses. Data for the first four plots. Data for the final two plots.


The American National Election Studies Time Series Cumulative Data File (1948-2016) contains data for feeling thermometer measures for Whites and for Blacks, collected in face-to-face or telephone interviews, for each U.S. presidential election year from 1964 to 2016.

Feeling thermometers range from 0 to 100, with higher values indicating warmer or more favorable feelings about a group. The ANES Cumulative Data File and some early individual year ANES Time Series files collapse responses of 97 through 100 into a response of 97. This means that a respondent who selected 97 for Whites and 100 for Blacks would have the same "difference" value as a respondent who selected 100 for Whites and 97 for Blacks. Therefore, I placed respondents with substantive values on both the feeling thermometer about Whites and the feeling thermometer about Blacks into one of three categories:

  • rated Whites more than 3 units above Blacks
  • rated Whites within 3 units of Blacks, and
  • rated Blacks more than 3 units above Whites.
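A minimal sketch of this three-category classification (Python for illustration; the function name is mine, and the 3-unit band is the one described above, chosen to absorb the top-coding of 97 through 100 as 97):

```python
def therm_category(therm_whites, therm_blacks):
    """Classify a respondent by the difference between the feeling
    thermometer rating of Whites and the rating of Blacks, treating
    differences of 3 units or fewer as ties."""
    diff = therm_whites - therm_blacks
    if diff > 3:
        return "Whites more than 3 units above Blacks"
    if diff < -3:
        return "Blacks more than 3 units above Whites"
    return "within 3 units"
```

Note that a respondent at 100 for Whites and 97 for Blacks lands in the same "within 3 units" category as a respondent at 97 for Whites and 100 for Blacks, which is the point of the band.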

Abrajano and Alvarez (2019) reported evidence from ANES Time Series Studies that responses to racial feeling thermometers differed between the non-internet mode and the internet mode, so my reported results do not include the internet mode, which in any event does not go back to 1964.

Below is a plot of how White Americans (left) and Black Americans (right) fell into each of the three categories, excluding the respondents in the cumulative data file who did not report a substantive response to the items; nonresponse ranged from 1% to 8% by year (see the Notes). Documentation for the cumulative data file indicated that, in 1964 and 1968, a response was recorded as 50 for a "don't know" response or if the participant indicated not knowing much about a group.

---

The plot below indicates how these thermometer ratings associated with two-party vote choice, among White participants:

The right panel indicates a steep drop in two-party vote for the Republican presidential candidate among Whites who rated Blacks more than 3 units higher than Whites, which seems to be consistent with evidence of a "Great Awokening" (see, e.g., Yglesias 2019 and Goldberg 2019, and this image linked to in Goldberg 2019).

---

The plot below is the plot above, but with columns grouped by year:

---

NOTES

1. Percentage non-responses to one or both thermometer items, by year: 3% (1964), 4% (1968), 8% (1972), 5% (1976), 5% (1980), 7% (1984), 5% (1988), 4% (1992), 4% (1996), 8% (2000), 3% (2004), 3% (2008), 1% (2012), 2% (2016).

2. Code for my analyses and black-and-white plots.

3. Feeling thermometer ratings about Chicanos/Hispanics and about Asians are not available in the ANES Time Series Cumulative Data File until 1976 and 1992, respectively.

4. A color version of the first plot, for comparison:

5. A color version with a black line divider:


The Adida et al. 2020 PS: Political Science & Politics article "Broadening the PhD Pipeline: A Summer Research Program for HBCU Students" claimed that (p. 727):

The US academy today is overwhelmingly white, with only 8% to 9% of full-time science and engineering faculty as underrepresented minorities (DePass and Chubin 2008, 6).

The evidence offered to support the claim that the "US academy" is "overwhelmingly white" is the percentage of a *subset* of the U.S. academy (science and engineering) that is not White *and not Asian*, given that Asians were not considered underrepresented minorities in the calculation of the percentage. Moreover, the cited publication is more than a decade old, and the data might be even older than that.

Below is a plot of 2018 data from the National Center for Education Statistics, for the U.S. academy as a whole. The light areas indicate the percentage White for each rank and overall, relative to the total of White, Black, Hispanic, Asian, Pacific Islander, and two-or-more-races faculty; the percentage does not include persons of unknown race/ethnicity and does not include nonresident aliens.

Overall, in Fall 2018, about 76% of full-time faculty at U.S. degree-granting postsecondary institutions were White, which matches a calculation in this Pew study for Fall 2017. So, if you randomly selected four of these faculty, one of them would be expected to be non-White. I'm not sure whether that counts as being "overwhelmingly white".
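The expectation above is simple arithmetic:

```python
# Back-of-the-envelope check: with about 76% of full-time faculty White,
# the expected number of non-White faculty among four randomly selected
# faculty members is 4 times the non-White share.
share_white = 0.76
expected_nonwhite = 4 * (1 - share_white)  # about 0.96, i.e., roughly one
```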

---

NOTES

1. R code for the plot.


The average eighth grade math score on the 2019 National Assessment of Educational Progress (NAEP) was 310 for Asian/Pacific Islander students, 292 for White students, 268 for Hispanic students, and 260 for Black students. This pattern has been consistent for many years, for fourth grade students (Figure 3), for eighth grade students (Figure 4), and for twelfth grade students (Figure 5).

However, before inferring that Asian/Pacific Islander students are better in math on average than are White students and Hispanic students and Black students, be aware that this inference could be labeled "prejudice" in peer-reviewed research such as Piston 2010 and Hopkins and Washington 2020, which measured "prejudice" as a difference in ratings of groups on stereotype scales for certain characteristics.

---

Piston 2010 conceptualized "prejudice" with "an etymological perspective":

An assessment that one racial group possesses a negative attribute relative to another racial group is a "pre-judgment"; it precedes, but may or may not influence, the evaluation of an individual member of that group, such as Barack Obama.

So, if you make a good faith interpretation of NAEP scores and/or SAT scores and infer that Asian/Pacific Islander students are better on average in math than are White students and Hispanic students and Black students, that would be "prejudice" by the analysis in Piston 2010.

---

However, such an inference might not be "prejudice" based on Hopkins 2019:

We define prejudice as a standing, negative predisposition toward a social group held in the face of contradictory information.

Based on this, Hopkins 2019 seems to require evidence that Asian/Pacific Islander students are not better on average in math than are White students and Hispanic students and Black students ("contradictory information") before labeling that belief as "prejudice".

I asked Dan Hopkins in a tweet what "contradictory information" he was referring to for his use of "prejudice", and, perhaps as a consequence, Hopkins and Washington 2020 removed the "held in the face of contradictory information" restriction. From Hopkins and Washington 2020:

'Prejudice' refers to a standing, negative predisposition toward a social group.

So, by Hopkins and Washington 2020, it would be "prejudice" to have a justified standing, negative predisposition toward a hate group that regularly commits terrorism. That might be a proper conceptualization of "prejudice", but I would be interested in seeing Hopkins or Washington use "prejudice" in that way.

Hopkins and Washington 2020 used stereotype scale differences as measures of "prejudice", but it seems possible to perceive that members of one group perform better on average on some measure than members of another group, without having a "standing, negative predisposition" toward either group, especially because nothing in these traditional stereotype scales indicates that the scales measure belief about innate or genetic characteristics.

---

From what I can tell, the belief that U.S. Asian/Pacific Islander students are better in math on average than are White students and Hispanic students and Black students would be "prejudice" under the conceptualizations in Piston 2010 and Hopkins and Washington 2020, even though I think that this belief can result from a good faith interpretation of high quality evidence. I thus think that use of the conceptualizations of "prejudice" in Piston 2010 or Hopkins and Washington 2020 has the potential to be misleading and to corrode public discourse.

The potential to mislead exists because I think that "prejudice" has a negative connotation in everyday language, and I don't think that a good faith interpretation of high quality evidence should receive a label with a negative connotation. I am not aware of anything that prevents researchers from labeling such stereotype scale responses as "stereotype scale differences" or something similar that would more precisely describe the phenomenon being measured.

The potential to corrode public discourse exists because fear of the "prejudice" label can make people less likely to express beliefs derived from a good faith interpretation of high quality evidence, and I don't think that, barring some compelling reason otherwise, people should be discouraged from expressing such beliefs.


This Brian Schaffner post at Data for Progress indicates that, on 9 June during the 2020 protests over the death of George Floyd, only 57% of Whites and about 83% of Blacks agreed that "White people in the U.S. have certain advantages because of the color of their skin". It might be worth considering why not everyone agreed with that statement.

---

Let's check data from the Nationscape survey, focusing on the survey conducted 11 June 2020 (two days after the aforementioned Data for Progress survey) and the items that ask: "How much discrimination is there in the United States today against...", with response options of "A great deal", "A lot", "A moderate amount", "A little", and "None at all".

For rating discrimination against Blacks, 95% of Whites selected a level from "A great deal" through "A little" (with missing responses counted in the remaining 5%). It could be that the gap between this 95% and the Data for Progress 57% is because about 38% of Whites think that discrimination against Blacks favors only non-White, non-Black persons. But the 57% Data for Progress estimate was pretty close to the 59% of Whites in the Nationscape data who rated the discrimination against Blacks higher than they rated the discrimination against Whites.

The pattern is similar for Blacks: about 83% of Blacks in the Data for Progress data agreed that "White people in the U.S. have certain advantages because of the color of their skin", and 85% of Blacks in the Nationscape data rated the discrimination against Blacks higher than the discrimination against Whites. But, in the Nationscape data, 98% of Blacks selected a level from "A great deal" through "A little" for the amount of discrimination that Blacks face in the United States today.

---

So this seems to be suggestive evidence that many people who do not agree that "White people in the U.S. have certain advantages because of the color of their skin" might not be indicating a lack of "acknowledgement of racism", in Schaffner's terms, but might rather be signaling a belief closer to the idea that the discrimination against Blacks does not outweigh the discrimination against Whites, at least as measured on a five-point scale.

---

NOTES:

[1] The "certain advantages" item has appeared on the CCES; here is evidence that another CCES item does not measure well what the item presumably is supposed to measure.

[2] Data citation:

Chris Tausanovitch and Lynn Vavreck. 2020. Democracy Fund + UCLA Nationscape, October 10-17, 2019 (version 20200814). Retrieved from: https://www.voterstudygroup.org/downloads?key=e6ce64ec-a5d0-4a7b-a916-370dc017e713.

Note: "the original collectors of the data, UCLA, LUCID, and Democracy Fund, and all funding agencies, bear no responsibility for the use of the data or for interpretations or inferences based upon such issues".

[3] Code for my analysis:

* Stata code for the Data for Progress data

* Inspect the "certain advantages" item and confirm the wave 8 field dates
tab acknowledgement_1
tab starttime if wave==8
* Apply the survey weight
svyset [pw=nationalweight]
* Weighted proportions for the "certain advantages" item, by ethnicity group
svy: prop acknowledgement_1 if ethnicity==1 & wave==8
svy: prop acknowledgement_1 if ethnicity==2 & wave==8

* Stata code for the Nationscape data [ns20200611.dta]

* discB/discW: 1 if the respondent reported any discrimination against the
* group ("A great deal" through "A little"), 0 for "None at all" or missing
recode discrimination_blacks (1/4=1) (5 .=0), gen(discB)
recode discrimination_whites (1/4=1) (5 .=0), gen(discW)
tab discrimination_blacks discB, mi
tab discrimination_whites discW, mi

* discBW: 1 if discrimination against Blacks was rated higher than
* discrimination against Whites (lower response codes = more discrimination)
gen discBW = 0
replace discBW = 1 if discrimination_blacks < discrimination_whites & discrimination_blacks!=. & discrimination_whites!=.
tab discrimination_blacks discrimination_whites if discBW==1, mi
tab discrimination_blacks discrimination_whites if discBW==0, mi

* Apply the survey weight, then estimate weighted proportions by racial group
svyset [pw=weight]

svy: prop discB if race_ethnicity==2
svy: prop discBW if race_ethnicity==2

svy: prop discB if race_ethnicity==1
svy: prop discBW if race_ethnicity==1


The Ellis and Faricy 2020 Political Behavior article "Race, Deservingness, and Social Spending Attitudes: The Role of Policy Delivery Mechanism" discussed results from Figure 2:

This graph illustrates that while the mean support for this program does not differ significantly by spending mode, racial attitudes strongly affect the type of spending that respondents would prefer: those lowest in symbolic racism are expected to prefer the direct spending program to the tax expenditure program, while those high in symbolic racism are expected to prefer the opposite (p. 833).

Data for Study 2 indicated that, in a linear regression using symbolic racism to predict nonBlack participants' support for the programs (controlling for party identification, income, trust, egalitarianism, White race, and male, as coded in the Ellis and Faricy 2020 analyses), the predicted support at the lowest level of symbolic racism, with other predictors at their means, was 3.37 for the tax expenditure program and 3.87 for the direct spending program; at the highest level of symbolic racism, predicted support was 3.44 for the tax expenditure program and 3.24 for the direct spending program.

However, linear regression can misestimate treatment effects at particular levels of a moderator. Below is a plot of the treatment effect estimated at individual levels of symbolic racism, with no controls (left panel) and with the aforementioned controls (right panel).

There does not appear to be much evidence in these data that participants high in symbolic racism preferred one program to the other. For example, in the left panel, at the highest level of symbolic racism, the estimated support was 2.76 for the tax expenditure program and 2.60 for the direct spending program (p=0.41 for the difference). Moreover, the p-value for the difference did not drop below 0.4 when participants from adjacent high levels of symbolic racism were included (7 and 8; 6 through 8; 5 through 8; or 4 through 8), with or without the controls.
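A sketch of the kind of subgroup test behind such p-values (a Python illustration with a Welch-style z statistic and a normal approximation, not the exact procedure in my analyses; pooling adjacent symbolic racism levels simply widens the subgroup before running the same test):

```python
import math

def welch_p(y1, y0):
    """Two-sided p-value for the difference in mean support between two
    randomly assigned program conditions, using a Welch-style z statistic
    and a normal approximation."""
    n1, n0 = len(y1), len(y0)
    m1, m0 = sum(y1) / n1, sum(y0) / n0
    v1 = sum((y - m1) ** 2 for y in y1) / (n1 - 1)
    v0 = sum((y - m0) ** 2 for y in y0) / (n0 - 1)
    z = (m1 - m0) / math.sqrt(v1 / n1 + v0 / n0)
    # two-sided p from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```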

---

NOTES

1. Code for my analyses and plot. Data for the plot.


The plot below is from the Burge et al. 2020 Journal of Politics article "A Certain Type of Descriptive Representative? Understanding How the Skin Tone and Gender of Candidates Influences Black Politics":

I thought that the plot could be improved. Some superficial shortcomings of the plot:

[1] Placing dependent variable information in the legend unnecessarily causes readers to need to decipher the dot, triangle, and X symbols.

[2] The y-axis text is unnecessarily vertical, and vertical text is more difficult to read than horizontal text.

[3] The panels are a lot taller than needed, so the top estimate is farther from the x-axis labels than needed.

Some other flaws are better understood with information about the experiment. Black participants were randomly assigned to rate a candidate whose characteristics varied, such as being female and dark-skinned (Dark Julie) or male and light-skinned (Light James). Participants responded to items about the candidate, such as reporting their willingness to vote for the candidate. The key result, indicated in the abstract, is that "darker-skinned candidates are evaluated more favorably than lighter-skinned candidates" (p. 1).

[4] The estimates of interest consume too little of the plot space. The dependent variables were placed on a 0-to-1 scale, and the plotted estimates are differences on this scale, so estimates between -1 and +1 are possible; but the estimates are all small, so the x-axes do not need to run from -0.5 to +0.5. The estimate of interest is the difference in responses between candidates and not the absolute values of the responses, so I think that it is fine to zoom in on the estimates and not show the full potential scale on the x-axis.

Below is a plot that addresses these points:

I also changed the dependent variables from a 0-to-1 scale to a 0-to-100 scale, to avoid decimals on the x-axis, because decimals involve unnecessary periods and sometimes unnecessary leading zeros. For example, for the difference between Dark James and Light James in the middle panel, I would prefer to have the relevant tick labeled "5" than ".05" or "0.05".

And I removed what I thought was information that could be placed into a figure note or dropped altogether from the figure (such as sample size and model numbers). The note on the data source could also be placed into the figure note for journal publication, but I'm including it in this plot, in case I tweet the plot.

---

Another potential improvement is to revise the plot to emphasize the key finding, about the skin tone difference. The original Burge et al. 2020 plot includes a comparison of Dark Julie to Dark James, but does not include a comparison of Light Julie to Light James (all three comparisons of Light Julie to Light James are nulls). But the inclusion of the third panel in the original Burge et al. 2020 plot dilutes the focus on the skin color comparison. Here is a plot focusing on only the dark/light comparison:

Potential shortcomings of the above plot are the absence of the absolute values for the estimates and an inability to make across-sex comparisons of, say, Light Julie to Dark James. The plot below includes absolute values, permits comparisons across sex, and still permits the key finding about skin color to be relatively easily discerned:

The plot below uses shading to encourage by-color comparison of candidate pairs within panel:

Maybe it would be better to emphasize the dark/light finding by using a light dot for the "Light" candidates and a dark dot for the "Dark" candidates. And, for a stand-alone plot, maybe it would be better to add a title summarizing the key pattern, such as "Black participants tended to prefer the darker-skinned Black candidates". Feel free to comment on any other improvements that could be made.

---

NOTES

1. Code and data for the 3-panel plot.

2. Code and data for the 2-panel plot.

3. Code and data for the unshaded 1-panel plot.

4. Code and data for the shaded 1-panel plot.
