I posted earlier about Jardina and Piston 2021, "The Effects of Dehumanizing Attitudes about Black People on Whites' Voting Decisions".

Jardina and Piston 2021 limited their analysis to White respondents, even though the Qualtrics_BJPS dataset at the study's Dataverse page contained observations for non-White respondents. That dataset had variables such as aofmanpic_1 and aofmanpic_6, and I didn't know which of these variables corresponded to which target groups.

That post indicated a plan to follow up if I got sufficient information to analyze responses from non-White participants. Replication code has now been posted at version 2 of the Dataverse page for Jardina and Piston 2021, so this is that planned follow-up post.

---

Version 2 of the Jardina and Piston 2021 Dataverse page has a Qualtrics dataset (Qualtrics_2016_BJPS_raw) that differs from the version 1 Qualtrics dataset (Qualtrics_BJPS): for example, the version 2 dataset doesn't contain data for non-White respondents, doesn't contain the respondent ID variables V1 and uid, and doesn't contain variables such as aofmanpic_2.
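For anyone comparing the two files, these differences can be checked directly in Stata. Below is a minimal sketch, assuming the datasets have been saved locally under their Dataverse names:

* Version 1 dataset: list the "aofman" variables and check for the ID variables
use "Qualtrics_BJPS.dta", clear
ds aofman*
confirm variable V1 uid

* Version 2 dataset: the same checks (confirm reports an error if V1 and uid are absent)
use "Qualtrics_2016_BJPS_raw.dta", clear
ds aofman*
capture noisily confirm variable V1 uid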

I ran the Jardina and Piston 2021 "aofman" replication code on the Qualtrics_BJPS dataset to get a variable named "aofmanwb". Run on the version 2 dataset, the same code reproduced the output for the Trump analysis in Table 1 of Jardina and Piston 2021, so this aofmanwb variable is the "Ascent of man" dehumanization measure, coded so that rating Blacks as equally evolved as Whites falls at 0.5, ratings of Whites as more evolved than Blacks run from just above 0.5 up to 1, and ratings of Blacks as more evolved than Whites run from just under 0.5 down to 0.

The version 2 replication code for Jardina and Piston 2021 suggests that aofmanpic_1 is for rating how evolved Blacks are and aofmanpic_4 is for rating how evolved Whites are. So unless these variable names were changed between versions of the dataset, the version 2 replication code should produce the "Ascent of man" dehumanization measure when applied to the version 1 dataset, which is still available at the Jardina and Piston 2021 Dataverse page.
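For readers who want to reconstruct the measure themselves, below is a minimal sketch of one plausible construction, assuming that aofmanpic_1 and aofmanpic_4 are the 0-to-100 ratings of Blacks and Whites in one experimental condition and that the aofmanvinc_* and aofmannopi_* variables are the parallel ratings in the other conditions; the posted version 2 replication code, not this sketch, is the authoritative construction.

* Sketch: combine the condition-specific 0-to-100 ratings into single ratings
* (assumes aofmanvinc_* and aofmannopi_* are the parallel variables for the
* other experimental conditions)
gen rate_black = aofmanpic_1
replace rate_black = aofmanvinc_1 if missing(rate_black)
replace rate_black = aofmannopi_1 if missing(rate_black)

gen rate_white = aofmanpic_4
replace rate_white = aofmanvinc_4 if missing(rate_white)
replace rate_white = aofmannopi_4 if missing(rate_white)

* 0-to-1 measure: 0.5 = equal ratings, above 0.5 = Whites rated more evolved,
* below 0.5 = Blacks rated more evolved
gen aofmanwb_sketch = (rate_white - rate_black)/200 + 0.5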

To check, I ran commands such as "reg aofmanwb ib4.ideology if race==1 & latino==2" in both datasets and got similar but not identical results, with the difference presumably due to the differences between datasets discussed in the notes below.
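A sketch of that check, again assuming the two files are saved locally under their Dataverse names:

* Version 1 dataset
use "Qualtrics_BJPS.dta", clear
* (create aofmanwb here by running the version 2 "aofman" replication code)
reg aofmanwb ib4.ideology if race==1 & latino==2

* Version 2 dataset
use "Qualtrics_2016_BJPS_raw.dta", clear
* (create aofmanwb here by running the version 2 "aofman" replication code)
reg aofmanwb ib4.ideology if race==1 & latino==2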

---

The version 1 Qualtrics dataset didn't contain anything that appeared to be a weight variable, so my analyses below are unweighted.

In the version 1 dataset, the medians of aofmanwb were 0.50 among non-Latino Whites in the sample (N=450), 0.50 among non-Latino Blacks in the sample (N=98), and 0.50 among respondents coded Asian, Native American, or Other (N=125). Respective means were 0.53, 0.48, and 0.51.
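For example, the non-Latino White figures came from a command along these lines, with the other groups defined analogously from the race and latino variables:

* Median, mean, and sample size of the dehumanization measure, non-Latino Whites
summarize aofmanwb if race==1 & latino==2, detail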

Figure 1 of Jardina and Piston 2021 mentions the use of sliders to select responses to the items about how evolved the target groups are. I think some unequal ratings might reflect respondent imprecision rather than an intent to dehumanize: for example, a respondent might intend to select 85 for each group in a pair, move the slider to 85 for one group and 84 for the other, and figure that this was close enough. So I'll report percentages below under a strict definition that counts any rating differing from 0.5 on the 0-to-1 scale as dehumanization, and I'll also report percentages with a tolerance for potential unintentional dehumanization.

---

For the strict coding of dehumanization, I recoded aofmanwb into a variable that had levels for [1] rating Blacks as more evolved than Whites, [2] equal ratings of how evolved Blacks and Whites are, and [3] rating Whites as more evolved than Blacks.
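A minimal sketch of that recode (the variable name and labels below are mine):

* Strict trichotomy: 1 = Blacks rated more evolved, 2 = equal ratings,
* 3 = Whites rated more evolved
gen aofman3 = .
replace aofman3 = 1 if aofmanwb < 0.5
replace aofman3 = 2 if aofmanwb == 0.5
replace aofman3 = 3 if aofmanwb > 0.5 & !missing(aofmanwb)
label define aofman3 1 "Blacks more evolved" 2 "Equal" 3 "Whites more evolved"
label values aofman3 aofman3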

In the version 1 dataset, 13% of non-Latino Whites in the sample rated Blacks more evolved than Whites, with an 83.4% confidence interval of [11%, 16%], and 39% rated Whites more evolved than Blacks [36%, 43%]. 42% of non-Latino Blacks in the sample rated Blacks more evolved than Whites [35%, 49%], and 23% rated Whites more evolved than Blacks [18%, 30%]. 19% of respondents not coded Black or White in the sample rated Blacks more evolved than Whites [15%, 25%], and 38% rated Whites more evolved than Blacks [32%, 45%].
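These percentages and intervals can be estimated with Stata's proportion command; 83.4% intervals are used because non-overlap of two such intervals roughly corresponds to a two-tailed test at the 0.05 level when the standard errors are similar. A sketch for non-Latino Whites, using the trichotomy sketched above:

* Unweighted percentages with 83.4% confidence intervals, non-Latino Whites
proportion aofman3 if race==1 & latino==2, level(83.4)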

---

For the non-strict coding of dehumanization, I recoded aofmanwb into a variable that had levels that included [1] rating Blacks at least 3 units more evolved than Whites on a 0-to-100 scale, and [5] rating Whites at least 3 units more evolved than Blacks on a 0-to-100 scale.
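A sketch of that recode, assuming (as in the construction sketched earlier) that aofmanwb is a linear transformation of the White-minus-Black gap on the 0-to-100 scale, so that the gap can be recovered and the 3-unit tolerance applied with integer comparisons (variable names are mine):

* Implied White-minus-Black gap on the 0-to-100 scale
gen gapwb = round((aofmanwb - 0.5)*200)

* 1 = Blacks rated at least 3 units more evolved, 2 = Blacks 1-2 units more,
* 3 = equal ratings, 4 = Whites 1-2 units more, 5 = Whites at least 3 units more
gen aofman5 = .
replace aofman5 = 1 if gapwb <= -3
replace aofman5 = 2 if inrange(gapwb, -2, -1)
replace aofman5 = 3 if gapwb == 0
replace aofman5 = 4 if inrange(gapwb, 1, 2)
replace aofman5 = 5 if gapwb >= 3 & !missing(gapwb)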

In the version 1 dataset, 8% of non-Latino Whites in the sample rated Blacks more evolved than Whites [7%, 10%], and 30% rated Whites more evolved than Blacks [27%, 34%]. 34% of non-Latino Blacks in the sample rated Blacks more evolved than Whites [27%, 41%], and 21% rated Whites more evolved than Blacks [16%, 28%]. 13% of respondents not coded Black or White in the sample rated Blacks more evolved than Whites [9%, 18%], and 31% rated Whites more evolved than Blacks [26%, 37%].

---

NOTES

1. Variable labels in the Qualtrics dataset ("male" coded 0 for "Male" and 1 for "Female") and the associated replication commands suggest that Jardina and Piston 2021 might have reported results for a "Female" variable coded 1 for male and 0 for female. That would explain why Table 1 Model 1 of Jardina and Piston 2021 indicates that females were predicted to have higher ratings of Trump net of controls at p<0.01, even though the statistically significant coefficients for "Female" in the analyses of the other datasets in Jardina and Piston 2021 are negative when predicting positive outcomes for Trump.

The "Female" variable in Jardina and Piston 2021 Table 1 Model 1 is right above the statistically significant coefficient and standard error for age, of "0.00" and "0.00". The table note indicates that "All variables are transformed onto a 0 to 1 scale.", but that isn't correct for the age predictor, which ranges from 19 to 86.

2. I produced a plot like Jardina and Piston 2021 Figure 3, but with a range running from the most dehumanization of Whites relative to Blacks to the most dehumanization of Blacks relative to Whites. The 95% confidence interval for Trump ratings at the most dehumanization of Whites relative to Blacks did not overlap with the 95% confidence interval for Trump ratings at no dehumanization (equal ratings of Whites and Blacks). But, as indicated in my later analyses, that might merely be due to the Jardina and Piston 2021 use of aofmanwb as a continuous predictor: the aforementioned inference wasn't supported using 83.4% confidence intervals when the aofmanwb predictor was trichotomized as described above.
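A sketch of how such a plot can be produced, using thermind_2 as the Trump feeling thermometer (per note 5 below; the corresponding variable name in the version 1 dataset may differ) and omitting the Jardina and Piston 2021 controls for brevity, so this is not the published model:

* Predicted Trump thermometer ratings across the full range of aofmanwb
* (controls omitted; the published Figure 3 model includes more predictors)
reg thermind_2 aofmanwb if race==1 & latino==2
margins, at(aofmanwb=(0(0.1)1))
marginsplot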

3. Regarding differences between the Qualtrics datasets posted to the Jardina and Piston 2021 Dataverse page, the Stata command "tab race latino, mi" returns 980 respondents who selected "White" for the race item and "No" for the Latino item in the version 1 Qualtrics dataset, but 992 such respondents in the version 2 Qualtrics dataset.

Both version 1 and version 2 of the Qualtrics datasets contain exactly one observation with a 1949 birth year and a state of Missouri. In both datasets, this observation has codes that indicate a White non-Latino neither-liberal-nor-conservative male Democrat with some college but no degree who has an income of $35,000 to $39,999. That observation has values of 100 for aofmanvinc_1 and 100 for aofmanvinc_4 in the version 2 Qualtrics dataset, but, in the version 1 Qualtrics dataset, that observation has no numeric values for aofmanvinc_1, aofmanvinc_4, or any other variable starting with "aofman".

I haven't yet received an explanation about this from Jardina and/or Piston.

4. Below is a description of additional checks on whether aofmanwb is interpreted correctly above, given that the Dataverse page for Jardina and Piston 2021 doesn't have a codebook.

I dropped all cases in the original dataset not coded race==1 and latino==2. Case 7 in the version 2 dataset is from New York, born in 1979, with an aofmanpic_1 of 84 and an aofmanpic_4 of 92; this matches Case 7 in the version 1 dataset after dropping the aforementioned cases. Case 21 in the version 1 dataset is from South Carolina, born in 1966, with an aofmanvinc_1 of 79 and an aofmanvinc_4 of 75; this matches Case 21 in the version 2 dataset after dropping the aforementioned cases. Case 951 in the version 1 dataset is from Georgia, born in 1992, with an aofmannopi_1 of 77 and an aofmannopi_4 of 65; this matches case *964* in the version 2 dataset after dropping the aforementioned cases.
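A sketch of that kind of check, listing the named aofman variables for specific rows after the drop described above (the state and birth-year comparisons used the corresponding survey variables, whose names I haven't reproduced here):

* Keep the cases coded White and non-Latino, then inspect specific rows
keep if race==1 & latino==2
list aofmanpic_1 aofmanpic_4 in 7
list aofmanvinc_1 aofmanvinc_4 in 21
list aofmannopi_1 aofmannopi_4 in 951   // matches case 964 in the version 2 dataset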

5. From what I can tell, for anyone interested in analyzing the data, thermind_2 in the version 2 dataset is the feeling thermometer about Donald Trump, and thermind_4 is the feeling thermometer about Barack Obama.

6. Stata code and output from my analysis.
