In 2019, Michael Tesler published a Monkey Cage post subtitled "The majority of people who hold racist beliefs say they have an African American friend". Here is a description of these racist beliefs:

Not many whites in the survey took the overtly racist position of saying 'most blacks' lacked those positive attributes. The responses ranged from 9 percent of whites who said 'most blacks' aren't intelligent to 20 percent who said most African Americans aren't law-abiding or generous.

My analysis of the Pew Research Center data used in the Tesler 2019 post indicated that Tesler 2019 labeled as "overtly racist" the belief that most Blacks are not intelligent, even if a participant also indicated that most Whites are not intelligent.

In the Pew Research Center data (citation below), including Don't Knows and refusals, 118 of 1,447 Whites responded "No" to the question of whether most Blacks are intelligent, which is about 8 percent. However, 57 of the 118 Whites who responded "No" to the question of whether most Blacks are intelligent also responded "No" to the question of whether most Whites are intelligent. Thus, based on these intelligence items, 48 percent of the White participants whom Tesler 2019 coded as taking an "overtly racist position" against Blacks also took a (presumably) overtly racist position against Whites. It could be that about half of the Whites who are openly racist against Blacks are also openly racist against Whites, or it could be that most or all of these 57 White participants hold a nonracial belief that most people are not intelligent.

Even the classification of responses from the 56 Whites who reported "No" for whether most Blacks are intelligent and "Yes" for whether most Whites are intelligent should address the literature on the distribution of IQ test scores in the United States and the possibility that at least some of these 56 Whites used the median U.S. IQ as the threshold for being intelligent.
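As a check on these counts, here is a minimal Stata sketch using the item names from the Note 5 code; the mapping of q69b to the Black-intelligence item and the assumption that a value of 2 codes a "No" response are assumptions on my part:

tab q69b q70b if racethn==1, mi // the two intelligence items among non-Hispanic Whites

count if racethn==1 & q69b==2 // "No" to most Blacks intelligent: 118

count if racethn==1 & q69b==2 & q70b==2 // "No" for both groups: 57

display 57/118 // about .48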

---

I offered Michael Tesler an opportunity to reply. His reply is below:

Scholars have long disputed what constitutes racism in survey research. Historically, these disagreements have centered around whether racial resentment items like agreeing that “blacks could be just as well off as whites if they only tried harder” are really racism or prejudice. Because of these debates, I have avoided calling whites who score high on the racial resentment scale racists in both my academic research and my popular writing.

Yet even scholars who are most critical of the racial resentment measure, such as Ted Carmines and Paul Sniderman, have long argued that self-reported racial stereotypes are “self-evidently valid” measures of prejudice.  So, I assumed it would be relatively uncontroversial to say that whites who took the extreme position of saying that MOST BLACKS aren’t intelligent/hardworking/honest/law-abiding hold racist beliefs.  As the piece in question noted, very few whites took such extreme positions—ranging from 9% who said most blacks aren’t intelligent to 20% who said most blacks are not law-abiding.

If anything, then, the Pew measure of stereotypes used severely underestimates the extent of white racial prejudice in the country.  Professor Zigerell suggests that differencing white from black stereotypes is a better way to measure prejudice.  But this isn’t a very discerning measure in the Pew data because the stereotypes were only asked as dichotomous yes-no questions.  It’s all the more problematic in this case since black stereotypes were asked immediately before white stereotypes in the Pew survey and white respondents may have rated their own group less positively to avoid the appearance of prejudice.

In fact, Sniderman and Carmines’s preferred measure of prejudice—the difference between 7-point anti-white stereotypes and 7-point anti-black stereotypes—reveals far more prejudice than I reported from the Pew data.  In the 2016 American National Election Study (ANES), for example, 48% of whites rated their group as more hardworking than blacks, compared to only 13% in the Pew data who said most blacks are not hardworking.  Likewise, 53% of whites in the 2016 ANES rated blacks as more violent than whites and 25% of white Americans in the pooled 2010-2018 General Social Survey rated whites as more intelligent than blacks.

Most importantly, the substantive point of the piece in question—that whites with overtly racist beliefs still overwhelmingly claim they have black friends—remains entirely intact regardless of measurement.  Even if one wanted to restrict racist beliefs to only those saying most blacks are not intelligent/law-abiding AND that most whites are intelligent/law-abiding, 80%+ of these individuals who hold racist beliefs reported having a black friend in the 2009 Pew Survey.

All told, the post in question used a very narrow measure, which found far less prejudice than other valid stereotype measures, to make the point that the vast majority of whites with overtly racist views claim to have black friends.  Defining prejudice even more narrowly leads to the exact same conclusion.

I'll add a response in the comments.

---

NOTES

1. The title of the Tesler 2019 post is "No, Mark Meadows. Having a black friend doesn't mean you're not racist".

2. Data citation: Pew Research Center for the People & the Press/Pew Social & Demographic Trends. Pew Research Center Poll: Pew Social Trends--October 2009-Racial Attitudes in America II, Oct, 2009 [dataset]. USPEW2009-10SDT, Version 2. Princeton Survey Research Associates International [producer]. Cornell University, Ithaca, NY: Roper Center for Public Opinion Research, RoperExpress [distributor], accessed Aug-14-2019.

3. "White" and "Black" in the data analysis refer to non-Hispanic Whites and non-Hispanic Blacks.

4. In the Pew data, more White participants (147) reported "No" for the question of whether most Whites are intelligent, compared to the number of White participants (118) who reported "No" for the question of whether most Blacks are intelligent.

Patterns were similar among the 812 Black participants: 145 Black participants reported "No" for the question of whether most Whites are intelligent, but only 93 Black participants reported "No" for the question of whether most Blacks are intelligent.

Moreover, 76 White participants reported "Yes" for the question of whether most Blacks are intelligent and "No" for the question of whether most Whites are intelligent.

5. Stata code:

tab racethn, mi // race/ethnicity distribution (1 = non-Hispanic White, 2 = non-Hispanic Black)

tab q69b q70b if racethn==1, mi // the two intelligence items cross-tabbed, non-Hispanic Whites

tab q69b q70b if racethn==2, mi // the two intelligence items cross-tabbed, non-Hispanic Blacks

---

Racial attitudes have substantially correlated with environmental policy preferences net of partisanship and ideology, such as here, here, and here. These results were from data collected in 2012 or later. So, to address the concern that this association is due to "spillover" of anti-Obama attitudes into non-racial policy areas, I checked whether the traditional four-item measure of racial resentment substantially correlated with environmental policy preferences net of partisanship and ideology in ANES data from 1986, which I think is the first time these items appeared together on an ANES survey.

I limited the sample to non-Hispanic Whites and controlled for participant gender, education, age, family income, partisanship, ideology, and the race of the interviewer. The outcome variable concerns federal spending on improving and protecting the environment, which I coded so that 1 was "increased" and 0 was "same" or "decreased", with Don't Knows and Not Ascertaineds coded as missing; only 4 percent of respondents indicated "decreased".
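Here is a minimal sketch of that outcome coding; the raw ANES 1986 spending item is given the hypothetical name envspend, with hypothetical value codes:

gen env2 = . // Don't Knows and Not Ascertaineds stay missing

replace env2 = 1 if envspend==1 // "increased"

replace env2 = 0 if inlist(envspend, 2, 3) // "same" or "decreased"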

With other model variables at their means, the predicted probability of a reported preference for increased federal spending on improving and protecting the environment was 65% [54%, 76%] at the lowest level of racial resentment but fell to 39% [31%, 47%] at the highest level of racial resentment. That's a substantial 26 percentage-point drop "caused" by racial attitudes, for anyone who thinks that such a research design permits causal inference.

---

NOTES:

1. Kinder and Sanders 1996 used racial resentment to predict non-racial attitudes (pp. 121-124), but, based on my reading of that section, I don't think KS96 predicted this environmental policy preference variable.

2. Data source: Warren E. Miller and the University of Michigan. Institute for Social Research. American National Election Studies. ANES 1986 Time Series Study. Inter-university Consortium for Political and Social Research [distributor].

3. Stata code and output.

4. The post title is about 1986, but some ANES 1986 interviews were conducted in Jan/Feb 1987. The key result still holds if the sample is limited to cases with an "86" year for the "Date of Interview" variable, with respective predicted probabilities of 67% and 37% (p=0.002 for racial resentment). Four or so dates appear to be incorrect, such as "01-04-86", "12-23-87", and "11-18-99". Code:

* sample: non-Hispanic Whites with an "86" interview year
logit env2 RR4 i.female i.educ age i.finc i.party i.ideo i.V860037 if NHwhite==1 & substr(V860009, 7, 8)=="86"
* predicted probabilities at the lowest and highest racial resentment, other variables at means
margins, atmeans at(RR4=(0 1))

---

The 2018 Cooperative Congressional Election Survey included two items labeled as measures of "sexism", for which respondents received five response options from "strongly agree" to "strongly disagree". One of these sexism measures is the Glick and Fiske 1996 hostile sexism statement that "Feminists are making entirely reasonable demands of men". This item was recently used in the forthcoming Schaffner 2020 article in the British Journal of Political Science.

It is not clear to me what "demands" the statement refers to. Moreover, it seems plausible that Democrats conceptualize these demands differently than Republicans do, so that, in effect, many Democrats would be responding to a different item than many Republicans. Democrats might be more likely to think of reasonable demands, such as support for equal work for equal pay, but Republicans might be more likely to think of more disputable demands, such as support for taxpayer-funded late-term abortions.

---

To assess whether CCES 2018 respondents were thinking only of the reasonable demand of men's support for equal work for equal pay, let's check data from the 2016 American National Election Studies (ANES) Time Series Study, which asked post-election survey participants to respond to the item: "Do you favor, oppose, or neither favor nor oppose requiring employers to pay women and men the same amount for the same work?".

In weighted ANES 2016 data, 87% of participants asked that item favored requiring employers to pay women and men the same amount for the same work, including non-substantive responses, with a 95% confidence interval of [86%, 89%]. However, in weighted CCES 2018 post-election data, only 38% of participants somewhat or strongly agreed that feminists are making entirely reasonable demands of men, including non-substantive responses, with a 95% confidence interval of [37%, 39%].

So, in these weighted national samples, 87% favored requiring employers to pay women and men the same amount for the same work, but only 38% agreed that feminists are making entirely reasonable demands of men. I think that this is strong evidence that a large percentage of U.S. adults do not think of only reasonable demands when responding to the statement that "Feminists are making entirely reasonable demands of men".

---

To address the concern that the interpretation of the "demands" differs by partisanship, here are support levels by partisan identification:

Democrats

  • 92% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 59% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 33 percentage-point difference

Republicans

  • 84% favor requiring employers to pay women and men the same amount for the same work [2016 ANES]
  • 18% agree that feminists are making entirely reasonable demands of men [2018 CCES]
  • 66 percentage-point difference

So that's an 8-point Democrat/Republican gap in favoring requiring employers to pay women and men the same amount for the same work, but a 41-point Democrat/Republican gap in agreement that feminists are making entirely reasonable demands of men.

I think that this is at least suggestive evidence that a nontrivial percentage of Democrats and an even higher percentage of Republicans are not thinking of reasonable feminist demands such as support for equal work for equal pay. If, when responding to the "feminist demands" item, Democrats on average think of different demands than Republicans do, then inferring sexism in politically relevant variables from such a vague item that different political groups interpret differently seems like a poor research design.

---

NOTES:

1. ANES 2016 citations:

The American National Election Studies (ANES). 2016. ANES 2016 Time Series Study [dataset]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor].

ANES. 2017. "User's Guide and Codebook for the ANES 2016 Time Series Study". Ann Arbor, MI, and Palo Alto, CA: The University of Michigan and Stanford University.

2. CCES 2018 citation:

Stephen Ansolabehere, Brian F. Schaffner, and Sam Luks. Cooperative Congressional Election Study, 2018: Common Content. [Computer File] Release 2: August 28, 2019. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu.

3. ANES 2016 Stata code:

tab V162149 // equal-pay item, before sample restriction

tab V160502 // post-election interview status

keep if V160502==1 // keep post-election completers

tab V162149

gen favorEQpay = V162149

recode favorEQpay (-9 -8 2 3=0) // 1 = favor; oppose, neither, and nonresponse = 0

tab V162149 favorEQpay, mi // check the recode

svyset [pweight=V160102], strata(V160201) psu(V160202) // survey design: weight, strata, PSU

svy: prop favorEQpay

tab V161155 // party identification (1 = Democrat, 2 = Republican)

svy: prop favorEQpay if V161155==1 // Democrats

svy: prop favorEQpay if V161155==2 // Republicans

4. CCES 2018 Stata code:

tab CC18_422d tookpost, mi // feminist-demands item by post-election status

tab CC18_422d tookpost, mi nol // same, with numeric codes

keep if tookpost==2 // keep post-election completers

tab CC18_422d, mi

gen femagree = CC18_422d

recode femagree (3/5 .=0) (1/2=1) // 1 = strongly or somewhat agree; all else 0

tab CC18_422d femagree, mi // check the recode

svyset [pw=commonpostweight] // post-election weight

svy: prop femagree

tab CC18_421a // party identification (1 = Democrat, 2 = Republican)

svy: prop femagree if CC18_421a==1 // Democrats

svy: prop femagree if CC18_421a==2 // Republicans

---

This post discusses a commonly used "blatant" measure of dehumanization. Let me begin by proposing two blatant measures of dehumanization:

1. Yes or No?: Do you think that members of Group X are fully human?

2. On a scale in which 0 is not at all human and 10 is fully human, where would you rate members of Group X?

I would interpret a "No" response for the first measure and a response of any number lower than 10 for the second measure as dehumanization of members of Group X. If there is no reasonable alternate interpretation of these responses, then these are face-valid, unambiguous measures of blatant dehumanization.
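In code terms, the classification rule for the second measure is a one-liner; this Stata sketch uses the hypothetical variable name rating0to10 for the 0-to-10 responses:

gen dehum = (rating0to10 < 10) if !missing(rating0to10) // 1 = rated below fully human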

---

But neither measure above is the commonly used social science measure of blatant dehumanization. Instead, the commonly used "measure of blatant dehumanization" (from Kteily et al. 2015), referred to as the Ascent measure, is below:

And here is how Kteily et al. 2015 described the ends of the tool (emphasis omitted):

Responses on the continuous slider were converted to a rating from 0 (least "evolved") to 100 (most "evolved")...

Note that participants are instructed to rate how "evolved" they consider the average member of a group to be, and these ratings, placed on a scale from "least evolved" to "most evolved", are then interpreted as participant perceptions of the humanness of the group. This doesn't seem like a measure of blatant dehumanization if participants aren't asked to indicate how human they perceive the average member of a group to be.

The Ascent measure is a blatant measure of dehumanization only if "human" and "evolved" are identical concepts, but they aren't: it is possible to simultaneously believe that Bronze Age humans are fully human and that Bronze Age humans are less evolved than humans today. Moreover, I think that the fourth figure in the Ascent image is a Cro-Magnon, which scientists classify as human, and Kteily et al. seem to agree:

...the image is used colloquially to highlight a salient distinction between early human ancestors and modern humans; that is, the full realization of cognitive ability and cultural expression

The perceived humanness of the fourth figure matters for understanding responses to the Ascent measure because much of the variation in responses occurs between the fourth figure and fifth figure (for example, see Table 1 of Kteily et al. 2015 and Note 1 below).

There is an important distinction between participants dehumanizing a group and participants rating one group lower than another group on a measure that participants interpret as indicating something other than "humanness", such as the degree of "realization of cognitive ability and cultural expression", especially because I don't think that humans need to have "the full realization of cognitive ability and cultural expression" in order to be fully human.

---

NOTES

1. The Jardina and Piston TESS study, conducted in 2015 and 2016 with only non-Hispanic White participants, had an Ascent measure in which 66% and 77% of unweighted responses for the respective targets of Blacks and Whites were in the 91-to-100 range.

2. I made some of the above points in 2015 in the ANES Online Commons. Lee Jussim raised issues discussed above in 2018, and I didn't find anything earlier.

3. More Twitter discussion of the Ascent measure: here with no reply, here with no reply, here with a reply, here with a reply.

---

The PS: Political Science and Politics article "Fear, Institutionalized Racism, and Empathy: The Underlying Dimensions of Whites' Racial Attitudes" by Christopher D. DeSante and Candis Watts Smith reports results for four racial attitudes items from a "FIRE" battery.

I have a paper and a blog post indicating that combinations of these items substantially associate with environmental policy preferences net of controls for demographics, partisanship, and political ideology. DeSante and Smith have a paper that reported an analysis that uses combinations of these items to predict an environmental policy preference ("Support E.P.A.", in Table 3 of the paper), but results for this outcome variable are not mentioned in the DeSante and Smith 2020 PS publication. DeSante and Smith 2020 reports results for the four FIRE racial attitudes items separately, so I will do so below for environmental policy preference outcome variables, using data from the 2016 Cooperative Congressional Election Study (CCES).

---

Square brackets below contain predicted probabilities from a logistic regression—net of controls for gender, education, age, family income, partisanship, and political ideology—of selecting "oppose" regarding the policy "Strengthen enforcement of the Clean Air Act and Clean Water Act even if it costs US jobs". The sample is limited to White respondents, and the estimates are weighted. The first probability in square brackets is at the highest level of measured agreement with the indicated statement on a five-point scale, with all other model predictors at their means; the second probability is for the corresponding highest level of measured disagreement. A minimal code sketch appears after the list.

  • [38% to 56%, p<0.05] I am angry that racism exists.
  • [29% to 58%, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [39% to 42%, p>0.05] I often find myself fearful of people of other races.
  • [51% to 36%, p<0.05] Racial problems in the U.S. are rare, isolated situations.
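For the mechanics, here is a minimal Stata sketch of the type of model behind these bracketed estimates; every variable name (opposeCAA for the dichotomized Clean Air/Clean Water item, angryracism for the five-point agreement item, white for the race indicator, commonweight for the survey weight, and the control names) is a hypothetical stand-in for the CCES 2016 codebook names:

logit opposeCAA i.angryracism i.female i.educ c.age i.faminc i.pid7 i.ideo5 if white==1 [pweight=commonweight]

margins angryracism, atmeans // predicted Pr(oppose) at each agreement level, other predictors at means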

Results below are from a fractional logistic regression predicting an index of the four environmental policy items, summed and placed on a 0-to-1 scale (a corresponding code sketch appears at the end of this section):

  • [0.28 to 0.48, p<0.05] I am angry that racism exists.
  • [0.23 to 0.44, p<0.05] White people in the U.S. have certain advantages because of the color of their skin.
  • [0.28 to 0.32, p<0.05] I often find myself fearful of people of other races.
  • [0.42 to 0.26, p<0.05] Racial problems in the U.S. are rare, isolated situations.

The standard deviation of the 0-to-1 four-item environmental policy index is 0.38, so three of the four results immediately above indicate nontrivially large differences in predictions for an environmental policy outcome that has no theoretical connection to race. I think that this raises legitimate questions about whether these racial attitudes items should ever be used to estimate the causal influence of racial attitudes.
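And a corresponding sketch for the fractional logit, again with hypothetical variable names (envindex is the four environmental items summed and placed on a 0-to-1 scale):

fracreg logit envindex i.angryracism i.female i.educ c.age i.faminc i.pid7 i.ideo5 if white==1 [pweight=commonweight]

margins angryracism, atmeans // predicted index value at each agreement level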

---

NOTES

1. Stata code.

2. Data source: Stephen Ansolabehere and Brian F. Schaffner, Cooperative Congressional Election Study, 2016: Common Content. [Computer File] Release 2: August 4, 2017. Cambridge, MA: Harvard University [producer] http://cces.gov.harvard.edu

---

1.

The Hassell et al. 2020 Science Advances article "There is no liberal media bias in which news stories political journalists choose to cover" reports null results from two experiments on ideological bias in media coverage.

The correspondence experiment emailed journalists a message about a candidate who planned to announce a candidacy for the state legislature, asking whether the journalist would be interested in a sit-down interview with the candidate to discuss the candidate's candidacy and vision for state government. Experimental manipulations involved the description of the candidate, such as "...is a true conservative Republican..." or "...is a true progressive Democrat...".

The conjoint experiment asked journalists to hypothetically choose between two candidacy announcements to cover, with characteristics of the candidates experimentally manipulated.

---

2.

Hassell et al. 2020 claims that (p. 1)...

Using a unique combination of a large-scale survey of political journalists, data from journalists' Twitter networks, election returns, a large-scale correspondence experiment, and a conjoint survey experiment, we show definitively that the media exhibits no bias against conservatives (or liberals for that matter) in what news that they choose to cover.

I think that a good faith claim that research "definitively" shows no media bias against conservatives or liberals in the choice of news to cover should be based on at least one test that is very likely to detect that type of bias. But I don't think that either experiment provides such a "very likely" test.

I think that a "very likely" scenario in which ideology would cause a journalist to not report a story has at least three characteristics: [1] the story unquestionably reflects poorly on the journalist's ideology or ideological group, [2] the journalist has nontrivial gatekeeping ability over the story, and [3] the journalist could not meaningfully benefit from reporting the story.

Regarding [1], it's not clear to me that any of the candidate announcement stories would unquestionably reflect poorly on any ideology or ideological group. An ideological valence to the story is especially lacking in the correspondence experiment, given that a liberal journalist could ask softball questions to try to make a liberal candidate look good and could ask hardball questions to try to make a conservative candidate look bad.

Regarding [2], it's not clear to me that a journalist would have nontrivial gatekeeping ability over the candidate announcement story: it's not like a journalist could keep secret the candidate's candidacy.

---

3.

I think that the title of the Hassell et al. 2020 Monkey Cage post describing this research is defensible: "Journalists may be liberal, but this doesn't affect which candidates they choose to cover". But I'm not sure who thought otherwise.

Hassell et al. 2020 describe the concern about selective reporting as "... journalists may omit news stories that do not adhere to their own (most likely liberal) predispositions" (p. 1). But in what sense does a conservative Republican announcing a candidacy for office have anything to do with adhering to a liberal disposition? The concern about media bias in the selection of stories to cover, as I understand it, is largely about stories that have an obvious implication for ideologically preferred narratives. So something like "Conservative Republican accused of sexual assault", not "Conservative Republican runs for office".

The selective reporting that conservatives complain about is plausibly much more likely—and plausibly much more important—at the national level than at a lower level. For example, I don't think that ideological bias is large enough to cause a local newspaper to not report on a police shooting of an unarmed person in the newspaper's distribution area; however, I think that ideological bias is large enough to influence a national media organization's decisions about which subset of available police shootings to report on.

---

1.

The Carrington and Strother 2020 Politics, Groups, and Identities article, "Who thinks removing Confederate icons violates free speech?", "examine[s] the relationship between both 'heritage' and 'hate' and pro Confederate statue views" (p. 5).

The right panel of Carrington and Strother 2020 Figure 2 indicates how support for Confederate symbols associates with their "hate" measure. Notice how much of the "hate" association is due to those who rate Whites less warmly than they rate Blacks. Imagine a horizontal line extending from [i] the y-axis at a 50 percent predicted probability of support for Confederate symbols to [ii] the far end of the confidence interval: that 50 percent ambivalence about Confederate symbols falls on the "anti-White affect" part of the "hate" measure.

---

2.

The second author of Carrington and Strother 2020 has discussed the Wright and Esses 2017 article that claimed that "Most supporters of the flag are doing so because of their strong Southern pride and their conservative political views and do not hold negative racial attitudes toward Blacks" (p. 235). Moreover, my 2015 Monkey Cage post on support for the Confederate battle flag presented evidence that conflicted with claims that the second author of Carrington and Strother 2020 made in a prior Monkey Cage post.

The published version of Carrington and Strother 2020 did not cite Wright and Esses 2017 or my 2015 post. I don't think that Carrington and Strother 2020 had an obligation to cite either publication, but, if these publications were not cited in the initial submission, that plausibly produced a less rigorous peer review, to the extent that the journal's selection of peer reviewers depends partly on manuscript references. And the review process for Carrington and Strother 2020 appears to have not been especially rigorous, judging from the published article, which reported multiple impossible p-values ("p < .000") and referred to "American's views toward Confederate statues" (p. 5, instead of "Americans' views") and to "the Cour's decision" (p. 7, instead of "the Court's decision").

The main text reports a sample of 332, but the table Ns are 233; presumably, the table results are for Whites only and the 332 is the full set of respondents, but I don't see that mentioned in the article. The appendix indicates that the Figure 2 outcome variable had four levels and that the Figure 3 outcome variable had six levels, but figure results are presented in terms of predicted probabilities, so I suspect that the analysis dichotomized these outcome variables for some reason; let me know if you find an indication of that in the article.

And did no one in the review process raise a concern about the Carrington and Strother 2020 suggestion below that White Southern pride requires or is nothing more than "pride in a failed rebellion whose stated purpose was the perpetuation of race-based chattel slavery" (p. 6)?

It must be noted that White Southern pride should not be assumed to be racially innocuous: it is hard to imagine a racially neutral pride in a failed rebellion whose stated purpose was the perpetuation of race-based chattel slavery.

It seems possible to be proud to be from the South but not have pride in the Confederacy, similar to the way that it is possible to be proud to be a citizen of a country and not have pride in every action of the country or even in a major action of that country.

---

3.

My peer review might have mentioned that, while Figure 2 of Carrington and Strother 2020 indicates that racial attitudes are a larger influence than Southern pride, the research design might have been biased toward this inference: Southern pride is measured with a 5-point item, racial attitudes are measured with a 201-point scale, and it is plausible that a more precise measure might produce a larger association, all else equal.

Moreover, the left panel of Carrington and Strother 2020 Figure 2 indicates that the majority supported Confederate symbols. Maybe I'm thinking about this incorrectly, but much of the association for racial attitudes is due to the "less than neutral about Whites" part of the racial attitudes scale, and there is no corresponding "less than neutral" part of the Southern pride item. Predicted probabilities for the racial attitudes panel extend much lower than neutral because of more negative attitudes about Whites relative to Blacks, but the research design doesn't provide corresponding predicted probabilities for those who have negative attitudes about Southern pride.

---

4.

I think that a core claim of Carrington and Strother 2020 is that (p. 2):

...our findings suggest that the free speech defense of Confederate icons in public spaces is, in part, motivated by racial attitudes.

The statistical evidence presented for this claim is that the racial attitudes measure associates with a measure of agreement with a free speech defense of Confederate monuments. But, as indicated in the right panel of Carrington and Strother 2020 Figure 3, the results are also consistent with the claim that racial attitudes partly motivate *not* agreeing with this free speech defense.

---

5.

The Carrington and Strother 2020 use of a White/Black feeling thermometer difference for their measure of racial attitudes permitted comparison of those who have relatively more favorable feelings about one of the racial groups to those who have relatively more favorable feelings about the other racial group.
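In code, the construction is presumably something like the following Stata sketch, with hypothetical names thermW and thermB for the 0-to-100 feeling thermometers:

gen hate = thermW - thermB // ranges from -100 to 100: a 201-point scale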

The racial resentment measure that is sometimes used as a measure of racial attitudes would presumably instead have coded the bulk of respondents on or near the "Warmer to Black" [sic] end of the Carrington and Strother 2020 "hate" measure as merely not racially resentful, which would not have permitted readers to distinguish those who reported relatively more negative feelings about Whites from those whose reported feelings favored neither Whites nor Blacks.
