Below are leftover comments on publications that I read in 2021.

---

ONO AND ZILIS 2021

Politics, Groups, and Identities published Ono and Zilis 2021, "Do Americans perceive diverse judges as inherently biased?". Ono and Zilis 2021 indicated that "We test whether Americans perceive diverse judges as inherently biased with a list experiment". The statements to test whether Americans perceive diverse judges to be "inherently biased" were:

When a court case concerns issues like #metoo, some women judges might give biased rulings.

When a court case concerns issues like immigration, some Hispanic judges might give biased rulings.

Ono and Zilis 2021 indicated that "...by endorsing that idea, without evidence, that 'some' members of a group are inclined to behave in an undesirable way, respondents are engaging in stereotyping" (pp. 3-4).

But statements about whether *some* Hispanic judges and *some* women judges *might* be biased can't measure stereotypes or the belief that Hispanic judges or women judges are *inherently* biased. For example, a belief that *some* women *might* commit violence doesn't require the belief that women are inherently violent and doesn't even require the belief that women are on average more violent than men are.

---

Ono and Zilis 2021 claimed that "Hispanics do not believe that Hispanic judges are biased" (p. 4, emphasis in the original), but, among Hispanic respondents, the 95% confidence interval for agreement with the claim that Hispanic judges might be biased in cases involving issues like immigration did not cross zero in the multivariate analyses in Figure 1.

For the Table 2 analyses without controls, the corresponding point estimate indicated that 25 percent of Hispanics agreed with the claim about Hispanic judges, but the ratio of the relevant coefficient to its standard error was 0.25/0.15, which is about 1.67, depending on how the 0.25 and the 0.15 were rounded. The corresponding p-value isn't less than p=0.05, but that doesn't support the conclusion that the percentage of Hispanics who agreed with the statement is zero.
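As a rough check, the implied two-sided p-value can be computed from a normal approximation, assuming the reported 0.25 and 0.15 are exact rather than rounded:

```python
import math

def two_sided_p(coef, se):
    """Two-sided p-value for coef/se under a normal approximation."""
    z = coef / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(round(0.25 / 0.15, 2))              # the coefficient-to-SE ratio
print(round(two_sided_p(0.25, 0.15), 2))  # the implied two-sided p-value
```

With z of about 1.67, the two-sided p-value is roughly 0.10: not significant at the conventional threshold, but far from establishing that the true percentage is zero.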

---

BERRY ET AL 2021

Politics, Groups, and Identities published Berry et al 2021, "White identity politics: linked fate and political participation". Berry et al 2021 claimed to have found "notable partisan differences in the relationship between racial linked fate and electoral participation for White Americans". But this claim is based on differences in the presence of statistical significance between estimates for White Republicans and estimates for White Democrats ("Linked fate is significantly and consistently associated with increased electoral participation for Republicans, but not Democrats", p. 528), instead of being based on a statistical test of whether the estimates for White Republicans differ from the estimates for White Democrats.

The estimates in the Berry et al 2021 appendix that I highlighted in yellow appear to be incorrect, in terms of plausibility and based on the positive estimate in the corresponding regression output.

---

ARCHER AND CLIFFORD FORTHCOMING

In "Improving the Measurement of Hostile Sexism" (reportedly forthcoming at Public Opinion Quarterly), Archer and Clifford proposed a modified version of the hostile sexism scale that is item-specific. For example, instead of measuring responses to the statement "Women exaggerate problems they have at work", the corresponding item-specific item measures responses to the question "How often do women exaggerate problems they have at work?". Thus, to get the lowest score on the hostile sexism scale, instead of merely strongly disagreeing that women exaggerate problems they have at work, respondents must report the belief that women *never* exaggerate problems they have at work.

---

Archer and Clifford indicated that responses to some of their revised items are measured on a bipolar scale. For example, respondents can indicate that women are offended "much too often", "a bit too often", "about the right amount", "not quite often enough", or "not nearly often enough". So to get the lowest hostile sexism score, respondents need to indicate that women are wrong about how often they are offended, by not being offended enough.

Scott Clifford, co-author of the Archer and Clifford article, engaged me in a discussion about the item-specific scale (archived here). Scott suggested that the low end of the scale is more feminist, but dropped out of the conversation after I asked how much of an OLS coefficient for the proposed item-specific hostile sexism scale is due to hostile sexism and how much is due to feminism.

How much of the hostile sexism measure is actually sexism seems like something that should have been addressed in peer review, if the purpose of a hostile sexism scale is to estimate the effect of sexism and not merely to estimate the effect of moving from highly positive attitudes about women to highly negative attitudes about women.

---

VIDAL ET AL 2021

Social Science Quarterly published Vidal et al 2021, "Identity and the racialized politics of violence in gun regulation policy preferences". Appendix A indicates that, for the American National Election Studies 2016 Time Series Study, responses to the feeling thermometer about Black Lives Matter ranged from 0 to 999, with a standard deviation of 89.34, even though the ANES 2016 feeling thermometer for Black Lives Matter ran from 0 to 100, with 999 reserved for respondents who indicated that they didn't know what Black Lives Matter is.

---

ARORA AND STOUT 2021

Research & Politics published Arora and Stout 2021, "After the ballot box: How explicit racist appeals damage constituents' views of their representation in government", which noted that:

The results provide evidence for our hypothesis that exposure to an explicitly racist comment will decrease perceptions of interest representation among Black and liberal White respondents, but not among moderate and conservative Whites.

This is, as far as I can tell, a claim that the effect among Black and liberal White respondents will differ from the effect among moderate and conservative Whites, but Arora and Stout 2021 did not report a test of whether these effects differ, although Arora and Stout 2021 did discuss statistical significance for each of the four groups.

Moreover, Arora and Stout 2021 footnote 4 indicates that:

In the supplemental appendix, we confirm that explicit racial appeals have a unique effect on interest representation and are not tied to other candidate evaluations such as vote choice.

But the estimated effect for interest representation (Table 1) was -0.06 units among liberal White respondents (with a "+" indicator for statistical significance), which is the same reported number as the estimated effect for vote choice (Table A5): -0.06 units among liberal White respondents (with a "+" indicator for statistical significance).

None of the other estimates in Table 1 or Table A5 have an indicator for statistical significance.

---

Arora and Stout 2021 repeatedly labeled as "explicitly racist" the statement that "If he invited me to a public hanging, I'd be on the front row", but it's not clear to me how that statement is explicitly racist. The Data and Methodology section indicates that "Though the comment does not explicitly mention the targeted group...". Moreover, the Conclusion of Arora and Stout 2021 indicates that...

In spite of Cindy Hyde-Smith's racist comments during the 2018 U.S. Senate election which appeared to show support for Mississippi's racist and violent history, she still prevailed in her bid for elected office.

... and "appeared to" isn't language that I would expect from an explicit statement.

---

CHRISTIANI ET AL 2021

The Journal of Race, Ethnicity, and Politics published Christiani et al 2021 "Masks and racial stereotypes in a pandemic: The case for surgical masks". The abstract indicates that:

...We find that non-black respondents perceive a black male model as more threatening and less trustworthy when he is wearing a bandana or a cloth mask than when he is not wearing his face covering—especially those respondents who score above average in racial resentment, a common measure of racial bias. When he is wearing a surgical mask, however, they do not perceive him as more threatening or less trustworthy. Further, it is not that non-black respondents find bandana and cloth masks problematic in general. In fact, the white model in our study is perceived more positively when he is wearing all types of face coverings.

Those are the within-model patterns, but it's interesting to compare ratings of the models in the control, pictured below:

Appendix Table B.1 indicates that, on average, non-Black respondents rated the White model as more threatening and more untrustworthy than the Black model: on a 0-to-1 scale, among non-Black respondents, the mean "threatening" ratings were 0.159 for the Black model and 0.371 for the White model, and the mean "untrustworthy" ratings were 0.128 for the Black model and 0.278 for the White model. These Black/White gaps were about five times their standard errors.

Christiani et al 2021 claimed that this baseline difference does not undermine their results:

Fortunately, the divergent evaluations of our two models without their masks on do not undermine either of the main thrusts of our analyses. First, we can still compare whether subjects perceive the black model differently depending on what type of mask he is wearing...Second, we can still assess whether people resolve the ambiguity associated with seeing a man in a mask based on the race of the wearer.

But I'm not sure that it is true that the "divergent evaluations of our two models without their masks on do not undermine either of the main thrusts of our analyses".

I tweeted a question to one of the Christiani et al 2021 co-authors that included the handles of two other co-authors, asking whether it was plausible that masks increase the perceived threat of persons who look relatively nonthreatening without a mask but decrease the perceived threat of persons who look relatively more threatening without a mask. That phenomenon would explain the racial difference in patterns described in the abstract, given that the White model in the control was perceived to be more threatening than the Black model in the control.

No co-author has yet responded to defend their claim.

---

Below are the mean ratings on the 0-to-1 "threatening" scale for models in the "no mask" control group, among non-Black respondents by high and low racial resentment, based on Tables B.2 and B.3:

Non-Black respondents with high racial resentment
0.331 mean "threatening" rating of the White model
0.376 mean "threatening" rating of the Black model

Non-Black respondents with low racial resentment
0.460 mean "threatening" rating of the White model
0.159 mean "threatening" rating of the Black model

---

VICUÑA AND PÉREZ 2021

Politics, Groups, and Identities published Vicuña and Pérez 2021, "New label, different identity? Three experiments on the uniqueness of Latinx", which claimed that:

Proponents have asserted, with sparse empirical evidence, that Latinx entails greater gender-inclusivity than Latino and Hispanic. Our results suggest this inclusivity is real, as Latinx causes individuals to become more supportive of pro-LGBTQ policies.

The three studies discussed in Vicuña and Pérez 2021 had these prompts, with the bold font in square brackets indicating the differences in treatments across the four conditions:

Using the spaces below, please write down three (3) attributes that make you [a unique person/Latinx/Latino/Hispanic]. These could be physical features, cultural practices, and/or political ideas that you hold [as a member of this group].

If the purpose is to assess whether "Latinx" differs from "Latino" and "Hispanic", I'm not sure of the value of the "a unique person" treatment.

Discussing their first study, Vicuña and Pérez 2021 reported the p-value for the effect of the "Latinx" treatment relative to the "unique person" treatment (p<.252) and reported the p-values for the effect of the "Latinx" treatment relative to the "Latino" treatment (p<.046) and the "Hispanic" treatment (p<.119). Vicuña and Pérez 2021 reported all three corresponding p-values when discussing their second study and their third study.

But, discussing their meta-analysis of the three studies, Vicuña and Pérez 2021 reported one p-value, which is presumably for the effect of the "Latinx" treatment relative to the "unique person" treatment.

I tweeted a request to the authors on Dec 20 to post their data, but I haven't yet received a reply.

---

KIM AND PATTERSON JR. 2021

Political Science & Politics published Kim and Patterson Jr. 2021, "The Pandemic and Gender Inequality in Academia", which reported on tweets of tenure-track political scientists in the United States.

Kim and Patterson Jr. 2021 Figure 2 indicates that, in February 2020, the percentage of work-related tweets was about 11 percent for men and 11 percent for women, and that, shortly after Trump declared a national emergency, these percentages had dropped to about 8 percent and 7 percent respectively. Table 2 reports difference-in-difference results indicating that the pandemic-related decrease in the percentage of work-related tweets was 1.355 percentage points larger for women than for men.

That seems like a gender inequality that is relatively small in size and importance, and I'm not sure that this gender inequality in the percentage of work-related tweets offsets the advantage of having the 31.5k-follower @womenalsoknow account tweet about one's research.

---

The abstract of Kim and Patterson Jr. 2021 refers to "tweets from approximately 3,000 political scientists". Table B1 in Appendix B has a sample size of 2,912, with a larger number of women than men at the rank of assistant professor, at the rank of associate professor, and at the rank of full professor. The APSA dashboard indicates that women were 37% of members of the American Political Science Association and that 79.5% of APSA members are in the United States, so I think that Table B1 suggests that a higher percentage of female political scientists than male political scientists might be on Twitter.

Oddly, though, when discussing the representativeness of this sample, Kim and Patterson Jr. 2021 indicated that (p. 3):

Yet, relevant to our design, we found no evidence that female academics are less likely to use Twitter than male colleagues conditional on academic rank.

That's true about not being *less* likely, but my analysis of the data for Kim and Patterson Jr. 2021 Table 1 indicated that, controlling for academic rank, female political scientists from top 50 departments were about 5 percentage points more likely to be on Twitter than male political scientists from top 50 departments.

Table 1 of Kim and Patterson Jr. 2021 is limited to the 1,747 tenure-track political scientists in the United States from top 50 departments. I'm not sure why Kim and Patterson Jr. 2021 didn't use the full N=2,912 sample for the Table 1 analysis.

---

My analysis indicated that the female/male gaps in the sample were as follows: 2.3 percentage points (p=0.655) among assistant professors, 4.5 percentage points (p=0.341) among associate professors, and 6.7 percentage points (p=0.066) among full professors, with an overall 5 percentage point female/male gap (p=0.048) conditional on academic rank.
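For readers who want to reproduce this sort of comparison, a two-proportion z-test is one standard way to get such p-values. The group sizes below are hypothetical, since the per-rank sample sizes aren't reproduced in this post:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test with a pooled variance estimate
    (normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical group sizes: 60% of 300 women vs. 55% of 400 men on Twitter
z, p = two_prop_z(0.60, 300, 0.55, 400)
print(round(z, 2), round(p, 2))
```

The actual p-values in my analysis came from regression output, but the normal-approximation logic is the same.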

---

Kim and Patterson Jr. 2021 suggest a difference in the effect by rank:

Disaggregating these results by academic rank reveals an effect most pronounced among assistants, with significant—albeit smaller—effects for associates. There is no differential effect on work-from-home at the rank of full professor, which is consistent with our hypothesis that these gaps are driven by the increased obligations placed on women who are parenting young children.

But I don't see a test for whether the coefficients differ from each other. For example, in Table 2 for work-related tweets, the "Female * Pandemic" coefficient is -1.188 for associate professors and is -0.891 for full professors, for a difference of 0.297, relative to the respective standard errors of 0.579 and 0.630.
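Such a test is straightforward. Under the simplifying assumption that the two estimates are independent (the published tables don't report a covariance), a z-test on the difference of coefficients looks like this:

```python
import math

def diff_z_test(b1, se1, b2, se2):
    """z-test for b1 - b2, assuming the two estimates are independent."""
    diff = b1 - b2
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    z = diff / se_diff
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# "Female * Pandemic" coefficients for associate vs. full professors (Table 2)
z, p = diff_z_test(-1.188, 0.579, -0.891, 0.630)
print(round(z, 2), round(p, 2))
```

With these numbers, z is well under 1 in absolute value, so the associate professor and full professor coefficients don't detectably differ from each other.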

---

Table 1 of Kim and Patterson Jr. 2021 reported a regression predicting whether a political scientist in a top 50 department was a Twitter user, and the p-values are above p=0.05 for all coefficients for "female" and for all interactions involving "female". That might be interpreted as a lack of evidence for a gender difference in Twitter use among these political scientists, but the interaction terms don't permit a clear inference about an overall gender difference.

For example, associate professor is the omitted category of rank in the regression, so the 0.045 non-statistically significant "female" coefficient indicates only that female associate professor political scientists from top 50 departments were 4.5 percentage points more likely to be a Twitter user than male associate professor political scientists from top 50 departments.

And the non-statistically significant "Female X Assistant" coefficient doesn't indicate whether female assistant professors differ from male assistant professors: instead, the non-statistically significant "Female X Assistant" coefficient indicates only that the associate/assistant difference among men in the sample does not differ at p<0.05 from the associate/assistant difference among women in the sample.
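To get the female/male gap among assistant professors from such a regression, the "female" and "Female X Assistant" coefficients have to be combined, and the standard error of the sum requires the covariance of the two estimates. A sketch, in which every number except the 0.045 "female" coefficient is hypothetical:

```python
import math

def simple_effect(b_female, b_interaction, var_f, var_fx, cov_f_fx):
    """Female/male gap among assistant professors:
    b_female + b_female_x_assistant.
    The standard error of the sum needs the covariance of the two
    estimates, which published regression tables usually omit."""
    est = b_female + b_interaction
    se = math.sqrt(var_f + var_fx + 2 * cov_f_fx)
    return est, se

# Illustrative only: the 0.045 "female" coefficient is from Table 1, but the
# interaction coefficient, the variances, and the covariance are hypothetical.
est, se = simple_effect(0.045, 0.02, 0.03 ** 2, 0.04 ** 2, -0.0005)
print(round(est, 3), round(se, 3))
```

This is the calculation that Stata's lincom-style postestimation commands perform from the full variance-covariance matrix.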

Link to the data. R code for my analysis. R output from my analysis.

---

LEFTOVER PLOT

I had the plot below for a draft post that I hadn't yet published:

Item text: "For each of the following groups, how much discrimination is there in the United States today?" [Blacks/Hispanics/Asians/Whites]. Substantive response options were: A great deal, A lot, A moderate amount, A little, and None at all.

Data source: American National Election Studies. 2021. ANES 2020 Time Series Study Preliminary Release: Combined Pre-Election and Post-Election Data [dataset and documentation]. March 24, 2021 version. www.electionstudies.org.

Stata and R code. Dataset for the plot.


1.

The Hassell et al. 2020 Science Advances article "There is no liberal media bias in which news stories political journalists choose to cover" reports null results from two experiments on ideological bias in media coverage.

The correspondence experiment emailed journalists a message about a candidate who planned to announce a candidacy for state legislator, with a question of whether the journalist would be interested in a sit-down interview with the candidate to discuss the candidate's candidacy and vision for state government. Experimental manipulations involved the description of the candidate, such as "...is a true conservative Republican..." or "...is a true progressive Democrat...".

The conjoint experiment asked journalists to hypothetically choose between two candidacy announcements to cover, with characteristics of the candidates experimentally manipulated.

---

2.

Hassell et al. 2020 claims that (p. 1)...

Using a unique combination of a large-scale survey of political journalists, data from journalists' Twitter networks, election returns, a large-scale correspondence experiment, and a conjoint survey experiment, we show definitively that the media exhibits no bias against conservatives (or liberals for that matter) in what news that they choose to cover.

I think that a good faith claim that research "definitively" shows no media bias against conservatives or liberals in the choice of news to cover should be based on at least one test that is very likely to detect that type of bias. But I don't think that either experiment provides such a "very likely" test.

I think that a "very likely" scenario in which ideology would cause a journalist to not report a story has at least three characteristics: [1] the story unquestionably reflects poorly on the journalist's ideology or ideological group, [2] the journalist has nontrivial gatekeeping ability over the story, and [3] the journalist could not meaningfully benefit from reporting the story.

Regarding [1], it's not clear to me that any of the candidate announcement stories would unquestionably reflect poorly on any ideology or ideological group. An ideological valence to the story is especially lacking in the correspondence experiment, given that a liberal journalist could ask softball questions to try to make a liberal candidate look good and could ask hardball questions to try to make a conservative candidate look bad.

Regarding [2], it's not clear to me that a journalist would have nontrivial gatekeeping ability over the candidate announcement story: it's not like a journalist could keep secret the candidate's candidacy.

---

3.

I think that the title of the Hassell et al. 2020 Monkey Cage post describing this research is defensible: "Journalists may be liberal, but this doesn't affect which candidates they choose to cover". But I'm not sure who thought otherwise.

Hassell et al. 2020 describe the concern about selective reporting as "... journalists may omit news stories that do not adhere to their own (most likely liberal) predispositions" (p. 1). But in what sense does a conservative Republican announcing a candidacy for office have anything to do with adhering to a liberal disposition? The concern about media bias in the selection of stories to cover, as I understand it, is largely about stories that have an obvious implication for ideologically preferred narratives. So something like "Conservative Republican accused of sexual assault", not "Conservative Republican runs for office".

The selective reporting that conservatives complain about is plausibly much more likely—and plausibly much more important—at the national level than at a lower level. For example, I don't think that ideological bias is large enough to cause a local newspaper to not report on a police shooting of an unarmed person in the newspaper's distribution area; however, I think that ideological bias is large enough to influence a national media organization's decisions about which subset of available police shootings to report on.


Peter Gleick has confessed to obtaining confidential Heartland Institute files under false pretenses, and some persons have also accused Gleick of forging the memo that was released with the files. But analyzing the memo as if it were written by one person might be a mistake, because there is evidence that the memo is the work of multiple authors.

Consider these inconsistencies in the less-than-two-page-long memo:

1. The memo varies whether a section heading ends with a period:

2. The memo varies the use of first names:

3. The memo varies the hyphenation of high-profile:

4. The memo varies the use of "such as" and "e.g." when indicating examples:

5. The memo varies the rules for lists: funding is listed from largest amount to smallest amount, but there is no apparent order for listing Romm, Trenberth, and Hansen.

6. The memo varies the use of "global warming" and "AGW", which abbreviates "anthropogenic global warming":

7. The memo varies in quality. For example, the memo starts with a well-written sentence that has an extended introductory clause and no em-dashes or parenthetical tangents; the second sentence nicely follows the first sentence:

Here is another well-written section:

But this next sentence is full of problems: the variation from "e.g." to "such as" in a section of the sentence that tries to maintain a parallel; the use of "his" in "his Forbes blog" refers to Taylor inside a parenthetical remark, so that the main sentence reads "...especially through our in-house experts through his Forbes blog..."; and parallelism is lost when "through" is not placed in front of "our conferences" as it was in the previous item in the list and the subsequent item in the list. Moreover, the casual use of parenthetical remarks is not present in much of the rest of the memo:

The memo also omits a necessary comma before the "and" that separates two independent clauses:

---

The inconsistencies listed above suggest that the memo was written by multiple persons or by a highly-inconsistent writer. But the memo displays amazing consistency amid apparent inconsistency in at least one instance.

DocMartyn asked at this Climate Audit post why the memo inconsistently mentions the first name of Anthony Watts but not the first name of Curry, Romm, Trenberth, Hansen, Gleick, Taylor, or Revkin. But, as I observed, the memo follows a simple rule for the use of first names: the memo identifies the first name of a person only if that person receives funding.

That seems like an unusual rule for one person to have, but it is a bit less unusual if the memo had multiple authors: the person who wrote about funding uses first names, and the person who wrote the section with other names uses only last names. Mystery solved.

---

My original analysis of the memo suggested that an existing Heartland memo might have been interpolated; entire sections might have been added or deleted, which would explain why two phrases saying pretty much the same thing appear in close proximity to each other:

That final sentence ends the memo and is a poor conclusion, but it would not be as poor had it been part of a concluding section that was later deleted.

---

In some cases, though, the memo has evidence that interpolations were made at the sentence level.

Consider the sentence outlined below in red, about funding skeptic Anthony Watts to create a website to track temperature station data. Ask yourself whether that sentence belongs where it currently appears in the memo, after a discussion of climate communications and "more neutral voices" in which no first names are provided, or whether it might more naturally have been located at the end of the paragraph highlighted in blue, which mentions, by first name, funding for persons like Anthony Watts who publicly counter the alarmist message.

Note also that the red sentence and the sentence immediately preceding it both contain "also", which is an unusual repetition, especially since the red sentence about funding does not flow from the preceding sentence about communication. But the red sentence does flow nicely from the previous section about funding, and appending the red sentence to the end of that section would make the use of "also" less unusual.

---

The theory of an interpolated memo suggests that an existing memo was augmented to make the memo more sinister: there is no need to presume that someone was creative enough to generate the idea of a fake memo to summarize pedestrian financial data in a more sinister tone.

But presuming that the memo was the work of one author presumes that a person with enough originality to conceive of producing a fake memo from scratch is also an extremely inconsistent writer who does not know the possessive form of United Nations.

---

Perhaps an objection might be lodged that there should be more inconsistencies if the memo had multiple authors. But inconsistencies due to multiple authors would occur only if there was a difference in style from one author to the next.

For example, the memo consistently uses an Oxford comma to set off the last item in a list of more than two things, but the memo would exhibit variation in the Oxford comma only if one of the multiple authors preferred to omit the Oxford comma and that author had written a list with more than two items.

Incidentally, Heartland CEO Joe Bast, who would likely have written any confidential Heartland memo, uses the Oxford comma:

Also incidentally, Peter Gleick, who obtained the Heartland files under a false identity, uses the Oxford comma, too: (h/t to Steve McIntyre for linking to the PGleick review)

By the way, the abundance of parenthetical remarks in the first paragraph of PGleick's review is reminiscent of the abundance of parenthetical remarks in this section of the memo:

Of course, how anyone who uses this many parenthetical remarks could have written so much of the memo without parenthetical remarks is a mystery only to those persons who think that the memo had one author.

---

The theory of multiple authorship does not necessitate two forgers or an interpolated memo. But the theory does suggest that treatment of the memo as a seamless garment might not be appropriate when analyzing the memo to identify its author.

---


Memo elements described in this post might have resulted from an inconsistent author working with Heartland documents. The author would, for instance, use first names for persons receiving funding because first names were used in the documents that the author had been copying; the author would then revert to his or her personal last-name-only style when extemporaneously writing the climate communications section without reference to any Heartland file.

Presuming one highly-inconsistent author working from Heartland files explains much of the observations described above; even variations in style and quality might be explained, if the author's free-flowing style evident in the climate communications section was held in check when copying Heartland files for much of the rest of the memo.

It is still unclear why funding for Anthony Watts to track station data was mentioned in the climate communications section and not in the funding for individuals section, but that element alone does not demand postulating a second author.

---

Joe Bast has produced a version of the memo in which yellow highlight indicates forged phrases, which might be used to focus attention on the features of the memo that are reflective of the forger:

  1. The use of the last-name-only style occurs only in the yellow sections.
  2. The word "key" occurs twice, but only in the yellow sections.
  3. The words "effort" or "efforts" occur seven times, but only in the yellow sections.
  4. The unusual phrases "focus in the following areas" and "parallel organizations" occur only in the yellow sections.
  5. The incorrect possessive "United Nation's" appears only in the yellow sections.

Documents were recently obtained under false pretenses from the Heartland Institute and posted on the DeSmogBlog site. Megan McArdle suspects that the Confidential Memo: 2012 Heartland Climate Strategy file that is purportedly part of the document cache is a fake.

Let's see if the memo provides enough detail to permit identification of an author or at least the drafting of an author profile.

---

Ms. McArdle provides a nice start, observing that the memo author might use "high-profile" often and write in a run-on style. But let's examine the memo a bit more closely:

1. Perhaps the biggest clue is that the memo author did not realize that the possessive form of United Nations is not "United Nation's":

The suspect pool is now restricted to persons with a misunderstanding of possessives.

2. The memo author consistently used a comma to set off the final item in a list of more than two items, such as in this sentence:

Another $88,000 is earmarked this year for Heartland staff, incremental expenses, and overhead for editing, expense reimbursement for the authors, and marketing.

The suspect pool is now limited to persons with a misunderstanding of possessives and a preference for the Oxford comma. Let's continue...

---

3. The memo author wrote "20" as a numeral but "two" as a word.

4. The memo author did not indent paragraphs.

5. The memo author used ragged-right justification with no hyphenation.

6. The memo author used a dash in K-12.

7. The memo author used periods in most section headings, an unusual choice that might be a modified APA style:

8. The memo author did not mind an orphaned word that appeared at the top of a page:

9. The memo author used periods for U.S. in adjective form.

10. The memo author inconsistently hyphenated the adjective, writing both "high-profile" and "high profile".

11. The memo author did not offset such as with a comma.

12. The memo author used focus in where focus on might be more common:

In 2012 our efforts will focus in the following areas...

13. The memo author used parenthetical remarks, especially in the final section that Ms. McArdle suspects is closest to the author's style.

14. The memo author introduced the acronyms IPCC, NIPCC, AGW, and WUWT without explanation, presuming reader familiarity with these acronyms.

15. The memo author indicated a project with quotation marks ("Global Warming Curriculum for K-12 Classrooms") but indicated a written document with italic font (Climate Change Reconsidered). The New York Times was written as NYTimes, without italic font or quotation marks.

16. The memo author used a percent sign (%) instead of writing the word percent.
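The profile above amounts to a screening filter that could be run over candidate documents. As a minimal sketch, a few of the markers can be expressed as regular expressions; the marker names and patterns here are illustrative assumptions (crude proxies, not the method actually used in this post):

```python
import re

# Illustrative patterns for a few of the profile markers above.
# The oxford_comma pattern in particular is a rough proxy: it matches
# any ", and" between words, not only serial commas in 3+ item lists.
MARKERS = {
    "misplaced_possessive": re.compile(r"United Nation's"),  # item 1
    "oxford_comma": re.compile(r"\w+, and \w+"),             # item 2
    "k12_dash": re.compile(r"\bK-12\b"),                     # item 6
    "focus_in": re.compile(r"\bfocus in\b"),                 # item 12
    "percent_sign": re.compile(r"\d%"),                      # item 16
}

def profile_matches(text):
    """Return the names of the profile markers found in the text."""
    return {name for name, pattern in MARKERS.items() if pattern.search(text)}
```

A document matching many markers would merit closer manual comparison; a match against a crude filter like this proves nothing by itself.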

---

Once this profile was completed, I suspected that the best place to look for documents matching the profile was the Heartland Institute, if the memo was authentic, or the DeSmogBlog site, if the memo was fake; the memo might have been forged by a third party, but I decided to start with the simple scenarios.

Google searches for the exact phrase united nation's coupled with site:heartland.org and with site:desmogblog.com respectively returned 97 results and 617 results. Adding the distinctive focus in phrase decreased the results to the memo itself. I removed the focus in phrase and added the less distinctive K-12 phrase, which returned from DeSmogBlog the memo, this file, and a Heartland budget file, but returned from the Heartland site this transcript of a 2007 speech by Heartland Institute President and CEO Joseph Bast.

The similarities between the memo and the speech transcript were not trivial.

1. The speech transcript misplaced the apostrophe in the possessive form of United Nations.

2. The speech transcript consistently used a comma to set off the final item in a list of more than two items. [see the red boxes below]

3. The speech transcript wrote out three and six as words, consistent with the use of two in the memo. [see the purple boxes below]

[The green box indicates an unexpected switch from the first person singular to the first person plural.]

4. The speech transcript did not indent paragraphs. [see above]

5. The speech transcript used ragged-right justification with no hyphenation. [see above]

6. The speech transcript used a dash in K-12. [see the purple box below]

7. The speech transcript used periods in some section headings. [see the blue box below]

8. The speech transcript had an orphaned word at the top of a page.

9. The speech transcript used periods for U.S. in adjective form.

10. The speech transcript inconsistently hyphenated man made as a predicate adjective: "Claims of a consensus that global warming is man made" on p. 2, but "claims that global warming is man-made and a crisis" on p. 3.

---

The remaining items provided no evidence of consistency with the memo or were inconsistent with the memo.

11. The speech transcript sometimes used a comma to offset such as, and other times did not.

12. The speech transcript did not contain the word focus.

13. The speech transcript did not appear to overuse parenthetical remarks.

14. The speech transcript defined an unfamiliar acronym before using the acronym. The speech transcript did not contain the acronym AGW, and global warming was modified with man-made and not anthropogenic.

15. The speech transcript used italic font for periodicals and quotation marks for books, reports, and articles. The New York Times was written as the New York Times.

16. The speech transcript did not use the percent sign (%) and instead used the word percent.

---

The memo and the speech transcript appear to be formatted similarly, with similar margins and font, and the reading statistics are similar for the memo and the speech transcript, respectively:

  1. words per sentence: 21.2, compared to 25.0
  2. characters per word: 5.2, compared to 5.1
  3. passive sentences: 18 percent, compared to 16 percent
  4. reading ease: 29.8, compared to 26.4
  5. Flesch-Kincaid Grade Level: 14.3, compared to 14.3
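For reference, the reading ease and grade level figures above are the standard Flesch statistics. A minimal sketch of the formulas, assuming word, sentence, and syllable counts are already available (the counts themselves are what a word processor estimates, so its numbers may differ slightly):

```python
def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: higher scores indicate easier text;
    # scores near 30 indicate dense, college-level prose.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level: an approximate U.S. school grade.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

Both documents scoring near a reading ease of 30 and a grade level of 14 is consistent with two samples of similarly dense prose, though similar scores alone do not establish common authorship.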

The memo and the speech transcript are somewhat consistent with each other in multiple elements indicated above, and in the sense that both were written by a person who appears to have a strong but imperfect command of the English language. One of the few imperfections in the speech transcript is a passage from page 10 that incorrectly used it to represent groups and that used it's instead of its as a possessive:

Environmental advocacy groups raised $6.6 billion in 2006 and it’s take is growing fast...

Based on the imperfections that riddle DeSmogBlog posts, many DeSmog bloggers might not have been able to sustain the memo's level of grammar throughout, and many DeSmog bloggers appear to avoid the Oxford comma. For example, a 17 Feb 2012 post by Richard Littlemore contains a possessive error and overuses parentheses, but lacks the Oxford comma.

---

The memo also lacks the signature feature of a Joseph Bast memorandum: the large-square bullet list.

Perhaps a two-page memo was not long enough to warrant a large-square bullet list, or perhaps the lack of a large-square bullet list is evidence of a forgery.

But the memo does look and sound like something that the President and CEO of the Heartland Institute would write and has written, though some passages are inconsistent with that idea, such as: "...two key points that are effective at dissuading teachers from teaching science" and "[t]his influential audience has usually been reliably anti-climate and it is important to keep opposing voices out."

The two most bogus paragraphs appear to be the "Expanded climate communications" paragraph and the paragraph with apparently erroneous information about the Koch Foundation donation (see the update here): these paragraphs just happen to be in the only sections with titles that lack a period.

---

Perhaps the memo is like the Testimonium Flavianum, a core authentic document with inauthentic interpolations inserted by a true believer on the opposite side of a battle.

Perhaps the document cache obtained by DeSmogBlog contained an authentic Heartland memo that served as the basis for the formatting and core text of an interpolated memo; this would explain both the similarities and the differences with the Heartland speech transcript.

The interpolation theory lowers the bar from the highly original idea of generating a bogus confidential memo from scratch to the less original idea of spicing up an existing text.

---

For example, note the parallelism apparently intended between Taylor and Gleick in two sentences of the memo.

The parallelism is broken by the variation from a parenthetical e.g. to a parenthetical such as, by the inconsistent hyphenation of high profile, and by a change in focus from high profile outlets to high profile scientists. This broken parallelism might signal the presence of two authors.

Or perhaps the entire "expanded climate communications" section is forged, given that the phrases climate communications and climate communication never appear on the Heartland website: 62,600 hits for climate, 17,300 hits for communications, and 0 hits for climate communications.

Oddly enough, though, climate communications is a tag on the DeSmogBlog site.

The evidence is clear: Heartland reserves the phrase climate communications for confidential memos. There might be another explanation for the fact that a phrase absent from the Heartland site but appearing on the DeSmogBlog site also appears in a document hosted by DeSmogBlog that Heartland alleges is forged, but as Tink Thompson reminded us:

If you have any fact which you think is really sinister...is really obviously a fact which can only point to some sinister underpinning...forget it, man, because you can never on your own think up all the non-sinister perfectly valid explanations for that fact.

---

Further notes:

  1. Both the memo and the speech transcript use single spaces between sentences.
  2. Not counting brief indications of payments such as ($11,600 per month), each extended parenthetical remark in the memo appears in one of the two sections without a period in the section title.
  3. The word key appears twice in the memo: once in a section without a period in its title, and another time in the suspect phrase two key points that are effective at dissuading teachers from teaching science.
  4. The phrase such as appears five times in the memo, each time in the "climate communications" paragraph.
  5. The "climate communications" paragraph has multiple errors and odd phrasings, such as especially through our in-house experts (e.g., Taylor) through his Forbes blog and related high profile outlets. Note that his refers to experts.
  6. [Update 19 Feb 2012 at 3:45pm] It is possible and perhaps likely that the Heartland memo was interpolated by someone unaffiliated with DeSmogBlog. Presumably, the person who obtained the document cache under false pretenses and sent the cache to DeSmogBlog is an opponent of climate skeptics, and opponents of climate skeptics appear to use the phrase climate communications more often than climate skeptics themselves; for example, the search site:grist.org "climate communications" -role -gavin returns 3,660 hits from Google. (The -role phrase is to remove hits about the memo itself, which contains the phrase an important role in climate communications; the -gavin phrase is to remove hits regarding the Climate Communications Prize that the American Geophysical Union awarded to Gavin Schmidt in 2011; Grist was chosen merely as an example of a group unaligned with climate skeptics.)
  7. [Update: 19 Feb 2012 at 5:11pm] Jim Lakely of the Heartland Institute explains the release of the document cache: "The stolen documents were obtained by an unknown person who fraudulently assumed the identity of a Heartland board member and persuaded a staff member here to 're-send' board materials to a new email address." The Heartland staffer who emailed the documents presumably has the email address that the documents were sent to, but Heartland does not appear to have released that email address. I presume that Heartland could demonstrate to a neutral third party that the pdf of the confidential memo was not sent from the email address that the Heartland staffer used to send the other documents. I also presume that the person who received the document cache could demonstrate that the Heartland staffer emailed the confidential memo, but I also presume that that person would rather remain anonymous.