Ahlquist, Mayer, and Jackman (2013, p. 3) wrote:

List experiments are a commonly used social scientific tool for measuring the prevalence of illegal or undesirable attributes in a population. In the context of electoral fraud, list experiments have been successfully used in locations as diverse as Lebanon, Russia and Nicaragua. They present our best tool for detecting fraudulent voting in the United States.*

I'm not sure that list experiments are the best tool for detecting fraudulent voting in the United States. But, first, let's introduce the list experiment.

The list experiment goes back at least to Judith Droitcour Miller's 1984 dissertation, but she called the procedure the item count method (see page 188 of this 1991 book). Ahlquist, Mayer, and Jackman (2013) reported results from list experiments that split a sample into two groups: members of the first group received a list of 4 items and were instructed to indicate how many of the 4 items applied to themselves; members of the second group received a list of 5 items -- the same 4 items that the first group received, plus an additional item -- and were instructed to indicate how many of the 5 items applied to themselves. The difference in the mean number of items selected by the groups was then used to estimate the percent of the sample and -- for weighted data -- the percent of the population to which the fifth item applied.
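To make the estimator concrete, here is a minimal Stata sketch with hypothetical variable names (itemcount is the number of items a respondent reports, longlist equals 1 for the five-item group and 0 for the four-item group, and weightvar is a survey weight); the difference in mean item counts, or the coefficient on longlist in the weighted regression, estimates the proportion to which the fifth item applies:

    * unweighted: difference in mean item counts between the two groups
    ttest itemcount, by(longlist)

    * weighted: the coefficient on 1.longlist is the difference in weighted means
    svyset [pweight=weightvar]
    svy: regress itemcount i.longlist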

Ahlquist, Mayer, and Jackman (2013) reported four list experiments from September 2013, with these statements as the fifth item:

  • "I cast a ballot under a name that was not my own."
  • "Political candidates or activists offered you money or a gift for your vote."
  • "I read or wrote a text (SMS) message while driving."
  • "I was abducted by extraterrestrials (aliens from another planet)."

Figure 4 of Ahlquist, Mayer, and Jackman (2013) displayed results from three of these list experiments:

amj2013f4

My presumption is that vote buying and voter impersonation are low-frequency events in the United States: I'd probably guess somewhere between 0 and 1 percent, and closer to 0 percent than to 1 percent. If that's the case, then a list experiment with 3,000 respondents is not going to detect such low-frequency events. The 95 percent confidence intervals for weighted estimates in Figure 4 appear to span 20 percentage points or more: the weighted 95 percent confidence interval for vote buying appears to range from -7 percent to 17 percent. Moreover, notice how much estimates varied between the December 2012 and September 2013 waves of the list experiment: the point estimate for voter impersonation in December 2012 was 0 percent, and the point estimate for voter impersonation in September 2013 was -10 percent, a ten-point swing in point estimates.

So, back to the original point: list experiments are not the best tool for detecting vote fraud in the United States, because vote fraud in the United States is a low-frequency event that list experiments cannot detect without an improbably large sample size; the article indicates that at least 260,000 observations would be necessary to detect a 1 percent difference.
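For a rough sense of why the required sample is so large, here is a back-of-the-envelope power calculation in Stata; the 0.01 difference in mean item counts corresponds to a 1 percent prevalence, and the item-count standard deviation of 1 is my assumption, not a figure from the article:

    * two-group comparison of mean item counts, 80 percent power, alpha = 0.05
    power twomeans 2.00 2.01, sd(1) power(0.8)

Under these assumptions the required sample is roughly 157,000 respondents per group (over 300,000 in total), the same order of magnitude as the article's 260,000 figure; the exact number rises or falls with the assumed standard deviation.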

If that's the case, then what's the purpose of a list experiment to detect vote fraud with only 3,000 observations? Ahlquist, Mayer, and Jackman (2013, p. 31) wrote:

From a policy perspective, our findings are broadly consistent with the claims made by opponents of stricter voter ID laws: voter impersonation was not a serious problem in the 2012 election.

The implication appears to be that vote fraud is a serious problem only if the fraud is common. But there are a lot of problems that are serious without being common.

So, if list experiments are not the best tool for detecting vote fraud in the United States, then what is a better way? I think that -- if the goal is detecting the presence of vote fraud and not estimating its prevalence -- this is one of those instances in which journalism is better than social science.

---

* This post was based on the October 30, 2013, version of the Ahlquist, Mayer, and Jackman manuscript, which was located here. A more recent version is located here and has replaced the "best tool" claim about list experiments:

List experiments are a commonly used social scientific tool for measuring the prevalence of illegal or undesirable attributes in a population. In the context of electoral fraud, list experiments have been successfully used in locations as diverse as Lebanon, Russia, and Nicaragua. They present a powerful but unused tool for detecting fraudulent voting in the United States.

It seems that "unused" is applicable, but I'm not sure that a "powerful" tool for detecting vote fraud in the United States would produce 95 percent confidence intervals that span 20 percentage points.

P.S. The figure posted above has also been modified in the revised manuscript. I have a pdf of the October 30, 2013, version, in case you are interested in verifying the quotes and figure.


I came across an interesting site, Dynamic Ecology, and saw a post on self-archiving of journal articles. The post mentioned SHERPA/RoMEO, which lists archiving policies for many journals. The only journal covered by SHERPA/RoMEO that I have published in and that permits self-archiving is PS: Political Science & Politics, so I am linking below to pdfs of PS articles that I have published.

---

This first article attempts to help graduate students who need seminar paper ideas. The article grew out of a graduate seminar in US voting behavior with David C. Barker. I noticed that several articles on the seminar reading list had placed in top-tier journals despite making an incremental theoretical contribution with publicly available data, which was something that I as a graduate student felt I could realistically aspire to.

For instance, John R. Petrocik in 1996 provided evidence that candidates and parties "owned" certain issues, such as Democrats owning care for the poor and Republicans owning national defense. Danny Hayes extended that idea by using publicly available ANES data to provide evidence that candidates and parties owned certain traits, such as Democrats being more compassionate and Republicans being more moral.

The original manuscript identified the Hayes article as a travel-type article in which the traveling is done by analogy. The final version of the manuscript lost the Hayes citation but had 19 other ideas for seminar papers. Ideas on the cutting room floor included replication and picking a fight with another researcher.

Of Publishable Quality: Ideas for Political Science Seminar Papers. 2011. PS: Political Science & Politics 44(3): 629-633.

  1. pdf version, copyright held by American Political Science Association

---

This next article grew out of reviews that I conducted for friends, colleagues, and journals. I noticed that I kept making the same or similar comments, so I produced a central repository of generalized forms of these comments, in the hope that -- for example -- I will not have to review any more manuscripts that formally list hypotheses about the control variables.

Rookie Mistakes: Preemptive Comments on Graduate Student Empirical Research Manuscripts. 2013. PS: Political Science & Politics 46(1): 142-146.

  1. pdf version, copyright held by American Political Science Association

---

The next article grew out of friend and colleague Jonathan Reilly's dissertation. Jonathan noticed that studies of support for democracy had treated "don't know" responses as if the respondents had never been asked the question. So even though only 73 percent of respondents in China expressed support for democracy, support was reported as 96 percent because "don't know" responses were removed from the analysis.

The manuscript initially did not include imputation of preferences for non-substantive responders, but a referee encouraged us to estimate missing preferences. My prior was that multiple imputation was "making stuff up," but research into missing data methods taught me that the alternative -- deletion of cases -- assumes that cases are missing completely at random, which did not appear to be true in our study: the percent of missing cases in a country correlated at -0.30 and -0.43 with the country's Polity IV democratic rating, which meant that respondents were more likely to issue a non-substantive response in countries where political and social liberties are more restricted.

Don’t Know Much about Democracy: Reporting Survey Data with Non-Substantive Responses. 2012. PS: Political Science & Politics 45(3): 462-467. Second author, with Jonathan Reilly.

  1. pdf version, copyright held by American Political Science Association

The American National Election Studies (ANES) has measured abortion attitudes since 1980 with an item that dramatically inflates the percentage of pro-choice absolutists:

There has been some discussion about abortion during recent years. Which one of the opinions on this page best agrees with your view? You can just tell me the number of the opinion you choose.
1. By law, abortion should never be permitted.
2. The law should permit abortion only in case of rape, incest, or when the woman's life is in danger.
3. The law should permit abortion for reasons other than rape, incest, or danger to the woman's life, but only after the need for the abortion has been clearly established.
4. By law, a woman should always be able to obtain an abortion as a matter of personal choice.
5. Other {SPECIFY}

In a chapter of the book Improving Public Opinion Surveys: Interdisciplinary Innovation and the American National Election Studies, Heather Marie Rice and I discussed this measure and results from a new abortion attitudes measure piloted in 2006 and included on the 2008 ANES Time Series Study. The 2006 and 2008 studies did not ask any respondents both abortion attitudes measures, but the 2012 study did. This post presents data from the 2012 study describing how persons who selected an absolute abortion policy option responded when later asked about policies for specific abortion conditions.

---

Based on the five-part item above, and removing from the analysis the five persons who provided an Other response, 44 percent of the population agreed that "[b]y law, a woman should always be able to obtain an abortion as a matter of personal choice." The figure below indicates how these pro-choice absolutists later responded to items about specific abortion conditions.

Red bars indicate the percentage of persons who agreed on the 2012 pre-election survey that "[b]y law, a woman should always be able to obtain an abortion as a matter of personal choice" but reported opposition to abortion for the corresponding condition in the 2012 post-election survey.

2012abortionANESprochoice4

Sixty-six percent of these pro-choice absolutists on the 2012 pre-election survey later reported opposition to abortion if the reason for the abortion is that the child will not be the sex that the pregnant woman wanted. Eighteen percent of these pro-choice absolutists later reported neither favoring nor opposing abortion for that reason, and 16 percent later reported favoring abortion for that reason. Remember that this 16 percent favoring abortion for reasons of fetal sex selection is 16 percent of the pro-choice absolutist subsample.

In the overall US population, only 8 percent favor abortion for fetal sex selection; this 8 percent is a more accurate estimate of the percent of pro-choice absolutists in the population than the 44 percent estimate from the five-part item.

---

Based on the five-part item above, and removing from the analysis the five persons who provided an Other response, 12 percent of the population agreed that "[b]y law, abortion should never be permitted." The figure below indicates how these pro-life absolutists later responded to items about specific abortion conditions.

Green bars indicate the percentage of persons who agreed on the 2012 pre-election survey that "[b]y law, abortion should never be permitted" but reported support for abortion for the corresponding condition in the 2012 post-election survey.

2012abortionANESprolife4

Twenty-nine percent of these pro-life absolutists on the 2012 pre-election survey later reported support for abortion if the reason for the abortion is that the woman might die from the pregnancy. Twenty-nine percent of these pro-life absolutists later reported neither favoring nor opposing abortion for that reason, and 42 percent later reported opposing abortion for that reason. Remember that this 42 percent opposing abortion for reasons of protecting the pregnant woman's life is 42 percent of the pro-life absolutist subsample.

In the overall US population, only 11 percent oppose abortion if the woman might die from the pregnancy; this 11 percent is a more accurate estimate of the percent of pro-life absolutists in the US population than the 12 percent estimate from the five-part item.

---

There is a negligible difference in measured pro-life absolutism between the two methods, but the five-part item inflated pro-choice absolutism by a factor of more than five (44 percent versus 8 percent). Our book chapter suggested that this inflated pro-choice absolutism might result because the typical person considers abortion in terms of the hard cases, especially since the five-part item mentions only the hard cases of rape, incest, and danger to the pregnant woman's life.

---

Notes

1. The percent of absolutists is slightly smaller if absolutism is measured as supporting or opposing abortion in each listed condition.

2. The percent of pro-life absolutists is likely overestimated in the "fatal" abortion condition item because the item asks about abortion if "staying pregnant could cause the woman to die"; presumably, there would be less opposition to abortion if the item stated with certainty that staying pregnant would cause the woman to die.

3. Data presented above are for persons who answered the five-part abortion item on the 2012 ANES pre-election survey and answered at least one abortion condition item on the 2012 ANES post-election survey. Don't know and refusal responses were listwise deleted for each cross-tabulation. Data were weighted with the Stata command svyset [pweight=weight_full], strata(strata_full); weighted cross-tabulations were calculated with the command svy: tabulate X Y if Y==Z, where X is the abortion condition item, Y is the five-part abortion item, and Z is one of the absolute policy options on the five-part item.

4. Here is the text for each abortion condition item that appeared on the 2012 ANES Time Series post-election survey:

>[First,/Next,] do you favor, oppose, or neither favor nor oppose abortion being legal if:
* staying pregnant could cause the woman to die
* the pregnancy was caused by the woman being raped
* the fetus will be born with a serious birth defect
* the pregnancy was caused by the woman having sex with a blood relative
* staying pregnant would hurt the woman's health but is very unlikely to cause her to die
* having the child would be extremely difficult for the woman financially
* the child will not be the sex the woman wants it to be

There was also a general item on the post-election survey:

Next, do you favor, oppose, or neither favor nor oppose abortion being legal if the woman chooses to have one?

5. Follow-up items to the post-election survey abortion items asked respondents to indicate intensity of preference, such as favor a great deal, favor moderately, or favor a little. These follow-up items were not included in the above analysis.

6. There were more than 5,000 respondents for the pre-election and post-election surveys.


For those of you coming from the Monkey Cage: welcome!

This is a blog on my research and other topics of interest. I'm in the middle of a series on incorrect survey weighting, which is part of a larger series on reproduction in social science. I'm a proponent of research transparency, such as preregistration of experimental studies to reduce researcher degrees of freedom, third-party data collection to reduce fraud, and public online archiving of data and code to increase the likelihood that error is discovered.

My main research areas right now are race, law, and their intersection. I plan to blog on those and other topics: I am expecting to post on list experiments, abortion attitudes, the file drawer problem, Supreme Court nominations, and curiosities in the archives at the Time-Sharing Experiments for the Social Sciences. I hope that you find something of interest.

---

UPDATE (May 21, 2014)

Links to the Monkey Cage post have been made at SCOTUSBlog, Jonathan Bernstein, and the American Constitution Society.

---

UPDATE (May 21, 2014)

Jonathan Bernstein commented on my Monkey Cage guest post, expressing skepticism about whether there is a real distinction between delayed and hastened retirements. The first part of my response was as follows:

Hi Jonathan,

Let me expand on the distinction between delayed and hastened retirements.

Imagine that Clarence Thomas reveals that he wants to retire this summer, but conservatives pressure him to delay his retirement until a Republican is elected president. Compare that to liberals pressuring Ruth Bader Ginsburg to retire before the 2016 election.

Note the distinctions: liberals are trying to change Ginsburg's mind about *whether* to retire, and conservatives are trying to change Thomas's mind about *when* to retire; moreover, conservatives are asking Thomas to sacrifice *extra* *personal* time that he would have had in retirement, and liberals are asking Ginsburg to sacrifice *all* the rest of her years as *one of the most powerful persons in the United States.*

Orin Kerr of the Volokh Conspiracy also commented on the post, at the Monkey Cage itself, asking why a model is necessary when the sample of justices is small enough to ask justices or use past interviews. My response:

Hi Orin,

Artemus Ward has a valuable book, Deciding to Leave, that offers more richness than statistical models offer for investigating the often idiosyncratic reasons for Supreme Court retirements. But for addressing whether justices retire strategically and, if so, when and under what conditions -- or for making quantitative predictions about whether a particular justice might retire at a given time -- there is complementary value in a statistical model.

1. For one thing, there is sometimes reason to be skeptical of the reasons that political actors provide for their behavior: there is a line of research suggesting that personal policy preferences inform Supreme Court justice voting on cases, though many justices might not admit this in direct questioning. Regarding retirements, many justices have been forthcoming about their strategic retirement planning, but some justices have downplayed or denied strategic planning: for example, Ward described press skepticism of Potter Stewart's assertion that he did not strategically delay retirement while Jimmy Carter was president (p. 194).

Statistical models permit us to test theories based on what Stewart and other justices *did* instead of what Stewart and other justices *said*, similar to the way that prosecutors might develop a theory of the crime based on forensic evidence instead of suspect statements.

2. But even if the justices were always honest and public about their reasons for retiring or not retiring, it is still necessary to apply some sort of statistical analysis to address our questions. By my count, from 1962 to 2010, 5 justices retired consistent with a delay strategy and 8 justices retired when the political environment was unfavorable. Observers using simple statistical tools might consider this evidence that justices are more likely to retire unstrategically than to delay retirement, but this overlooks the fact that justices have more opportunities to retire unstrategically than to delay retirement.

For example, assuming that no conservative retires during President Obama's eight years in office, the five conservative justices as a group will each have had eight years to retire unstrategically, for a total of 40 opportunities; but liberal justices have had fewer opportunities to delay retirement: Breyer, Ginsburg, Souter, and Stevens each had one opportunity to retire consistent with a delay strategy in 2009, and -- presuming that justices stay on another year to avoid a double summer vacancy -- Breyer, Ginsburg, Sotomayor, and Stevens each had one opportunity to retire consistent with a delay strategy in 2010, for a total of 8 opportunities.

In this particular period, the proper comparison is not 2 delayed retirements to 0 unstrategic retirements, but instead is 2 delayed retirements out of 8 opportunities (25%) to 0 unstrategic retirements out of 40 opportunities (0%).

3. Sotomayor's addition in the 2010 data highlights another value of statistical models: they permit us to control for other retirement pressures. Statistical models can help account -- in a way that qualitative studies or direct questioning cannot -- for the fact that the 2010 observation of Sotomayor is not equivalent to the 2010 observation of Ginsburg because these justices have different characteristics on other key variables, such as age. From 1962 to 2010, justices retired 14 percent of the time during delayed retirement opportunities, but retired only 4 percent of the time during unfavorable political environments. But these percentages should not be directly compared because there might be spurious correlations that have inflated or deflated the percentages: for example, perhaps older and infirm justices were more likely to experience a delayed opportunity and *that* is why the delayed percentage is relatively higher than the unstrategic percentage. Statistical models let us adjust summary statistics to address such spurious correlations.

---

Bill James is said to have said something to the effect that bad statistics are the alternative to good statistics. Relying only on justice statements instead of good statistics can introduce inferential error about justice retirement strategies in the aggregate in several ways: (1) justices might misrepresent their motives for retiring or not retiring; (2) we might not properly account for the fact that justices face more unstrategic opportunities than delayed or hastened opportunities; and (3) we might not properly account for variables such as age and illness that also influence decisions to retire.


My previous posts discussed the p-values that the base module of SPSS reports for statistical significance tests using weighted data; these p-values are not correct for probability-weighted analyses. Jon Peck informed me of SPSS Complex Samples, which can provide correct p-values for statistical significance tests in probability-weighted analyses. Complex Samples does not have the most intuitive setup, so this post describes the procedure for analyzing data using probability weights in SPSS Statistics 21.

SPSS0

SPSS1

The dataset that I was working with had probability weights but no clustering or stratification, so the Stratify By and Clusters boxes remain empty in the image below.

SPSS4

The next dialog box has options for Simple Systematic and Simple Sequential. Either method will work if Proportions are set to 1 in the subsequent dialog box.

SPSS3

SPSS4

SPSS5

SPSS6

SPSS7

SPSS8

SPSS9

I conducted an independent samples t-test, so I selected the General Linear Model command below.

SPSS10

SPSS11

Click the Statistics button in the image above and then click the t-test box in the image below to tell SPSS to conduct a t-test.

SPSS12

SPSS13

Hit OK to get the output.

rattan2012outputSPSS

The SPSS output above has the same p-value as the probability-weighted Stata output below.

rattan2012outputStata
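For reference, a minimal sketch of a probability-weighted analysis of this kind in Stata, with hypothetical variable names (the exact command behind the output above is not shown):

    * probability weights: robust standard errors, sample size = number of observations
    regress outcome i.group [pweight=weightvar]

The test of the group coefficient is the probability-weighted analogue of the independent samples t-test.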


My previous post discussed p-values in SPSS and Stata for probability-weighted data. This post provides more information on weighting in the base module of SPSS. Data in this post are from Craig and Richeson (2014), downloaded from the TESS archives; SPSS commands are from personal communication with Maureen Craig, who kindly and quickly shared her replication code.

Figure 2 in Craig and Richeson's 2014 Personality and Social Psychology Bulletin article depicts point estimates and standard errors for racial feeling thermometer ratings made by white non-Hispanic respondents. The article text confirms what the figure shows: whites in the racial shift condition (who were exposed to a news article titled "In a Generation, Racial Minorities May Be the U.S. Majority") gave statistically significantly lower feeling thermometer ratings to Blacks/African Americans, Latinos/Hispanics, and Asian-Americans than did whites in the control condition (who were exposed to a news article titled "U.S. Census Bureau Reports Residents Now Move at a Higher Rate").

CraigRicheson2014PSPB

Craig and Richeson generated a weight variable that retained the original post-stratification weights for non-Hispanic white respondents but changed the weight to 0.001 for respondents who were not non-Hispanic white. Figure 2 results were drawn from the SPSS UNIANOVA command, which "provides regression analysis and analysis of variance for one dependent variable by one or more factors and/or variables," according to the SPSS documentation for that command.

The SPSS output below represents a weighted analysis in the base SPSS module for the command UNIANOVA therm_bl BY dummyCond WITH cPPAGE cPPEDUCAT cPPGENDER, in which therm_bl, dummyCond, cPPAGE, cPPEDUCAT, and cPPGENDER respectively indicate numeric ratings on a 0-to-100 feeling thermometer scale for blacks, a dummy variable indicating whether the respondent received the control news article or the treatment news article, respondent age, respondent education on a four-level scale, and respondent sex. The 0.027 Sig. value for dummyCond indicates that the mean thermometer rating made by white non-Hispanics in the control condition was different at the 0.027 level of statistical significance from the mean thermometer rating made by white non-Hispanics in the treatment condition.

CR2014PSPB

The image below presents results for the same analysis conducted using probability weights in Stata, with weightCR indicating a weight variable mimicking the post-stratification weight created by Craig and Richeson: the corresponding p-value is 0.182, not 0.027, a difference due to the Stata p-value reflecting a probability-weighted analysis and the SPSS p-value reflecting a frequency-weighted analysis.

CR2014bl0
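A minimal sketch of one way to fit the probability-weighted model described above in Stata (variable names follow the replication data; the exact command behind the output above is not shown):

    * probability-weighted regression of the black feeling thermometer on condition and covariates
    regress therm_bl i.dummyCond cPPAGE cPPEDUCAT cPPGENDER [pweight=weightCR]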

So why did SPSS return a p-value of 0.027 for dummyCond?

The image below is drawn from online documentation for the SPSS weight command. The second bullet point indicates that SPSS often rounds fractional weights to the nearest integer. The third bullet point indicates that SPSS statistical procedures ignore cases with a weight of zero, so cases with fractional weights that round to zero will be ignored. The first bullet point indicates that SPSS arithmetically replicates a case according to the weight variable: for instance, SPSS treats a case with a weight of 3 as if that case were 3 independent and identical cases.

 weightsSPSS

Let's see if this is what SPSS did. The command gen weightCRround = round(weightCR) in the Stata output below generates a variable with the values of weightCR rounded to the nearest integer. When the regression was run in Stata with the frequency weight option and this rounded weight variable, Stata reported p-values identical to the SPSS p-values.

CR2014bl2

The Stata output below illustrates what happened in the above frequency-weighted analysis. The expand weightCRround command replicated each dataset case n-1 times, in which n is the number in the weightCRround variable: for example, each case with a weightCRround value of 3 now appears three times in the dataset. Stata retained one instance of each case with a weightCRround value of zero, but SPSS ignores cases with a weight of zero for weighted analyses; therefore, the regression excluded cases with a zero value for weightCRround.

Stata p-values from a non-weighted regression on this adjusted dataset were identical to SPSS p-values reported using the Craig and Richeson commands.

CR2014bl3
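Putting the steps together, here is a sketch of Stata commands that mimic the SPSS base-module behavior (the gen and expand commands are quoted above; the regression specifications are my reconstruction, not the replication code):

    * round the weights and run a frequency-weighted regression,
    * ignoring zero-weight cases as SPSS does
    gen weightCRround = round(weightCR)
    regress therm_bl i.dummyCond cPPAGE cPPEDUCAT cPPGENDER if weightCRround > 0 [fweight=weightCRround]

    * equivalently: replicate each case by its rounded weight, then run an
    * unweighted regression that excludes cases with a rounded weight of zero
    expand weightCRround
    regress therm_bl i.dummyCond cPPAGE cPPEDUCAT cPPGENDER if weightCRround > 0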

So how much did SPSS alter the dataset? The output below is for the original dataset: the racial shift and control conditions respectively had 233 and 222 white non-Hispanic respondents with full data on therm_bl, cPPAGE, cPPEDUCAT, and cPPGENDER; the difference in mean therm_bl ratings across conditions was 3.13 units.

CR2014bl4before

The output below is for the dataset after executing the round and expand commands: the racial shift and control conditions respectively had 189 and 192 white non-Hispanic respondents with a non-zero weight and full data on therm_bl, cPPAGE, cPPEDUCAT, and cPPGENDER; the difference in mean therm_bl ratings across conditions was 4.67, a 49 percent increase over the original difference of 3.13 units.

CR2014bl4after

---

Certain weighted procedures in the SPSS base module report p-values identical to p-values reported in Stata when weights are rounded, cases are expanded by those weights, and cases with a zero weight are ignored; other weighted procedures in the SPSS base module report p-values identical to p-values reported in Stata when the importance weight option is selected or when the analytic weight option is selected and the sum of the weights is 1.

(Stata's analytic weight option treats each weight as an indication of the number of observations represented in a particular case; for instance, an analytic weight of 4 indicates that the values for the corresponding case reflect the mean values for four observations; see here.)

Test analyses that I conducted produced the following relationship between SPSS output and Stata output.

SPSS weighted base module procedures that reported p-values identical to Stata p-values when weights were rounded, cases were expanded by those weights, and cases with a zero weight were ignored:

  1. UNIANOVA with weights indicated in the WEIGHT BY command

SPSS weighted base module procedures that reported p-values identical to Stata p-values when the importance weight or analytic weight option was selected and the sum of the weights was 1:

  1. Independent samples t-test
  2. Linear regression with weights indicated in the WEIGHT BY command
  3. Linear regression with weights indicated in the REGWT subcommand in the regression menu (weighted least squares analysis)
  4. UNIANOVA with weights indicated in the REGWT subcommand in the regression menu (weighted least squares analysis)

---

SPSS has a procedure that correctly calculates p-values with survey weights, as Jon Peck noted in a comment to the previous post. The next post will describe that procedure.

---

UPDATE (June 20, 2015)

Craig and Richeson have issued a corrigendum to the "On the Precipice of a 'Majority-Minority' America" article that had used incorrect survey weights.


Here are t-scores and p-values from a set of t-tests that I recently conducted in SPSS and in Stata:

Group 1 unweighted
t = 1.082 in SPSS (p = 0.280)
t = 1.082 in Stata (p = 0.280)

Group 2 unweighted
t = 1.266 in SPSS (p = 0.206)
t = 1.266 in Stata (p = 0.206)

Group 1 weighted
t = 1.79 in SPSS (p = 0.075)
t = 1.45 in Stata (p = 0.146)

Group 2 weighted
t = 2.15 in SPSS (p = 0.032)
t = 1.71 in Stata (p = 0.088)

There was no difference between unweighted SPSS p-values and unweighted Stata p-values, but the weighted SPSS p-values fell below conventional levels of statistical significance (0.10 for Group 1 and 0.05 for Group 2) that the probability-weighted Stata p-values did not.

John Hendrickx noted some problems with weights in SPSS:

One of the things you can do with Stata that you can't do with SPSS is estimate models for complex surveys. Most SPSS procedures will allow weights, but although these will produce correct estimates, the standard errors will be too small (aweights or iweights versus pweights). SPSS cannot take clustering into account at all.

Re-analysis of Group 1 weighted and Group 2 weighted indicated that t-scores in Stata were the same as t-scores in SPSS when using the analytic weight option [aw=weight] and the importance weight option [iw=weight].
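Here is a sketch of the kind of weighted comparison described above (Stata's ttest command does not accept weights, so a regression on the group indicator stands in for the t-test; the variable names are hypothetical):

    regress outcome i.group [pweight=weight]   // probability weights: the correct standard errors
    regress outcome i.group [aweight=weight]   // analytic weights
    regress outcome i.group [iweight=weight]   // importance weights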

---

SPSS has another issue with weights, indicated on the IBM help site:

If the weighted number of cases exceeds the sample size, tests of significance are inflated; if it is smaller, they are deflated.

This means that, for significance testing, SPSS treats the sample size as the sum of the weights and not as the number of observations: if there are 1,000 observations and the mean weight is 2, SPSS will conduct significance tests as if there were 2,000 observations. Stata with the probability weight option treats the sample size as the number of observations no matter the sum of the weights.

I multiplied the weight variable by 10 in the dataset that I have been working with. For this inflated weight variable, Stata t-scores did not change for the analytic weight option, but Stata t-scores did inflate for the importance weight option.
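A sketch of that check, again with hypothetical variable names:

    gen weight10 = weight * 10
    regress outcome i.group [aweight=weight10]   // t-scores unchanged: Stata rescales analytic weights to sum to the number of observations
    regress outcome i.group [iweight=weight10]   // t-scores inflate: importance weights are used as given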

---

UPDATE (April 21, 2014)

Jon Peck noted in the comments that SPSS has a Complex Samples procedure. SPSS p-values from the Complex Samples procedure matched Stata p-values using probability weights:

SPSS

Stata

The Complex Samples procedure appears to require a plan file. I tried several permutations for the plan, and the procedure worked correctly with this setup:

SPSS-CS

---

UPDATE (May 30, 2015)

More here and here.

 
