Trump’s Survey Data Is (Not Surprisingly) Also Biased

In defending his proposals to end Muslim immigration, Donald Trump cites research that says 25 percent of American Muslims agree that violence against the U.S. is justified, and 51 percent want American Muslims to have the option of being governed by Islamic law.

The poll that generated this data, commissioned by the Center for Security Policy, is unreliable and filled with survey error — or to employ a more popular term, crap.

In my Reporting classes, I teach my students how to evaluate survey data.  This survey fails on two counts that I talk about in class — one mathematical and the other conceptual — that render it statistically worthless, except as a gathering place for flies.

First, the math part (and yes, we’re starting with the math because if I put this second you would change the channel before you got to it).

The survey uses what we call a non-probability sample, which means that you cannot generalize its results to the population as a whole.  Put the other way around: with a probability sample (a random sample being one example), you can generalize the results, with caution, of course.

A typical survey that employs a random sample involves mail, telephone or direct interviews of a pre-selected sample.  The CSP survey was a voluntary online survey that anyone who found the web page could click on and answer.  Not only is the sample flawed mathematically, but the pollsters cannot even verify who took the survey.  For all we know, CSP staff could have clicked on the survey themselves and answered it in a way that advanced the organization’s initiatives.

You used to see these all the time, often on the entry page of news or sports sites like CNN, FOX News and ESPN.  They would put some current events or favorite athlete question on the page and let you click on it.  But it always had a disclaimer that this was not a scientific survey.

You also see these now on Twitter, with its polling feature.  It too is all in fun.  The results are meaningless for getting society’s pulse.

I also warn my students to note who is conducting the survey, to see if they have an interest in the results.  For this reason, sorry, but surveys conducted by politicians are usually biased and self-serving, worthless beyond the politician’s interests.

Which brings us back to this survey of Muslims.  As this report from Foreign Policy points out, the Center for Security Policy has a political agenda that it was seeking to advance with the survey.

The CSP actively promotes an anti-Muslim message, and the survey seems slanted to be consistent with the agenda.  Its executive director, Frank Gaffney, has a reputation for making extreme, even outlandish comments about political opponents.

That immediately casts suspicion on the results, which can be expected to support CSP’s agenda.  According to this report from Georgetown, that bias shows up in survey wording and in how the results are interpreted.

By comparison, according to the most recent numbers available from Pew Research Center (which Trump also cited, though vaguely), 86 percent of the Muslims surveyed said tactics such as suicide bombing were rarely or never justified.

As with so much of Trump’s message, the numbers are hard to believe, once you look at them closely.


“Survey Says”: Margin of Error Indeed

(Update Feb. 27, 2015: Since some time before Dec. 12, 2014, I have been blocked by Darren Rovell. It might be because I questioned his Johnny Manziel autograph story (admittedly minor questions answered by others).  It might have started with this blog, from May of 2013.)

I recently had a dust-up with ESPN’s Darren Rovell about the reliability of Poptip poll results.

Poptip is a program that allows users to conduct quick surveys on Twitter, and then compiles the results.

No harm, no foul.  But apparently some folks got into it with Rovell about his references to a “margin of error” in his Poptip polls.

So, being a professor and a math nerd, I thought I would convene a quick session on surveys, if for no other reason than to explain why some people (like me) get so agitated when Poptip polls claim a margin of error.

A margin of error is one measurement of how closely a sample’s opinions reflect those of the population as a whole.  You might also see it called a “plus/minus.”  It relates to the survey’s reliability in predicting the results of an election, for example.  It does seem to give a poll a certain gravity and authority, but it is not always warranted.

The issue here involves not the size of the sample or the quality of the survey, but how the sample was drawn.  As anyone who has studied statistics will tell you, the margin of error is provided only when the sample is drawn from the population by some mathematical formula — random or systematic.
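For what it’s worth, the math behind that margin of error is simple.  Here is a minimal sketch in Python (the function name and the 1,000-respondent example are mine, for illustration) of the standard margin of error for a proportion drawn from a simple random sample, at 95 percent confidence:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion from a simple random sample.

    p: sample proportion (e.g. 0.51 for 51 percent)
    n: sample size
    z: z-score for the confidence level (1.96 for 95 percent)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of 1,000 respondents, 51 percent answering "yes":
moe = margin_of_error(0.51, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 3.1
```

Notice that the formula assumes every member of the population had a known chance of landing in the sample.  That assumption is exactly what a mathematically drawn sample provides and a click-if-you-feel-like-it poll does not.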

Poptip polls — like the quick polls on the front page of ESPN — create what are known as volunteer or convenience samples.  These are not mathematically drawn samples, and they cannot reflect the population as a whole, because they are drawn from people who happen to be on the ESPN page, or see a Darren Rovell tweet, and vote.

They are fun, and they often give thousands of people a chance to voice their opinions.  But that is as far as it goes.  The results reflect only the people who voted, not sports fans or voters as a whole.  Even responses in the tens of thousands (“mass,” as Rovell described it to me) have no statistical meaning beyond those who vote.
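To see why sheer volume does not rescue a self-selected sample, here is a small hypothetical simulation (the population and the self-selection rates are invented for illustration).  A modest random sample lands near the true 50 percent; a much larger volunteer sample, in which one side is simply three times as likely to bother voting, does not:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Hypothetical population: exactly half hold opinion A (1), half opinion B (0).
population = [1] * 50_000 + [0] * 50_000

# Simple random sample of 1,000: every member equally likely to be picked.
srs = random.sample(population, 1000)
print(f"Random sample of 1,000:   {sum(srs) / len(srs):.1%}")

# Convenience sample: people "vote" only if they feel like it, and suppose
# opinion-A holders are three times as likely to bother (30% vs. 10%).
volunteers = [x for x in population
              if random.random() < (0.30 if x == 1 else 0.10)]
print(f"Volunteer sample of {len(volunteers):,}: "
      f"{sum(volunteers) / len(volunteers):.1%}")
```

The volunteer sample comes out roughly twenty times larger, yet its estimate sits far from the truth, and no margin-of-error formula can fix that, because the error comes from who chose to answer, not from sample size.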

Again, that’s OK.  No problem.  Just don’t quote a “margin of error” to make a survey into something authoritative when it is not.  That’s my issue.

When a pollster uses a random sampling method, he/she has mathematical tools available to predict how close the results are to the population as a whole.  Not perfect.  Not always right.  But as Nate Silver demonstrated in the 2012 election, carefully drawn data in skillful hands can yield rich information.

I’m not here to spoil anyone’s fun. If you want to run a Poptip poll, have at it. It looks like fun.  But don’t talk margin of error. That’s just putting lipstick on a statistical pig.