Historians and Polls

Elections are a special time. Throughout the year, I read and write about politics and policy, but during election season (a magical time!) the whole [American] world is willing to engage me.

As a historian, I favor qualitative analysis. I delight in snarky Twitter missives from comedians and politics bloggers. I enjoy a good speech analysis and interviews with “real people” about their political views. I will read anything The New York Times op-ed page has to offer, even David Brooks, because I like wincing. I sigh with and at the liberal media when they try to do things like “attend the RNC” or “consume conservative media.” I eagerly delve into campaign exposés and profiles. Needless to say, internet media provides me with plenty of ways to get what I like. I lean toward these sources because they often have pithy quotes that I can imagine really “making” a paragraph in a paper or book someday.

That said, the biggest change in my media consumption this election has been a growing obsession with polling, a subject I previously knew little about. Nate Silver’s data analysis has been, in my view, the most critical and comprehensive. A fine example is a recent blog post about including cellphones in poll data. Silver writes,

These results are consistent with some past research. Roughly one third of American households rely solely on mobile phones and do not have landlines, meaning they will simply be excluded by polls that call landlines only. Potential voters who rely on cellphones belong to more Democratic-leaning demographic groups than those which don’t, and there is reasonably strong empirical evidence that the failure to include them in polls can bias the results against Democrats, even after demographic weightings are applied.

Silver continued with this methodology theme today. He writes,

Of course, if you’re a purist, then the Gallup survey is the only one of the four tracking polls that you’ll find acceptable, since it’s the only one to use a traditional methodology. But then you have to reconcile that with the fact that the polls using the strongest methodological standards at the state level are now giving Mr. Obama a six-point lead in the battleground states.

These two posts gave me pause. Many polls skip a large slice of the American population (one that includes me!), and even Gallup can’t be counted among the polls “using the strongest methodological standards.”
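To make the mechanism concrete for myself, here is a small, purely illustrative simulation (my own sketch in Python, not Silver’s analysis, and every number in it is invented): if cellphone-only voters lean slightly more Democratic even within the demographic groups pollsters weight on, then re-weighting a landline-only sample by demographics alone still understates the Democratic share.

import random

random.seed(0)

def make_voter():
    # "young" is the demographic group the pollster weights on.
    young = random.random() < 0.35
    # Illustrative assumption: younger voters are far more likely to be cell-only.
    cell_only = random.random() < (0.6 if young else 0.15)
    # Within each age group, cell-only voters lean slightly more Democratic.
    p_dem = (0.55 if young else 0.45) + (0.05 if cell_only else 0.0)
    return young, cell_only, random.random() < p_dem

population = [make_voter() for _ in range(100_000)]
true_dem = sum(dem for _, _, dem in population) / len(population)

# A landline-only "poll": cell-only voters are never sampled at all.
sample = [v for v in population if not v[1]]

# Re-weight the landline sample so its young/old mix matches the population.
pop_young = sum(y for y, _, _ in population) / len(population)
smp_young = sum(y for y, _, _ in sample) / len(sample)
w_young, w_old = pop_young / smp_young, (1 - pop_young) / (1 - smp_young)

weights = [w_young if y else w_old for y, _, _ in sample]
weighted_dem = sum(w * dem for w, (_, _, dem) in zip(weights, sample)) / sum(weights)

print(f"true Democratic share:           {true_dem:.3f}")
print(f"weighted landline-only estimate: {weighted_dem:.3f}")  # still biased low

The toy example only shows that weighting can correct for who is over- or under-represented within the sampling frame, not for the people the frame never reaches in the first place.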

One more post drove this problem home to me: Ezra Klein’s discussion of “the poll result that explains the election.” Klein explains that the questions pollsters like to ask don’t necessarily match the sentiments of voters:

That suggests, again, that the question they’re [voters] likely to ask isn’t “Do I feel better off than I did four years ago?” Voters may not expect to feel good four years after the worst economic crisis in generations. Rather, the question is “has Obama done a good enough job under the circumstances, or at least a better job than the Republicans would have done?” And the electorate’s answer, so far, seems to be that he has.

Ok, so questions and methods matter; duh, I learned that in the second week of college. But I highlight these issues because historians are generally careless in their use of poll data. We are, after all, dependent on what the pollsters left us with. But in many political history texts, polls, particularly Gallup polls, are cited as a fast and effective way to tell you what “mattered” to “people” in the past.

One I see frequently is the Gallup poll showing crime as the number-one concern among Americans in 1968. But if you look at the poll, most of the listed “problems facing this community today” were crime-related, and the first five questions of the poll dealt exclusively with issues of crime. The available responses to the question “What steps do you think should be taken to reduce crime?” were almost exclusively punitive and contingent on government intervention. The poll privileged some policy possibilities and dismissed others. One can imagine that if the person answering the poll wasn’t scared of crime when they picked up the phone, they would be by the time they put it down.

I think the poll analysis from our own time is instructive. Historians, and especially those studying mass incarceration and the carceral state, should be just as careful using poll data as they would be with other quantitative and qualitative sources. Historian Khalil G. Muhammad has highlighted how “statistical discourse” about crime and criminality in the Progressive Era shaped attitudes and access to services; historian Heather Thompson has noted that higher crime statistics meant states and cities could access more funding for police technology in the 1960s. Polling is not a science, and it is not free of bias. Who is asked, and how they are asked, matters; unless we engage those issues in our analysis, historians should think twice about citing poll data.
