American Association for Public Opinion Research

2014 Presidential Address from the AAPOR 69th Annual Conference

Borne of a Renaissance—A Metamorphosis for Our Future
by Robert L. Santos

As most of you know, I’m a practitioner, a mathematical statistician who was lured into the world of survey sampling, project management, survey operations, and social science research. Although I spent the first two decades of my career in the academic survey research world, I have spent the past decade and a half embedded in public policy research. And those with whom I’ve been fortunate enough to work know that I love being much closer to policy research, even to the point of designing qualitative research studies, conducting focus groups and in-depth interviews, and analyzing or synthesizing the data. I still do plenty of design work on large-scale probability surveys and do much statistical consulting. The mix is terribly exciting. So, I address you in a much different mindset than where I started 35 years ago. I don’t pretend to be an intellectual or a scholar, but I like to be a creative, big thinker. And my thinking has evolved considerably over the years. I wish to share my thoughts about the future direction of AAPOR to strengthen and advance our reputation as leaders in public opinion and survey research.

As a leader in public opinion and survey research, AAPOR prides itself on establishing and promoting research standards. We provide guidance on valid statistical inference and scientific rigor, and we are huge promoters of transparency. I wish to challenge us by posing a few questions: Is our current leadership role sufficient? How can we be doing a better job? Are there gaps that we could be addressing?

I will return to these questions at the conclusion of this address. But first, I provide some context that I found helpful in formulating my own answers.

I’d like to start with a statement I made when I ran for this office. I believe we are in the midst of a renaissance in this country and globally. Just in the past ten years or so, advances in technology have transformed how we interact with one another, with whom we interact, and the frequency and immediacy of the interactions that we make. And access to information has changed the nature of how we experience life and how we make decisions, and it even tempers the way we think about the world and how the world thinks about us. Our nation’s demographics are changing in profound ways—minorities are becoming a majority, and the population is increasingly becoming a melting pot of races and ethnicities and cultures. These changes have been accompanied by an explosion of information sharing, including all kinds of polling data, collective knowledge sharing on the Internet, creative new ways of visualizing quantitative information, and inconceivably potent ways to store and access data.

Along with our exposure to these new capacities and capabilities comes an acculturation process about how we use and access them, as well as the purposes for which we might use them. Thus was born social media. A culture has emerged for how we exchange ideas and sentiments, how we express preferences—likes and dislikes—how we share with friends, and even how we eschew perceived “unfriends.” The culture includes participating in silly Facebook polls like “What kind of ocean animal are you?” and giving feedback in customer satisfaction surveys following an online or telephone transaction. It even includes participating in opt-in panel surveys for rewards or just for fun. What’s funny is that when we participate in some of these things, we even want to share the results.

We extract meaning from all these interactions, and we can have emotional investments in them. In a real sense, we are acculturated through the social uses of technology, and that process includes the consumption of increasingly more data, even if the information lacks in intellectual value what diet soda lacks in nutritional value. But then there are other sources of information we tap that can indeed provide valuable insights and make our lives better. Smartphone apps for navigation help us get where we are going efficiently. Other apps track our exercise and help us become healthier. We can even use the Internet on our smartphone to reacquaint ourselves with AAPOR’s survey best practices. My point is that we have become acculturated as an information- and data-rich society. We are exposed to ratings, preferences, and opinions by orders of magnitude more than earlier generations. And of course not all the quantitative data that are dispersed have originated from rigorous scientific, well-designed studies. Does that mean these data are useless? I won’t say “yes.” I can’t say “yes.” But I can say “It depends.” And this brings me to my next discussion theme: the concept of “actionable insights” and our insatiable thirst for knowledge.

I define “actionable insights” as inferences—implications, interpretations, extrapolations that flow from information or data we collect—that we perceive are sufficiently strong to allow a recommendation or decision to be made. AAPOR members, as social science researchers, are very familiar with actionable inferences. As most of you know, my day job is in a public policy research think tank, the Urban Institute. We are “all about” actionable insights. The pursuit of knowledge is always in the context of a societal issue related to policy—from immigration reform to tax policy to the Affordable Care Act to food insecurity, and the list goes on. We seek insights to elevate the debate on vulnerable populations who struggle with making ends meet or are trying to stay healthy or just find a home to live in. The concept of actionable insight is critical to public policy researchers, and all researchers, for that matter. In fact, such insights are the major takeaways in AAPOR’s annual conference research presentations. Only here, our actionable insights are typically predicated on tests of statistical significance and margins of error.

My fellow AAPOR members, I believe that there are other types of inferences that are arguably as powerful, even though they may not reach the thresholds of statistical rigor afforded by probability sample surveys or randomized controlled trials. Yes, this sampling statistician is talking about qualitative research. I’ve tasted the fruit from the qualitative research tree and found it sweet.

Here is an example: A few years ago—just before the planning of our challenging Phoenix conference with the SB 1070 anti-immigrant legislation issue—I helped design and implement a qualitative research study to explore the plight of undocumented immigrant families who were detained as the result of worksite ICE raids (Capps et al. 2007). We traveled to small towns and big cities from coast to coast to find and interview immigrant families of detainees, community leaders, local government officials, and the like. In one of the interviews of the detainee families, we heard about a young breastfeeding mother who was detained for days without being allowed access to her child, even after pleading her case to officials. That one interview—an N of 1—led the research team to recommend that ICE create and implement a consistent policy to ensure that detainee parents receive humanitarian consideration for immediate custodial release so that their children—most of whom are US citizens—receive the parental care they need and deserve. And if you are thinking, “What does this have to do with research on opinions and behaviors?” I will add that we collected rich data on how families, communities, and even a small-town jurisdiction were emotionally and financially devastated by the raids. We captured people’s sentiments through the recounting of their “stories,” including their post-raid behaviors, like the family that locked themselves in their house pretending not to be home for weeks for fear of being arrested, and the local grocers who became financially distressed because their sales dropped dramatically for months as the local community was too afraid to venture outdoors. We considered the small town that went bankrupt because its major source of tax revenue was shut down as the result of the raid.

For this qualitative research study, it sufficed to gather the range of experiences of this population without generating a point estimate or a confidence interval. And the information we retrieved was sufficiently cogent that actionable inferences in the form of findings and recommendations could be drawn. But I expect that you understand the value of a good qualitative study, so let me stretch the concept of actionable inference just a bit further. Let’s consider “big data.”

By now most of us have heard about the Google Flu Trends (GFT) project.1 It harvests search terms weekly to monitor for potential influenza outbreaks. It was established about five years ago and hailed as a great illustration of the use of big data for the advancement of public health. It has now grown worldwide, using data visualizations to show influenza activity based on counts of 50 million weekly Google word searches related to influenza (Matthew et al. 2009). Its early success in the United States was remarkable because it could forecast influenza hot spots very inexpensively and more quickly than the CDC’s usual physician-report-based methods. Although the method overstated the severity of an outbreak last year due to a media event (of all things), it remains a highly attractive and popular surveillance tool that has been expanded to at least one other disease. The GFT website features data-visualization maps as its principal communication vehicle. You can see that they include neither indications of statistical significance nor estimates of variability in their exposition of results. Their indicators are far from perfect, have unknown sources of error, and rely critically on correctly predicting the types of words that people will use in their searches about the flu (Cook et al. 2011). Yet they have found a productive use in society. This is an illustration of a model-based analysis of organic data being used to provide actionable insight.

We can even delve into the realm of inexpensive commercial daily polls that experience minuscule, single-digit response rates.2 The US Bureau of the Census currently uses such a survey to track sentiment on perceptions about government. The results are never used as official statistics, and even the magnitude of the estimates is less important than shifts over time. I, too, see the attraction and utility of using the resulting time trend for management planning to enhance the Census Partnership program and more generally its outreach efforts to engender ACS and decennial census participation. In this case, point estimates are much less important than changes (trends) over time.

So far I have talked about how we and society have become acculturated to a data-rich environment, and how gaining insights is not restricted to those arising from statistical inference. The next step is to recognize that the overarching research process is often a multi-method process involving mixtures of qualitative and quantitative research. The health services research arena is the cleanest example of this, although I believe that it is ubiquitous to all the social sciences. Consider the following illustration.

My six-year stint as a study section member for the Agency for Health Research and Quality was perhaps the richest educational experience on research in my life. I encountered “the perfect grant application.” It commenced by presenting a specific problem, using vital or other official statistics to justify its importance and significance. A literature review was provided to establish the knowledge gap to be addressed. A conceptual framework was posited that showed how various personal, environmental, and other factors affected a person’s beliefs or behaviors. Preliminary research results, typically involving in-depth interviews or focus groups, were provided to validate the conceptual model and the underlying constructs to be used in the research. These, in turn, motivated the research questions and hypotheses to be posed, leading to a research design that typically involved a quantitative study: a survey, a randomized controlled trial, or a case-control study. And a small qualitative study was proposed after the quantitative analysis in order to help explain and put some texture around the statistical findings that answered the “how much” question but not the “why” question. To me, the use of carefully sequenced, integrated qualitative and quantitative methods as a pathway to insight was profound. It made me realize that insights and knowledge come from a larger investigation process, and that survey research was but one tool—admittedly an important and valuable tool—in the researcher’s toolbox. But it wasn’t the only one. And since then I have directly experienced that statistical inference isn’t the only way to generate insight. And it is with these thoughts in mind that I asked myself and I now ask you: Is AAPOR’s current leadership role sufficient? How can we be doing a better job? Are there gaps that we could be addressing?

Here is my qualitative synthesis to address these questions. Over the past couple of decades, AAPOR has focused heavily on guidance in the conduct of quantitative research, such as survey research, scientific polling, understanding total survey error, and the practice of standardized, rigorous methodologies such as response rate calculations.3 Implicitly, our leadership has been confined to a specific subset of research methods that can yield statistical inference. This includes measuring parameters with measurable levels of uncertainty; conducting tests of significance; calculating MOEs; and conducting research to understand and minimize measurement error. By not directly speaking to the larger research process, we fail to recognize a key element of our societal renaissance—that of gaining insight from nonstatistical inference. I assert that the time has come for us to seriously consider expanding our leadership footprint to include these other methodologies.

I recognize, as I hope you do, that expanding our leadership role to include nontraditional, new methods represents an enormous challenge. There are no easy answers. For if there were, then we would have already seen them by now. If we are to offer guidance about nontraditional research methods, we need a framework with which to help categorize the strengths and weaknesses of the almost limitless different approaches that are appearing in our world. I offer an idea about that framework, one that you may be familiar with if you read AAPOR’s Nonprobability Task Force Report (Baker et al. 2013). It is the concept of fit for purpose.

Our AAPOR Nonprobability Task Force report devoted a chapter to the discussion of fit for purpose in the context of data quality. But here I will be very brief and perhaps a bit simplistic. In my mind, the concept of fit for purpose is the balance researchers seek between available resources, the rigor of research design and implementation, and the nature of the insights needed to effectively address the research questions.

Implicitly or explicitly, it is the fit-for-purpose framework that leads qualitative researchers to subjective samples and semi-structured interviews. It leads the ACS to pursue 97-plus percent response rates4 and to use gold-standard probability sampling and field methods. It leads nongovernmental policy researchers to conduct telephone and online polls with single-digit response rates (Pew Research Center 2012), with or without probability sampling. And it reflects the heavy reliance on opt-in online surveys by market researchers to explore preferences about commercial products or services.

Let us not forget the explosion of model-based data products that are flooding our society. This year I have had a number of media inquiries for commentary on such methods as using tweets or other social media to predict elections. In fact, statisticians at Columbia University conducted a retrospective exercise to show that Xbox surveys in combination with other concomitant data and adjustments could be used to predict the 2012 presidential elections (Wang et al. 2014). And let’s not forget the Google Flu Trends and other similar big-data products that use data analytics to generate all kinds of inferences. Society is embracing these data products, rightfully or not. We need to have something to say about this. For if we limit our leadership focus to traditional, classical methods of scientific public opinion and survey research and statistical inference, I fear the world will pass us by. Instead, I propose that we use the fit-for-purpose framework as a starting point to develop guidelines about the broader spectrum of research and research methods. This is a genuine opportunity for AAPOR, although I reiterate that it represents a huge challenge.

Now let’s consider AAPOR’s current portfolio of standards and guidance. If you consult our website,5 you will easily find several task force reports dating back to 2008 on topics ranging from what went wrong in New Hampshire pre-election polling to online opt-in panels, polling and democracy, nonprobability sampling, and emerging technologies. We also have statements on standards and best practices for surveys, sample dispositions, and response rates. And of course we have our AAPOR Code. Anyone reviewing our research materials could reasonably conclude that we are an association that focuses almost exclusively on applications and methods of survey research. And the fact is, we do. We have defined ourselves by our products, which tell the world that the methods worthy of standards are those associated with survey research, polling, and their associated statistical inference. I can’t help but ask myself: Are polling and survey research the only ways to gain knowledge about opinions, behaviors, and characteristics of ourselves? And if they are not, then what should be AAPOR’s role in addressing other methods? I worry that we have carved ourselves a corner of the research world—that of the rigorous probability sample survey—that is rapidly shrinking.

What can we do? As we look to the future, I would like to see AAPOR focus on how to gain insight using a much broader definition of “research.” I would like to see AAPOR formally recognize that insights need not always stem from statistical inference per se. In a sense, I’d like to see AAPOR undergo a metamorphosis, to emerge from our “survey research exoskeleton” and spread our wings by addressing other ways to garner actionable insights about human knowledge, opinion, behaviors, and characteristics. This would involve exploring and identifying best practices and standards for qualitative research and for model-based research.

Instead of restricting our guidance to the area of survey research methods, let’s turn the conversation on its head. If we accept that research is a quest for actionable insights—a quest for knowledge—then survey research is but one of a larger and rapidly expanding collection of research approaches that can be employed in a journey to gain knowledge. This allows other research methods to be acknowledged for the value of insights they offer, even if their contributions fall short of yielding statistical inference. Adopting this alternative paradigm offers a framework for a more productive conversation about the merits and limitations of methods that we have found so difficult to address as an association thus far. Under this way of thinking, Google Flu Trends and other similar harvesting of searches or social-media data can be applauded for their value as population surveillance tools while recognizing their pitfalls. Perhaps opt-in online panels can be recognized for their use in market research applications even as we retain our caution against their use for generating population point estimates based on our current knowledge base—which, by the way, grows with time and eventually will require more definitive guidance to be developed. So, my declaration to you is:

Let’s talk about how we can gain insight and knowledge about human behaviors and opinions through an overarching research process.

Let’s recognize and embrace a growing toolbox of qualitative and quantitative methods to gain insight about public opinion, knowledge, and behaviors.

Let’s allow the research question to dictate the combination of methods and approaches to be used that best fits the research question at hand, and be sensitive to the fact that resources are never limitless, so trade-offs must be balanced between quality and resulting insights.

And let’s do this while still preserving our existing core guidance on survey research. We need to preserve, distinguish, and promote “statistically valid” probability survey research methods, differentiating them from less rigorous methods, yet be ready to include specific nonprobability approaches if/when a sufficient evidence base can be developed to validate those methods.

Yes, changing our perspective represents a challenge, and it will not be easy. And we should recognize that we won’t be able to do this by ourselves. I see value in bringing other perspectives, ideas, and expertise to the table. For instance, it may be beneficial to hear from econometricians, mathematicians and mathematical statisticians, data analytics experts, philosophers, and perhaps even the youngest cohorts of researchers who have lived only in the renaissance that we longtime veterans have witnessed from its inception (e.g., our 2014 AAPOR ResearchHack participants).

We need to listen better to those who offer alternative, even controversial approaches. We need to see potential and opportunity before pointing out limitations. I was a discussant at a conference session last year on nonprobability methods. Without getting into specifics, I praised the authors’ approach for its use in exploring patterns and relationships among factors, for surveillance, and for qualitative research. Then I pointed out the obvious limitations when the research objective migrated to the production of population point estimates. Afterward, one of the authors came to me discouraged and devastated because of my commentary. I went out of my way to encourage this person to continue developing their approach. I told this author that the critical comments I provided were meant to spark additional innovation to overcome obvious limitations. The person left our conversation with some optimism. You see, I believe that it is precisely these creative thinkers and innovators who blaze the trail for what will be the standardized, rigorous methods of our future. You can bet that the first telephone or web surveys conducted decades ago were not perfect. The same holds here.

I conclude with a reminder that I commenced this address by posing questions about AAPOR’s leadership role, strengthening that role, and identifying and addressing gaps. I’ve suggested that we could rethink our current framework from one whose focus is dominated by survey-research-based statistical inference to a paradigm that embraces the pursuit of knowledge using a broader toolbox of research techniques. I’ve talked about how we can use the fit-for-purpose concept in combination with a broader perspective of seeking knowledge and insight to develop new guidance for our field and our stakeholders. I am offering AAPOR a strategic direction for further discussion and refinement. This is not a suggestion to open the floodgates. It is a suggestion to open our minds and seize an opportunity, lest we risk untoward consequences.

Thank you for the privilege of serving you this past year, and for your consideration of these ideas I offer today.