

2013 Presidential Address from the AAPOR 68th Annual Conference

Applying a Total Error Perspective for Improving Research Quality in the Social, Behavioral, and Marketing Sciences
by Paul J. Lavrakas

The topic about which I am speaking today is very special to me, and I have been looking forward to making a presentation such as this for a long time. But please know that the ideas about which I will be speaking continue to evolve for me, and what I would have said about this topic twenty years ago, or even ten years ago, would not have been as “developed” as it is for me today (cf. Lavrakas 2012, 2013).

I would like to begin by noting the three major premises that underlie the views I will be expressing. These come from my nearly forty years as a researcher, during which I have encountered a great many and wide variety of social, behavioral, and marketing research studies, both quantitative and qualitative in nature.

First, I believe that many of these studies were conceptualized poorly, executed poorly, and/or interpreted poorly.

Second, I believe that the quality of most of these studies could have been improved with few, if any, cost implications.

And third, I believe that using the Total Error framework, about which I am speaking today, can help bring about a meaningful improvement in research quality.

Many in the audience already are familiar with the Total Survey Error (TSE) approach (cf. Groves 1989; Fuchs 2008). But I sense that many more are not familiar with it. Furthermore, from what I have observed in the past twenty-plus years, few appear to apply the approach broadly to the diverse realms of social, behavioral, and marketing research. And yet, it is what I call the “Total Error” perspective that underlies the TSE perspective.

I am not sure why this is the case, but my goal today is to demonstrate why thinking broadly about a Total Error (TE) approach, not merely a TSE approach, has been so very useful to me. And, as I will explain, I strongly believe that the TE perspective has applicability across all qualitative and quantitative research methods in the social, behavioral, and marketing sciences, and I strongly believe that this is the way researchers in these fields can and should improve the quality of the research they conduct. 

Genesis of the TE Perspective

It was 1990 when I started to read Bob Groves’s book, Survey Errors and Survey Costs (Groves 1989). I read it very slowly over the course of about a year, as I found it to be an extremely important book that I could benefit best from if I consumed it in very small doses (and I still regard it as the most important book written in our field). Reading Bob’s book started me on a thinking process that I did not realize was occurring at the time. This led me by the mid-1990s to the belief that the TSE model could and should be adapted and applied to all kinds of social, behavioral, and marketing research.

So, when I use the phrase Total Error, I am referring to all the problems that can make information gathered in any social, behavioral, or marketing research study, and the conclusions drawn from that study, wrong—i.e., unreliable and/or invalid—or at a minimum may undermine one’s confidence in the study’s results.

Total Error encompasses anything that can cause the information gathered in a research study to be of questionable or limited value. It helps answer the question, “Is the study fit for purpose?” TE not only provides a way to help researchers plan their research; it also helps them oversee their data-collection field period and helps them when analyzing, interpreting, and disseminating their findings and conclusions. It provides excellent guidance at every step in the research process. I also believe that TE is a comprehensive and systematic way for any researcher to engage in a careful process of self-evaluation and improvement. One could think of it as a checklist: when someone is thinking about doing a research study, here is a series of very logical, systematic, and interrelated questions that every researcher should ask and answer for herself/himself.

Many of you know that Total Error includes both bias and variance as sources of error. Bias is the directional type of error. Variance is the type of error that is all over the place, imprecise, and so forth. TE encompasses all of this.

For those who are qualitative researchers, some of the terms I am using might be unfamiliar to you or you may be uncomfortable with them. But I hope you will see beyond the more “quantitative” terms I will be using for the first half of my address today, because I think what I have to say also has application to the qualitative realm of research. (And later, I will present a “translation” of the TE perspective into more familiar qualitative terms.)

A final overarching comment: I believe it is better for researchers to make a conscious choice to ignore the implications of errors in their research than to be unaware of those errors or to never consider them at all. If researchers decide they are not going to worry about something that may be a threat to the reliability and/or validity of their research, at least let them make an active and informed decision about that.

So, what is the schema that I use for the TE perspective?

If you know the Groves et al. (2004, 2009) books on survey methodology, the flow chart in figure 1 will be very familiar to you. But I have adapted it a bit.

Figure 1. A Schematic of the Basic Social, Behavioral, and Marketing Research Process. SOURCE.—Adapted from Groves et al. (2004).

Let’s start with the Representation side of the figure. No matter what kind of social, behavioral, or marketing research one wants to conduct, one is starting out (even if it is only implicitly) with a population to represent. This often is referred to as the target population. The next step in the process is deciding how one is going to represent that population. For that, a researcher needs to have some kind of a “list” of members (aka, the “elements”) of that population (which is called a “frame” by many quantitative researchers) that ostensibly represents that target population. That listing serves as the sampling frame about which a researcher is going to make and apply sampling decisions. Those decisions will lead to the creation of the initially designated sample from which (or about which) the researcher will try to gather data. The processes used to gather data will lead the researcher to a final sample from whom (or about whom) data have in fact been gathered. It rarely happens in social, behavioral, or marketing research that data are gathered for all the elements in the designated sample; thus, most often data are “missing” from the final sample, and this is called nonresponse. Because of this, and because of possible misrepresentation of the target population by the sampling frame and factors associated with how the designated sample was drawn, adjustments may need to be made in an attempt to improve how well the final sample represents the target population; this is what quantitative researchers refer to as weighting. This enumerates the various realms on the representation side of figure 1.

Now, turning to the Measurement side of figure 1, the researcher begins by deciding what domain(s) of information will be gathered from (or about) whoever or whatever is being studied. This starts with the researcher identifying (i.e., specifying) the constructs of interest (e.g., alienation, brand awareness, health care access, motivations for educational attainment, political candidate preferences, fear of crime and victimization history, belief in extraterrestrial life, etc.). Thus, the researcher must decide what topics to gather information about and what attributes make up each of those topics. Once these decisions have been made, the researcher must determine how to operationalize (i.e., how to specifically measure) those constructs and their attributes. As depicted by the Measurement box of figure 1, this leads to the creation of the tool or instrument that will be used to gather data, such as a questionnaire or a coding form. Concern then turns to the information that is generated by those who are gathering the data (e.g., interviewers, observers, and coders) and/or those who are providing the data (i.e., the Response box in the figure). Once the “raw data” have been gathered, the researchers must decide what processes (e.g., cleaning, coding, transforming, imputing, appending, etc.) will be applied to that raw data to create a final data set that is ready to analyze. Once the final data set exists, the researchers will analyze it. The research study’s findings and conclusions then follow from the Representation stream and Measurement stream coming together.

Total Error Framework

Let me now shift to the Total Error framework as I envision it, as depicted in figure 2.

Figure 2. A Schematic of the Total Error Framework.

Starting on the Representation side, between the realm of the target population and the sampling frame is where coverage error becomes an issue. When we are thinking of coverage/noncoverage and possible coverage error, we are thinking of how well the sampling frame represents the population in which we are interested. We say noncoverage occurs when the frame is off in some way; i.e., some portion of the target population is not encompassed by the frame(s). And if we are interested in the issue of within-unit coverage, we could have within-unit noncoverage if some portion of the within-unit population is missed by the within-unit selection method that is deployed. A telephone survey that uses a selection process to pick one and only one person in a household to study is an example of this. Ultimately, the concern is this: if there is noncoverage at the unit or within-unit level—and there almost always is—and if the group that is not covered by the list or frame or selection method differs in nonnegligible ways on our key variables of interest from the group that is covered, then we are starting out our research with the nonignorable problem of bias due to noncoverage (which, for example, happens in almost all Internet-based panels of the general population and with all landline-only telephone surveys of the general population).
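The noncoverage concern just described can be stated arithmetically. Below is a minimal sketch (with hypothetical numbers) of the standard identity from the survey-error literature: the bias in an estimate based only on the covered population equals the noncovered share of the target population times the difference between the covered and noncovered groups on the variable of interest.

```python
def coverage_bias(prop_noncovered, mean_covered, mean_noncovered):
    """Expected bias in the covered-population mean relative to the full
    target population: bias = W_nc * (Y_covered - Y_noncovered), where
    W_nc is the share of the population missing from the frame."""
    return prop_noncovered * (mean_covered - mean_noncovered)

# Hypothetical landline-only frame: 40% of the population is cell-only,
# and 55% of the covered group holds some attitude vs. 35% of the
# noncovered group.
bias = coverage_bias(0.40, 0.55, 0.35)
print(round(bias, 3))  # 0.08: an 8-point overestimate before any interviewing
```

Note that the bias vanishes only when the covered and noncovered groups do not differ on the measure, regardless of how large the noncovered share is.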

So on to sampling.

In figure 2, there is an arrow leading from the Sampling Frame box to the Measurement box. Generally, a researcher decides early on about the list (frame) that will be used to sample the population of interest. This decision will determine the mode(s) of data collection that can be used to measure the constructs of interest. For example, an RDD frame will allow data collection to be done via telephone, and/or via mail or in person for those numbers that can be matched to an address, or via the Internet if respondents have Internet access and are given a URL to reach the data-collection instrument. Furthermore, the frame that is chosen may restrict the constructs that can be measured in the study (cf. Fan 2013).

Once the sampling frame has been chosen, the units or elements (e.g., persons) need to be selected. To do this, a sampling design needs to be determined, be it a simple random sample, a multistage cluster sample, or some other sampling design. As part of this process, the researcher will decide whether to use a probability approach to selection or a nonprobability approach. In the case of the nonprobability approach, it might be a convenience sample or a purposive one, and it even may include random selection. However, for it to be a probability sample, (1) the researcher must know the probability of selection for each element in the sampling frame; (2) this probability must be greater than zero; and (3) selection must be entirely random, with the exception of elements that are given a certain (100-percent) chance of selection (which happens very infrequently). If a probability sampling design is used in a quantitative study, the researcher must decide what level of statistical precision is desired for the final sample. If it is a qualitative study, the researcher must decide how many people (or, for example, elements of content) are needed to gather data from to make the researcher comfortable and confident with the amount of data that are gathered; not in a statistically precise way, but to me still a similar concept. With a quantitative approach, these sampling decisions lead to the calculation of confidence intervals in probability samples, or of credibility intervals in nonprobability samples (cf. AAPOR 2012).
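For the quantitative case, the "level of statistical precision" mentioned above is usually expressed as a margin of error. A minimal sketch, assuming simple random sampling of a proportion and ignoring the finite-population correction:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion under
    simple random sampling (no finite-population correction)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5 with n = 1,000 completed interviews:
moe = margin_of_error(0.5, 1000)
print(round(moe, 3))  # 0.031: the familiar "plus or minus 3 points"
```

Working this backward (choosing n to hit a target margin of error) is one of the sampling decisions this stage of the framework asks the researcher to make explicitly.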

These sampling choices lead to the creation of the initially designated sample from which the researcher will try to gather data. However, it is almost always the case that data will not be gathered from all the elements in the designated sample, be the study quantitative or qualitative. This is due to all the reasons for nonresponse in social, behavioral, and marketing research—all the reasons that people do not participate; e.g., either we never contact them, or when we do contact them they refuse. (There are other reasons for nonresponse; e.g., language barriers, illiteracy, poor health, etc.) The main differences between the designated sample and the final sample are caused by unit nonresponse and lead to concerns about possible Nonresponse Error. If the nonresponders are different from the responders in nonignorable ways on key variables of interest, then there may be nonnegligible bias due to the study’s nonresponse. This may happen when the nonresponders are not missing at random—that is, it is not a random subset of the designated sample that is missing from the final sample. In turn, if the respondents and nonresponders are not different on the key measures of interest, then nonresponse bias (regardless of the size of the nonresponse in a study) is not an issue. But the anticipated size of the nonresponse that will be experienced in a study is important for the researcher to accurately estimate, as it is needed to decide how much larger the designated sample will need to be compared to the final sample. For example, if a response rate is only 10 percent (as is the case in many national RDD studies in the United States), then the designated sample will need to be at least ten times larger than the final sample.
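The two quantities in this passage, the required size of the designated sample and the approximate bias from nonresponse, can both be sketched in a few lines. The numbers below are hypothetical; the deterministic bias approximation is the standard one (nonresponse rate times the respondent/nonrespondent difference):

```python
import math

def designated_sample_size(final_n, expected_response_rate):
    """How many elements must be drawn so that, at the expected
    response rate, the final sample reaches the desired size."""
    return math.ceil(final_n / expected_response_rate)

def nonresponse_bias(resp_rate, mean_resp, mean_nonresp):
    """Deterministic approximation: bias in the respondent mean is
    (1 - response_rate) * (Y_respondents - Y_nonrespondents)."""
    return (1 - resp_rate) * (mean_resp - mean_nonresp)

print(designated_sample_size(1000, 0.10))  # 10000: ten times larger at a 10% rate
# Hypothetical: 10% respond, and responders average 0.60 on some
# measure vs. 0.50 among nonresponders.
print(round(nonresponse_bias(0.10, 0.60, 0.50), 3))  # 0.09
```

The second function also shows why a low response rate by itself does not prove bias: if the respondent and nonrespondent means are equal, the bias term is zero at any response rate.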

In addition to nonresponse at the unit level, there is a different form of nonresponse caused by those members in the final sample who do not provide all the data that researchers are seeking from them; i.e., what is called item nonresponse because of missing data from some respondents to particular variables the study is trying to measure. So, for example, if the people who are not responding with their income are materially different in income from people who do provide their income, there may be item-nonresponse bias remaining in the data even if the missing data for income are imputed.

Finally, on the representation side, from a TE perspective, we have the realm of Adjustment Error. Here, the quantitative people are thinking about the process of weighting. (Of note, I believe that this is an area about which our disciplines need a lot more disclosure, transparency, and openness.) It is at this point that quantitative researchers recognize that issues related to noncoverage, sampling choices, and nonresponse have made their final samples no longer representative of their target populations, and ask: what can I do statistically to correct for this? To my mind, the statistical choices that many researchers make simply follow past practices blindly: OK, age, education, gender, and race—that is all I have to weight for, and that is going to fix all the representation problems. But that is just not the case...it cannot be the case. And in quantitative research, these adjustment decisions lead us into the issues of design effects and effective sample size. An additional point I would like to make about adjustment and adjustment error—and I know there are people who will disagree with me—is that even qualitative researchers make choices about how to emphasize some of their data over other of their data. I would call that the semi-equivalent of what this is all about: the process of making adjustments to one’s data in the hope and belief that it will improve the representativeness of one’s final sample.
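To make the adjustment discussion concrete, here is a minimal sketch of cell-based (post-stratification) weighting, together with the Kish approximation for effective sample size, which quantifies the design-effect cost of unequal weights. All proportions are hypothetical:

```python
def poststratification_weights(sample_props, pop_props):
    """Cell weight = population share / sample share, for each
    weighting cell (e.g., age groups)."""
    return {cell: pop_props[cell] / sample_props[cell] for cell in sample_props}

def effective_sample_size(weights):
    """Kish approximation: n_eff = (sum w)^2 / sum(w^2). Unequal
    weights shrink n_eff below the nominal n, inflating variance."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Hypothetical: young adults are half as common in the sample as in
# the population, so their weight doubles.
w = poststratification_weights({"18-34": 0.15, "35+": 0.85},
                               {"18-34": 0.30, "35+": 0.70})
print(round(w["18-34"], 2))  # 2.0

# 1,000 respondents: 150 weighted up to 2.0, 850 weighted down to ~0.82.
case_weights = [2.0] * 150 + [0.70 / 0.85] * 850
print(round(effective_sample_size(case_weights), 1))  # 850.0
```

Even this modest two-cell adjustment cuts the effective sample size from 1,000 to 850, which is precisely the kind of cost that deserves more disclosure than it usually receives.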

Now, let’s go over to the Measurement side of figure 2.

Here, we start with identifying the various constructs that need to be measured. And it is here that the issue of Specification Error arises if researchers do not accurately identify all the important attributes that make up those constructs. When this happens, researchers are setting themselves up to not gather robust enough data about what they purport to be studying: for example, if a researcher is interested in studying brand awareness but only measures a portion of what that brand actually is. In that case, we would say the researcher has committed a Specification Error.

Once decisions have been made about what constructs are going to be measured and what factors associated with those constructs need to be measured, a researcher must decide how to actually gather those data. Whether one is using a discussion guide, a content analysis coding form, a questionnaire, or some other information-gathering approach, we have to devise an instrument or procedure to help us gather our data—the way astronomers use a telescope or a biologist uses a microscope to gather their data. Social, behavioral, and marketing researchers must build and deploy some kind of tool via which they operationalize the constructs and all their factors robustly. And, we all know from personal experience that there are many things that can go wrong with the wording, the ordering, and the formatting (layout) of these measurement instruments.

Respondents, subjects, or participants from whom or about whom we are gathering data also can be a source of errors, and that also falls under the measurement realm. They are not always capable of providing us with accurate information about what we are seeking from them, nor are they always willing to provide us with accurate information, regardless of whether one is conducting quantitative or qualitative research.

Researchers often have other human beings involved in gathering the data for their study. And in gathering these data, those humans are actively creating the data the researcher later will analyze. In many cases, these human data generators are affecting the data that are being measured and recorded. Although there are ways that these effects can be minimized (e.g., training and monitoring), often problems still exist. But when that happens, the nature and size of these effects at least can be measured. For example, it could be something about the human data collectors’ behaviors, beliefs, attitudes, etc., that could be conscious or unconscious, that is affecting the data; or it could be something that is immutable, such as their personal characteristics (e.g., their accent or other voice parameters).

The final component of Measurement Error is the different modes that researchers might use to gather their data that can be used on their own or in combination with other modes. And, research has shown conclusively that different modes can yield different data when measuring the same constructs; for example, sensitive data generally are reported with more accuracy using a self-administered mode compared to an interviewer-administered mode. So, the mode or modes that a researcher chooses also can affect their data quality.

After data are gathered, they are not yet ready to be analyzed. Instead, researchers must first “massage” their “raw data.” For example, in a focus-group study or an in-depth interview study, the audiotapes often must be transcribed. In survey research, the data must be cleaned and further processed. Furthermore, new variables may need to be created from the original data to use in analyses, missing data may need to be imputed, bad data may need to be dropped, etc. And when these processes are applied, Processing Errors can arise in the form of bias or variance.
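As one concrete example of a processing step that can itself introduce error, consider the simplest form of imputation: replacing missing values with the observed mean. It preserves the variable's mean but understates its variance, exactly the kind of trade-off the TE framework asks researchers to weigh consciously. A sketch with hypothetical income values:

```python
def impute_mean(values):
    """Replace missing values (None) with the mean of the observed
    cases: the simplest imputation, which preserves the mean but
    understates variance."""
    observed = [v for v in values if v is not None]
    m = sum(observed) / len(observed)
    return [m if v is None else v for v in values]

incomes = [30, 50, None, 70, None]  # hypothetical, in thousands
print(impute_mean(incomes))  # [30, 50, 50.0, 70, 50.0]
```

If, as in the item-nonresponse example earlier, the people withholding income differ systematically from those reporting it, this procedure leaves the bias in place while making the data look complete.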

Finally, something that is very important to me, and that I do not see in other people’s frameworks, is something I call Inferential Error. It has two components. My background is research psychology, and I still believe that Donald Campbell is the top research methodologist who has existed on this planet. He and Julian Stanley published a very small book in the 1960s, Experimental and Quasi-Experimental Designs for Research (Campbell and Stanley 1966). You need to read it, if you have not already. They would call this first part of the TE framework “statistical conclusion validity”; that is, how researchers quantitatively make sense of their data by the statistical procedures they choose to use and how other researchers qualitatively make sense of data when they are not using formal statistical procedures.

The second “inferential” issue also comes from my psychology background where I was steeped in experimental design. This raises the question of whether a researcher’s study used a design that provides the researcher strong Internal Validity (Campbell and Stanley 1966). Ultimately, this addresses whether an unconfounded true experiment was conducted—one that allows the researcher to make causal attributions that are supported directly by the study’s design, and thus are not based merely on the researcher’s own speculative logic. We, as researchers, are going to be drawing cause-and-effect conclusions all the time, but most of the time we cannot say that our study’s design supports those strongly. Rather, it is our logic, our experience, our judgment, etc., that we apply on which we base those claims, and not the Internal Validity of our study.

So, figure 2 is what the Total Error framework, which I am advancing, looks like in the big picture.

A Total Quality Framework: The Qualitative Side of Total Error

I am primarily a quantitative researcher, but I have done a good deal of qualitative research, mostly from the mid-1970s to the early 1990s, including a lot of federally funded evaluation research. Over the years, my use of qualitative methods has comprised primarily in-depth interviewing (including cognitive interviewing), content analysis, focus groups, and observational research. I value qualitative research a lot because of its considerable potential to enrich our understanding of the phenomena we are studying.

So far today, most of the language I have used likely has resonated with the quantitative researchers present. However, it may not have resonated as much (or at all) with qualitative researchers. To that end, about two years ago Margaret Roller, an AAPOR member who is a qualitative researcher, contacted me with a question prompted by my prior postings on AAPORnet. It led to a telephone conversation between us, and that conversation eventually led us to the conclusion that we ought to try to write a book together about qualitative research methods—and that is what we are in the midst of doing. I thank Margaret for all that I have learned from her about how qualitative researchers conceptualize and approach their research studies. It has been a very rewarding adventure so far, and our book is due to come out in 2014, published by Guilford Press. The book contains the TE concepts that I have been talking about, but uses different language for those concepts—language that is much more familiar to and embraced by qualitative researchers.

I want to show you the language and concepts that we are using in our book and how I think the TE framework maps onto this qualitative research terminology. We have worked very hard on developing this framework, which Margaret and I call the Total Quality Framework (TQF). As shown in figure 3, it comprises four interlocking components: Credibility, Analyzability, Transparency, and Usefulness.


Figure 3. Total Quality Framework for Qualitative Research Methods (Roller and Lavrakas forthcoming).


Credibility

This encompasses the two subareas of Scope and Measurement. From my perspective, Scope is how well a qualitative research study ultimately represents the target population to which the study’s findings are meant to apply. This is what Campbell and Stanley (1966) would broadly have called External Validity. From a TE perspective, Scope encompasses Coverage/Noncoverage and Coverage Error, Sampling Issues, and Nonresponse and Nonresponse Error.

The Measurement side of Credibility is what Campbell and Stanley would have called Construct Validity. From a TE perspective, it encompasses Specification Issues and Specification Error, and the various types of Measurement Error, including Instrument-Related, Participant-Related, Data Collector–Related, and Mode-Related Measurement Errors.


Analyzability

This addresses the completeness and accuracy of the analyses and the interpretations that are performed with the data that are gathered in a qualitative study. It encompasses Processing and Validation.

Processing refers to how the qualitative researcher “makes sense” of the data that are gathered. Validation refers to the ways the researcher carefully checks on, and considers threats to, the accuracy of her/his conclusions and thereby gains confidence that the analyses and the conclusions drawn from those analyses are “correct.” This is done by utilizing analytic techniques such as (a) peer debriefings; (b) reflexive journals; (c) the deviant case approach; and (d) triangulation. From a TE perspective, Analyzability encompasses Processing Error, Adjustment Error, and Inferential Error.


Transparency

This is the noble goal to which AAPOR is fully committed—that is, wanting all social, behavioral, and marketing researchers to practice full and accurate disclosure of their methods in their final documentation. Within the TQF perspective, this refers to the need for qualitative researchers to use “thick descriptions” and “rich details” in how they document what they did to gather and process their data, including describing how they made sense of it (i.e., analyzed it). This may also include the researcher commenting on other contexts to which her/his methods might be properly transferred and thus applied.


Usefulness

This final component of the TQF is all about what can be done with what one’s study has found. Usefulness is the ultimate goal of qualitative research; i.e., the ability to actually do something and/or advance knowledge with the outcomes—for example, whether a researcher can support, refine, or refute hypotheses, generate new hypotheses, and such; whether the researcher’s methods can be used again in other contexts; or how the study’s findings can and should be implemented by the client or sponsors.

Our hopes are that the TQF we are advancing will be of value to qualitative researchers who want to bring greater rigor and accountability to their research.

Tailoring the TE Perspective to Different Research Methodologies

I strongly believe the TE perspective should be applied to any social, behavioral, or marketing research method, be it qualitative or quantitative. I will demonstrate what I mean by this using experimentation, content analysis, and focus groups as exemplar research methods. In each case, I am not detailing all the ways that TE applies to the method, but selectively choosing to discuss a few of the key ways. (I am choosing not to address how TE applies to survey research because a great deal already has been written on that.)


Experimentation

Some may think that in planning a true experiment, essentially all the researcher needs to do is ensure that random assignment is used to choose which of the subjects in the experiment are in the treatment condition(s) and which are in the control condition. But, as shown in figure 4, there are many other important considerations that the TE perspective identifies for researchers to think carefully about in order to increase the quality of the data their experiments will generate.


Figure 4. Tailoring the TE to Experimental-Design Research.

For example, from a coverage perspective, what will be the population to which the experimental findings are meant to generalize, and thus how well does the pool of subjects that is available to participate in the experiment represent that population? Furthermore, what limitations on generalizability related to noncoverage of the target population should be imposed on the experimental findings? Or, from the standpoint of allocating subjects to experimental conditions, should a simple random approach be used, or should a form of stratified allocation be deployed based on what is known about each member of the subject pool before he or she is assigned to only one of the conditions? When stratification is feasible, and when the stratifying attributes that are known about each subject prior to the random assignment are correlated with the dependent variable(s) the experiment is measuring, it is preferable to stratify that assignment, because simple randomization does not always lead to initial equivalence across conditions. Instead, by stratifying the pool of subjects before their assignment to conditions is carried out, the researcher ensures that the groups of subjects in each experimental condition will be equivalent at least on the characteristics that were used for the stratification.
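The stratified-allocation idea described above can be sketched as follows. This is an illustrative implementation, not a prescription; the subject pool, the gender stratum, and the condition names are all hypothetical:

```python
import random

def stratified_assignment(subjects, strata_key, conditions, seed=42):
    """Randomly assign subjects to conditions within each stratum, so
    that every condition receives a (near-)equal share of each stratum."""
    by_stratum = {}
    for s in subjects:
        by_stratum.setdefault(strata_key(s), []).append(s)
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    assignment = {}
    for stratum, members in by_stratum.items():
        rng.shuffle(members)  # random order within the stratum
        for i, s in enumerate(members):
            assignment[s] = conditions[i % len(conditions)]
    return assignment

# Hypothetical pool: 8 subjects stratified by gender, two conditions.
pool = [("s%d" % i, "F" if i < 4 else "M") for i in range(8)]
gender = dict(pool)
assign = stratified_assignment([sid for sid, _ in pool],
                               strata_key=lambda sid: gender[sid],
                               conditions=["treatment", "control"])
# Each gender contributes exactly 2 subjects to each condition.
```

With simple (unstratified) randomization of these 8 subjects, nothing would prevent, say, 3 of the 4 women landing in the treatment group; the within-stratum alternation above rules that out by construction.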

From the perspective of nonresponse, should the experimental researcher be concerned that bias in the experiment may have resulted because of subjects who were to participate in the experiment, but ultimately did not? If so, what is the size and nature of this nonresponse bias likely to be? And, how, if at all, should the experimental researcher adjust her/his data in regard to problems with noncoverage, the manner in which assignments were made to conditions, and/or nonresponse?

Turning to Measurement concerns in experimentation, and from a Specification Error standpoint, the researcher also should give very careful attention to making certain that the experimental treatments are not confounded in ways that make it impossible to sort out exactly what was different between the treatment condition(s) and the control condition. (For example, many survey-based methodological experiments are hopelessly confounded due to the failure of the researchers to randomly assign interviewers to one and only one of the experimental treatment conditions.) Also, from the standpoint of specification, the researcher should think carefully about which covariates should be gathered from or about each subject, regardless of which condition the subject has been assigned, so that the analyses will provide greater statistical power.

Also from a Respondent-Error standpoint, how will the researcher ensure that all subjects will be equally motivated to provide accurate data within the specific condition to which they are assigned? For example, too often the control condition in social, behavioral, or marketing experiment research is not as cognitively engaging as are the treatment conditions. When this occurs, it may become a confound in the experiment, making the data not fit for the purpose to which they were intended.


Content Analysis

Figure 5 shows how I believe TE should be applied to doing content analysis research. I have taught a graduate course in content analysis using a TE perspective and have voiced my belief that there typically is disproportionate attention given to the measurement side of content analysis and not enough to the representation side. Unless one has a limited and manageable population of content that has been chosen for the study (e.g., the fifty-two lead Sunday editorials in the New York Times in 1974), then it is incumbent upon the researcher to think carefully about what constitutes the entire population of content that is meant to be represented and to decide how extensively that content can be covered. Once those decisions are made, the researcher may need to redefine the population of content that the proposed content analysis study actually will represent. Then the researcher should be considering what type of sample should be drawn from the content to minimize sampling error in a quantitative study, or to minimize its equivalent in a qualitative study, including how to decide the number of elements of content that should be coded given the “precision” (confidence) that a qualitative researcher is striving to achieve. Also, the researcher must decide on what level the unit of analysis should be (e.g., a word, a sentence, a paragraph, a whole article, etc.). Furthermore, all sampled content may not prove to be available to the researcher, and if not, how is the researcher going to determine if these “missing” elements of content may contribute nonresponse bias to the study?

Figure 5. Tailoring the TE to Content Analysis.
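To make the representation-side planning concrete, here is a minimal Python sketch, not from the address itself, of how one might decide how many elements of content to code for a desired precision, with a finite-population correction for the small, bounded content populations discussed above. The function names and the 95-percent-confidence example are my own illustrative assumptions.

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Conservative sample size for estimating a coded proportion
    (e.g., the share of articles containing a theme) within the
    given margin of error; p = 0.5 is the worst case."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

def fpc_adjust(n0, population_size):
    """Finite-population correction: fewer elements need coding
    when the population of content itself is small."""
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))

# To estimate a coded proportion within +/-5 points at 95% confidence:
n = required_sample_size(0.05)      # 385 elements of content
n_small = fpc_adjust(n, 1000)       # 279 if only 1,000 elements exist
```

The point of the sketch is simply that precision targets, not convenience, should drive how much content gets coded.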

On the Measurement side of content analysis, researchers need to be very careful in how they devise their coding forms (and accompanying codebooks) if human coders are used to create data from the elements of content that have been sampled, or, if software is to be used to create data from textual content, in how the syntax that guides the coding is written. Researchers also need to give special and detailed consideration to how coders will be selected, trained, and monitored throughout the data-coding field period, and to how the codebook and coding form will be revised as new insights about coding are identified during the field period. Once the raw data in a content analysis study have been created via the coding processes used, the researcher must decide how to process those data (including cleaning them, transforming them into new variables, etc.) in order to create the final data set that is ready to be analyzed.
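One common way to monitor human coders during the field period is chance-corrected intercoder agreement. The following minimal sketch computes Cohen's kappa for two coders; the function and the example codes are hypothetical illustrations, not material from the address.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders who
    independently coded the same sampled content elements."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders classifying ten sampled paragraphs:
a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]
kappa = cohens_kappa(a, b)  # about 0.68 here
```

Tracking a statistic like this throughout the field period gives the researcher an early signal that the codebook or coder training needs revision.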


Now I will turn to tailoring the TE to focus-group research (see figure 6). Let us start by thinking about coverage. How often do focus-group researchers ask themselves, or say to their clients, "Is such and such the target population that we are interested in?" They do not have to use the "coverage" word, but are they asking and sufficiently answering an equivalent question? And are they then saying, "How are we going to cover that target population?" Again, one does not have to use those exact words, but the concepts of coverage and sampling frames are important to planning credible and useful focus groups. Thus, focus-group researchers should not just assemble the first ten bodies that are available and willing to participate. In taking such a "most convenient" approach, a rigor is missing that often could be gained with relatively little additional cost and time.

Figure 6. Tailoring the TE to Focus-Group Research.

And how much active thinking do focus-group researchers commonly do when it comes to nonresponse? When you invite people to a focus group and they turn you down right and left, just as telephone researchers are being turned down right and left, what does that say about the representativeness of the people who actually do turn out to participate in focus groups? Is there active thinking among focus-group researchers about how the people whom they sampled and invited, but who did not participate in the focus group, differ from those who did agree to participate? And to what extent are these differences likely to have biased the findings of the focus group compared to what would have been learned had a truly representative group of participants been assembled for the group discussion?

On the Measurement side of the Focus Group method, what constructs should be specified for measurement and how are those constructs going to be operationalized in the group discussion agenda? And how is the moderator going to remain in control of the flow and timing of the group discussion without stifling important conversation from the participants and without biasing what the participants say? In turn, how is the moderator going to create a “safe and welcoming” environment so that all participants are motivated to converse fully and accurately about the topics the focus group is discussing?


What little I have said here about the TE perspective as adapted to planning, conducting, and interpreting Experiments, Content Analysis, and Focus Group research is simply to illustrate how TE is relevant to every social, behavioral, and marketing research method. And I see it as the responsibility of researchers who conduct these and other types of quantitative and qualitative research to use a comprehensive framework to guide the thinking about their research studies. Obviously, I believe the TE framework is the best one for this purpose.

Ways to Apply TE on an Everyday Basis

In addition to tailoring TE to whatever research method one wants to utilize, I believe TE should be applied routinely in other ways.

The following are examples of how I find applying the Total Error perspective to be useful. Generally, the clients, coworkers, and students with whom I interact find the TE perspective helpful once they come to understand it. It provides them with a logical and comprehensive structure for better understanding the entire social, behavioral, and marketing research process. Within an hour, I can provide them an overview of the Total Error framework that leaves them comfortable enough to begin discussing the specifics.


If you are teaching any kind of research methods curriculum, I believe it should be organized around Total Error. This includes any class project you and your students may plan, conduct, and analyze so as to help them better ground themselves in research methods. In my own case, I start out providing a comprehensive overview of TE, taking the first month or so of the semester; and then I guide the class through the process of planning, implementing, and analyzing our class research project so that all aspects of the TE are taken into consideration. After the project is completed, we cycle back and thoroughly discuss it from a TE perspective. And instead of a final exam, I assign a final paper in which each student must critique the class project from the TE perspective.


Too often, literature reviews are conducted in a way that suggests the author has not thought critically about the reliability and validity of each study reviewed. Rather, these authors appear to treat the various studies as though all are equally reliable and valid. I find that students are particularly likely to conduct such reviews, including many who are writing their dissertations. (That speaks to my belief that a research methods course should be taken very early in one's degree program.) However, not all studies should be equally "weighted" in a literature review (including meta-analyses), and to avoid this mistake, the reviewer should apply a logical and comprehensive system to evaluate whether each paper, article, chapter, or book being reviewed reports on research that was carried out in a manner that yielded reliable and accurate findings. After all, why would one want to use the findings from an unreliable or invalid study in his or her literature review? Granted, the type of literature review I am advocating will require much greater knowledge on the reviewer's part and much more time to conduct, but is that not what good science requires of us?
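To make the idea of "weighting" studies concrete, here is a minimal sketch of fixed-effect inverse-variance pooling, the standard way a meta-analysis gives noisier studies less influence. This is my own illustration rather than part of the address, and the three studies' effect sizes and variances are hypothetical.

```python
def inverse_variance_pool(effects, variances):
    """Fixed-effect pooled estimate: each study's effect is weighted
    by the inverse of its variance, so less precise (noisier)
    studies count for less in the combined result."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: one large, precise study (effect 0.10)
# and two small, noisy studies (effects 0.40 and 0.35).
effects = [0.10, 0.40, 0.35]
variances = [0.01, 0.09, 0.16]
pooled, pooled_var = inverse_variance_pool(effects, variances)
# The pooled effect lands near 0.14, dominated by the precise study.
```

A reviewer applying the TE framework is doing informally what these weights do formally: letting the more reliable and valid studies carry more of the conclusion.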


Whether one is evaluating one's own or others' research, one ultimately should be asking, "How much confidence can I place in this information being correct 'enough' for the decisions I need to make?" The TE framework provides an excellent basis from which to reach decisions about the credibility and ultimate usefulness of any research study.


I believe TE has great relevance to creating RFPs, preparing proposals, and scoring proposals. In October 1993, Bob Groves and I taught a course in Albuquerque about Total Survey Error to a large group of state-level evaluation officers from around the nation. This was the first time I recall explicitly recommending that if one is writing an RFP, it should be structured along the lines of the TE framework so that those writing proposals are required to address all the various errors the funders identify as being important to their needs. That is, I advised that their RFPs should be explicit about how the proposed research designs that vendors submit will work to reduce various errors and/or how the vendors propose to measure the nature and size of those errors. Furthermore, when one is scoring proposals, a TE approach provides a way to systematically and comprehensively evaluate how well the proposed research designs will meet the needs of accurately measuring the phenomena to be studied.


TE explicitly identifies all the key issues that researchers should disclose about their research study, be that a quantitative or qualitative study. That is why AAPOR’s Transparency Initiative (TI) is being developed to require that those survey organizations seeking TI certification readily disclose a considerable amount of information related to many of the components of the TE framework. But even if a researcher does not disclose that detailed level of information about a research study, I believe that researchers will benefit greatly by holding themselves to a higher standard in forcing themselves to think about all the TE components as they relate to each research project they conduct.


Those of us who do expert legal testimony and other related work know that there is a dearth of literature about how one should organize one’s reports and testimonies related to research-based evidence. To my knowledge, none of the existing literature covers the breadth of the components in the Total Error perspective, and there are some very conspicuous topics that are missing from what most lawyers and judges have been taught about social, behavioral, or marketing research in terms of the potential problems that can undermine a study’s reliability and/or validity. Thus, I have found that using the TE perspective is extremely useful in my legal work, whether it is to write expert reports that criticize an opposing party’s study or to plan an original research study for a client so it hopefully will hold up to the scrutiny and criticism of opposing experts. By introducing the TE framework at the start of my work on a case, I explain the “big picture” perspective I propose to take and how each TE component is related to that Big Picture. And, I have found that introducing this Big Picture, based on TE, strengthens all I have to say about each individual component of the framework.


And I will go so far as to say that “Yes, using a TE perspective helps me in my everyday life.” I think that the TE framework has direct application to the decisions that we make in life and to assessing the quality of the information (its reliability and validity) on which we base those decisions.

So, with that said, I would like to thank you for your attention today. I greatly appreciate the opportunity that the membership has afforded me to present this 2013 AAPOR presidential address.