Mario Callegaro

Mario Callegaro is a user experience survey researcher at Google UK, London, on the Cloud Platform User Experience (CPUX) team. He works on survey-related projects within his organization and consults with numerous other internal teams on survey design, sampling, questionnaire design, and online survey programming and implementation. Mario holds an M.S. and a Ph.D. in Survey Research and Methodology from the University of Nebraska-Lincoln. Before joining Google, Mario worked as a survey research scientist for the KnowledgePanel, now run by Ipsos and previously known as the Knowledge Networks KnowledgePanel. His current research areas are user experience research, web survey design, smartphone surveys, survey paradata, and questionnaire design, on which he has published numerous papers, book chapters, and conference presentations. In May 2014 he published an edited Wiley book titled Online Panel Research: A Data Quality Perspective together with Reginald P. Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick, and Paul J. Lavrakas. Mario also completed a book titled "Web Survey Methodology" with Katja Lozar Manfreda and Vasja Vehovar of the University of Ljubljana, Slovenia, published by Sage in June 2015 and also available as an open-access PDF and EPUB at this Sage URL: https://study.sagepub.com/web-survey-methodology
Authored Publications
    Welcome to the 14th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. Special issues of journals have a space in this article because, in our view, they are like edited books. We also added review papers from the Annual Reviews journal series because these papers are seminal, state-of-the-art write-ups: a short book, if you wish, on a specific subject. This article is an update of the books and journals published in the 2020 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. You will note that we use very broad definitions of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. This is because there are many books published in different outlets that can be very useful to the readers of Survey Practice, even if they do not come from traditional sources of survey content. It is unlikely we have exhaustively listed all new books in each subcategory; we did our best scouting different resources and websites, but we take full responsibility for any omissions. The list is also focused only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (October 2023) and with a printed copyright year of 2021. Books are listed based on relevance to the topic, and no judgment is made about the quality of the content. We let the readers do so. If you want to send information for the next issue, please send it to surveypractice.new.books@gmail.com.
    This article discusses survey paradata, also called log data, and how they can be used to shed light on questionnaire design issues. Written for the AAPOR newsletter.
    Welcome to the 15th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. Special issues of journals have a space in this article because, in our view, they are like edited books. We also added review papers from the Annual Reviews journal series because these papers are seminal, state-of-the-art write-ups: a mini book, if you wish, on a specific subject. This article is an update of the books and journals published in the 2021 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. You will note that we use very broad definitions of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. This is because there are many books published in different outlets that can be very useful to the readers of Survey Practice, even if they do not come from traditional sources of survey content. It is unlikely we have exhaustively listed all new books in each subcategory; we did our best scouting different resources and websites, but we take full responsibility for any omissions. The list is also focused only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (October 2023) and with a printed copyright year of 2022. Books are listed based on relevance to the topic, and no judgment is made about the quality of the content. We let the readers do so. If you want to send information for the next issue, please send it to surveypractice.new.books@gmail.com.
    After having reviewed hundreds of surveys in my career, I would like to share what I have learned. In this hands-on workshop I will discuss how you can improve the survey you are planning to field for your UX project. We will start with a decision tree to decide whether a survey is actually the best method for your research process. Then we will move to some feasibility issues such as sample size and response rates. Before starting the second part of the workshop, I will present the top 10 issues I have found in my career reviewing surveys. In the second part of the workshop I will answer survey questions from the audience using the chat feature in Hopin (or join on screen to pose live questions).
    In this study, we explore the use of a hybrid approach in online surveys, combining traditional form-based closed-ended questions with open-ended questions administered by a chatbot. We trained a chatbot using OpenAI's GPT-3 language model to produce context-dependent probes to responses given to open-ended questions. The goal was to mimic a typical professional survey interviewer scenario, where the interviewer is trained to probe the respondent when answering an open-ended question. For example, assume this initial exchange: “What did you find hard to use or frustrating when using Google Maps?” “It wasn't easy to find the address we were looking for.” The chatbot would follow up with “What made it hard to find the address?” or “What about it made it difficult to find?” or “What steps did you take to find it?”. The experiment consisted of a Qualtrics survey with 1,200 participants, who were randomly assigned to one of two groups. Both groups answered closed-ended questions, but the final open-ended question differed between the groups, with one group receiving a chatbot and the other group receiving a single open-ended question. The results showed that using a chatbot resulted in higher-quality and more detailed responses compared to the single open-ended question approach, and respondents indicated a preference for using a chatbot for open-ended questions. However, respondents also noted the importance of avoiding repetitive probes and expressed dislike for the uncertainty around the number of required exchanges. This hybrid approach has the potential to provide valuable insights for survey practitioners, although there is room for improvement in the conversation flow.
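    A minimal sketch of how such context-dependent probing could be assembled is shown below. The prompt wording, the build_probe_prompt helper, and the idea of sending the result to a GPT-style completion endpoint are illustrative assumptions, not the study's actual implementation.

```python
# Sketch: assembling a probe-generation prompt from a survey question and the
# respondent's open-ended answer. The prompt text is a hypothetical example;
# send it to whichever completion model/endpoint you use.

def build_probe_prompt(question: str, answer: str) -> str:
    """Build a prompt asking the model for one short, neutral follow-up probe."""
    return (
        "You are a professional survey interviewer. Ask one short, neutral "
        "follow-up question that probes the respondent's answer without "
        "leading them.\n"
        f"Survey question: {question}\n"
        f"Respondent answer: {answer}\n"
        "Follow-up probe:"
    )

if __name__ == "__main__":
    prompt = build_probe_prompt(
        "What did you find hard to use or frustrating when using Google Maps?",
        "It wasn't easy to find the address we were looking for.",
    )
    print(prompt)
```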
    Customer satisfaction surveys are common in technology companies like Google. The standard satisfaction question asks respondents to rate how satisfied or dissatisfied they are with a product or service, generally going from very satisfied to very dissatisfied. When the scale is presented vertically, some survey literature suggests placing the positive end of the scale on top, as “up means good,” to avoid confusing respondents. We report on two studies. The first study shows that reversing the response options of a bipolar satisfaction question (very dissatisfied on top) leads to significantly lower reported satisfaction. In a between-group experiment, 3,000 Google Opinion Rewards (smartphone panel) respondents took a one-question satisfaction survey. When the response options were reversed, participants were 10 times more likely to select the very dissatisfied option (5% versus 0.5% prevalence). They also took 11% more time to answer the reversed scale. The second study shows that this effect can be partially explained by respondents mistaking the word dissatisfied for satisfied. About 1,750 people responded to a reversed satisfaction question in an in-product survey on fonts.google.com. In a follow-up verification question (“You selected [answer option], was this your intention?”), 42.1% of the respondents indicated that they had selected very dissatisfied by mistake. Open-ended feedback suggests that respondents had not read the options carefully and expected the positive option on top. More experiments should be conducted on different samples to better understand the interaction of scale orientation with the type of scale (unipolar vs. bipolar).
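    To give a sense of the size of the reported reversal effect (0.5% versus 5% choosing “very dissatisfied”), here is a small sketch of a two-group comparison. The per-group n of 1,500 and the back-calculated counts are assumptions for illustration, not the study's raw data.

```python
# Sketch: chi-square comparison of the "very dissatisfied" share between the
# standard and reversed scale conditions. Counts are back-calculated from the
# reported 0.5% and 5% under an assumed 1,500 respondents per group.
from scipy.stats import chi2_contingency

n_per_group = 1_500
very_dissatisfied = {"standard": round(0.005 * n_per_group),  # ~8 respondents
                     "reversed": round(0.05 * n_per_group)}   # 75 respondents

table = [
    [very_dissatisfied["standard"], n_per_group - very_dissatisfied["standard"]],
    [very_dissatisfied["reversed"], n_per_group - very_dissatisfied["reversed"]],
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```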
    It is a common practice in market research to set up cross-sectional survey trackers. Although many studies have investigated the accuracy of non-probability-based online samples, less is known about their test-retest reliability, which is of key importance for such trackers. In this study, we wanted to assess how stable measurement is over short periods of time, so that any changes observed over long periods in survey trackers could be attributed to true changes in sentiment rather than sample artifacts. To achieve this, we repeated the same 10-question survey of 1,500 respondents two weeks apart in four different U.S. non-probability-based samples. The samples included: Qualtrics panels, representing a typical non-probability-based online panel; Google Surveys, representing a river sampling approach; Google Opinion Rewards, representing a mobile panel; and Amazon MTurk, not a survey panel in itself but de facto used as such in academic research. To quantify test-retest reliability, we compared the response distributions from the two survey administrations. Given that the attitudes measured were not expected to change in a short timespan and no relevant external events were reported during fielding that could affect the attitudes, the assumption was that the two measurements should be very close to each other, aside from transient measurement error. We found that two of the samples produced remarkably consistent results between the two survey administrations, one sample was less consistent, and the fourth sample had significantly different response distributions for three of the four attitudinal questions. This study sheds light on the suitability of different non-probability-based samples for cross-sectional attitude tracking.
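    One simple way to operationalize the wave-to-wave comparison described above is a chi-square test of the answer distributions from the two administrations of the same question. A sketch follows; the category counts are invented for illustration and are not the study's data.

```python
# Sketch: test-retest comparison of one question's answer distribution across
# two administrations two weeks apart. Counts below are illustrative only.
from scipy.stats import chi2_contingency

wave_1 = [310, 520, 360, 220, 90]   # respondents per answer category, wave 1
wave_2 = [295, 540, 355, 215, 95]   # respondents per answer category, wave 2

chi2, p, dof, _ = chi2_contingency([wave_1, wave_2])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```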
    Kano Analysis: A Critical Survey Science Review
    Chris Chapman
    Proceedings of the 2022 Sawtooth Software Conference, May 2022, Orlando, FL, Sawtooth Software (2022)
    The Kano method gives a “compelling” answer to questions about features, but it is impossible to know whether it is a correct answer. To put it differently, it will tell a story, quite possibly an incorrect story. This is because the standard Kano questions are low-quality survey items, often paired with questionable theory and scoring. The concepts are based on durable consumer goods and may be inapplicable for technology products. We follow our theoretical assessment of the Kano method with empirical studies to examine the response scale, reliability, validity, and sample size requirements. We find that Kano validity is suspect on several counts, and a common scoring model is inappropriate because the items are multidimensional. Beyond the questions about validity, we find that category assignment may be unreliable with small samples (N < 200). Finally, we suggest alternatives that obtain similarly compelling answers using higher quality survey methods and analytic practices.
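    For readers unfamiliar with the “common scoring model” the abstract critiques, below is a sketch of one widely cited Kano evaluation table that maps the functional/dysfunctional answer pair for a feature to a category. Cell assignments vary somewhat across sources, so treat this as an illustration of the general approach rather than the scoring examined in the paper.

```python
# Sketch of one commonly used Kano evaluation table: each feature is asked
# twice ("How would you feel if the product had X?" / "...did not have X?")
# and the answer pair is mapped to a category. Illustrative only.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]  # standard options

EXPLICIT_CELLS = {
    ("like", "like"): "Questionable",
    ("like", "must-be"): "Attractive",
    ("like", "neutral"): "Attractive",
    ("like", "live-with"): "Attractive",
    ("like", "dislike"): "One-dimensional",
    ("dislike", "dislike"): "Questionable",
}

def classify(functional: str, dysfunctional: str) -> str:
    if (functional, dysfunctional) in EXPLICIT_CELLS:
        return EXPLICIT_CELLS[(functional, dysfunctional)]
    if dysfunctional == "dislike":
        return "Must-be"      # must-be/neutral/live-with paired with "dislike"
    if dysfunctional == "like" or functional == "dislike":
        return "Reverse"      # respondent prefers not having the feature
    return "Indifferent"      # the middle of the table

print(classify("like", "dislike"))     # One-dimensional
print(classify("neutral", "dislike"))  # Must-be
```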
    Quant UX Con 2022 was the first ever general industry conference for Quantitative User Experience Researchers. When it was planned in the Fall of 2021, we expected to host an in-person, loosely structured “unconference” event for 150–200 people. By Spring 2022, registrations exceeded 2,000 people and the organizing committee radically revised the format to be an online conference open to anyone, anywhere. When the event occurred in June 2022, there were over 2,500 attendees with an average viewing time of 7.5 hours. It was a surprise and a delight to meet attendees from all over the world — more than 70 countries — who were interested in Quant UX. This volume compiles the papers and slides from presenters at Quant UX Con 2022. We are excited to share these with you! As you review the materials here, please keep a few points in mind:
    ● There were no recordings of the talks. When we planned the event, it was expected to be in person, and speakers expected it not to be recorded.
    ● Not every presentation is included. Some speakers were not able to include their materials due to publication restrictions. Other sessions were discussion panels that had no materials other than audience questions.
    ● The materials have varied formats. Some presenters shared raw slides; others shared annotated slides; and others shared full papers.
    ● If you have any questions or wish to follow up with authors, please contact them directly.
    In addition to this PDF, individual files are available at https://bit.ly/3SruyKD
    Welcome to the 12th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. After a hiatus due to the pandemic, which affected my productivity, I am publishing this 2019 update, and shortly I will publish the 2020 update. Special issues of journals have a space in this article because, in my view, they are like edited books. I also added review papers from the Annual Reviews journal series because these papers are seminal, state-of-the-art write-ups: a mini book, if you wish, on a specific subject.
    Estimating Covid infection rates in England: a look at administrative records, surveys, and Big Data. Applying the Total Survey Error, Total Error Framework, and Fit For Purpose perspectives to a crucial measurement topic. Roundtable paper for the session on Big Data at the conference “The Future of Survey Research,” hosted by Duke University with support from the National Science Foundation: https://sites.duke.edu/surveyresearch/
    Welcome to the 13th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. After a hiatus due to the pandemic, which affected my productivity, I am publishing this 2020 update. Special issues of journals have a space in this article because, in my view, they are like edited books. I also added review papers from the Annual Reviews journal series because these papers are seminal, state-of-the-art write-ups: a mini book, if you wish, on a specific subject.
    Automatic Versus Manual Forwarding in Web Surveys - A Cognitive Load Perspective on Satisficing Responding
    Arto Selkälä
    Mick P. Couper
    Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis, Springer (2020), pp. 130-155
    We examine satisficing respondent behavior and the cognitive load of participants in web survey interfaces applying automatic forwarding (AF) or manual forwarding (MF) to move respondents to the next item. We create a theoretical framework based on Cognitive Load Theory (CLT), the Cognitive Theory of Multimedia Learning (CTML), and Survey Satisficing Theory, also taking into account the latest findings of cognitive neuroscience. We develop a new method to measure satisficing responding in web surveys. We argue that the cognitive response process in web surveys should be interpreted starting at the level of sensory memory instead of at the level of working memory. This approach allows researchers to analyze the accumulation of cognitive load across the questionnaire based on observed or hypothesized eye movements, taking into account the interface design of the web survey. We find that MF reduces both average item-level response times and the standard deviation of item-level response times. This suggests support for our hypothesis that the MF interface, as a more complex design including previous and next buttons, increases satisficing responding and also generates a higher total cognitive load for respondents. The findings reinforce the view in HCI that reducing the complexity of interfaces and the presence of extraneous elements reduces cognitive load and facilitates the concentration of cognitive resources on the task at hand. It should be noted that the evidence is based on a relatively short survey among university students. Replication in other settings is recommended.
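    The core descriptive comparison reported above (mean and spread of item-level response times by forwarding condition) can be sketched in a few lines; the timings below are invented for illustration.

```python
# Sketch: mean and standard deviation of item-level response times for the
# automatic-forwarding (AF) and manual-forwarding (MF) conditions.
# The times are illustrative, not the study's data.
from statistics import mean, stdev

response_times = {            # seconds per item, one list per condition
    "AF": [4.1, 3.8, 5.0, 4.4, 6.2, 3.9],
    "MF": [3.6, 3.5, 4.1, 3.8, 4.0, 3.7],
}

for condition, times in response_times.items():
    print(f"{condition}: mean = {mean(times):.2f}s, sd = {stdev(times):.2f}s")
```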
    Response Option Order Effects in a Cross-Cultural Context: An Experimental Investigation
    Rich Timpone
    Marni Hirschorn
    Vlad Achimescu
    Maribeth Natchez
    2019 Conference of the European Association for Survey Research (ESRA), Zagreb (2019) (to appear)
    A response option order effect occurs when different orders of rating scale response options lead to different distributions or functioning of survey questions. Theoretical interpretations, notably satisficing, memory bias (Krosnick & Alwin, 1987), and anchor-and-adjustment (Yan & Keusch, 2015), have been used to explain such effects. Visual interpretive heuristics (especially “left-and-top-mean-first” and “up-means-good”) may also provide insights into how the positioning of response options may affect answers (Tourangeau, Couper, & Conrad, 2004, 2013). Most existing studies that investigated response option order effects were conducted in mono-cultural settings. However, the presence and extent of response option order effects may be affected by “cultural” factors in a few ways. First, interpretive heuristics such as “left-means-first” may work differently due to varying reading conventions (e.g., left-to-right vs. right-to-left). Furthermore, people within cultures where there are multiple primary languages and multiple reading conventions might possess different positioning heuristics. Finally, respondents from different countries may have varying degrees of exposure and familiarity with a specific type of visual design. In this experimental study, we investigate rating scale response option order effects across three countries with different reading conventions and industry norms for answer scale design: the US, Israel, and Japan. The between-subject factor of the experiment consists of four combinations of scale orientation (vertical and horizontal) and the positioning of the positive end of the scale. The within-subject factors are question topic area and the number of scale points. The effects of device (smartphone vs. desktop computer/tablet), age, gender, education, and the degree of exposure to left-to-right content will also be evaluated. We incorporate a range of analytical approaches: distributional comparisons, analysis of response latency and paradata, and latent structure modeling. We will discuss implications for choosing response option orders for mobile surveys and for comparing data obtained from different response option orders.
    Welcome to the 10th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. Yes, it is the 10th anniversary of this paper series, started in March 2009. In the first article there were only books on public opinion and survey methods, but over the years I added more and more topics related to survey methods, given that our science is becoming more interdisciplinary than ever. Special issues of journals have a space too because, in my view, they are like edited books. Finally, I also added review papers from the Annual Reviews journal series because these papers are seminal, state-of-the-art write-ups: a mini book, if you wish, on a specific subject. I hope the readers have enjoyed the articles over the years and were able to find interesting books to improve their knowledge of a particular subject.
    Detecting Response Scale Inconsistency in Real Time
    Carol Haney
    74th Annual Conference of the American Association for Public Opinion Research (2019) (to appear)
    Researchers often face the challenge of seemingly conflicting respondent answers to different questions about the same subject. Some respondents will give positive open-ended evaluations of a subject immediately after having provided a low rating for the same subject. In some proportion of these possibly confused cases, the culprit may be features of the survey design that are influencing respondent answers in unexpected ways. This paper examines a research experiment where a commonly used 5-point, fully labeled, bipolar scale was presented in four ways: vertical orientation with positive on top; vertical with negative on top; horizontal orientation with positive on the left; and horizontal with positive on the right. We look at two groups of respondents: those who responded via desktop and those who responded via mobile. Each respondent was assigned to two of five unipolar and two of five bipolar response scales. The questions asked about physical and mental health, financial situation, and work satisfaction. In total we used about 4,500 U.S. respondents from the online panel Survey Sampling International, with about 450 respondents for each desktop condition and about 650 for each mobile condition (a 2-by-4 design). Mobile respondents tended to be younger, more often female, and slightly less educated than desktop respondents. For each condition, a follow-up open-ended question asked why the respondent gave the score they gave. In real time, right after the respondent completed the open-ended response, the response was auto-coded by the Google Cloud Natural Language API AnalyzeSentiment web service on a scale of -1 to 1. Then, the sentiment and the scale response were checked for inconsistency (the scale answer was positive and the open-ended response was auto-coded as having negative sentiment, or the scale answer was negative and the open-ended response was auto-coded as having positive sentiment). If an inconsistency was identified, the respondent was given the opportunity to change one of their answers and asked for the reasons why they chose to change their response. Paradata logs such as time per question, number of clicks, and changes of answers were also collected. In this work we assess two main questions: which of the four scales provides more consistent responses, and how accurate the sentiment auto-coding is compared with manual coding. The main findings were the following: mobile respondents wanted to change response options more often than desktop respondents; answering a scale starting from the negative end almost always takes longer; inconsistency was higher for mobile respondents; unipolar scales showed higher inconsistency overall than bipolar scales; unipolar questions showed higher inconsistency for the horizontal positive-left and vertical negative-top conditions; and bipolar scales showed higher inconsistency for vertically oriented scales. When we manually checked the Google NLP sentiment coding of the open-ended answers, the quality was very good given the amount of text written per open end.
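    A minimal sketch of the kind of real-time inconsistency check described above is shown below, using the Google Cloud Natural Language sentiment endpoint via its Python client library. The polarity thresholds and the 5-point scale coding are assumptions for illustration, not the values used in the study.

```python
# Sketch: flag a scale/open-end inconsistency by auto-coding the open-ended
# answer with Google Cloud Natural Language sentiment analysis (score in
# [-1, 1]). Thresholds and scale coding below are illustrative assumptions.
from google.cloud import language_v1

def sentiment_score(text: str) -> float:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

def is_inconsistent(scale_answer: int, open_end: str) -> bool:
    """scale_answer: 1 (very negative) .. 5 (very positive) on a 5-point scale."""
    score = sentiment_score(open_end)
    positive_rating, negative_rating = scale_answer >= 4, scale_answer <= 2
    return (positive_rating and score < -0.25) or (negative_rating and score > 0.25)

# A flagged case would trigger a follow-up prompt offering the respondent the
# chance to revise one of their answers, as in the experiment described above.
```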
    Exploring New Statistical Frontiers at the Intersection of Survey Science and Big Data: Convergence at “BigSurv18”
    Craig A. Hill
    Paul Biemer
    Trent Buskirk
    Ana Lucía Córdova Cazar
    Adam Eck
    Lilli Japec
    Antje Kirchner
    Stas Kolenikov
    Lars Lyberg
    Patrick Sturgis
    Survey Research Methods, vol. 13 (2019), pp. 123-135
    Held in October 2018, the Big Data Meets Survey Science conference, also known as “BigSurv18,” provided a first-of-its-kind opportunity for survey researchers, statisticians, computer scientists, and data scientists to convene under the same roof. At this conference, scientists from multiple disciplines were able to exchange ideas about how their work might influence and enhance the work of others. This was a landmark event, especially for survey researchers and statisticians, whose industry has been buffeted of late by falling response rates and rising costs, at the same time as a proliferation of new tools and techniques, coupled with the increasing availability of data, has resulted in “Big Data” approaches to describing and modelling human behavior.
    In this blog post, Daniel Russell and Mario Callegaro share some strategies for becoming better searchers when using online search engines.
    Welcome to the 9th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, and user experience research. This article is an update of the 2016 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. Given that the last update listed books available as of August 2016, I added a few books, papers, and special issues that came out in late 2016, so there is no gap. This year I added a new section on user experience research. User experience research is a growing field with many applications to desktop and mobile platforms. Given that almost all data collection methods in survey research rely heavily on technology, the learnings from the user experience field can be very beneficial to the survey researcher and practitioner. You will also note that I use very broad definitions of public opinion, survey methods, survey statistics, Big Data, and user experience research. This is because there are many books published in different outlets that can be very useful to the readers of Survey Practice, even if they do not come from traditional sources of survey content. It is unlikely I have exhaustively listed all new books in each subcategory; I did my best scouting different resources and websites, but I take full responsibility for any omissions. The list is also focused only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (January 2018) and with a copyright year of either 2016 or 2017. Books are listed based on relevance to the topic, and no judgment is made about the quality of the content. We let the readers do so.
    The Role of Surveys in the Era of “Big Data”
    The Palgrave Handbook of Survey Research, Palgrave (2018), pp. 175-192
    Survey data have recently been compared and contrasted with so-called “Big Data,” and some observers have speculated about how Big Data may eliminate the need for survey research. While both Big Data and survey research have a lot to offer, very little work has examined the ways that they may best be used together to provide richer datasets. This chapter offers a broad definition of Big Data and proposes a framework for understanding how the benefits and error properties of Big Data and surveys may be leveraged in ways that are complementary. This chapter presents several of the opportunities and challenges that may be faced by those attempting to bring these different sources of data together.
    Welcome to the 8th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, and Big Data. This year I officially added Big Data to the title, as there is very strong interest in the topic and surveys and Big Data are becoming more and more interrelated. This article is an update of the April 2015 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. It is unlikely that this list covers all new books in the field; I did my best scouting different resources and websites, but I take full responsibility for any omissions. The list also focuses only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (October 2016). Books are listed based on relevance to the topic, and no judgment is made about the quality of the content. We let the readers do so.
    An assessment of the causes of the errors in the 2015 UK general election opinion polls
    Patrick Sturgis
    Jouni Kuha
    Nick Baker
    Stephen Fisher
    Jane Green
    Will Jennings
    Benjamin E. Lauderdale
    Patten Smith
    Journal of the Royal Statistical Society, Series A (2017)
    The opinion polls that were undertaken before the 2015 UK general election underestimated the Conservative lead over Labour by an average of 7 percentage points. This collective failure led politicians and commentators to question the validity and utility of political polling and raised concerns regarding a broader public loss of confidence in survey research. We assess the likely causes of the 2015 polling errors. We begin by setting out a formal account of the statistical methodology and assumptions that are required for valid estimation of party vote shares by using quota sampling. We then describe the current approach of polling organizations for estimating sampling variability and suggest a new method based on bootstrap resampling. Next, we use poll microdata to assess the plausibility of different explanations of the polling errors. Our conclusion is that the primary cause of the polling errors in 2015 was unrepresentative sampling.
    DIY research
    Research World, vol. 2017 (2017), pp. 38-41
    An interview with Tim Macer about what DIY research means for market research.
    Grids (or matrix/table questions) are commonly used in self-administered surveys. In order to optimize online surveys for smartphones, grid designs aimed at small-screen devices are emerging. In this study we investigate four research questions regarding the effectiveness and drawbacks of different grid designs; more specifically, does the grid design affect: data quality, as indicated by breakoffs, satisficing behaviors, and response errors; response time; response distributions; and inter-relationships among questions? We conducted two experiments. The first experiment was conducted in April 2016 in Brazil, the US, and Germany. We tested a progressive grid, a responsive grid, and a collapsible grid. Results were analyzed for desktops/laptops only due to the small number of respondents who took the study via smartphones. We found that the collapsible grid elicited the highest number of error prompts for item nonresponse. The second experiment was fielded in August 2016, testing grid designs on three types of answer scales: a 7-point fully labeled rating scale, a 5-point fully labeled rating scale, and a 6-point fully labeled frequency scale. Respondents from the US and Japan to an online survey were randomly assigned to one of three conditions: (a) no grid, where each question was presented on a separate screen; (b) responsive grid, where a grid is shown on large screens and as a single-column vertical table on small screens (with the question stem fixed as a header); (c) progressive grid, where grouped questions were presented screen by screen with the question stem and sub-questions (stubs) fixed on top. Quotas were enforced so that half of the respondents completed the survey on large-screen devices (desktop/tablet computers) and the other half on smartphones. There were 600 respondents per grid condition per screen size per country. Findings showed that the progressive grid had less straightlining and fewer response errors, whereas the responsive grid had fewer break-offs. Differences were also found between grid designs in terms of response time and response distributions; however, patterns varied by country, screen size, and answer scale. Further analysis will explore the effect of grid design on question inter-relationships. While visual and interactive features impact the utility of grid designs, we found that the effects might vary by question type, screen size, and country. More experiments are needed to explore designs truly optimized for online surveys.
    ESOMAR/GRBN Guideline on Mobile Research
    Reg Baker
    Guy Rolfe
    Simon van Duivenvoorde
    Kathy Joe
    Steve Gutterman
    Betsy Leichliter
    Oriol Llaurado
    Peter Milla
    Paul Quinn
    Lisa Salas
    Michael Schlueter
    Navin Williams
    ESOMAR (2017)
    This new Guideline on Mobile Research aligns global policies with developing regulations and technology and the latest international developments for best practice in this area. Mobile research is a rapidly evolving field and a growing market which accounts for $1.8bn in global annual turnover and is widely used in advanced as well as developing economies. Mobile research ranges from calling or texting respondents to ask them questions, to participants videoing how they perform daily tasks such as cooking and, more recently, to collecting data generated by mobile devices such as geo-location data, all to provide researchers with richer insights about attitudes and behaviour. This new guideline is designed to help researchers address legal, ethical and practical considerations in using new technologies when conducting mobile research. The text has been drafted by a team of international experts to ensure that it incorporates the latest practices of mobile research, so that the new Guideline takes into account the continuing innovation in technology that has created information sources that are relevant to research. These include:
    - Passive data collection, including biometric data, photos and recordings, and in-store tracking
    - Mystery shopping through camera and video
    - Data that may have been collected for a non-research purpose which is used in research, including geolocation data from mobile providers or usage data from app providers
    Survey research is increasingly conducted using online panels and river samples. With a large number of data suppliers available, data purchasers need to understand the accuracy of the data being provided and whether probability sampling continues to yield more accurate measurements of populations. This paper evaluates the effect of different quota sampling strategies and sample sources (panel versus river samples) on the accuracy of estimates from a probability sample and from non-probability survey samples. Data collection was organized by the Advertising Research Foundation (ARF) in 2013. We compare estimates from 45 U.S. online panels of non-probability samples, 6 river samples, and one RDD telephone sample to high-quality benchmarks: population estimates obtained from large-scale face-to-face surveys of probability samples with extremely high response rates (e.g., ACS, NHIS, and NHANES). The non-probability samples were supplied by 17 major U.S. providers. Online respondents were directed to a third-party website where the same questionnaire was administered. The online samples were created using three quota methods: (A) age and gender within regions; (B) Method A plus race/ethnicity; and (C) Method B plus education. Mean questionnaire completion time was 26 minutes, and the average sample size was 1,118. Comparisons are made using unweighted and weighted data, with different weighting strategies of increasing complexity. Accuracy is evaluated using the absolute average error method, where the percentage of respondents who chose the modal category in the benchmark survey is compared to the corresponding percentage in each sample. The study illustrates the need for methodological rigor when evaluating the performance of survey samples.
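    The absolute average error metric described above is simple to compute: for each benchmark item, take the benchmark's modal category, compare the sample's percentage for that category with the benchmark percentage, and average the absolute gaps. A sketch with invented percentages follows.

```python
# Sketch of the absolute average error metric: average, across items, of the
# absolute gap between the sample's percentage in the benchmark's modal
# category and the benchmark percentage. Percentages below are illustrative.

benchmark = {  # item -> {category: benchmark %}
    "smokes_daily": {"yes": 18.0, "no": 82.0},
    "has_driver_license": {"yes": 87.0, "no": 13.0},
}
sample = {     # item -> {category: sample estimate %}
    "smokes_daily": {"yes": 24.5, "no": 75.5},
    "has_driver_license": {"yes": 90.0, "no": 10.0},
}

def absolute_average_error(benchmark, sample):
    errors = []
    for item, dist in benchmark.items():
        modal = max(dist, key=dist.get)                 # benchmark's modal category
        errors.append(abs(sample[item][modal] - dist[modal]))
    return sum(errors) / len(errors)

print(f"AAE = {absolute_average_error(benchmark, sample):.1f} percentage points")
```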
    Report of the Inquiry into the 2015 British general election opinion polls
    Patrick Sturgis
    Nick Baker
    Stephen Fisher
    Jane Green
    Will Jennings
    Jouni Kuha
    Ben Lauderdale
    Patten Smith
    National Centre for Research Methods (2016), pp. 115
    Executive Summary: The opinion polls in the weeks and months leading up to the 2015 General Election substantially underestimated the lead of the Conservatives over Labour in the national vote share. This resulted in a strong belief amongst the public and key stakeholders that the election would be a dead heat and that a hung parliament and coalition government would ensue. In historical terms, the 2015 polls were some of the most inaccurate since election polling first began in the UK in 1945. However, the polls have been nearly as inaccurate in other elections but have not attracted as much attention because they correctly indicated the winning party. The Inquiry considered eight different potential causes of the polling miss and assessed the evidence in support of each of them. Our conclusion is that the primary cause of the polling miss in 2015 was unrepresentative samples. The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters. The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree. The other putative causes can have made, at most, only a small contribution to the total error. We were able to replicate all published estimates for the final polls using raw microdata, so we can exclude the possibility that flawed analysis, or the use of inaccurate weighting targets on the part of the pollsters, contributed to the polling miss. The procedures used by the pollsters to handle postal voters, overseas voters, and unregistered voters made no detectable contribution to the polling errors. There may have been a very modest ‘late swing’ to the Conservatives between the final polls and Election Day, although this can have contributed, at most, around one percentage point to the error on the Conservative lead. We reject deliberate misreporting as a contributory factor in the polling miss on the grounds that it cannot easily be reconciled with the results of the re-contact surveys carried out by the pollsters and with two random surveys undertaken after the election. Evidence from several different sources does not support differential turnout misreporting making anything but, at most, a very small contribution to the polling errors. There was no difference between online and phone modes in the accuracy of the final polls. However, over the 2010-2015 parliament and in much of the election campaign, phone polls produced somewhat higher estimates of the Conservative vote share (1 to 2 percentage points). It is not possible to say what caused this effect, given the many confounded differences between the two modes. Neither is it possible to say which was the more accurate mode on the basis of this evidence. The decrease in the variance of the estimate of the Conservative lead in the final week of the campaign is consistent with herding, where pollsters make design and reporting decisions that cause published estimates to vary less than expected, given their sample sizes. Our interpretation of the evidence is that this convergence was unlikely to have been the result of deliberate collusion or other forms of malpractice by the pollsters.
    Metrics and Design Tool for Building and Evaluating Probability-Based Online Panels
    Charles DiSogra
    Social Science Computer Review, vol. 34 (2016), pp. 26-40
    Probability-based online panels are beginning to replace traditional survey modes for existing established surveys in Europe and the United States. In light of this, current standards for panel response rate calculations are herein reviewed. To populate these panels cost-effectively, more diverse recruitment methods, such as mail, telephone, and recruitment modules added to existing surveys, are being used, either alone or in combination. This results in panel member cohorts from different modes, complicating panel response rate calculations. Also, as a panel ages with inevitable attrition, multiple cohorts result from panel refreshment and growth strategies. Formulas are presented to illustrate how to handle multiple cohorts for panel metrics. Additionally, drawing on the relevant metrics used for a panel response rate, we further demonstrate a computational tool to assist planners in building a probability-based panel. This provides a means to estimate the recruitment effort required to build a panel of a predetermined size.
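    As a rough illustration of the cohort-based metrics discussed above, the sketch below computes a panel-level cumulative response rate as the product of component rates within each recruitment cohort, combined across cohorts weighted by cohort size. The component-rate structure and the numbers are simplifying assumptions for illustration, not the article's exact formulas.

```python
# Simplified sketch (an illustrative assumption, not the article's exact
# formulas): per-cohort cumulative response rate as the product of component
# rates, combined across cohorts weighted by cohort size.

cohorts = [
    # (recruited sample size, recruitment rate, profile rate, completion rate)
    (10_000, 0.12, 0.65, 0.55),   # e.g., an RDD-recruited cohort
    (20_000, 0.05, 0.60, 0.50),   # e.g., a mail-recruited refreshment cohort
]

def cumulative_response_rate(cohorts):
    total = sum(size for size, *_ in cohorts)
    return sum((size / total) * recr * pror * comr
               for size, recr, pror, comr in cohorts)

print(f"Panel cumulative response rate: {cumulative_response_rate(cohorts):.1%}")
```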
    Web Survey Methodology
    Katja Lozar Manfreda
    Vasja Vehovar
    Sage, London (2015), pp. 344
    Web Survey Methodology guides the reader through the past fifteen years of research in web survey methodology. It both provides practical guidance on the latest techniques for collecting valid and reliable data and offers a comprehensive overview of research issues. Core topics, from preparation to questionnaire design and from recruitment testing to analysis and survey software, are all covered in a systematic and insightful way. The reader will be exposed to key concepts and key findings in the literature, covering measurement, non-response, adjustments, paradata, and cost issues. The book also discusses the hottest research topics in survey research today, such as internet panels, virtual interviewing, mobile surveys and their integration with passive measurements, e-social sciences, mixed modes, and business intelligence. The book is intended for students, practitioners, and researchers in fields such as survey and market research, psychological research, official statistics, and customer satisfaction research. The book is available as open access in PDF and Epub format; click the download link above for the PDF version.
    Reviews:
    "Comprehensive and thoughtful! Those two words beautifully describe this terrific book. Internet surveys will be at the centre of survey research for many decades to come, and this book is a must-read handbook for anyone serious about doing online surveys well or using data from such surveys. No stone is left unturned - the authors address every essential topic and do so with a remarkable command of the big picture and the subtleties involved. Readers will walk away with a clear understanding of the many challenges inherent in conducting online studies and with an appropriate sense of optimism about the promise of the methodology and how best to implement it." Jon Krosnick, Frederic O. Glover Professor in Humanities and Social Sciences, Stanford University
    "This is an excellent, academic standard, book that every serious market researcher should own and consult. The authors have compiled an immense amount of useful and well-referenced information about every aspect of web surveys, creating an invaluable resource." Ray Poynter, Managing Director, The Future Place
    "Read on and pick from the basket of useful pieces of advice that Mario Callegaro and his colleagues have put together! This is a very rich resource for practitioners and students within web survey methodology." Ulf-Dietrich Reips, Professor, Department of Psychology, University of Konstanz
    "The authors guide us through the whole survey process and include modern developments, such as paradata and mobile surveys. A must-have for everyone planning an online survey." Edith de Leeuw, MOA Professor of Survey Methodology, Utrecht University
    "Their book takes the reader through the past fifteen years of research in Web survey research and methodology. It provides practical guidance on the current techniques for collecting valid and reliable data and offers a comprehensive overview of research issues. These include: preparation for questionnaire design; recruitment testing; data analysis; and survey software. These major topics are covered in a systematic and insightful way." Karl M. van Meter, Bulletin de Methodologie Sociologique
    "The authors of the present work have opted for the 'longest way', the 'most difficult', yet they expose all aspects related to the survey tool, enabling even a reader with little experience to carry out a complete investigation. It is a text accessible to the beginner and extremely exhaustive for the experienced, offering other points of view and an extensive bibliography in which to go deeper." Vidal Díaz de Rada, Revista Española de Investigaciones Sociológicas
    Yes–no answers versus check-all in self-administered modes. A systematic review and analyses
    Mike Murakami
    Ziv Tepman
    Vani Henderson
    International Journal of Market Research, vol. 57 (2015), pp. 203-223
    When writing questions with dichotomous response options, those administering surveys on the web or on paper can choose from a variety of formats, including a check-all-that-apply or a forced-choice format (e.g., yes-no) in self-administered questionnaires. These two formats have been compared and evaluated in many experimental studies. In this paper, we conduct a systematic review and a few meta-analyses of different aspects of the available research that compares these two formats. We find that endorsement levels increase by a factor of 1.42 when questions are posed in a forced-choice rather than check-all format. However, when comparing across a battery of questions, the rank order of endorsement rates remains the same for both formats. While most authors hypothesise that respondents endorse more alternatives presented in a forced-choice (versus check-all-that-apply) format because they process that format at a deeper cognitive level, we introduce the acquiescence bias hypothesis as an alternative and complementary explanation. Further research is required to identify which format elicits answers closer to the ‘true level’ of endorsement, since the few validation studies have proved inconclusive.
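    The 1.42 factor above summarizes per-item endorsement ratios (forced-choice "yes" percentage over check-all "checked" percentage), and the rank-order claim can be checked with a rank correlation. A sketch with invented percentages follows; it is not data from the review.

```python
# Sketch: per-item endorsement ratios (forced-choice % "yes" divided by
# check-all % checked) and a rank-order check across formats.
# Percentages are illustrative only.
from statistics import mean
from scipy.stats import spearmanr

check_all     = [22.0, 41.0, 15.0, 60.0]   # % checked per item
forced_choice = [31.0, 55.0, 23.0, 78.0]   # % answering "yes" per item

ratios = [fc / ca for fc, ca in zip(forced_choice, check_all)]
rho, _ = spearmanr(check_all, forced_choice)
print(f"mean endorsement ratio = {mean(ratios):.2f}, rank-order rho = {rho:.2f}")
```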
    ESOMAR/GRBN Online Research Guideline
    Reg Baker
    Peter Milla
    Melanie Courtright
    Brian Fine
    Philippe Guilbert
    Debrah Harding
    Kathy Joe
    Jackie Lorch
    Bruno Paro
    Efrain Ribeiro
    Alina Serbanica
    ESOMAR (2015)
    This ESOMAR/GRBN Online Research Guideline is designed to help researchers address legal, ethical and practical considerations in using new technologies when conducting research online, and is an update of guidance issued in 2011. To ensure that it is in line with the most recent practice, in addition to other updated sections, this new draft Guideline also contains:
    - New guidance on passive data collection requirements
    - A new section on incentives, sweepstakes and free prize draws
    - A new section on sample source and management
    - An updated section on specific online technologies such as tracking, cloud storage and static and dynamic IDs
    AAPOR Standard Definitions, 8th edition
    Tom E. Smith
    Rob Daves
    Paul J. Lavrakas
    Mick P. Couper
    Timothy P. Johnson
    Sara Zuckerbraun
    Katherine Morton
    David Dutwin
    Mansour Fahimi
    AAPOR (2015)
    Background: For a long time, survey researchers have needed more comprehensive and reliable diagnostic tools to understand the components of total survey error. Some of those components, such as margin of sampling error, are relatively easily calculated and familiar to many who use survey research. Other components, such as the influence of question wording on responses, are more difficult to ascertain. Groves (1989) catalogues error into three other major potential areas in which it can occur in sample surveys. One is coverage, where error can result if some members of the population under study do not have a known nonzero chance of being included in the sample. Another is measurement effect, such as when the instrument or items on the instrument are constructed in such a way as to produce unreliable or invalid data. The third is nonresponse effect, where nonrespondents in the sample that researchers originally drew differ from respondents in ways that are germane to the objectives of the survey. Defining final disposition codes and calculating survey outcome rates is the topic of this booklet. Often it is assumed, correctly or not, that the lower the response rate, the more questions there are about the validity of the sample. Although response rate information alone is not sufficient for determining how much nonresponse error exists in a survey, or even whether it exists, calculating the rates is a critical first step to understanding the presence of this component of potential survey error. By knowing the disposition of every element drawn in a survey sample, researchers can assess whether their sample might contain nonresponse error and the potential reasons for that error. With this report AAPOR offers a tool that can be used as a guide to one important aspect of a survey's quality. It is a comprehensive, well-delineated way of describing the final disposition of cases and calculating outcome rates for surveys conducted by telephone (landline and cell), for personal interviews in a sample of households, for mail surveys of specifically named persons (i.e., a survey in which named persons are the sampled elements), and for web surveys. AAPOR urges all practitioners to use these standardized sample disposition codes in all reports of survey methods, whether the project is proprietary work for private sector clients or a public, government, or academic survey. This will enable researchers to find common ground on which to compare the outcome rates for different surveys. The eighth edition (2015) was edited by Smith, who chaired the committee of Daves, Lavrakas, Couper, and Johnson. The revised section on establishment surveys was developed by Sara Zuckerbraun and Katherine Morton. The new section on dual-frame telephone surveys was prepared by a sub-committee headed by Daves, with Smith, David Dutwin, Mario Callegaro, and Mansour Fahimi as members.
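    As a pointer to what the booklet's outcome rates look like in practice, the sketch below computes the two most frequently cited response rates, RR1 and RR2, from final disposition counts. The counts are invented, and the full Standard Definitions document covers many more rates and the treatment of cases of unknown eligibility.

```python
# Sketch: AAPOR response rates RR1 and RR2 from final disposition counts
# (I = complete, P = partial, R = refusal/break-off, NC = non-contact,
#  O = other, UH = unknown household, UO = unknown other). Counts are
# illustrative; see Standard Definitions for the full set of rates.
def aapor_rr1(I, P, R, NC, O, UH, UO):
    return I / ((I + P) + (R + NC + O) + (UH + UO))

def aapor_rr2(I, P, R, NC, O, UH, UO):
    return (I + P) / ((I + P) + (R + NC + O) + (UH + UO))

dispositions = dict(I=820, P=60, R=450, NC=300, O=40, UH=200, UO=130)
print(f"RR1 = {aapor_rr1(**dispositions):.1%}, RR2 = {aapor_rr2(**dispositions):.1%}")
```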
    Welcome to the 7th edition of this column on recent books and journal articles in the field of public opinion, survey methods, and survey statistics. This year I had the chance to visit the London Book Fair, so I was actually able to see some of the new books in our field. This article is an update of the April 2014 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. It is unlikely that this list covers all new books in the field; I did my best scouting different resources and websites, but I take full responsibility for any omissions. The list also focuses only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (June 2015). Books are listed based on relevance to the topic, and no judgment is made about the quality of the content. We let the readers do so. Given that our field is becoming more and more interdisciplinary, this year I added a new section called “big data, social media and other relevant books” to capture areas that overlap more and more with public opinion, survey research, and survey statistics.
    Web Surveys for the General Population: How, why and when?
    Gerri Nicolaas
    Lisa Calderwood
    Peter Lynn
    Caroline Roberts
    Natcen (2014), pp. 22
    Cultural and technological change has made the web a possible and even desirable mode for complex social surveys, but the financial challenges faced by the Research Councils and the UK Government have accelerated this shift, creating an urgent need to explore both its potential and its hazards for a range of studies. While some progress in carrying out large-scale complex social surveys on the web has been made, there is still no consensus about how this can best be achieved while maintaining population representativeness and preserving data quality. To address this problem, the NCRM funded a network of methodological innovation, “Web Surveys for the General Population: How, Why and When?” (also known by its acronym GenPopWeb). A key objective of the network's activities was to review and synthesise existing knowledge about the use of web-based data collection for general population samples and to identify areas where new research is needed. The network was supported with funding from the ESRC National Centre for Research Methods under the initiative Networks for Methodological Innovation 2012. We are also grateful to the Institute of Education and the University of Essex for hosting the two main events of the network. We would like to thank all of the presenters at the events as well as the participants for their contribution. Particular thanks are due to the UK Core Group for their time, advice and support: Bill Blyth (TNS Global); Mario Callegaro (Google UK); Ed Dunn and Laura Wilson (ONS); Rory Fitzgerald (City University London); Joanna Lake (ESRC); Carli Lessof and Joel Williams (TNS BMRB); Nick Moon (GfK NOP); Patten Smith (Ipsos MORI); Professor Patrick Sturgis (NCRM); and Joe Twyman and Michael Wagstaff (YouGov UK).
    Internet and mobile ratings panels
    Philip M. Napoli
    Paul J. Lavrakas
    Online Panel Research: A Data Quality Perspective, Wiley (2014), pp. 387-407
    This chapter examines how Internet (PC and mobile) ratings panels are constructed, managed, and utilized. We provide an overview of the history and evolution of Internet/mobile ratings panels and examine the methodological challenges associated with creating and maintaining accurate and reliable panels. The research that has assessed the accuracy and validity of online panel data is critically discussed, as well as research that illustrates the types of scholarly and applied research questions that can be investigated using online ratings panel data. The chapter concludes with a discussion of the future of online ratings panels within the rapidly evolving field of Internet audience measurement.
    Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys
    Michael W. Link
    Joe Murphy
    Michael F. Schober
    Trent D. Buskirk
    Jennifer Hunter Childs
    Casey Langer Tesfaye
    Jon Cohen
    Elizabeth Dean
    Paul Harwood
    Josh Pasek
    Michael Stern
    AAPOR (2014)
    Preview abstract Public opinion research is entering a new era, one in which traditional survey research may play a less dominant role. The proliferation of new technologies, such as mobile devices and social media platforms, is changing the societal landscape across which public opinion researchers operate. The ways in which people both access and share information about opinions, attitudes, and behaviors have gone through perhaps a greater transformation in the last decade than at any previous point in history, and this trend appears likely to continue. The rapid adoption of smartphones and the ubiquity of social media are interconnected trends which may provide researchers with new data collection tools and alternative sources of information to augment or, in some cases, provide alternatives to more traditional data collection methods. However, this brave new world is not without its share of issues and pitfalls – technological, statistical, methodological, and ethical. As the leading association of public opinion research professionals, AAPOR is uniquely situated to examine and assess the potential impact of these “emerging technologies” on the broader discipline and industry of opinion research. In September 2012, AAPOR Council approved the formation of the Emerging Technologies Task Force with the goal of focusing on two critical areas: smartphones as data collection vehicles and social media as platform and information source. The purposes of the task force are to: define and delineate the scope and landscape of each area; describe the potential impact in terms of quality, efficiency, timeliness and analytic reach; discuss opportunities and challenges based on available research; delineate some of the key legal and ethical considerations; and detail the gaps in our understanding and propose avenues of future research. The report examines the potential impact of mobile technologies on public opinion research – as a vehicle for facilitating some aspect of the survey research process (e.g., recruitment, questionnaire administration, reducing burden) and/or augmenting or replacing traditional survey research methods (e.g., location data, visual data, and the like). View details
    Online Panel Research: A Data Quality Perspective
    Reg Baker
    Jelke Bethlehem
    Anja S. Göritz
    Jon A. Krosnick
    Paul J. Lavrakas
    Wiley (2014), pp. 512
    Preview abstract This edited volume provides new insights into the accuracy and value of online panels for completing surveys. Over the last decade, there has been a major global shift in survey and market research towards data collection using samples selected from online panels. Yet despite their widespread use, remarkably little is known about the quality of the resulting data. This edited volume is one of the first attempts to carefully examine the quality of the survey data being generated by online samples. It describes some of the best empirically based research on what has become a very important yet controversial method of collecting data. Online Panel Research presents 19 chapters of previously unpublished work addressing a wide range of topics, including coverage bias, nonresponse, measurement error, adjustment techniques, the relationship between nonresponse and measurement error, the impact of smartphone adoption on data collection, Internet rating panels, and operational issues. The datasets used to prepare the analyses reported in the chapters are available on the accompanying website: www.wiley.com/go/online_panel View details
    Social media in public opinion research
    Michael Link
    Joe Murphy
    Michael F. Schober
    Trent D. Buskirk
    Jennifer Hunter Childs
    Casey Langer Tesfaye
    Jon Cohen
    Elizabeth Dean
    Paul Harwood
    Josh Pasek
    Michael Stern
    AAPOR (2014), pp. 57
    Preview abstract AAPOR announces the release of an important report, Social Media in Public Opinion Research, authored by the Emerging Technologies Task Force. As social media platforms – such as Facebook, Twitter, and LinkedIn, to name a few – expand and proliferate, so does access to users’ thoughts, feelings and actions expressed instantaneously, organically, and often publicly, across these platforms. At issue is how researchers and others interested in public opinion can derive reliable and valid insights from the data generated by social media users. The report, Social Media in Public Opinion Research, highlights the use of social media as a vehicle for facilitating the survey research process (e.g., questionnaire development, recruitment, locating) and as a way of potentially supplementing or replacing traditional survey methods (e.g., content analysis of existing data). It offers an initial set of guidelines and considerations for researchers and consumers of social media-based studies, noting the opportunities and challenges in this new area. View details
    Sui sondaggi politici in Italia [On political polls in Italy]
    Piergiorgio Corbetta
    Il Mulino, vol. 5 (2014), pp. 827-838
    Preview abstract In this discussion piece, Piergiorgio Corbetta and Mario Callegaro analyse the results of Italian pre-election polls for the May 2014 European election. The paper is written in Italian. View details
    A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples
    Ana Villar
    David S. Yeager
    Jon A. Krosnick
    Online Panel Research: A Data Quality Perspective, Wiley (2014), pp. 23-53
    Preview abstract This chapter provides an overview of studies comparing the quality of data collected by online survey panels by looking at three criteria: (1) comparisons of point estimates from online panels to high-quality, established population benchmarks; (2) comparisons of the relationships among variables; and (3) the reproducibility of results across online panels based on probability samples and panels based on nonprobability samples. When looking at point estimates, all online survey panels differed to some extent from the population benchmarks. However, the largest comparison studies suggest that point estimates from online panels of nonprobability samples show larger differences from the benchmarks than online panels of probability samples. This finding is consistent across time and across studies conducted in different countries. Moreover, post-stratification weighting strategies did little, and inconsistently so, to reduce such differences for data coming from online panels of nonprobability samples, whereas these strategies did bring estimates from online panels of probability samples consistently closer to the benchmarks. When comparing relationships among variables, it was found that researchers would reach different conclusions when using online panels of nonprobability samples versus panels of probability samples. When looking at reproducibility of results, the limited evidence found suggests that there are no substantial differences in replication and effect size across probability and nonprobability samples for question wording experiments and when comparing student samples to other samples. It is worth noting that in pre-election polls, an area where abundant prior knowledge exists, online panels of nonprobability samples have consistently performed as well as, and in some cases better than, polls based on probability samples in predicting election winners. View details
    Online panel research: History, concepts, applications and a look at the future
    Reg Baker
    Jelke Bethlehem
    Anja S. Göritz
    Jon A. Krosnick
    Paul J. Lavrakas
    Online Panel Research: A Data Quality Perspective, Wiley (2014), pp. 1-22
    Preview abstract In this introductory chapter, written by the six editors of this volume, we introduce and attempt to systematize the key concepts used when discussing online panels. The connection between Internet penetration and the evolution of panels is discussed, as are the different types of online panels, their composition, and how they are built. Most online panels do not use probability-based methods, but some do, and the differences are discussed. The chapter also describes in some detail the process of joining a panel, answering initial profiling questions, and becoming an active panel member. We discuss the most common sampling techniques, highlighting their strengths and limitations, and touch on techniques to increase representativeness when using a non-probability panel. The variety of incentive methods in current use is also described. Panel maintenance is another key issue, since attrition is often substantial and a panel must be constantly refreshed. Online panels can be used to support a wide range of study designs, some cross-sectional and others longitudinal, where the same sample members are surveyed multiple times on the same topic. We also discuss industry standards and professional association guidelines for conducting research using online panels. The chapter concludes with a look to the future of online panels and, more generally, online sampling via means other than classic panels. View details
    Web Coverage in the UK and its Potential Impact on General Population Web Surveys
    Web surveys for the general population: How, why and when?, 25-26 February 2013. Institute of Education, London (2013)
    Preview abstract Mario Callegaro (Google UK) provided some data on internet access in the UK and the digital divide. He concluded that UK internet access is steadily increasing and is likely to soon reach a level of almost universal coverage. But high coverage does not imply that everyone with access would be able or willing to take part in web surveys. Furthermore, internet access is becoming mobile (e.g., smartphones), and respondents are using a wide variety of devices to answer web surveys. Making web surveys … View details
    Where Am I? A Meta-Analysis of Experiments on the Effects of Progress Indicators for Web Surveys
    Ana Villar
    Yongwei Yang
    Social Science Computer Review, vol. 31 (2013), pp. 744-762
    Preview abstract The use of progress indicators seems to be standard in many online surveys. Researchers include them in surveys in the hope that they will help reduce drop-off rates. However, there is no consensus in the literature regarding their effects. In this meta-analysis, we analyzed 32 randomized experiments comparing the drop-off rates of an experimental group who completed an online survey in which a progress indicator was shown with the drop-off rates of a control group to whom the progress indicator was not shown. In all the studies, a drop-off was defined as a discontinuance of the survey (at any point) after it had begun, resulting in failure to complete the survey. Three types of progress indicators were analyzed: constant, fast-to-slow, and slow-to-fast. Our results show that, overall, using a constant progress indicator does not significantly help reduce drop-offs and that the effectiveness of the progress indicator varies depending on its speed: fast-to-slow indicators reduced drop-offs, whereas slow-to-fast indicators increased drop-offs. We also found that among the studies in which a small incentive was promised, showing a constant progress indicator increased the drop-off rate. These findings question the common belief that progress indicators help reduce drop-off rates. View details
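    The three indicator types compared in the meta-analysis differ only in how the displayed percentage is mapped to the respondent's true position in the questionnaire. The sketch below is an illustration of that idea, not code from the study; the power-function form and the exponent values are assumptions chosen only to make the constant, fast-to-slow, and slow-to-fast patterns concrete.

```python
def displayed_progress(pages_done: int, total_pages: int, style: str = "constant") -> int:
    """Percentage to show on a progress bar (illustrative sketch only)."""
    fraction = pages_done / total_pages  # true share of pages completed
    exponents = {
        "constant": 1.0,       # displayed progress tracks true progress
        "fast_to_slow": 0.5,   # overstates progress early, then slows down
        "slow_to_fast": 2.0,   # understates progress early, then speeds up
    }
    return round(100 * fraction ** exponents[style])

# Halfway through a 20-page survey, the three styles would display 50%, 71%, and 25%.
for style in ("constant", "fast_to_slow", "slow_to_fast"):
    print(style, displayed_progress(10, 20, style))
```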
    Preview abstract Simon Chadwick from Research World interviews Mario Callegaro about why we need to share knowledge. View details
    Paradata in Web Surveys
    Improving Surveys with Paradata: Analytic Uses of Process Information, Wiley, Hoboken, NJ (2013), pp. 263-282
    Preview abstract An important technical distinction regarding the collection of paradata in web surveys is that they can be collected on the server side and/or the client side. In web surveys, paradata are categorized into device-type paradata and questionnaire navigation paradata. Device-type paradata provide information regarding the kind of device used to complete the survey. Questionnaire navigation paradata describe the entire process of filling out the questionnaire. This chapter provides examples of usage for device-type and questionnaire navigation paradata. Another use of paradata, pioneered in the early 2000s by Jeavons, is adaptive scripting. Adaptive scripting refers to using paradata in real time to change the survey experience for the respondent. The chapter also discusses two main classes of software for collecting paradata: dedicated paradata software and paradata collection tools embedded in commercial and non-commercial survey platforms. Ethical and communication issues are important considerations in using web survey paradata. View details
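    As a concrete illustration of the two paradata categories described in the chapter, the minimal sketch below (assumed names and fields, not code from the book) logs device-type paradata once per respondent and questionnaire navigation paradata as a stream of timestamped events on the server side.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class ParadataLog:
    """Illustrative server-side store for one respondent's web survey paradata."""
    device: dict = field(default_factory=dict)       # device-type paradata
    navigation: list = field(default_factory=list)   # questionnaire navigation paradata

    def record_device(self, user_agent: str, screen_width: int) -> None:
        # Device-type paradata are typically captured once, e.g. on the first page request.
        self.device = {"user_agent": user_agent, "screen_width": screen_width}

    def record_event(self, question_id: str, action: str) -> None:
        # Navigation paradata describe the process of filling out the questionnaire.
        self.navigation.append({"question": question_id, "action": action, "timestamp": time()})

log = ParadataLog()
log.record_device("Mozilla/5.0 (iPhone; ...)", screen_width=390)
log.record_event("q1", "answered")
log.record_event("q1", "changed_answer")  # answer changes are classic navigation paradata
log.record_event("page_1", "submitted")
```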
    Italy
    Teresio Poggio
    Telephone surveys in Europe: Research and practice, Springer, Berlin (2012), pp. 59-72
    Preview abstract This chapter highlights the current Italian situation about telephone surveys. Table of contents: Introduction The reality of phone surveys in Italy Main recent changes in the technological and social context Coverage error as the big issue in phone surveys Conclusions: no way to skip the low cost-low quality vicious cycle? View details
    Preview abstract When designing online surveys, researchers must choose from a variety of pagination options. Respondents' expectations, experiences, and behaviors may vary depending on a survey's pagination, affecting both breakoffs and responses themselves. Surprisingly little formal experimentation has been conducted on the effects of survey pagination, with initial evidence focused on a long survey of university students (Peytchev, Couper, McCabe, & Crawford, 2006). This experiment is intended to further inform the effects of pagination in online surveys. In a split-ballot experiment, we served respondents one of three versions of a short online questionnaire (~15 questions) on attitudes and experiences toward an online product. The versions were constructed with (a) one question per page, (b) logical groupings of questions over several pages, and (c) as few pages as possible. Effects of pagination are evaluated on breakoff rates, response time, item and unit nonresponse, interitem correlations, and perceived length/difficulty. We hypothesize that the questionnaire with the fewest (longest) pages will cause greater initial breakoff, and the one with the most pages will suffer increased breakoff during the survey. View details
    IVR and web administration in structured interviews utilizing rating scales: Exploring the role of motivation as a moderator to mode effects
    Yongwei Yang
    Dennison S. Bhola
    Don A. Dillman
    International Journal of Social Research Methodology, vol. 14 (2011), pp. 1-15
    Preview abstract Survey researchers have reported differing results on frequency distributions when the same item is delivered via an interactive voice response (IVR) system versus the web. The current paper expands such research into the organizational research field and evaluates the hypothesis that respondent motivation affects the occurrence of mode differences. In this study, personnel selection instruments using five-point Likert scales were administered to job applicants and job incumbents. Data were collected via IVR or via the web. With job incumbents, the mode effect observed was similar in magnitude to that observed in the survey research literature. However, with job applicants the mode effect was smaller. View details
    Combining landline and mobile phone samples: A dual frame approach
    Oztas Ayhan
    Siegfried Gabler
    Sabine Haeder
    Ana Villar
    GESIS Working Paper 2011/13 (2011)
    Preview abstract More and more households are abandoning their landline phones and relying solely on cell phones. This poses a challenge for survey researchers: since cell-phone-only households are not included in the frames for landline telephone surveys, samples based on these frames are in danger of being seriously biased due to undercoverage if respondents who do not have a landline are systematically different from respondents who do. Thus, strategies for combining samples from different frames need to be developed. In this paper we give theoretical foundations for a dual frame approach to sampling, explain how samples can be optimally allocated across these two frames, and describe an empirical application from a survey conducted in Germany that used a dual frame approach. View details
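    For readers unfamiliar with dual frame estimation, a standard (Hartley-type) composite estimator illustrates the basic idea; this is textbook notation, not necessarily the exact estimator developed in the working paper:

\[
\hat{Y} \;=\; \hat{Y}_{L \setminus C} \;+\; \theta\,\hat{Y}^{(L)}_{L \cap C} \;+\; (1-\theta)\,\hat{Y}^{(C)}_{L \cap C} \;+\; \hat{Y}_{C \setminus L}, \qquad 0 \le \theta \le 1,
\]

where \(\hat{Y}_{L \setminus C}\) and \(\hat{Y}_{C \setminus L}\) are estimates for the landline-only and cell-only domains from their respective frames, \(\hat{Y}^{(L)}_{L \cap C}\) and \(\hat{Y}^{(C)}_{L \cap C}\) are the two estimates available for the overlap domain (households reachable through both frames), and the compositing factor \(\theta\) is chosen, for example, to minimise the variance of \(\hat{Y}\).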
    How the Order of Response Options in a Running Tally Can Affect Online Survey Estimates
    Tom Wells
    Charles DiSogra
    Papers Presented at the 64th Annual Conference of the American Association for Public Opinion Research (AAPOR), AMSTAT (2011), pp. 5582-5585
    Preview abstract In the design of online surveys, running tallies or constant sums are used to help respondents allocate amounts so that the total sums to 100%. We hypothesized that, for time allocation, the order in which the time categories are presented could make a difference in the distribution of reported time spent. We expected primacy effects, with the first-presented time category receiving a higher allocation of time than the later-presented options. An experiment was conducted with a general population adult sample from KnowledgePanel®. In the experiment, respondents were asked to provide running tallies of the percentage of television they typically watch during the morning, afternoon, and evening (separately for weekdays and weekends). The order of the categories was rotated. Primacy effects were detected; however, differences by position were small and not statistically significant. Because time spent watching TV is a regular activity, viewing patterns are more likely to be encoded or ingrained in memory and more likely to be reported reliably, with responses less susceptible to order effects. View details
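    The mechanics of a running tally are simple: the category order is rotated across respondents, and the allocation is accepted only when it sums to 100%. The sketch below makes that concrete; the function names and slot labels are assumptions for illustration, not materials from the experiment.

```python
import random

TIME_SLOTS = ["morning", "afternoon", "evening"]

def rotated_slots(seed: int) -> list:
    """Rotate the presentation order so different respondents see different first categories."""
    offset = seed % len(TIME_SLOTS)
    return TIME_SLOTS[offset:] + TIME_SLOTS[:offset]

def valid_tally(allocation: dict) -> bool:
    """A running tally is complete only when every slot is filled and the total is exactly 100."""
    return set(allocation) == set(TIME_SLOTS) and sum(allocation.values()) == 100

print("Presentation order:", rotated_slots(seed=random.randrange(1000)))
print(valid_tally({"morning": 40, "afternoon": 20, "evening": 40}))  # True
print(valid_tally({"morning": 50, "afternoon": 20, "evening": 40}))  # False: sums to 110
```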
    ‘N the Network?’ Using Internet Resources for Predicting Cell Phone Number Status
    Trent D. Buskirk
    Kumar Rao
    Social Science Computer Review, vol. 28 (2010), pp. 271-286
    Preview abstract Despite higher hit rates for cell phone samples, inefficiencies in processing calls to these numbers relative to landline numbers continue to be documented in the U.S. literature. In this study, we propose one method for using cell phone provider information and Internet resources for validating number status. Specifically, we describe how we used “in network” options available from three major providers’ web sites to determine the validity of cell phone numbers. We tested differences in working number rates (WNRs) among valid and nonvalid numbers against a normal processing control group and determined that the WNR among valid numbers was approximately 14 percentage points higher than the WNR of the comparison group. This process also shows promise in reducing the effort required to determine working status and may provide a basis for developing screening tools for cell phones that capitalize on resources that are unique to this technology. View details
    Preview abstract The types of devices that can be used to go online are becoming more varied. Users access the internet through traditional desktops and laptops, as well as netbooks, tablets, videogame consoles, mobile phones and ebook readers. Because many online surveys are designed to be taken on a standard desktop or laptop screen, it is important to monitor from which device your online sample is taking the survey, and to consider the consequences the device might have for visual design and survey estimates. A survey designed to be taken on a desktop does not necessarily or automatically look the same when taken from netbooks, smartphones and other devices. This article presents a description of some tools to collect paradata that allow us to understand from which device an online survey is accessed, along with an initial suggestion for best practices. View details
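    A minimal example of the kind of device-type paradata the article discusses is a rough classification of the respondent's user-agent string, as sketched below. This is an illustrative heuristic only (the keyword rules are assumptions); production monitoring would rely on a maintained user-agent parser and richer paradata.

```python
def classify_device(user_agent: str) -> str:
    """Crude user-agent heuristic: smartphone, tablet, or desktop/laptop."""
    ua = user_agent.lower()
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "smartphone"
    return "desktop_or_laptop"

print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) Mobile/15E148"))
# -> smartphone
```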
    Who’s calling? The impact of Caller ID on telephone survey response
    Allan L. McCutcheon
    Jack Ludwig
    Field Methods, vol. 22 (2010), pp. 175-191
    Preview abstract The Gallup Organization conducted a randomized caller ID study with a pre- and post-experimental design to test the impact of different caller ID displays (names) on the response, contact, and cooperation rates for telephone surveys. This research focuses on the impact of the caller ID listing on the frequency of final dialing dispositions. The authors find initial evidence for the hypothesis that the caller ID transmission works as a sort of “condensed survey research organization business card” that can trigger brand awareness, thus legitimating the survey and diminishing suspicions of collector or telemarketing calls. View details
    Response latency as an indicator of optimizing in online questionnaires
    Dennison Bhola
    Don A. Dillman
    Katherine Chin
    Bulletin de Méthodologie Sociologique, vol. 103 (2009), pp. 5-25