Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Date: Wednesday, 08/Sept/2021
10:00 - 1:00 Workshop 1
 
 

Changing the Question: How to collect data which is closer to the truth

Steve Wigmore

Kantar UK

Duration of the workshop: 2.5 h

Target groups: No specific target group

Is the workshop geared at an exclusively German or an international audience? International audience

Workshop language: English

Description of the content of the workshop: The workshop will introduce a number of techniques for writing questions that reduce biases and dishonesty in online surveys. It will also cover techniques to increase respondent engagement and improve the quality of the data collected. Participants will put these techniques into practice by rewriting a provided questionnaire.

Goals of the workshop: To improve the participants' knowledge of techniques for creating engaging surveys which encourage respondents to answer honestly (and to spot when they are not)

Necessary prior knowledge of participants: none

Literature that participants need to read prior to participation: none

Recommended additional literature: none

Information about the instructor: Steve Wigmore comes originally from an advertising agency background but has been working in market research for the last 15 years. He has held a number of roles and is currently Director of Modern Surveys at Kantar.

Maximum number of participants: 25

Will participants need to bring their own devices in order to be able to access the Internet? Will they need to bring anything else to the workshop? No.

 
10:00 - 1:00 Workshop 2
 
 

Collecting and Analyzing Twitter Data Using R

Dorian Tsolak, Stefan Knauff

Bielefeld University, Germany

Duration of the workshop: 2.5 h

Target groups: No specific target group

Is the workshop geared at an exclusively German or an international audience? International audience

Workshop language: English

Description of the content of the workshop: This workshop provides an overview of Twitter data and of how to collect and analyze it using R. Participants learn how to access Twitter’s API in order to collect data for their own research projects. A number of examples illustrate how to preprocess and analyze the content and meta-information of Tweets. Participants are encouraged to have their own installation of R ready in order to take part in the practical exercises.

Goals of the workshop: 1) Gaining a better understanding of Twitter data, and thus of the potentials and limitations of using it in research projects, 2) being able to collect Twitter data using R, and 3) being able to perform exploratory analyses of the data using R.

Necessary prior knowledge of participants: Basic knowledge of R or another statistical software/language (SPSS, Stata, SAS, Python)

Literature that participants need to read prior to participation: none

Recommended additional literature: none

Information about the instructors: Dorian Tsolak is a PhD candidate at Bielefeld University. His research interests cover stereotypes, racism and migration within the field of computational social science. Stefan Knauff is a master’s student at Bielefeld University interested in methods of computational sociology and research on inequality and globalization.

Maximum number of participants: 20

Will participants need to bring their own devices in order to be able to access the Internet? Will they need to bring anything else to the workshop? No.

 
1:00 - 2:00 Break
 
2:00 - 5:00 Workshop 3
 
 

Create impact with data - know your audience and communicate well

Marcel Hebing1,2,3, Larissa Wunderlich4

1Impact Distillery, Germany; 2Digital Business University of Applied Sciences, Germany; 3Alexander von Humboldt Institut für Internet und Gesellschaft, Germany; 4larissawunderlich.de, Germany

Duration of the workshop: 2.5 h

Target groups: Everyone working with data and statistics

Is the workshop geared at an exclusively German or an international audience? International audience

Workshop language: English

Description of the content of the workshop: Discussion of target groups, tools for science transfer, and principles of effective visual communication.

Goals of the workshop: Participants know how to develop a communication / transfer strategy and are provided with a basic toolset for effective communication.

Necessary prior knowledge of participants: none

Literature that participants need to read prior to participation: none

Recommended additional literature: none

Information about the instructors: Marcel is a professor of Data Science at DBU and founder of Impact Distillery. Larissa is a designer (https://www.larissawunderlich.de/) and visual communication expert.

Maximum number of participants: 20

Will participants need to bring their own devices in order to be able to access the Internet? Will they need to bring anything else to the workshop? No

 
Date: Thursday, 09/Sept/2021
11:00 CEST Track A: Survey Research: Advancements in Online and Mobile Web Surveys
 
11:00 CEST Track B: Data Science: From Big Data to Smart Data
 
11:00 CEST Track C: Politics, Public Opinion, and Communication
 
11:00 CEST Track D: Digital Methods in Applied Research
 
11:00 CEST Track T: GOR Thesis Award 2021
 
11:00 - 11:30 CEST GOR 21 Conference Kick-off
 
11:30 - 12:30 CEST A1: Probability-Based Online Panel Research
Session Chair: Florian Keusch, University of Mannheim, Germany
 
 

The Long-Term Impact of Different Offline Population Inclusion Strategies in Probability-Based Online Panels: Evidence From the German Internet Panel and the GESIS Panel

Carina Cornesse1, Ines Schaurer2

1University of Mannheim; 2GESIS - Leibniz Institute for the Social Sciences

Relevance & Research Question:

While online panels offer numerous advantages, they are often criticized for excluding the offline population. Some probability-based online panels have developed offline population inclusion strategies: providing internet equipment and offering an alternative survey mode. Our research questions are:

1. To what extent does including the offline population have a lasting positive impact across the survey waves of probability-based online panels?

2. Is the impact of including the offline population different when providing internet equipment than when offering an offline participation mode?

3. Is the impact of offering an alternative participation mode different when extending the alternative mode offer to reluctant internet users than when only making the offer to non-internet users?

Methods & Data:

For our analyses, we use data from two probability-based online panels in Germany: the GIP (which provides members of the offline population with internet equipment) and the GESIS Panel (which offers members of the offline population as well as reluctant internet users the possibility of participating in the panel via postal mail surveys). We assess the impact of including the offline population in the GIP and GESIS Panel across their first 12 panel survey waves regarding two panel quality indicators: survey participation (as measured using response rates) and sample accuracy (as measured using the Average Absolute Relative Bias). Our analyses are based on nearly 10,000 online panel members, among them more than 2,000 members of the offline population.
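For reference, the Average Absolute Relative Bias (AARB) is conventionally defined as follows (a standard formulation; the authors' exact set of benchmark variables is not spelled out in the abstract):

    \mathrm{AARB} = \frac{1}{Q} \sum_{q=1}^{Q} \frac{|\hat{p}_q - B_q|}{B_q}

where \hat{p}_q is the survey estimate for benchmark category q, B_q is the corresponding official population benchmark, and Q is the number of benchmark categories.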

Results:

We find that, even though recruitment and/or panel wave response rates are lower among members of the offline population than among members of the online population, including the offline population has a positive long-term effect in both panels, which is particularly due to the success of the inclusion strategies in reducing biases in education. In addition, it pays off to offer an offline population inclusion strategy to people who use the internet but do not want to use it for the purpose of completing online surveys.

Added Value:

Ours is the first study to compare the impact of different offline population inclusion approaches in probability-based online panels.



Why do people participate in probability-based online panel surveys?

Sebastian Kocar, Paul J. Lavrakas

Australian National University, Australia

Relevance & Research Question: Survey methodology as a research discipline is predominantly based on quantitative evidence about particular methods, since it is fundamentally a quantitative research approach. Probability-based online panels are relatively few in number, and there are many knowledge gaps that merit investigation. In particular, more evidence is required to understand the successes and failures in recruiting and maintaining the ongoing participation of sampled panelists. In this study, we aim to identify the main motivating factors and barriers in all stages of the online panel lifecycle – recruitment to the panel, wave-by-wave data collection, and voluntary/opt-out attrition.

Methods & Data: The data were collected with an open-ended question in a panel survey and with semi-structured qualitative interviews. First, 1,500 panelists provided an open-ended verbatim about their motivations for joining the panel, gathered in a 2019 wave of Life in Australia™. Between April 2020 and February 2021, fifteen of these panelists, classified into three distinct groups based on their panel response behavior, participated in an in-depth qualitative interview. Each of these panelists also completed a detailed personality inventory (DiSC test). Due to the COVID-19 crisis, the in-depth interviews were conducted virtually or over the phone.

Results: The results showed that (1) having the opportunity to provide valuable information, (2) supporting research, (3) having a say and (4) sharing their opinions were the most common reasons reported for people joining the panel and completing panel surveys. The most commonly reported barriers were (1) major life change, (2) length of surveys, (3) survey topics and (4) repetitive or difficult questions. In terms of personality types (DiSC), we can report that non-respondents on average scored much lower on dominance and higher on steadiness than frequent respondents.

Added Value: The study uses qualitative data to link the reported motivations and barriers with existing survey participation theories, including social exchange theory, self-perception, and compliance heuristics. It also relates the theories and the panelists’ reporting of their online panel behavior to their personality types. Finally, we turn the evidence from this study into practical recruitment and panel maintenance solutions for online panels.

 
11:30 - 12:30 CEST B1: Digital Trace Data and Mobile Data Collection
Session Chair: Stefan Oglesby, data IQ AG, Switzerland
 
 

The Smartphone Usage Divide: Differences in People's Smartphone Behavior and Implications for Mobile Data Collection

Alexander Wenz, Florian Keusch

University of Mannheim, Germany

Relevance & Research Question: Researchers increasingly use smartphones for data collection, not only to implement mobile web questionnaires and diaries but also to capture new forms of data from the built-in sensors, such as GPS positioning or acceleration. Existing research on coverage error in these types of studies has distinguished between smartphone owners and non-owners. With increasing smartphone use in the general population, however, the digital divide between the "haves" and "have-nots" has shifted towards inequalities related to the skills and usage patterns of smartphone technology. In this paper, we examine people’s smartphone usage patterns and their implications for the future scope of mobile data collection.

Methods & Data: We collected survey data from six samples of smartphone owners in Germany and Austria between 2016 and 2020 (three probability samples: n1=3,956; n2=2,186; n3=632; three nonprobability samples: n4=2,623; n5=2,525; n6=1,214). Respondents were asked about their frequency of smartphone use, their level of smartphone skills, and the activities that they carry out on their smartphone. To identify different types of smartphone users, we conduct a latent class analysis (LCA), which classifies individuals based on their similarity in smartphone usage patterns.
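For intuition, latent class analysis over binary usage indicators amounts to a mixture of independent Bernoulli distributions fitted by EM. The sketch below is a minimal Python illustration with made-up data, not the authors' actual estimation code (which would typically use dedicated LCA software):

    import numpy as np

    def lca_bernoulli(X, n_classes, n_iter=200, seed=0):
        # EM for a mixture of independent Bernoullis (binary-indicator LCA).
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(n_classes, 1.0 / n_classes)         # latent class sizes
        theta = rng.uniform(0.25, 0.75, (n_classes, d))  # item-response probabilities
        for _ in range(n_iter):
            # E-step: posterior class-membership probabilities per respondent
            log_post = (X @ np.log(theta).T
                        + (1 - X) @ np.log(1 - theta).T
                        + np.log(pi))
            log_post -= log_post.max(axis=1, keepdims=True)
            post = np.exp(log_post)
            post /= post.sum(axis=1, keepdims=True)
            # M-step: update class sizes and item probabilities
            pi = post.mean(axis=0)
            theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
        return pi, theta, post

    # X: respondents x binary smartphone-activity indicators (synthetic stand-in)
    X = (np.random.default_rng(1).random((500, 8)) < 0.4).astype(float)
    pi, theta, post = lca_bernoulli(X, n_classes=3)
    classes = post.argmax(axis=1)  # modal class assignment per respondent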

Results: First, we will assess which smartphone usage types can be identified in the samples of smartphone owners. Second, we will examine whether the different smartphone usage types vary systematically by socio-demographic characteristics, privacy concerns towards research activities on a smartphone, and key survey variables. Third, we will investigate how the composition of the smartphone usage types changes over time.

Added Value: Smartphone-based studies, even those relying on passive data collection, require participants to be able to engage with their device, for example by downloading an app or activating location tracking. Therefore, researchers not only need to understand which subgroups of the population have access to smartphone technology but also how people are able to use the technology. By studying smartphone usage patterns among smartphone owners in Germany and Austria, this paper provides initial empirical evidence on this important issue.



Digital trace data collection through data donation

Laura Boeschoten, Daniel Oberski

Utrecht University, The Netherlands

Relevance and Research Question

Digital traces left by citizens during the course of life hold an enormous potential for social-scientific discoveries, because they measure aspects of social life that are difficult to measure by traditional means. Typically, digital traces are collected through APIs and web scraping. This, however, is not always suitable for social-scientific research questions. Disadvantages are that the data cannot be used for questions at the individual level, that only public data are provided, that the data pertain to a non-random subset of the platform’s users, and that the users who generate the data cannot be contacted for their consent. We aim to develop an alternative workflow that overcomes these issues.

Method

We propose a workflow that analyses digital traces by using data download packages (DDPs). As of May 2018, any entity that processes the personal data of citizens of the European Union is legally obligated by the GDPR to provide that data to the data subject upon request in digital format. Most major private data processing entities, including social media platforms, smartphone systems, search engines, photo storage, e-mail, banks, energy providers, and online shops, comply with this right.

Our proposed workflow consists of five steps. First, data subjects are recruited as respondents using standard survey sampling techniques. Next, respondents request their DDPs with various providers, storing these locally on their own device. Stored DDPs are then locally processed to extract relevant research variables, after which consent is requested of the respondent to send these derived variables to the researcher for analysis.
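A minimal sketch of the local extraction step might look as follows; the Python below is illustrative only, with a hypothetical file layout, since actual DDP formats differ per platform:

    import json
    import zipfile
    from collections import Counter

    def extract_features(ddp_path):
        # Derive aggregate research variables from a locally stored DDP.
        # Only these derived counts, never the raw package, would be sent
        # to the researcher, and only after the respondent consents.
        with zipfile.ZipFile(ddp_path) as zf:
            with zf.open("messages.json") as f:  # hypothetical file name
                messages = json.load(f)
        per_month = Counter(m["timestamp"][:7] for m in messages)  # "YYYY-MM"
        return {"n_messages": len(messages),
                "messages_per_month": dict(per_month)}

    features = extract_features("my_ddp.zip")
    if input(f"Share {features} with the researcher? [y/n] ").lower() == "y":
        pass  # transmit only the derived variables for analysis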

Results and added value

We will present a proof-of-concept of developed software that enables the proposed workflow together with some use-cases. By using the workflow and the developed software, researchers can answer research questions with digital trace data while overcoming the current measurement issues and issues with informed consent.



Smartphone behavior during the Corona pandemic – How Germans used apps in 2020.

Konrad Grzegorz Blaszkiewicz1,2, Qais Kasem1, Clara Sophie Vetter1,3, Ionut Andone1,2, Alexander Markowetz1,4

1Murmuras, Germany; 2University of Bonn, Germany; 3University of Amsterdam, Netherlands; 4Philipps University of Marburg, Germany

The year 2020 dramatically changed our everyday routines. With Corona pandemic-related social distancing measures in place, connecting virtually helped us cope with isolation. While no single tool provides a complete picture of these changes, smartphones capture a significant part of online behavior. We looked at the usage of the top smartphone apps to answer the following research questions:

What were the most popular smartphone apps in 2020?

How did they differ by demographics groups and occupations?

How did they change over the year?

Were these changes COVID-19 related?

Methods & Data:

Our academic partners recruited 1,070 participants from Germany for scientific purposes. We collected their real-time app usage data via the Murmuras app with fully GDPR-compliant consent and conducted exploratory data analysis.
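The kind of exploratory aggregation involved can be sketched in a few lines of Python/pandas; the column names below are illustrative, not the Murmuras schema:

    import pandas as pd

    # Illustrative event log: one row per app session.
    log = pd.DataFrame({
        "user_id": [1, 1, 2, 2, 2],
        "date": pd.to_datetime(["2020-03-01"] * 3 + ["2020-03-02"] * 2),
        "app": ["WhatsApp", "YouTube", "WhatsApp", "Instagram", "Chrome"],
        "minutes": [35, 60, 20, 45, 15],
    })

    # Average daily smartphone time per user.
    daily = log.groupby(["user_id", "date"])["minutes"].sum()
    print("mean daily minutes:", daily.groupby("user_id").mean().mean())

    # Share of total usage captured by the top five apps.
    by_app = log.groupby("app")["minutes"].sum().sort_values(ascending=False)
    print("top-5 share:", by_app.head(5).sum() / by_app.sum())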

Results:

Our participants spent on average almost 4 hours using their smartphones. The top five apps (WhatsApp, Instagram, YouTube, Chrome, and Facebook) are popular across all demographic groups. Together they account for almost 50% of all usage we recorded. Three of them (WhatsApp, Instagram, and Facebook) capture most of our online social and communication behavior. WhatsApp remains at the top for all demographic groups. Instagram is used longer by women, younger people, and students. Facebook is still the second most used app for people above 30 and for employed people.

Phone usage increased significantly in March and April, months marked by a large number of COVID-19 cases and a strict lockdown. Usage of social and communication apps in these months increased by over 20%. Time spent in entertainment and media apps showed a slight decrease in March and a rapid increase in the following months. Interestingly, with the second wave of the pandemic in autumn, we see an increase in the media and social categories but no change in communication apps.

Added Value:

Based on real usage data, our study provides a better understanding of online behavior in 2020 than questionnaire- or app-store-based studies. We look into demographic and occupational differences as well as changes throughout the year and the influence of lockdowns. This new perspective provides insight into the changes in our habits brought about by the COVID-19 pandemic.

 
11:30 - 12:30 CEST C1: Social Media and Public Opinion
Session Chair: Pirmin Stöckle, University of Mannheim, Germany
 
 

The Discourse about Racism on German Social Media - A Big Data Analysis

Anna Karmann, Dorian Tsolak, Stefan Knauff, H. Long Nguyen, Simon Kühne, Hendrik Lücking

Bielefeld University, Germany

Relevance & Research Question:

Racism is a social practice encompassing both actions and rationales for action, which naturalize differences between humans and thus take for granted the objective reality of race (Fields & Fields 2012). In 2020, events such as the terrorist attack in Hanau and the death of George Floyd illustrated the omnipresence of racism in its different facets globally. Thus, a new debate about racism in society emerged, which was conducted on social media to a considerable extent. Both the rise of hashtag-based activism and the emergence of filter bubbles attest to the influence of social media on societal discourse.

Methods & Data:

Our study is concerned with the systematic measurement of the prevalence and magnitude of ‘racist’ discourses. By analyzing social media text data from Twitter, we draw conclusions regarding how these discourses vary over time and region.

From October 2018 to the present, we have built a database of nearly 1 billion German tweets (~1.1 million tweets per day). We employ a combination of word embedding models and topic modeling techniques to identify clusters that include discourse about racism (Sia 2020). We link regional time-series data to augment our dataset with social-structural information.
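The clustering-of-embeddings idea can be sketched as follows (Python with gensim and scikit-learn; the corpus and parameters are toy stand-ins, not the authors' pipeline):

    from gensim.models import Word2Vec
    from sklearn.cluster import KMeans

    # tokenized_tweets: token lists from the preprocessed tweet corpus
    tokenized_tweets = [["rassismus", "debatte"], ["wetter", "sonne"]]  # toy data

    model = Word2Vec(tokenized_tweets, vector_size=100, window=5, min_count=1)
    vocab = model.wv.index_to_key
    vectors = model.wv[vocab]

    # Cluster the word embeddings; each cluster acts as a "topic".
    # On a real corpus one would use far more clusters than two.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
    for c in range(km.n_clusters):
        print(c, [w for w, lab in zip(vocab, km.labels_) if lab == c])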

Results:

We find that the discourse about racism in Germany peaked in the summer of 2020, making up about 4% of all German tweets in that timeframe. Most notably, every third tweet in this discourse has been retweeted by other users, which is indicative of a highly active network structure. Analyses using the regional data reveal distinct spatial differences between regions of Germany, not only in the prevalence of racist discourse but also in how it is perceived. Regression models using social-structural data can account for some of this regional variance.

Added Value:

Our approach allows us to detect changing trends and continuities in racist and anti-racist discourse over 2.5 years, differentiated by region. Our rich data on an abundance of different topics enables us to connect the discourse about racism to closely related topics discussed on social media.



Assessing when social media can complement surveys and when not: a longitudinal case study

Maud Reveilhac, Davide Morselli

Lausanne University (Switzerland), Faculty of social and political sciences, Institute of social sciences, Life Course and Social Inequality Research Centre

Researchers capitalizing on social media data to study public opinion have aimed at creating point estimates similar to those produced by opinion surveys (e.g., Klašnja et al. 2018). Most such attempts examine whether social media data can predict election outcomes (see the review by Rousidis et al. 2020). Other studies have investigated how the social media agenda and the public agenda of a representative public correlate and what affects the rhythms of attention (e.g., Stier et al. 2018). Our study is situated at the nexus of these two approaches and seeks to assess under what circumstances social media data can reliably complement survey data collection.

We rely on a two-year longitudinal collection of tweets posted by more than 100’000 identified Swiss users. We compare tweets with survey data across a range of topic areas.

In a first research step, we assess the extent to which Twitter data can validly reflect trends found in traditional public opinion measures, such as voting decisions in popular votes and main political concerns. Concerning the former, text similarity measures between tweets about popular votes and open-ended survey pro and con arguments about the same voting objects allow us to recover the majority voting decision. Concerning the latter, we show a discrepancy between the offline and online public agendas, especially in the ranking of the importance of policy concerns.
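One standard way to implement such a text similarity measure is TF-IDF cosine similarity; the Python sketch below is a generic illustration, not necessarily the authors' exact measure:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    tweets = ["the initiative protects wages", "vote no, it hurts small firms"]
    survey_args = ["pro: protects wages", "con: burden on small businesses"]

    vec = TfidfVectorizer().fit(tweets + survey_args)
    sim = cosine_similarity(vec.transform(tweets), vec.transform(survey_args))
    # sim[i, j]: similarity of tweet i to survey argument j; assigning each
    # tweet to its closest argument approximates its pro/con leaning.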

Besides the alignment of the two data sources, there are numerous ways in which social media can complement survey data. Most notably, social media data are very reactive to events and can thus offer a useful complement to survey data for studying social movements, since the timing of a survey might not always coincide with the timing of a protest. Our results reflect major “real-life” events (e.g., strikes and mobilizations) and allow us to extract salient aspects surrounding these events.

Our study disentangles the circumstances in which social media reflect survey patterns, especially by looking at voting objects and main political concerns. It also highlights circumstances in which social media provide complementary insights to surveys, especially for the detection and analysis of social movements.



Personal Agenda Setting? The effect of following patterns on social media during an election

Yaron Ariel, Vered Elishar-Malka, Dana Weimann-Saks

Max Stern Academic College of Emek Yezreel, Israel

Relevance and research question:

Agenda-setting studies assume a correlation of agendas, in which the media agenda influences audiences’ agendas. This assumption has been continuously challenged in the current multi-channel online environment, where traditional media operate alongside social media accounts. Thus, scholars should posit that different audiences (perhaps even individual users) could form a “personal agenda-setting.” We explored differences in the salience of agenda topics among those exposed to content through different ‘following’ patterns on online social networks.

Methods and data:

Respondents representing the Israeli voter population for the March 2020 elections were invited from an online panel to create a cluster sample. After a filtering question about the use of online social networks, the study is based on the answers of 448 respondents. The questionnaire examined voting intentions, the topics on the respondents’ agenda, and patterns of following candidates on online social networks: Facebook, Twitter, and Instagram.

Results:

When the prominent topics on the general agenda were examined, it was found that 48% of the respondents mentioned a security incident, 35% a health crisis, 22% a welfare issue, 20% an economic crisis, and 17% a coalition formation. Nonetheless, considerable differences emerge when inspecting the respondents' following patterns: for example, there is a significant difference (t = 1.74, p < .05) between those who follow politicians' accounts on social networks and those who do not; among followers, the topic of ‘Welfare’ was ranked significantly higher. Respondents who follow politicians on Twitter tend to rank the ‘Economic crisis’ higher (t = 1.8, p < .05). There is also a significant difference (t = 2.07, p < .05) for exclusive followers of the leading opposition candidate (Benjamin Gantz), who ranked the topic of ‘Health’ higher. Multivariate analyses were conducted to identify the personal agendas of specific topics across several combinations of following patterns.

Added value:

This study implies that the traditional approach to agenda-setting research is less suitable for studying users’ agendas in online environments. A better understanding of the formation of online agendas is paramount when examining users’ passive and active exposure to political content through social networks.

 
11:30 - 12:30 CEST D1: GOR Best Practice Award 2021 Competition I
Session Chair: Alexandra Wachenfeld-Schell, GIM Gesellschaft für Innovative Marktforschung mbH, Germany
Session Chair: Otto Hellwig, respondi/DGOF, Germany

in German

sponsored by respondi
 
 

Mobility Monitoring COVID-19 in Switzerland

Beat Fischer1, Peter Moser2

1intervista AG, Switzerland; 2Statistical Office of the Canton of Zurich, Switzerland

Relevance & Research Question:

With the outbreak of the Corona pandemic in Switzerland, the authorities took measures and issued recommendations that severely restricted mobility behaviour. The questions arose of whether the population would adhere to the measures and recommendations and what influence these would have on mobility behaviour in Switzerland in general.

Methods & Data:

On behalf of the Statistical Office of the Canton of Zurich, the Federal Statistical Office and the COVID-19 Science Task Force, the research institute intervista launched the Mobility Monitoring COVID-19 in March 2020. A geolocation tracking panel with 3,000 participants serves as the basis; their locations are continuously recorded via a smartphone app. With these data, the distances travelled, the means of transport used, the purpose of mobility and the proportion of commuters are analysed in detail on a daily basis. Since the panel was already set up before the outbreak of the pandemic, data could be analysed retrospectively back to 1 January 2020. The project is still running and new results are published on an ongoing basis.
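Computing daily distances from such a location trace reduces to summing great-circle distances between consecutive GPS fixes; a minimal Python sketch (illustrative only, not intervista's production pipeline):

    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points in kilometres.
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        return 2 * 6371.0 * asin(sqrt(a))

    # One day's trace for one panelist: (lat, lon) fixes in time order.
    trace = [(47.3769, 8.5417), (47.3800, 8.5500), (47.4000, 8.6000)]
    daily_km = sum(haversine_km(*a, *b) for a, b in zip(trace, trace[1:]))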

Results:

With this study, which provides almost live data, current developments in mobility behaviour can be clearly traced. After the lockdown in March 2020, average daily distances fell from around 40 km to less than 15 km. Both older and younger people cut back considerably. Commuting shares decreased, and public transport has been used significantly less since the outbreak of the pandemic. At times, the use of public transport even dropped by about 80% compared to the time before the pandemic.

Added Value:

This study is of considerable value to the authorities as a tool for managing the pandemic. With the results, the effectiveness of the measures taken and recommendations made could be directly monitored. By disseminating the results in the media, the population received immediate feedback on how the social norm regarding mobility was changing, which may have additionally strengthened the effect of the measures taken. The monitoring also provides important planning data for the economy and serves as a basis for various scientific research projects.



Shifting consumer needs in the travel industry due to Covid-19 – AI based Big Data Analysis of User Generated Content

Johanna Schoenberger1, Jens Heydenreich2

1Dadora GmbH, Germany; 2Versicherungskammer Bayern, Germany

Relevance & Research Question: How does COVID-19 change the consumer needs of various types of Germany-based tourists, and how can a travel insurer provide maximum assistance in meeting these needs (of tourists, but also of tour operators and travel agencies)?

Methods & Data: Starting in May 2020, we analysed 14 million discussions from German-speaking travel forums (User Generated Content) using Artificial Intelligence, Natural Language Processing, and Machine Learning.

Results: Awareness of (mostly) uncontrollable risks before and during a trip has increased significantly among all travelers due to the Corona pandemic. The desire for insurance against possible, unforeseeable reasons for cancellation, as well as for advice and support in disputes with tour operators, portals, etc., has grown markedly. Above all, the need for information when planning a trip has shifted noticeably, at least in the short term. Although Covid-19 only really took hold in Germany at the end of March / beginning of April 2020, results were already available by the end of May 2020. The derivation of measures therefore started as early as June 2020.

Added Value: An important contribution to the innovation challenge "ReStart Reise 2021" of VKB to identify and prioritize product and service elements for travel insurance. The most relevant results have been introduced to the market by VKB, such as a medical concierge service (https://www.vkb.de/content/services/reiseservices/) and coverage of COVID-related risks (https://www.urv.de/content/privatkunden/reiseversicherungen/covidschutz/), which are intended to positively support customers' travel and booking behavior.



Hungry for Innovation: The Case of SV Group's Augmented Insights Brand Concept Fit Analysis

Steffen Schmidt1, Stephanie Naegeli2, Tobias Lang2, Jonathan T. Mall3

1LINK Marketing Services AG, Switzerland; 2SV (Schweiz) AG, Switzerland; 3Neuro Flash, Germany

Relevance & Research Question: SV Group faces the challenge of adapting current catering concepts and developing new ones, especially against the backdrop of the Corona pandemic and the emergence of new trends such as increased home office work and changing office behavior. The aim of the empirical study was to analyze the fit and sharpen the positioning of different concepts and brands in order not only to remain the number one caterer in Switzerland, but also to continue to grow by developing innovative new catering concepts.

Methods & Data: First, an AI-based neurosemiotic Big Data web technique was used to uncover associations on the topic of "lunch and snacks at work" as initial input for the B2B and B2C online surveys. For the surveys themselves, an implicit association test and the MaxDiff method were used. Universal structural modeling (USM) with Bayesian neural networks was applied to identify the most salient implicit associations. In addition, TURF analyses using the MaxDiff scores uncovered the top feature combinations that resonate with the most consumers. The USM and MaxDiff-TURF results were in turn used as input for a further neurosemiotic Big Data web analysis to create an extended association network. A total of n=250 B2C participants and n=248 B2B participants were surveyed in November 2020.
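TURF on MaxDiff output essentially searches for the feature subset with the largest unduplicated reach. A brute-force Python sketch with synthetic preference data (illustrative only; the study's tooling is not described in the abstract):

    from itertools import combinations
    import numpy as np

    # reach_matrix[i, j] = 1 if respondent i finds feature j appealing,
    # e.g. derived by thresholding individual-level MaxDiff scores.
    rng = np.random.default_rng(0)
    reach_matrix = (rng.random((248, 8)) > 0.7).astype(int)

    def turf(reach_matrix, k):
        # Return the k-feature combination reaching the most respondents.
        best, best_reach = None, -1.0
        for combo in combinations(range(reach_matrix.shape[1]), k):
            # A respondent is "reached" if at least one feature appeals.
            reach = reach_matrix[:, list(combo)].max(axis=1).mean()
            if reach > best_reach:
                best, best_reach = combo, reach
        return best, best_reach

    best_combo, reach = turf(reach_matrix, k=3)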

Results: The results showed that most, but not all, of the catering concepts and only one brand offered sufficient activation potential. Considering the extended association network, four specific clusters were identified, which in turn were used as communicative input for the roll-out of the respective concepts. This four-cluster network enabled highly targeted and evidence-based positioning, especially when it comes to triggering the right associations at each touchpoint (website, app, etc.) in consumers’ minds.

Added Value: The combination of methods created an innovative augmented insights loop from the start of the research to the end and beyond, establishing an evidence-based foundation for management. The AI-based neurosemiotic Big Data web insights analysis was the initial starting point, but also the means to further refine the insights uncovered by the other advanced methods. In addition, it can now be used on a daily basis to review and, if necessary, optimize human-generated content (e.g., claims, product descriptions) in light of the identified salient association network, without further surveying consumers. This approach ensures both substance and speed for better management with evidence.

 
11:30 - 12:30 CEST GOR Thesis Award 2021 Competition
Session Chair: Olaf Wenzel, Wenzel Marktforschung, Germany

sponsored by Tivian
 
 

Generalized Zero and Few Shot Transfer for Facial Forgery Detection

Shivangi Aneja

Technical University of Munich, Germany

Relevance & Research Question:

With recent developments in computer graphics and deep learning, it is now possible to create high-quality fake videos that look extremely realistic. Over the last two years, there has been tremendous progress in the creation of these altered videos, especially Deepfakes. This has several benign applications in computer graphics, but it can also have dangerous implications for society, such as in political propaganda and public shaming. In particular, fake videos of politicians can be used to spread misinformation. This makes building a reliable fake video detector urgent. New manipulation methods come out every day, so even if we build a reliable detector for fake videos generated by one manipulation method, the question remains how successfully it will detect videos forged with a different, unseen manipulation method. This thesis is a step in this direction. Taking advantage of available fake video creation methods and using as few images as possible from a new and unseen manipulation method, the aim is to build a universal detector that detects, to the best of its capability, most of the fraudulent videos surfacing on the internet.

Methods & Data:

We begin the thesis by exploring the relationship between different computer-graphics and learning-based manipulation methods, i.e., we evaluate how well a model trained with one manipulation method generalizes to a different and unseen manipulation method. We then investigate how to boost the performance for a different manipulation method or dataset in case of limited data availability. For this, we explored a variety of transfer learning approaches and proposed a new transfer learning technique and an augmentation strategy. This proposed technique was found to be surprisingly effective in detecting facial manipulations in zero-shot (when the model has no knowledge about new videos) and few-shot (when the model has seen very few frames from the new videos) settings.

We used a standard classification backbone architecture (ResNet) for all our experiments and evaluated different pointwise metric-based domain transfer methods such as MMD, Deep CORAL, CCSA, and d-SNE. Since none of these methods worked well on unseen videos and datasets, we proposed a distribution-based approach in which we model each of our classes (real or fake) as a component of a mixture model; our model learns these distribution components, which we enforce with a loss function based on the Wasserstein distance. Inspired by our insights, we also propose a simple data augmentation strategy that spatially mixes up images from the same classes but different domains. The proposed loss function and augmentation cumulatively perform better than existing state-of-the-art supervised methods as well as transfer learning methods. We benchmarked our results on several face forgery datasets such as FaceForensics++, Google DF, and AIF, and even evaluated our results on in-the-wild deepfake videos (the Dessa dataset).
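To give a feel for the distance underlying the proposed loss, the one-dimensional Wasserstein distance between two empirical feature distributions can be computed directly; this SciPy sketch is a conceptual illustration only, not the thesis's actual multi-dimensional training loss:

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    # Hypothetical 1-D feature projections for real and fake face crops.
    real_feats = rng.normal(loc=0.0, scale=1.0, size=1000)
    fake_feats = rng.normal(loc=1.5, scale=1.2, size=1000)

    # A large distance means the feature separates the classes well; a loss
    # can push between-class distributions apart while pulling within-class
    # distributions together.
    print(wasserstein_distance(real_feats, fake_feats))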

The FaceForensics++ dataset provides fake videos created with four different manipulation techniques (Face2Face, FaceSwap, Deepfakes, and Neural Textures) and corresponding real videos. The Google DF dataset provides high-quality deepfake videos. The AIF dataset, donated to the authors by the AI Foundation, is the most challenging dataset and consists of deepfake videos generated in very poor illumination conditions and cluttered environments. Finally, we used the Dessa dataset, which consists of high-quality deepfake videos downloaded from YouTube.

Results:

We compare our results with current state-of-the-art transfer learning methods, and the experimental evaluation suggests that our approach consistently outperforms them. We also provide a thorough analysis of transferability among different manipulation methods, which gives a clear picture of which methods are more closely related to each other and exhibit good transfer. We notice that learning + graphics-based methods transfer relatively well among each other, whereas purely graphics-based methods do not exhibit transfer. Additionally, we compare transfer across different datasets to explore out-of-distribution generalization. Overall, we achieve a large 10-point improvement (64% to 74%) over the baseline for cross-dataset generalization, where the model has never seen the videos (zero-shot), and a 7-point improvement (78% to 85%) for few-shot transfer on in-the-wild deepfake videos.

Added Value:

The standard supervised classification models built by researchers detect fakes very well on the datasets they are trained on, but fail to generalize to unseen videos and datasets, a problem commonly known as out-of-domain generalization. With this thesis, we combat these failure cases and successfully build an unsupervised algorithm in which our model has no or very little knowledge about the unseen datasets and still generalizes much better than standard supervised methods. Our proposed technique generalizes better than other state-of-the-art methods and hence produces more reliable predictions; it can thus be deployed to detect in-the-wild videos on social media and video sharing platforms. The proposed method is novel and effective: the thesis proposes a new loss function, based on learning the class distributions, that empirically generalizes much better than other loss functions. The added spatial augmentation further boosts the performance of our model by 2-3%. The proposed technique is not limited to faces but can also be applied to various other domains where datasets are diverse and scarce.



How Does Broadband Supply Affect the Participation in Panel Surveys? An analysis of mode choice and panel attrition

Maikel Schwerdtfeger1,2

1GESIS - Leibniz-Institut für Sozialwissenschaften, Germany; 2University of Mannheim

Relevance & Research Question:

Over the last decades, online surveys have become a crucial part of quantitative research in the social sciences. This development yielded coverage strategies such as implementing mixed-mode surveys and motivated many scientific studies to investigate coverage problems. From the perspective of research on coverage, having a broadband connection often implies that people can participate in online surveys without any problems. In reality, the quality of the broadband supply can vary massively and thereby affect the online experience. Negative experiences lower the motivation to use online services and thus also reduce individual skills and preferences. Considering this, I expect regional differences in broadband supply to have a major impact on survey participation behavior, which leads me to the following research questions:

1st Research Question: How does the broadband supply affect the participation mode choice in a mixed-mode panel survey?

2nd Research Question: How does broadband supply determine attrition in panel surveys?

Methods & Data:

In order to investigate the effects of broadband supply on participation mode choice and panel attrition, I combine geospatial broadband data of the German “Breitbandatlas” and geocoded survey data of the recruitment interview and 16 waves of the mixed-mode GESIS Panel. The geospatial broadband data classifies 432 administrative districts in Germany into five ordinal categories according to their proportion of broadband supply with at least 50 Mbit/s, which is seen as a threshold value for sufficient data transmission.

To answer the first research question, I apply a binomial logistic regression model to estimate the odds of choosing the online participation mode based on broadband supply, internet familiarity, and further control variables. Besides broadband supply, I included internet familiarity as a substantially relevant independent variable based on previous research results in the field of participation mode choice.

Following the theoretical background (see 2.2. Mode choice), I expect a person deciding between online or offline participation in a recruitment interview to consider their last and most prominent internet experiences with a particular focus on their internet familiarity and their perceived waiting times. The waiting times are largely affected by the data transmission rate of the available broadband supply.

Consequently, I derive the following two hypotheses for participation mode choice in mixed-mode panel surveys that provide web-based and paper questionnaires:

1st Hypothesis: More pronounced internet familiarity increases the probability of opting for online participation in a mixed-mode panel.

2nd Hypothesis: Living in a region with better broadband supply increases the probability of opting for online participation in a mixed-mode panel.

To answer the second research question, I apply a Cox regression model to estimate the hazard ratios of panel dropout based on broadband supply, perceived survey duration, and further control variables. Besides broadband supply, I considered perceived survey duration as substantially relevant based on previous research results in the field of panel attrition.

According to the theoretical background (see 2.3. Panel attrition), I expect a person in a panel survey to constantly evaluate their satisfaction with and burden of participation, with the flow experience and the perceived expenditure of time being the crucial factors in the decision process. The flow experience is largely determined by the quality of the available broadband supply. Consequently, I derive the following two hypotheses for attrition in panel surveys:

3rd Hypothesis: Living in a region with better broadband supply decreases the risk of attrition in an online panel survey.

4th Hypothesis: Evaluating the survey duration as shorter decreases the risk of attrition in an online panel survey.
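Both estimation steps map onto standard Python tooling; the sketch below (statsmodels for the logistic model, lifelines for the Cox model) uses a hypothetical data file and illustrative variable names, not the actual GESIS Panel extract:

    import pandas as pd
    import statsmodels.formula.api as smf
    from lifelines import CoxPHFitter

    df = pd.read_csv("panel.csv")  # hypothetical prepared panel extract

    # RQ1: odds of choosing the online mode (binomial logistic regression).
    logit = smf.logit(
        "online_mode ~ C(broadband_category) + internet_familiarity + age + female",
        data=df,
    ).fit()
    print(logit.summary())

    # RQ2: hazard of panel dropout (Cox proportional hazards).
    cph = CoxPHFitter()
    cph.fit(
        df[["waves_participated", "dropped_out", "broadband_category",
            "perceived_duration", "age", "female"]],
        duration_col="waves_participated",
        event_col="dropped_out",
    )
    cph.print_summary()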

Results:

The results of the first analysis show that both living in a region with better broadband supply and having higher internet familiarity increase the probability of choosing the online mode in a mixed-mode panel survey. However, the effect of internet familiarity is substantially more powerful and stable.

The results of the second analysis show that a longer perceived survey duration increases the risk of panel dropout, whereas the effect of broadband supply is small, opposite to the hypothesis, and not significant.

For the interpretation of the results in the overall context, it must be noted that the classification of about 400 administrative districts in Germany into five groups with different proportions of sufficient broadband supply is not ideal for the purpose of this analysis. Despite this limitation, the weak effect of broadband supply in the first analysis suggests greater potential in this methodological approach. In the discussion section, I provide further details on this issue and an outlook for a follow-up study that can test the presented methodological approach with more precise broadband data.

Added Value:

The present study aims to expand methodological research in the context of online surveys in two different ways. First, the approach of combining geospatial data on broadband supply and survey data is a novelty in survey methodology. The advantage is that there is no need to ask additional questions about the quality of the internet connection, which reduces survey duration. Additionally, geospatial data is not affected by motivated or unintentional misreporting of respondents. This is particularly important in the case of information that is excessively biased by subjective perceptions or by misjudgments due to lack of knowledge or interest. Technical details on broadband supply are vulnerable to this kind of bias.

Second, analyzing response behavior in the context of available broadband supply allows conclusions to be drawn about whether participants with poor broadband supply still choose the online mode and, if so, whether they have a higher probability of panel attrition than panelists with better broadband supply. These conclusions can be used to develop targeting strategies that actively guide the participation mode choice based on the panelists' residence, thereby reducing the likelihood of panel attrition.



Voice in Online Interview Research

Aleksei Tiutchev

HTW Berlin, Germany

Relevance & Research Question: Recently, voice and speech technologies have developed significantly, reaching high speech recognition accuracy for the English language. Among other fields, these technologies could also be applied in market research. In recent years, only a few studies have addressed the possibility of using speech recognition in online market research. This thesis further investigates the possibility of incorporating speech recognition technology into online surveys in various languages on six continents. The research question is: “What is the impact of voice in global online interviewing, by the example of several languages and countries, regarding…

... technological capabilities of participants?

... willingness to participate?

... quality of voice answers?

... the respondents’ level of engagement?

... respondents’ satisfaction?”

Methods & Data: Based on a review of the current state of speech recognition and the related literature, online questionnaires with voice input and text input in five languages (English, German, French, Russian, and Spanish) were created and distributed through an online panel to 19 countries. The questionnaires consisted of 40 questions, including 14 open questions on various topics, which participants could answer either with text or with voice, depending on their technical possibilities and willingness to participate in a voice study. In addition to the open questions, the surveys included Kano Model questions to measure how respondents perceive the possibility of answering the survey with voice, a Net Promoter Score question, and others. The data were collected between September 3, 2020, and October 27, 2020, and 1,958 completed questionnaires became the focus of the study. Of all completed surveys, 1,000 were filled in with text input, whereas 958 were filled in with voice input. The collected data were analysed with IBM SPSS Statistics v27.

Results: The results of the study demonstrated that the technological capabilities of the respondents to participate in voice research varied from country to country. The highest number of browsers and devices that support voice input was observed in developing countries. Those countries also had the highest number of participants who use smartphones to fill in the questionnaires. At the same time, in developed countries, due to the popularity of iOS devices, which did not support voice input, it was more challenging to conduct voice research. Even with the technical possibilities, 43 per cent of respondents were still unwilling to grant access to their microphones. The answers collected through voice input were 1.8 times longer than the text input answers. At the same time, questions with voice input took on average two seconds longer to answer. Moreover, surveys with voice input had a dropout rate twice as high. Participants with voice input were more satisfied with the surveys and showed a very high willingness to participate in voice studies again. Meanwhile, respondents’ technological capabilities to participate in voice surveys, dropout rates, response times, and the quality of voice answers differed significantly depending on the country. Analysis of the Kano Model questions demonstrated the participants’ indifference to the possibility of answering the surveys with voice. A Key Driver Analysis demonstrated that categories such as tech-savviness, early adoption, or data security concerns did not influence respondents’ willingness to participate in voice research again; the most important categories influencing this decision were frequency of Internet usage and information-seeking behaviour.

Added Value: The study results partially confirm previous research on speech recognition use in online questionnaires with regard to higher dropout rates and longer answers (in characters) for voice input. At the same time, some results contradict previous studies: the voice answers turned out to be longer in time than text input answers, thus not confirming the lower response burden of voice input in online surveys. In addition, the results of the study complement existing research and provide more information about the use of voice input in online surveys in different countries. The technology is still new and currently not all devices support it, which makes research more complicated, more expensive, and more time-consuming in countries where the number of unsupported devices is large. From the technological possibilities of voice questionnaires to dropout rates and the amount of data received with voice input, everything varied significantly and depended notably on the geographical location of the study. Even though voice input in online surveys requires more effort, demands higher costs for participant recruitment, and the transcriptions are not perfect in quality, especially in non-English languages, marketers and researchers in different industries might consider using voice input in their studies to obtain extensive, high-quality data through online questionnaires. This method may allow professionals to conduct research among people who cannot, or do not otherwise want to, participate in classical text surveys.

 
12:30 - 12:50 CEST Break
 
12:50 - 1:50 CEST P 1.1: Poster I
sponsored by GIM
 
 

Role of risk and trust beliefs in willingness to submit photos in mobile surveys

Jošt Bartol1,2, Vasja Vehovar1, Andraž Petrovčič1

1Centre for Social Informatics, Faculty of Social Sciences, University of Ljubljana, Slovenia; 2Faculty of Arts, University of Ljubljana, Slovenia

Relevance & Research Question: Smartphones provide promising new ways to collect survey data. An interesting option is to ask respondents to submit photos. However, virtually no research exists on how trust in surveyors to properly handle submitted photos, and risk beliefs related to submitting photos in mobile surveys, impact the willingness to submit photos. Thus, we addressed three research questions: (1) What are smartphone users’ attitudes toward submitting photos in a mobile survey? (2) How do trust and risk beliefs differ according to the sensitivity of photos? (3) How do trust and risk beliefs affect the willingness to submit photos in a mobile survey?

Methods & Data: A follow-up subsample of respondents from the Slovenian Public Opinion Survey was used (n = 280 smartphone users). Respondents were presented with a hypothetical scenario of a mobile survey requesting three different photos: a window panorama, an open refrigerator, and a selfie. The respondents were first asked in an open-ended question to write their thoughts about the scenario. Next, they were asked about their willingness to submit the three photos and to indicate their trust and risk beliefs for each. The data were analyzed qualitatively (open-ended question), and quantitatively by three regression models.

Results: The respondents believed that submitting photos can be a threat to anonymity, and they would only submit photos that they did not perceive as too sensitive in terms of possible abuse. Interestingly, 47.9% of respondents would submit a photo of a window panorama, 40.4% a photo of an open refrigerator, and only 8.6% a selfie. Additionally, photos perceived as more sensitive were associated with lower trust and higher risk beliefs. Moreover, trust beliefs increased the willingness to submit photos, while risk beliefs decreased it.

Added Value: The study indicates that only photos that respondents do not perceive as a threat to their anonymity can be collected in mobile surveys. Indeed, risk and trust beliefs play an important role in the decision to submit photos. Future research might investigate different types of trust and risk beliefs as well as study respondents’ actual submission of photos in mobile surveys.



Survey Attitudes and Political Engagement: Not Correlated as Expected for Highly Qualified and Professional Respondents

Isabelle Fiedler, Thorsten Euler, Ulrike Schwabe, Andrea Schulze, Swetlana Sudheimer

German Centre for Higher Education Research and Science Studies, Germany

Relevance & Research Question:

In times of declining response rates, investigating the determinants of survey participation in general, and panel participation in particular, is of special importance. Empirical evidence indicates that general attitudes towards surveys do predict willingness to participate in (online) surveys (de Leeuw et al. 2017; Jungermann et al. 2019). Beyond survey attitudes themselves, however, political engagement can be seen as another predictor of survey participation (Silber et al. 2020). The underlying assumption is that answering questions is one way to express a personal opinion. We therefore analyse to what extent survey attitudes and political attitudes are associated.

Methods & Data:

We use data from two panel surveys of highly qualified respondents: starting cohort 5 of the National Educational Panel Study (NEPS, n=3,879) and the 2009 cohort of the DZHW graduate panel (DZHW GP, n=619). Both surveys include the Survey Attitude Scale (SAS) in its nine-item short form as proposed by de Leeuw et al. (2019), as well as different measures of political engagement.

Results:

Overall, our results show only weak and few significant correlations between the three dimensions of the SAS and different measures of political engagement. Survey Value shows significant positive correlations with different measures of social trust and political interest. In contrast, Survey Burden is significantly negatively associated with participation in the last national election, general trust in others, and general political activities. Finally, we find significant positive correlations between Survey Enjoyment and political interest as well as membership in a political party or association.

Added Value:

In sum, our empirical findings do not show the theoretically expected strong associations between the SAS and political engagement. However, our sample consists of participants in already well-established panel studies. Having been asked in the 14th wave in the case of NEPS and the third wave (within ten years) in the case of the DZHW GP, they can be regarded as professional respondents. Consequently, we suggest replicating the study by Silber et al. (2020) with a sample of newly recruited highly qualified respondents, in order to contrast this group with the general population.

 
12:50 - 1:50 CEST P 1.2: Poster II
sponsored by GIM
 
 

Covid-19 and the attempt to understand the new normal – A behavioral science approach

Prof. Dirk Frank1,2, Evelyn Kiepfer2, Manuela Richter2

1University of Applied Sciences Pforzheim; 2ISM GLOBAL DYNAMICS, Germany

The market research industry has reacted to the massive uncertainties regarding future consumer behaviour emerging from the corona pandemic. It is providing the various stakeholders from industry and society with numerous studies intended to guide them through the thicket of the New Normal: What (changed) attitudes do consumers show because of Corona? How do priorities in purchasing behaviour, and our needs, change? Most published studies follow the classical “explicit” attitude measurement paradigm using scaled answers. As a consequence, most findings claiming to predict future or describe current consumer behaviour in the pandemic suffer from the well-researched “say-do gap” and the general weakness of explicit attitude measures in predicting real behaviour. In an international study, we applied an implicit, reaction-time-based methodology to assess Covid-related attitudes (towards politics, nutrition, vaccination, and health-related behaviours) in order to highlight differences between countries in coping with Corona and to demonstrate a methodological approach that separates pure lip service from real behavioural intentions.

Led by our Polish research partner NEUROHM, a large-scale global comparative study, "COVID-19 Fever", was conducted between late April and early May 2020, followed by a national wave in Germany in January 2021 to assess attitudes towards vaccination in more detail. The international study was conducted in ten countries with 1,000 respondents each as a syndicated project involving universities and commercial research agencies specializing in behavioural economics. The theoretical basis of NEUROHM's measurement model (iCode, see also Ohme, Matukin & Wicher 2020) is the "Attitude Accessibility" model of Fazio (1989). iCode is an algorithm for calculating a confidence index (CI) that integrates the explicit and implicit measures of attitudes into one score, capturing the tension between rationalized opinions and the implicit confidence with which they are held.
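
iCode itself is proprietary and its exact computation is not described here. Purely as an illustration of the general idea, the sketch below combines an explicit answer with response latency relative to a personal baseline, so that slowly given (less accessible) answers are attenuated; all names and the weighting are hypothetical.

# Illustration only: NOT the iCode algorithm, just the general idea of
# weighting an explicit answer by how quickly (accessibly) it is given.
import numpy as np

def confidence_weighted_score(answer, rt_ms, baseline_rt_ms):
    """answer: explicit rating rescaled to [-1, 1];
    rt_ms: observed reaction time for this item;
    baseline_rt_ms: the respondent's own latency on neutral warm-up items."""
    # Faster-than-baseline responses are treated as more accessible
    # attitudes (cf. Fazio 1989) and therefore held with more confidence.
    speed = np.clip(baseline_rt_ms / rt_ms, 0.0, 2.0) / 2.0  # maps to 0..1
    return answer * speed

print(confidence_weighted_score(1.0, rt_ms=600, baseline_rt_ms=900))   # quick "agree"
print(confidence_weighted_score(1.0, rt_ms=1800, baseline_rt_ms=900))  # hesitant "agree"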

Results clearly showed the need to distinguish between superficial, socially desirable answers and implicit, well-internalised beliefs when it comes to coping with Covid-19. If politicians or companies want to develop sound strategies based on reliable predictions of consumer or citizen behaviour, they should add research paradigms from behavioural economics to their studies.



Gender and Artificial Intelligence – Differences Regarding the Perception, Competence Self-Assessment and Trust

Swetlana Franken, Nina Mauritz

Bielefeld University of Applied Sciences, Germany

Relevance & Research Question:

Technical progress through digitalisation is constantly accelerating. Currently, the most relevant and technically sophisticated technology is artificial intelligence (AI). Given AI's strong influence, it needs to meet with broad social acceptance. However, the prerequisites for this acceptance are apparently distributed differently by gender: women are less frequently involved in research and development on AI. What are the differences between men and women in their perception, evaluation, development, and use of AI in the workplace?

Methods & Data:

A quantitative online survey consisting of 45 items was conducted among company representatives and students from July to September 2020 (N = 382; age: M = 35.9, SD = 13.5; 69.6% female; 61.4% with a university degree). To determine differences in the variables of interest, t-tests or ANOVAs were calculated where their assumptions were met.

Results:

The results show that men, in contrast to women, see more opportunities in AI (t(317) = -2.88, N = 319, p = .004), rate their own AI-competence higher (t(317) = -6.65, N = 319, p < .001), and trust more in AI (U = 8401.00, Z = -3.604, p < .001). One reason for the significant results could be the fact that men are more involved and have more experience with AI than women (χ² (2, N = 319) = 7.902, p = .019). Men and women agree in their desire for better traceability in AI-decision-making processes (t(317) = .375, N = 319, p = .708), and both show a high motivation for further training (t(317) = -.522, N = 319, p = .602).
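
For readers wanting to reproduce this style of analysis, the sketch below runs the same three test families (t-test, U-test, chi-square) with scipy on made-up data; the numbers are not the study's data.

# Hypothetical data; mirrors the reported test types, not the actual results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
women = rng.normal(3.2, 0.9, 222)  # e.g., AI-competence self-ratings
men = rng.normal(3.8, 0.9, 97)

print(stats.ttest_ind(women, men))     # independent-samples t-test
print(stats.mannwhitneyu(women, men))  # U-test where normality fails

table = np.array([[40, 80, 102], [30, 40, 27]])  # made-up 2x3 contingency
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)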

Added Value:

Developing one's own AI-competence reduces fears and promotes trust in and acceptance of AI, an important prerequisite for openness towards it. Promoting interest in and willingness to engage with AI can at the same time sensitize people to the possible risks of AI applications in terms of prejudice and discrimination, and mobilize more women to engage in AI development.

 
12:50 - 1:50 CEST P 1.3: Poster III
sponsored by GIM
 
 

Willingness to participate in in-the-moment surveys triggered by online behaviors

Carlos Ochoa, Melanie Revilla

Research and Expertise Centre for Survey Methodology, Universitat Pompeu Fabra

Relevance & Research Question:

Surveys are a fundamental tool of empirical research. However, surveys have limitations that may produce errors. One of the most well-known limitations relates to memory recall errors: people can have difficulty recalling relevant details of events of interest to researchers. Passive data partially solve this problem. For instance, online behaviours are increasingly researched using tracking software (a "meter") installed on the browsing devices of members of opt-in online panels, registering which URLs they visit. However, such a meter also suffers from new sources of error (e.g., the meter may temporarily fail to collect data). Moreover, part of the objective information cannot be collected passively, and subjective information is not directly observable. Therefore, some information gaps must be filled, and some information must be validated. Asking participants about such missing/dubious information using web surveys conducted at the precise moment an event of interest is detected has the potential to fill the gap. However, the extent to which people are willing to participate raises doubts about the applicability of this method. This paper uses a conjoint experiment to explore which parameters affect the willingness to participate in in-the-moment web surveys triggered by the online activity recorded by a meter that participants have installed on their devices.

Methods & Data:

A cross-sectional study will be conducted to ask members of an opt-in panel (Netquest) in Spain about their willingness to participate in in-the-moment surveys. A choice-based conjoint analysis will be used to determine the influence of different parameters and of participants' characteristics.
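
To make the planned analysis concrete, here is a minimal sketch of a choice-based conjoint on simulated data: each task pits two invitations against each other, and a logit on the attribute differences recovers the part-worths. Attribute names and coefficients are invented for illustration.

# Simulated choice-based conjoint; all parameters are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_tasks = 2000
# Columns: differences (alt. A minus alt. B) in length, time window, incentive
X = rng.choice([-1.0, 0.0, 1.0], size=(n_tasks, 3))
true_beta = np.array([-0.8, 0.4, 1.2])  # longer hurts; window and money help
p_choose_a = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
chose_a = (rng.random(n_tasks) < p_choose_a).astype(float)

fit = sm.Logit(chose_a, X).fit(disp=0)
print(fit.params)  # estimates should approximate true_beta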

Results:

This research is in progress; results are expected in July 2021. Three key parameters are expected to play a crucial role in the willingness to participate: the length of the interview, the maximum time allowed to participate, and the incentives.

Added Value:

This research will make it possible to design effective experiments that collect data in the moment and thus prove the actual value of this method. The use of a conjoint experiment is a new approach to exploring willingness to participate in research activities, and it may lead to a better understanding of the factors that influence participation.



Memory Effects in Online Panel Surveys: Investigating Respondents’ Ability to Recall Responses from a Previous Panel Wave

Tobias Rettig1, Bella Struminskaya2, Annelies G. Blom1

1University of Mannheim, Germany; 2Utrecht University, the Netherlands

Relevance & Research Question:

Repeated measurements of the same questions from the same respondents have several applications in survey research: in longitudinal studies, in pretest-posttest experiments, and in evaluating measurement quality. However, respondents' memory of their previous responses can introduce measurement error into repeated questions. While this issue has recently received renewed interest from researchers, most studies have only investigated respondents' ability to recall their responses within cross-sectional surveys. The present study aims to fill this gap by investigating how well respondents in a probability-based online panel can recall their responses in a longitudinal setting after 4 months.

Methods & Data:

Respondents of the German Internet Panel (GIP) received 2 questions on environmental awareness at the beginning of the November 2018 wave. Four months later, respondents were asked (1) whether they could recall their responses to these questions, (2) to repeat their responses, and (3) how certain they were about their recalled answer. We compare the proportions of respondents who correctly repeated their previous response among those who claimed that they could recall it and those who did not. We also investigate possible correlates of correctly recalling previous responses, including question type, socio-demographics, panel experience, and perceived response burden.

Results:

Preliminary results indicate that respondents can correctly repeat their previous response in about 29% of all cases. Responses to attitude and behavior questions were more likely to be recalled than responses to belief questions, as were extreme responses. Age, gender, education, panel experience, perceived response burden, switching devices between waves, and participation in the panel wave between the initial questions and their repetition had no significant effects on recall ability.

Added Value:

The implications of respondents' ability to recall their previous responses in longitudinal studies are nearly unexplored. This study is the first to examine respondents' recall ability after an interval of 4 months that is realistic for longitudinal settings, an important step in determining adequate time intervals between question repetitions for different types of questions.



Default layout settings of sliders and their problems

Florian Röser, Stefanie Winter, Sandra Billasch

University of Applied Sciences Darmstadt, Germany

Relevance & Research Question:

In online survey practice, sliders are increasingly used to answer questions or to measure attitudes and agreement. In the social sciences, however, the rating scale is still the most widely used scale type. The question arises whether the default layout settings of these two scale types in online survey systems affect respondents' answers (initially independent of the content of the questions).

Methods & Data:

We used a 2 (rating scale vs. slider) x 2 (default vs. adjusted layout) factorial experimental design. Each subject answered 2 personality questionnaires taken from the ZIS database (an open-access repository for measurement instruments): a questionnaire with an agreement scale (Big Five Inventory-SOEP (BFI-S); Schupp & Gerlitz, 2014), originally with 7 response options, and a questionnaire with adjective pairs (Personality-Adjective Scales PASK5; Brandstätter, 2014), originally with 9 levels. In one setting, the default slider layout of the LimeSurvey survey tool was used. In the other setting, the slider layout was adjusted so that its endpoints stopped where the first and last marks could be placed on the rating scale.

Results:

A total of 344 subjects participated in the study. For most personality traits (regardless of the questionnaire), slider responses differed significantly between the default and the adjusted design. In the default slider design, there was a significant shift in responses toward the middle compared to the rating scale.

Added Value:

With this study we were able to show that using a slider in its default layout in online surveys can lead to different results than a classical rating scale, and that this effect can be prevented by adjusting the slider's layout. This result should sensitize online researchers not to simply switch answer types while relying on default layout settings, and it should stimulate further research into the exact causes and conditions of the effect.

 
12:50 - 1:50 CEST P 1.4: Poster IV
sponsored by GIM
 
 

Inequalities in e-government use among older adults: The digital divide approach

Dennis Rosenberg

University of Haifa, Israel

During the past two decades, governments across the globe have been utilizing the online space to provide their information and services. Studies report that several population categories, including older adults, show relatively low rates of obtaining governmental information and services via the Internet. However, little attempt has been made to understand what differentiates e-government adopters from non-adopters in later life. The goal of the current study was to examine socio-demographic disparities in e-government use among older adults through the lens of the digital divide approach. The data were obtained from the 2017 Israel Social Survey. The sample (N = 1,173) included older adults (aged 60 and older) who responded either positively or negatively to the item assessing e-government use in the three months prior to the survey. Logistic regression served for the multivariable analysis. The results suggest that being male, of younger age, having an academic level of education, being married, and using the Internet on a daily basis increase the likelihood of e-government use among older adults. These results lead to the conclusion that the digital divide characterizes e-government use in later life, similar to other uses of the Internet. The results emphasize the need for further socialization of older adults into using government services online, in light of the ongoing transition of these services into the online sphere, the numerous advantages of online provision, and their major relevance in later life.
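
A minimal sketch of the multivariable logistic regression described above; the file and variable names are hypothetical stand-ins for the Israel Social Survey items, not the actual codebook.

# Hypothetical variable names; the model structure mirrors the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("iss_2017_aged60plus.csv")  # hypothetical extract
fit = smf.logit(
    "egov_use ~ male + age + academic_degree + married + daily_internet",
    data=df,
).fit()
print(fit.summary())  # exponentiated coefficients give odds ratios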



Ethnic differences in utilization of online sources to obtain health information: A test of the social inequality hypotheses

Dennis Rosenberg

University of Haifa, Israel

Relevance & Research Question: People tend to utilize multiple sources of health information. Although ethnic differences in online health information search have been studied, little is known about such differences in the utilization of specific online health information sources and their variety. The research question is: do ethnic groups differ in their likelihood of utilizing online sources of health information?

Methods & Data: The data were obtained from the 2017 Israel Social Survey. The study population included adults aged 20 and older (N = 1,764). Logistic regression was used as the multivariate statistical technique.

Results: Jews were more likely than Arabs to search for health information using the call centers or sites of Health Funds and other sites, and more likely to search for health information using more than one type of site. In contrast, Arabs were more likely to search for health information on the website of the Ministry of Health.

Added Value: The study used social inequality theories to examine ethnic differences in the use of online health information sources, while attending to specific sources of such information and their variety.



Recommendations in times of crisis - an analysis of YouTube's algorithms

Sophia Schmid

Kantar Public, Germany

Relevance and research question:

In recent years, the video platform YouTube has become an increasingly important source of information, particularly for young users. During the Covid-19 pandemic, almost a fifth of all Germans used YouTube to find information on the pandemic. At the same time, disinformation on social media reached a peak in what the WHO called an "infodemic". We therefore set up a study on the amount of disinformation and the media diversity in YouTube's video recommendations. As recommendations are an important driver of video reach, we were interested in whether YouTube's recommendation algorithms promote disinformation. Moreover, the analysis set out to determine which videos, channels, and topics dominate video recommendations.

Methods and data:

The study consisted of a three-step research design. As a first step, a custom-built algorithmic tool recorded almost 34,000 YouTube recommendations covering over 8,000 videos. After enriching these with metadata, we quantitatively analysed variables such as channel type, number of views, and likelihood of disinformation. In a second step, we selected 210 videos and quantitatively coded them for their specific topic and amount of disinformation. Finally, a qualitative content analysis of 25 videos delved into the characteristics and commonalities of disinformative videos. The poster will detail the specific make-up of this three-step methodology.
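
The recording tool is custom-built and not documented in the abstract; the sketch below only shows the generic shape of such a crawler, a breadth-first walk over recommendation edges, with the actual fetching left as a placeholder.

# Schematic recommendation crawler; fetch_recommendations() is a placeholder.
from collections import deque

def fetch_recommendations(video_id):
    # Placeholder: plug in a scraper or API call returning recommended IDs.
    return []

def crawl(seed_ids, max_videos=8000, max_depth=3):
    seen = set(seed_ids)
    edges = []  # (source video, recommended video) pairs
    queue = deque((v, 0) for v in seed_ids)
    while queue and len(seen) < max_videos:
        video, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for rec in fetch_recommendations(video):
            edges.append((video, rec))  # one observed recommendation
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, depth + 1))
    return edges  # roughly: 34,000 edges over 8,000 videos in the study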

Results:

Our study showed that on the one hand, YouTube seems to have altered its recommendation algorithms so they recommend less disinformation, even though it is still present. However, on the other hand, the algorithm severely limits media diversity. Only a handful of videos and channels dominate the recommendations, without it being apparent which characteristics make a video more likely to be recommended.

Added value:

This study shows how a "big data" approach can be combined with more traditional research methodologies to provide extensive insight into the structures of social media. Moreover, it helped assess the extent of disinformation on YouTube and provided a window into what social media recommendation algorithms prioritise, and how.



Residential preferences on German online accommodation platforms

Timo Schnepf

BIBB, Germany

Relevance & Research Question: Online accommodation platforms are an as yet untapped source for investigating the demand side of the housing market in urban areas. I show how this data source can serve to study the individual residential preferences (RP) of different social groups for 236 districts mentioned in requests from 4 German cities. Such information is otherwise hard to collect with regular survey methods. Furthermore, I build a comparable "socio-economic residential preferences index" (SERPI) for each district and city, based on the differing residential preferences of academics and jobseekers.

Methods & Data: Between July 2019 and April 2020, I scraped housing requests posted on Ebay-Kleinanzeigen for 8 German cities, collecting 19,123 individual requests. Online accommodation requests serve as a good data source for natural language processing tasks because they are highly structured. I used named entity recognition and word matching to extract (i) (informal) (sub-)district residential preferences and (ii) socio-economic characteristics of the apartment seekers, for instance employment status, occupational status, family status, and maximum rent. I assume the sample is biased towards individuals who 'seize every chance' to find a new apartment.
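
A simplified sketch of the word-matching step: a gazetteer of district names matched against the free text of a request. The gazetteer here is a tiny invented excerpt; the study covers 236 districts, and real matching would also need informal and fuzzy name variants.

# Tiny illustrative gazetteer; not the study's full district list.
import re

GAZETTEER = {
    "sternschanze": "Hamburg/Sternschanze",
    "farmsen": "Hamburg/Farmsen-Berne",
    "neukölln": "Berlin/Neukölln",
}
pattern = re.compile(r"\b(" + "|".join(GAZETTEER) + r")\b", re.IGNORECASE)

def extract_districts(request_text):
    return {GAZETTEER[m.group(1).lower()] for m in pattern.finditer(request_text)}

print(extract_districts("Suche 2-Zimmer-Wohnung in Sternschanze oder Neukölln"))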

Results: I find the largest differences between the residential preferences of academics and jobseekers in Hamburg (SERPI=0.89), Munich (0.83), and Cologne (0.73), and the smallest in Berlin (0.36). Among the 236 districts in those four cities, the district "Farmsen-Berne" (Hamburg) shows the strongest concentration of residential preferences from jobseekers, but almost no RP from academics (SERPI = -5.32). The strongest concentration of RP from academics is also found in Hamburg, in "Sternschanze" (3.10). Berlin's districts show the lowest levels of RP segregation. (More on the dashboard https://germanlivingpreferences.herokuapp.com/ (user: kleinanzeigen, pw: showme))

Added Value: The study presents a new approach for urban research to investigate the residential preferences of actual apartment seekers, potentially in real time. The SERPI is a new instrument for investigating spatial segregation and gentrification processes. Further research could, for instance, investigate the causes of group-specific RP.

 
12:50 - 1:50 CEST P 1.5: Poster V
sponsored by GIM
 
 

Does the Way Demanding Questions Are Presented Affect Respondents' Answers? Experimental Evidence from Recent Mixed-Device Surveys

Thorsten Euler, Isabelle Fiedler, Andrea Schulze, Ulrike Schwabe, Swetlana Sudheimer

Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW), Germany

Within the framework of the total survey error, systematic bias, which negatively affects data quality, can occur either on the side of measurement or on the side of representation (Groves et al. 2004). We address measurement bias by asking whether the way questions are presented in online surveys affects response behaviour (Rossmann et al. 2018). As online surveys are mixed-device surveys (Lugtig & Toepoel 2015), questions are presented differently to mobile and non-mobile respondents.

To answer this question, we implemented a split-half survey experiment in two recently conducted online surveys of students, in the summer terms of 2020 (n=29,389) and 2021 (n=10,044). As examples of cognitively demanding questions, we use two items: (i) time use (for private and study-related purposes) during the semester and the semester break, and (ii) sources of income per month. Both are open questions. We regard them as cognitively demanding because the retrospective information needs to be retrieved without any pre-formulated categories on offer; the retrieval process relies solely on accessible information. As mobile devices provide less display space, item grids are split into single parts for them. We therefore expect the given answers to depend on how the question is presented. To test our assumptions, the item grid is split differently for mobile devices.

To check for differences in response behaviour, we first show descriptives for break-offs, response times, and missing values for all groups. In a second step, we compare means and standard deviations between the control and experimental groups. Our results indicate that response behaviour differs depending on how the question is presented. However, these patterns are quite mixed across the specific questions asked.

Overall, our results have direct implications for designing mixed-device surveys for the highly qualified. Especially among students, using mobile devices to participate in surveys is becoming more common. Thus, the question of how cognitively demanding questions are presented is of special importance when designing self-administered online surveys: the context affects answering behaviour. We close by reflecting on the generalizability of our findings.



Psychological factors as mediators of second screen usage during viewing sport broadcasts

Dana Weimann-Saks, Vered Elishar-Malka, Yaron Ariel

Max Stern Academic College of Emek Yezreel, Israel

Relevance and research question: One of the major sports events in the world is the World Cup. This study examines the effect of enjoyment of, and transportation into, the broadcast events on the use of social media as a second screen. The use of second screens (watching television while using another digital device, usually a smartphone or tablet) may be considered a type of "media multitasking", as it affects the viewers' attention, the information they receive, and their social conduct during the broadcast.

We assumed that a negative correlation would be found between the level of enjoyment of watching the sports event and second screen usage during the broadcasts. Moreover, we assumed that the correlation between enjoyment and second screen usage would be mediated by transportation into the broadcasts.

Method and data: An online representative sample of the Israeli population was obtained during the final ten days of the World Cup, from the quarterfinals to the final. 454 respondents completed the questionnaire.

Results: Findings revealed that using social media while watching the World Cup broadcasts is strongly correlated with enjoyment of the broadcasts. As assumed, the use of social media for non-game-related purposes declined as enjoyment of the broadcast increased (r = -.35, p < .001). Contrary to our first hypothesis, the use of social media for game-related purposes increased as enjoyment of the broadcast increased (r = .31, p < .001). Examining the role of transportation as a mediating variable revealed that the more enjoyment participants experienced, the more transported they were into the game, which led to a significant rise in their game-related use of social media and a significant decline in their non-game-related use of it (F(3, 439) = 42.80, p < .001, R² = 16.32%).
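
A minimal sketch of the mediation logic (enjoyment -> transportation -> game-related use) as two regression steps with the indirect effect as the product of paths. The file and column names are hypothetical, and a published analysis would add a bootstrap confidence interval for the indirect effect.

# Hypothetical data file and column names; classic two-step mediation.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("worldcup_survey.csv")
path_a = smf.ols("transportation ~ enjoyment", data=df).fit()
path_bc = smf.ols("game_related_use ~ transportation + enjoyment", data=df).fit()

indirect = path_a.params["enjoyment"] * path_bc.params["transportation"]
direct = path_bc.params["enjoyment"]
print("indirect (a*b):", indirect, "direct (c'):", direct)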

Added value: The function of social media as a second screen depends on the relevance of the usage to the broadcast. These findings contribute to our understanding of the effects of psychological factors (enjoyment and transportation) on second screen usage in the context of live television broadcasts of major sports events.



Measuring self-assessed (in)ability vs. knowledge-based (un)certainty in detecting Fake-News and their forcing or inhibiting effect on its spread

Daniela Wetzelhütter, Sebastian Martin

University of Applied Sciences Upper Austria, Austria

Relevance & Research Question: The coronavirus crisis has been accompanied by an immense media hype around corona. Unfortunately, conflicting information also circulated from the very people who were managing the crisis (e.g., regarding a curfew or the benefits of face masks). This may have created insecurity about the credibility of information around the pandemic. Unsurprisingly, since insecurity can lead to misinformation being believed, the "digital crisis team" in Austria uncovered 150 different Fake-News stories within a single week in March 2020. A problem here is that social media users are known to be ambivalent about the usefulness of fact-checking and verification services. However, according to current knowledge, trust in the source seems to be the most important factor in the spread of Fake-News anyway. Nonetheless, the question arises: To what extent does (un)certainty in recognizing Fake-News force or inhibit its spread?

Methods & Data: To answer the research question, a scale was developed to capture knowledge-based (un)certainty in recognizing Fake-News. The indicators measuring the certainty of recognizing true or untrue headlines were derived from a content analysis of 119 newspaper reports on Fake-News (March to December 2020). In addition, a single indicator was used for the self-assessment of this ability. Data were collected with both a calibration sample (students) and a validation sample (n=201).
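
The scale reliability reported below can be checked with a few lines of code; here is a minimal sketch of Cronbach's alpha over a respondents-by-items matrix (the demo matrix is random placeholder data, so its alpha will be near zero).

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sums).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)  # respondents x items
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
demo = rng.integers(1, 6, size=(201, 10))  # random placeholder responses
print(cronbach_alpha(demo))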

Results: A measurement instrument to capture knowledge-based (un)certainty was developed. The reliability of the scale is acceptable but improvable; it depends on the one hand on the selection of headlines and on the other on the response scale. The test of construct validity shows that both self-assessed (in)ability and knowledge-based (un)certainty play a subordinate role in the forwarding of Fake-News, although the former shows significant influences depending on the motive for forwarding Fake-News.

Added Value: Based on these results, more in-depth research is now possible to elicit why knowledge of and self-assessed skill in Fake-News detection contribute less to the spreading or stopping of Fake-News than might be assumed.

 
1:50 - 2:00 CEST Break
 
2:00 - 3:00 CEST A2: Recruitment for Probability-Based Panels
Session Chair: Bella Struminskaya, Utrecht University, The Netherlands
 
 

Enhancing Participation in Probability-Based Online Panels: Two Incentive Experiments and their Effects on Response and Panel Recruitment

Nils Witte1, Ines Schaurer2, Jette Schröder2, Jean Philippe Décieux3, Andreas Ette1

1Federal Institute for Population Research, Germany; 2GESIS; 3University of Duisburg-Essen

Relevance & Research Question

There are two critical steps when setting up online panels that rely exclusively on postal mail invitations. The first is the transition from the paper invitation letter to the online questionnaire. Survey methods aim to minimize the effort for users and to increase the attractiveness and the benefits of participation. However, nonresponse at the initial wave of a panel survey is not the only critical step to consider; the second is the transition from initial-wave participation to panel recruitment. Little is known about how incentives can enhance both transitions: from offline invitation to online participation, and from participation to panel recruitment. We investigate how mail-based online panel recruitment can be facilitated through incentives.

Methods & Data

The analysis relies on two incentive experiments and their effects on panel recruitment and on the intermediate participation in the recruitment survey. The experiments were implemented in the context of the German Emigration and Remigration Panel Study and encompass two samples of randomly selected persons. Tested incentives include a conditional lottery, conditional monetary incentives, and the combination of unconditional money-in-hand with conditional monetary incentives. Furthermore, we assess the costs of panel recruitment per realized interview.

Results

Multivariate analyses indicate that low combined incentives (€5/€5) or, where unconditional disbursement is unfeasible, high conditional incentives (€20) are most effective in enhancing panel participation. In terms of demographic bias, low combined incentives (€5/€5) and €10 conditional incentives are the favored options. The budget options from the perspective of panel recruitment are the lottery and the €10 conditional incentive, which break even at net sample sizes of 1,000.
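
As a back-of-the-envelope illustration of how such break-even comparisons work, the sketch below computes the cost per recruited panelist for two arms; every figure is a hypothetical placeholder, not a number from the study.

# All rates and amounts below are invented for illustration.
def cost_per_recruit(n_invited, response_rate, recruit_rate,
                     incentive_per_complete, fixed_cost_per_invite):
    completes = n_invited * response_rate
    recruits = completes * recruit_rate
    total_cost = (n_invited * fixed_cost_per_invite
                  + completes * incentive_per_complete)
    return total_cost / recruits

print(cost_per_recruit(10_000, 0.30, 0.60, 10.0, 2.0))  # conditional EUR 10
print(cost_per_recruit(10_000, 0.25, 0.55, 0.5, 2.0))   # lottery, expected value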

Added Value

The key contribution of our research is a better understanding of how different forms of incentives facilitate a successful transition from postal mail invitation to online survey participation and panel recruitment.



Comparing face-to-face and online recruitment approaches: evidence from a probability-based panel in the UK

Curtis Jessop

NatCen, United Kingdom

Key words: Surveys, Online panels, Recruitment

Relevance & Research Question:

The recruitment stage is a key step in the set-up of a probability-based panel study, but it can also represent a substantial cost. A face-to-face recruitment approach in particular can be expensive, but a lower recruitment rate from a push-to-web approach risks introducing bias and putting a limit on what subsequent interventions to minimise non-response can achieve. This paper presents findings on using face-to-face and push-to-web recruitment approaches when recruiting to the NatCen Panel.

Methods & Data:

The NatCen Panel is recruited from participants in the British Social Attitudes survey (BSA). While normally conducted face-to-face, the 2020 BSA was conducted using a push-to-web approach in response to the Covid-19 pandemic. This study compares the recruitment rates and overall response rates of the face-to-face survey and push-to-web recruitment approaches. It also compares the demographic profile of panel survey participants recruited using each approach to explore to what extent any differences in recruitment and response rates translate into bias in the sample.

Results:

We find that, despite a higher recruitment rate and a higher participation rate in panel surveys, the overall response rate using a push-to-web recruitment approach is substantially lower than with a face-to-face recruitment approach, due to lower response rates at the recruitment interview. There are also differences in the sample profile. For example, people recruited using a push-to-web approach were more likely to be younger, better off financially, heavier internet users, and interested in politics.
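
The arithmetic behind this finding is worth spelling out: the overall response rate is the product of the stage-wise rates, so higher downstream rates cannot always offset a lower recruitment-interview rate. A sketch with purely hypothetical rates:

# Hypothetical stage rates chosen only to mirror the reported pattern.
def overall_rr(recruitment_interview_rr, panel_consent_rate, wave_rr):
    return recruitment_interview_rr * panel_consent_rate * wave_rr

print(overall_rr(0.40, 0.60, 0.65))  # face-to-face: higher start, lower consent
print(overall_rr(0.15, 0.70, 0.75))  # push-to-web: lower start, higher consent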

Added Value:

Findings from this study will inform the future design of recruitment for panel studies, providing evidence on the likely trade-offs that will need to be made between costs and sample quality.



Building an Online Panel of Migrants in Germany: A Comparison of Sampling Methods

Mariel McKone Leonard1, Sabrina J. Mayer1,2, Jörg Dollmann1,3

1German Center for Integration and Migration Research (DeZIM), Germany; 2University of Duisburg-Essen, Germany; 3Mannheim Center for European Social Research (MZES), University of Mannheim, Germany

Relevance and Research Question

The underrepresentation of members of ethnic minority or immigrant-origin groups in most panels available to researchers hinders the study of these individuals' experiences of daily life, of racism and discrimination, and of how these groups are affected by and react to important events.

Several approaches for reaching these groups exist, but each method introduces biases. Onomastic classification is the current gold standard for identifying minority individuals; however, it is cost-intensive and has been shown to systematically miss well-integrated individuals. Respondent-driven sampling is increasingly popular for sampling rare or hidden populations, while Facebook samples are the easiest and least expensive method to implement but yield non-probability samples.

In order to identify the most efficient and representative methods of sampling and recruiting potential participants, we compare three different sampling methods with regard to the resulting biases in distributions.

Methods and Data

We compare three sampling methods:

(1) mail push-to-web recruitment of a probability sample drawn with name-based (onomastic) classification

(2) web-based respondent-driven sampling (web-RDS)

(3) Facebook convenience sampling

In order to systematically test these methods against each other, we designed a set of experimental conditions. We test these conditions by sampling and recruiting a national sample of 1st-generation Portuguese migrants and their children.

We will compare the conditions on factors which may affect recruitment into a national German online panel, such as degree of integration, survey language and self-assessed language fluency, and income. Because we give individuals in the probability sample the option to respond via mail or web, we will additionally be able to compare differences across survey modes.

Results

We began fielding the probability sample condition at the beginning of March. We anticipate fielding of the additional conditions from April until June. This will allow us time to conduct analyses and develop preliminary results prior to the conference start date.

Added Value

Our paper will present an overview of our implementation of each method; our evaluation criteria; and preliminary results. We will provide a more realistic understanding of the potential biases, strengths, and weaknesses of each method, thus supporting researchers in making better informed methods choices.

 
2:00 - 3:00 CEST B2: Geodata in Market and Survey Research
Session Chair: Simon Kühne, Bielefeld University, Germany
 
 

Innovative segmentation using microgeography: How to identify consumers with high environmental awareness on a precise regional basis

Franziska Kern, Julia Kroth

infas360, Germany

Relevance & Research question: Sustainability is the topic of the day. But how can customers who tend towards an ecological lifestyle be identified? Customer segmentation is a well-known method for creating more efficient marketing and sales strategies. One of the core problems is the intelligent linking of very specific customer data with suitable general market data and, ultimately, the precise local determination of potential. This paper shows an innovative way to classify potential buyers at the address level.

Methods & Data: First, a survey is conducted with about 10,000 respondents. Questions on general needs, actions, and attitudes are used to calculate a sustainability score. By geocoding the respondents' addresses, the results can be enriched with more than 700 microgeographic variables, including information on sociodemographics, building type, living environment, energy consumption, and rent. A cluster analysis identifies five sustainability types, two of which tend towards sustainability. By means of a discriminant analysis, the generated segments are transferred to all 22 million addresses and 41 million households in Germany.
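
Conceptually, this is a two-step cluster-then-classify pipeline; the sketch below reproduces its shape on random placeholder features (the real model uses the survey's sustainability score and the 700+ microgeographic variables).

# Shape of the pipeline only; features here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
survey_X = rng.normal(size=(10_000, 12))  # enriched survey respondents
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(survey_X)

lda = LinearDiscriminantAnalysis().fit(survey_X, segments)
address_X = rng.normal(size=(100_000, 12))  # stand-in for all addresses
print(lda.predict(address_X)[:10])  # assigned sustainability type per address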

Results: As a result, households prone to a sustainable lifestyle can be easily identified. Type 1 of these sustainable households are mostly married couples with children, on average 51 years old, living in single-family houses with solar panels in medium-sized towns and rural communities. They have a monthly net income of €2,500 to €3,500, and sustainability, innovation, family, and pragmatism are important to them. Type 2 representatives live in big cities in apartment buildings, are between 18 and 49 years old, are often single, and have a monthly income between €1,500 and €2,500. To facilitate the application of the typology in marketing and sales practice, typical representatives of both cluster types were developed and described as personas.

Added value: The resulting information can be combined with existing customer data and thus used to identify corresponding sustainability attitudes within one's own customer portfolio. For the specific acquisition of new customers, the address-specific knowledge can be aggregated to any higher (micro-) geographical level and then used in sales and marketing strategies. Advertising activities can then be precisely targeted to the right type of potential buyers.



GPS paradata: methods for CAPI interviewer fieldwork monitoring and data quality

Daniil Lebedev, Aigul Klimova

HSE University, Moscow, Russia

Relevance & Research Question:

In recent years there has been a steady increase in interest in sensor- and app-based data collection, which can provide new insights into human behavior. However, the quality of such data lacks research focus and still needs further exploration. The aim of this paper was to compare various methods of using GPS paradata in CAPI surveys for monitoring interviewers, and to assess differences in GPS data quality across CAPI interviewers and survey regions.

Methods & Data:

We compared geofencing (distance between the locations at the beginning and at the end of an interview), curbstoning detection (testing for overly dense clusters of interview locations within an area), and interwave geofencing (distance between the locations of interviews with the same respondent across panel waves) to check whether they identify interviews of lower data quality in terms of completion times, criterion validity, and test-retest reliability, based on CAPI data from the 26th and 27th waves of the Russia Longitudinal Monitoring Survey with 491 and 631 respondents, respectively. Regarding GPS data quality, we compared missing data rates and the accuracy of the geolocation measures by interviewer and region.
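
A minimal sketch of the geofencing check: the haversine distance between an interview's start and end coordinates, with interviews beyond a threshold flagged for review (the 200 m threshold is illustrative, not the study's value).

# Haversine distance in metres; threshold below is illustrative only.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))  # mean Earth radius in metres

def geofence_flag(start, end, threshold_m=200):
    return haversine_m(*start, *end) > threshold_m

print(geofence_flag((55.7558, 37.6173), (55.7560, 37.6180)))  # plausible
print(geofence_flag((55.7558, 37.6173), (55.8558, 37.6173)))  # suspicious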

Results:

We found that the geofencing method was quite efficient in flagging "suspicious" interviews with lower data quality. The curbstoning method can be quite useful; however, the problem lies in selecting thresholds for the area and for the density of interviews within it. In addition, using accuracy-based measures as the GPS measurement error, instead of selecting a threshold, was found to be more efficient. In terms of data quality, the region of an interview proved to be the main factor associated with missing data and measurement error, with remote regions providing GPS data of higher quality.

Added Value:

The comparison of various interviewer-monitoring methods shows how GPS paradata can be used and which approaches can detect interviewers delivering lower data quality. The GPS data quality assessment is useful for the future employment of geolocation data within social science research, as it shows possible sources of measurement and item nonresponse errors.



Combining Survey Data and Big Data into Rich Data – The Case of Facebook Activities of Political Parties on the Local Level

Mario Rudolf Roberto Datts, Martin Schultze

University of Hildesheim, Germany

Relevance & Research Question: In times of big data, we have increasing amounts of easily accessible data which can be used to describe human behavior and organizations' activities at large sample sizes without the well-known biases of survey data. Yet big data is not particularly good at measuring attitudes and opinions, or information that has not been made public. Thus, the question arises how survey data and big data can be combined to enable a more complete picture of reality.

Methods & Data: As a case study, we analyse the Facebook communication of political parties in Germany, seeking both to describe and to explain their Facebook activities. While the descriptive part of our study can be investigated on the basis of data gathered via Facebook's official web interface, the "why" is examined via an online survey among the district associations of the most important political parties in Germany (n = 2,370), which ran from 2 May 2017 to 16 June 2017.

Results: By combining big data and survey data, we are able to describe the Facebook usage of the district associations over a period of eight years, and to identify several key factors explaining the very different Facebook activities of political parties in Germany, such as the number of members and the chairperson's expectations regarding the merits of social media for political communication. Furthermore, we can show that almost half of our respondents perceive their local party chapter as very active, while the API data indicate that they are at best moderate social media communicators.

Added Value: Only the combination of survey data and big data made it possible to draw a rich picture of the political usage of Facebook at the local level in Germany. Our findings also indicate that "objective" big data and individuals' perceptions of the same issue might differ substantially. Thus, we recommend analysts to combine big data and survey data whenever possible, and to be aware of the limitations of each.

 
2:00 - 3:00 CEST C2: Misinformation
Session Chair: Anna Rysina, Kantar GmbH, Germany
 
 

Emotional framing and the effectiveness of corrective information

Pirmin Stöckle

University of Mannheim, Germany

Relevance & Research Question:

Concerns about various forms of misinformation and their fast dissemination through online media have generated huge interest in ways to effectively correct false claims. An under-explored mechanism in this research is the role of distinct emotions: how do emotional appeals interact with corrective information? Specifically, I focus on the emotion of disgust, which has been shown to be linked to the moralization of attitudes, which in turn reduces the impact of empirical evidence on attitudes and makes compromise less likely. Substantively, I investigate the issue of genetically modified (GM) food. I hypothesize (i) that emotionally framed misinformation induces disgust and moralizes attitudes towards GM food, (ii) that this effect endures in the face of neutral correction even if the factual misperception is corrected, and (iii) that an emotional counter-frame reduces this enduring effect of the original frame.

Methods & Data:

I implement a pre-registered survey experiment within a panel study based on a probability sample of the general population in Germany (N ≈ 4,000). The experiment follows a between-subjects 3 x 3 factorial design manipulating both misinformation (none, low-emotion frame, high-emotion frame) and corrective information (none, neutral, emotional counter-frame). The informational treatments consist of fabricated but realistic online news reports based on the actual case of a later retracted study claiming to find a connection between GM corn and cancer. As outcomes, I measure factual beliefs about GM food safety, policy opinions, moral conviction, and emotional responses to GM food.
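
A minimal sketch of the random assignment such a between-subjects 3 x 3 design implies (the sample size is taken from the abstract; the arm labels and code are illustrative):

# Independent randomization to one misinformation and one correction arm.
import numpy as np

rng = np.random.default_rng(4)
n = 4000
misinformation = rng.choice(["none", "low_emotion", "high_emotion"], size=n)
correction = rng.choice(["none", "neutral", "counter_frame"], size=n)
cells = list(zip(misinformation, correction))  # 9 treatment combinations
print(len(set(cells)), "cells; first respondent:", cells[0])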

Results: - not yet available -

Added Value:

In the view of many scientists, genetic engineering provides avenues with large potential benefits, which may be impeded by public resistance possibly originating from misleading claims easily disseminated through online media. Against this background, this study provides evidence on the effect of emotionally charged disinformation on perceptions of GM food, and on ways to effectively correct false claims. In a broader perspective, these results inform further studies and policy interventions on other issues where disinformation loads on strong emotions, ranging from social policy and immigration to health interventions such as vaccinations.



Forwarding Pandemic Online Rumors in Israel and in Wuhan, China

Vered Elishar-Malka1, Shuo Seah2, Dana Weimann-Saks1, Yaron Ariel1, Gabriel Weimann3

1Academic College of Emek Yezreel; 2Huazhong University of Science and Technology, China; 3University of Haifa

Relevance and research question: Starting in the last quarter of 2019, the COVID-19 virus led to an almost unprecedented global pandemic with severe socioeconomic and political implications and challenges. As in many other large-scale emergencies, the media has played several crucial roles, among them as a channel of rumormongering. Since social media have penetrated our lives, they have become the central platform for spreading and sharing rumors, including about the COVID-19 epidemic. Based on the Theory of Planned Behavior and the Uses and Gratifications theory, this study explored the factors that affected social media users' willingness to spread pandemic-related rumors in Wuhan, China, and in Israel, via each country's leading social media platform (WeChat and WhatsApp, respectively).

Methods and data: We tested a multivariate model of factors that influence the forwarding of COVID-19 online rumors. Using an online survey conducted simultaneously in both countries between April and May 2020, 415 WeChat and 503 WhatsApp users reported their patterns of exposure to and spreading of COVID-19 rumors. As part of the questionnaire, users were also asked to report their motives for doing so.

Results: The main result was that in Wuhan, personal needs, negative emotions, and the ability to gather information significantly predicted willingness to forward rumors; in addition, rumors' credibility was found to be a significant predictor in the regression model. In Israel, only the first two predictors, personal needs and negative emotions, were found significant. The best predictor in Wuhan was personal needs, and the best predictor in Israel was negative emotions.

Added value: This study's findings demonstrate the significant roles that WeChat and WhatsApp, the leading social media in China and Israel, respectively, play in local users' lives during a severe national and global crisis. Despite the major differences between the two societies, several interesting similarities were found: in both cases, individual impetuses, shaped by personal needs and degree of negative feelings, were the leading motives behind spreading rumors over social networks. These findings may also help health authorities in planning the right communication strategies during similar situations.



Acceptance or Escape: A Study on the Embrace of Correction of Misinformation on YouTube

Junmo Song

Yonsei University, Korea, Republic of (South Korea)

Relevance & Research Question:

YouTube is one of the most important channels for producing and consuming political news in South Korea. A characteristic of YouTube is that not only traditional media but also Internet-based new media and individual news producers can freely provide news, because the platform does not play an active gatekeeping role.

In 2020, the death of North Korea's leader Kim Jong-un was reported indiscriminately by both traditional media and individual channels, but a definitive correction was issued at the national level. This study therefore uses this case as a kind of natural experiment to explore responses to correction.

This study aims to analyze the difference in the responses of producers and audiences when false information circulating on the YouTube platform is corrected and when it is not. Ultimately, this study seeks to explore the conditions under which the correction of misinformation accelerates or alleviates political radicalization.

Methods & Data:

Videos and comments were collected from the top 437 Korean channels in the Politics/News/Social category on YouTube, via the YouTube API provided by Google. Channels were then classified into two groups: traditional media, and new media including individual channels. In addition, the political orientation of comments was classified as progressive or conservative through supervised learning.
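
The classifier itself is not specified in the abstract; as a generic illustration of the supervised-learning step, the sketch below trains a TF-IDF plus logistic-regression pipeline on hand-coded comments. The training strings are placeholders, and real Korean text would need Korean-aware tokenization.

# Generic text-classification sketch; training data are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = ["example progressive comment", "example conservative comment"]
train_labels = ["progressive", "conservative"]  # from manual coding

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_comments, train_labels)
print(clf.predict(["a new, unlabeled comment"]))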

Results:

In a pilot analysis, the number of comments generally decreased after correction on both media and individual channels. In particular, the number of comments on conservative individual channels decreased drastically.

In addition, after the misinformation was corrected, the difference in political orientation between comments on individual channels and on media outlets decreased significantly or disappeared.

However, existing conservative users did not change their opinions in response to the correction of misinformation; rather, they were observed to move on immediately to other issues and consume those instead.

Added Value:

YouTube has been analyzed relatively less in politics than other platforms such as social networking sites and online communities. This study examines how misinformation and its correction are received in a political context through the case of Korea, where YouTube has a profound influence on politics.

 
2:00 - 3:00 CEST D2: GOR Best Practice Award 2021 Competition II
Session Chair: Otto Hellwig, respondi/DGOF, Germany
Session Chair: Alexandra Wachenfeld-Schell, GIM Gesellschaft für Innovative Marktforschung mbH, Germany

sponsored by respondi
 
 

High Spirits – with No Alcohol?! Going digital with Design Thinking in the non-alcoholic drinks category – a case study in unlocking the power of digital for creative NPD tasks

Irina Caliste3, Christian Rieder2, Janine Katzberg1, Edward Appleton1

1Happy Thinking People, Germany; 2Happy Thinking People, Switzerland; 3Bataillard AG

Relevance and Research Question

Our client, a Swiss wine distribution company, wished to improve its position in the growing non-alcoholic drinks category.

They were looking for a step-change in their innovation approach: embracing consumer centricity, digital working & Design Thinking principles.

Budgets were tight, and timing short. Could we help?

Design Thinking is proven and used widely offline – but 100% digital applications are still embryonic.

In this project we demonstrated how a careful mix of online qual tools – real-time and asynchronous – allowed us to innovate successfully, covering both ideation and validation phases in a highly efficient manner.

Methods and Data

Phase 1 involved a DIY-style pre-task to help stakeholders get to know their consumers: talking to friends and relatives about category experiences.

A digital workshop followed: all participants shared their experiences and identified the most promising customer types. Detailed personas were worked up, with a range of core needs.

External experts delivered short pep-talks as inspiration boosters.

Initial ideas were then developed – multiple prototypes visualised rapidly by an online scribbler.

Phase 2 was about interrogating & evaluating the ideas from phase 1.

“Real consumers” (recruited to match the personas) interacted directly with the client groups.

Customers re-joined later on for a high-speed pitch session: As in the TV format “The Dragon’s Den” (role-reversal), client groups presented their ideas to real customers.

Online mobile polling was used for a final voting session – individual voices helping to optimize the concepts.

Results

• A broad, rich range of actionable new ideas was generated.

• The client team was enthused. The desired mind-shift to Consumer Centricity and openness to innovation was achieved – a key step-change hoped for by the innovation manager & company CEO.

• DIY & a fusion of professional online qual research approaches complemented one another well. No quality was lost.

Added Value

• Digital Design Thinking works well and extremely efficiently for online creativity tasks.

• The rules of F2F co-creation success – playful, time-boxed, competitive, smaller groups – were all applicable online.

• Consumers jumping in and out of the workshop day is a new, efficient use of their time.

• Overall: creativity and online can work very well hand-in-hand!



The dm Corona Insight Generator – A mixed method approach

Oliver Tabino1, Mareike Oehrl1, Thomas Gruber2

1Q Agentur für Forschung, Germany; 2dm-drogerie markt GmbH + Co. KG, Germany

Relevance & Research Question:

As dm is one of the biggest German drugstore brands, the Corona pandemic confronts it with several major challenges in different areas and units. How political, social, and medical developments affect consumers, consumer behaviour, fears and concerns at the PoS, and the image of dm are the key issues in this project.

Methods & Data:

dm needed, above all, fast, timely, and reliable insights on current and highly dynamic developments. The weekly trend reports at the beginning of the project could only be achieved through a mix of methods and a highly efficient, flexible research process.

Diversity: we set up a very diverse project team to cover different points of view and lifeworlds.

Intelligence of the Q crowd: an internal knowledge management platform collecting weak signals and observations.

Web crawler: capturing, structuring and analysing the web

Social Listening: Tracking, reviewing and quantifying previously found trends

Netnography: content analytical approach to capture, understand and interpret need states

Google Trends Analyses: uncovering linked topics and search patterns from the consumers' perspective (see the sketch after this list).

AI: automated detection of trends.

Last but not least: expertise and research experience.
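
As one concrete example from the toolkit above, the Google Trends step could look like the sketch below, using pytrends, an unofficial Google Trends client; the keyword and timeframe are illustrative, not the project's actual queries.

# Illustrative pytrends call; keyword and timeframe are made up.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="de-DE", tz=60)
pytrends.build_payload(kw_list=["Desinfektionsmittel"], geo="DE",
                       timeframe="2020-03-01 2020-12-31")
interest = pytrends.interest_over_time()  # search interest over time
related = pytrends.related_queries()      # linked topics and queries
print(interest.tail())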

Because of the extreme time pressure, we opted for agile and tight project management.

The project includes special process steps:

Regular editorial meetings between dm and Q to challenge trends and weak signals before reporting and to check relevance for dm.

Extremely open communication between client and agency, which enables a deep understanding of dm’s questions and a quick and tailor-made preparation of insights.

Results:

The results are presented in a customised format suitable for management, including exemplary trend manifestations as well as concrete recommendations for dm. In addition, the results are embedded in the context of society as a whole:

Overview and classification of all found trends in a trend map.

The reporting cycle has been changed depending on the social dynamics and dm’s requirements.

Q also conducted short-term on-demand analyses.

Added Value:

The results are made available in an internal dm network for the different departments and units and are used by dm branches, communication teams, marketing, product development and corporate strategy.

The reports work at different company levels (granular and concrete vs. strategic) and for different areas such as marketing, private label development, communication, etc. In addition, the insights offer touchpoints for dm's key product categories such as colour cosmetics, skincare, washing and cleaning, etc.



The end of slide presentations as we know them: How to efficiently and effectively communicate results from market research?

Andreas Krämer1, Sandra Böhrs2, Susanne Ilemann2, Johannes Hercher3

1exeo Strategic Consulting AG, Germany; 2simpleshow gmbh, Germany; 3Rogator AG, Germany

Relevance & Research Question:

Videos are becoming increasingly popular in market research when it comes to capturing information (Balkan & Kholod 2015). At the same time, study results can be communicated in a targeted manner in the form of a video. This is especially true for explainer videos, i.e., short (1-3 min.) animated videos that convey key messages. Today, various platforms offer AI-based production of DIY explainer videos (Krämer & Böhrs 2020). However, a key question is whether information can be conveyed better via an explainer video than via a slide presentation. Another open question is whether learning effects can be improved through interaction.

Methods & Data:

As part of a customer survey (n=472, March 2021) by simpleshow, a leading provider of explainer videos, current results on the topic of home office were presented within a randomized 2x2 factorial design, in addition to questions on customer satisfaction and ease of use. An explainer video and a slide presentation were used as the two formats, each presented once without interaction and once with interaction (additional questions on the topic). Afterwards, a knowledge test was used to check how well the study results had been conveyed. In addition, the participants rated the type of presentation as well as its subjective effects.

Results:

The explainer video format achieves significantly better results in knowledge transfer than presenting the results as a slide presentation. With a maximum achievable score of 7, the explainer video without interaction achieves a value of 5.0, while the slide format achieves only 2.2 points. The differences are highly statistically significant and show a strong effect size. Interaction leads to slightly better results only in combination with the slide presentation. The subjective evaluation of the presentation format shows similar differences in level between the test groups. Taking viewing length into account, the explainer video without interaction achieves by far the best result.

Added Value:

The study results firstly demonstrate clear advantages in knowledge transfer for explainer videos compared with conventional slide presentations. Secondly, it appears that in the context of short presentations, interaction (additional questions about the topic) does not significantly increase learning, but it does increase viewing time. Thirdly, beyond the actual experiment, the study results underline that explainer videos can also play an important role in the presentation of market research results in the future.

 
3:00 - 3:10 CESTBreak
 
3:10 - 4:10 CESTKeynote 1
 
 

Election polling is not dead: Forecasts can be improved using wisdom-of-crowds questions

Mirta Galesic

Santa Fe Institute, United States of America

Election forecasts can be improved by adding wisdom-of-crowds questions to election polls. In particular, asking people about the percentage of their social contacts who might vote for different political options (the social-circle question) improved predictions compared to traditional polling questions about participants’ own voting intentions in three recent U.S. elections (2016, 2018, and 2020) as well as in three recent elections in European countries with a larger number of political options (the 2017 French, 2017 Dutch, and 2018 Swedish elections). Using data from large national online panels, we investigate three reasons that might underlie these improvements: an implicitly more diverse sample, decreased social desirability, and anticipation of social influences on how people will vote. Another way to use the wisdom of crowds is to ask people to forecast who will win the election (the election-winner question). We find that the social-circle question can be used to select individuals who are better election-winner forecasters, as they typically report more diverse social circles. A combination of social-circle, election-winner, and traditional own-intention questions performed best in the 2018 and 2020 U.S. elections. Taken together, our results suggest that election polling can produce accurate results when traditional questions are augmented with wisdom-of-crowds questions.
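
To make the aggregation idea behind the social-circle question concrete, here is a minimal sketch in Python, assuming hypothetical data; the forecasting models discussed in the keynote are considerably more sophisticated.

import pandas as pd

# Each row: one respondent's estimate of the percentage of their social
# contacts voting for each option (the social-circle question).
social_circle = pd.DataFrame({
    "party_a": [60, 40, 55],
    "party_b": [30, 45, 35],
    "party_c": [10, 15, 10],
})

# A naive wisdom-of-crowds forecast: average the reported percentages
# across respondents and renormalize so the shares sum to 100.
forecast = social_circle.mean()
forecast = 100 * forecast / forecast.sum()
print(forecast.round(1))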

 
4:10 - 4:20 CESTBreak
 
4:20 - 5:30 CESTA3: New Technologies in Surveys
Session Chair: Ines Schaurer, City of Mannheim, Germany
 
 

Participation of household panel members in daily burst measurement using a mobile app

Annette Jäckle1, Jonathan Burton1, Mick Couper2, Brienna Perelli-Harris3, Jim Vine1

1University of Essex, United Kingdom; 2University of Michigan, USA; 3University of Southampton, United Kingdom

Relevance:

Mobile applications offer exciting new opportunities to collect data, either passively using inbuilt sensors, or actively with respondents entering data into an app. However, in general population studies using mobile apps, participation rates have to date been very low, ranging between 10 and 20 percent. In this paper we experimentally test the effects of different protocols for implementing mobile apps on participation rates and biases.

Methods:

We used the Understanding Society Innovation Panel, a probability sample of households in Great Britain that interviews all household members aged 16+ annually. During the 2020 annual interview, respondents were asked to download an app and use it every evening for 14 days to answer questions about their experiences and wellbeing that day. We experimentally varied: i) at what point in the annual interview we asked respondents to participate in the wellbeing study (early vs. late), ii) the length of the daily questionnaire (2 vs 10 mins), iii) the incentive offered for the annual interview (ranging from £10 to £30), and iv) the incentives for completing the app study (in addition to £1 a day: no bonus; £10 bonus for completing all days; £2.50 bonus a day on four random days).

Results:

Of the 2,270 Innovation Panel respondents, 978 used the app at least once (43%). The length of the daily questionnaire, the incentives for the annual interview, and the incentives for the app study had no effects on whether respondents downloaded the app during the interview, whether they used the app at least once, or the number of days they used the app. However, respondents who were invited to the app study early in the annual interview were 8 percentage points more likely to participate than those invited late in the interview (47% vs 39%, p<0.001) and respondents who completed the annual interview online were 28 percentage points more likely to participate than those who completed the interview by phone (48% vs 20%, p<0.001). Further analyses will examine the reasons for non-participation and resulting biases.

Value:

This study provides empirically based guidance on best practice for data collection using mobile apps.



App-Diaries – What works, what doesn’t? Results from an in-depth pretest for the German Time-Use-Survey

Daniel Knapp, Johannes Volk, Karen Blanke

Federal Statistical Office Germany (Destatis)

Relevance & Research Question:

The last official German Time-Use-Survey (TUS) in 2012/2013 was based mainly on paper mode. In order to modernize the German TUS for 2022, two new modes were added – an app and a web instrument. As the literature on how to design specific elements of a diary-based TUS App is still scarce, our goal was to derive best-practice guidelines on what works and what doesn’t when it comes to designing and implementing such an App-Diary (e.g. whether and how to implement hierarchical vs. open text activity search functionalities).

Methods & Data:

Results are based on an in-depth qualitative pretest with 30 test persons in Germany. Test persons were asked to (1) fill out a detailed time-use diary app for two days, (2) document first impressions, issues, and bugs in a short questionnaire, and (3) participate in individual follow-up cognitive interviews. Combining these data allowed us to evaluate various functionalities and implementations in detail.

Results:

Final results of the pretest are still a work in progress and will be submitted at a later date. The presentation will also include a brief overview of the upcoming federal German Time-Use-Survey 2022 and its transformation towards Online First.

Added Value:

The study offers new insights on how to design a diary-based time-use app in the context of the harmonized European Time-Use-Survey. It expands on the literature by focusing on specific elements of a diary-based app and proposing best-practice guidelines on several detailed aspects, such as app structure, diary overview, and activity search functionality.



Using text analytics to identify safeguarding concerns within free-text comments

Sylvie Hobden, Joanna Barry, Fiona Moss, Lloyd Nellis

Ipsos MORI, United Kingdom

Relevance & Research Question:

Ipsos MORI conducts the Adult Inpatient and Maternity surveys on behalf of the Care Quality Commission (CQC). Both surveys collect patient feedback on recent healthcare experiences via a mixture of multiple choice and free-text questions. As the unstructured free-text comments could potentially disclose harm, all comments are manually reviewed and allocated a flag indicating whether any safeguarding concerns are disclosed. Flagged comments are escalated to the CQC for investigation. We piloted an approach that uses machine learning to make this process more efficient.

Methods & Data:

IBM SPSS Modeler was used to construct a model, which was developed through multiple stages. We aimed to use the model to separate safeguarding comments (which require review and escalation) from non-safeguarding comments (which may only require spot-checking of a random sample). A minimal illustrative sketch of this workflow follows the three stages below.

1. 2019 Adult Inpatient and Maternity pilot comments (n=9,862), which had previously been manually reviewed for safeguarding issues, were used to train the model to identify potential safeguarding comments. The model identified a relatively small pool of comments.

2. The model output was compared with the previous manual review to assess accuracy. Where the model failed to identify safeguarding comments correctly, a qualitative review was conducted to identify how the model should be revised to increase accuracy.

3. 2019 Adult Inpatient and Maternity mainstage comments (n=60,754) were analysed by the model. This sample was independent of the pilot sample, ensuring the model's accuracy was generalisable across all survey comments.
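
The abstract does not disclose the SPSS Modeler configuration; purely to illustrate the three-stage train/evaluate/apply workflow described above, here is a minimal sketch in Python with scikit-learn. All file and column names are hypothetical.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Stage 1: train on manually reviewed pilot comments (hypothetical file).
pilot = pd.read_csv("pilot_comments.csv")
model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
model.fit(pilot["comment"], pilot["safeguarding_flag"])

# Stage 2: compare model output with the manual review to find failure cases.
print(classification_report(pilot["safeguarding_flag"], model.predict(pilot["comment"])))

# Stage 3: apply the revised model to the independent mainstage sample.
mainstage = pd.read_csv("mainstage_comments.csv")
mainstage["predicted_flag"] = model.predict(mainstage["comment"])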

Results:

On average, the model identified 44% of comments as non-safeguarding with high accuracy. Given the scale of the surveys, this could equate to around 27,000 fewer comments needing manual review each year. This would provide cost savings and enable safeguarding comments to be escalated to the CQC more quickly. We are currently exploring how the model will be used for the 2020/2021 surveys.

Added Value:

Text analytics uses machine learning to assist in the translation of large volumes of unstructured text into structured data. This is an innovative application of the approach which has resulted in huge efficiencies and could be developed and implemented on other surveys.

 
4:20 - 5:30 CESTB3: Smartphone Sensors and Passive Data Collection
Session Chair: Simon Kühne, Bielefeld University, Germany
 
 

Online Data Generated by Voice Assistants – Data Collection and Analysis Using the Example of the Google Assistant

Rabea Bieckmann

Ruhr-Universität Bochum, Germany

Relevance & Research Question:

Voice assistants play an increasing role in many people's everyday lives. They can be found in cars, cell phones, smart speakers, and watches, and their fields of application keep growing. Their use is seldom questioned, even though children now grow up with them and voice assistants are often a person's only "conversation partner" over the course of a day. At the same time, a large amount of data is automatically generated, and ongoing online logs in the form of conversations are created. The question arises as to how this mass of personal data can be used for sociological research and, based on this, what the special features of communication between humans and voice assistants are.

Methods & Data:

The data considered consist of conversation logs between one person and the Google Assistant over a whole year. In addition, there is information about whereabouts, music the person listened to, shopping lists, and many other aspects. The log entries carry time markers and, in most cases, are stored together with the recorded audio files. The logs can be downloaded as PDF files from the user’s personal account. They are strictly anonymized and examined with a qualitative approach using conversation analysis.

Results:

Collecting and processing the data for sociological research requires considerable effort. The barriers to obtaining the data are very high, but once available, the data are of great value because they contain an enormous amount of information. Communication between human and voice assistant is also distinctive, as it differs greatly from other forms of communication: it is characterized by an imperative way of speaking, paraphrases, and constant repair mechanisms. The personalization of the voice assistant is another key finding in the analysis of human-technology communication.

Added Value:

The study provides initial results and suggestions for approaches to the sociological handling of data from voice assistants. In addition, the findings on the specifics of communication between people and voice assistants are relevant as these devices increasingly become part of households, workplaces, and public space, thus changing social dynamics.



Eyes, Eyes, Baby: BYOD Smartphone Eye Tracking

Holger Lütters1, Antje Venjakob2

1HTW Berlin, Germany; 2oculid UG (haftungsbeschränkt), Germany

Relevance & Research Question

The methodology of eye tracking is an established toolset typically used in a laboratory setting. The established technology of infrared devices produces solid results but makes remote field testing impossible. Researchers have accepted the lower quality of webcam-based tracking as a trade-off for better access to more diverse research samples.

With the rise of the smartphone as the preferred digital device, the methodology has not kept pace. Tests of app concepts or mobile websites still take place in a confined environment of established hardware that is, in effect, better suited to eye tracking on bigger screens.

The approach presented brings the technology right into the hands of a research participant, who can use their own device’s camera while performing research tasks. The idea of BYOD (Bring your own device) is not new, but now it offers a high-tech toolset with exceptional quality.

Methods & Data

The approach presented offers an online framework through which even less tech-savvy researchers can design, distribute, and analyze a smartphone eye-tracking study. The tool captures a participant's eye movements and touch interactions on the screen. The recording of thinking aloud helps to better understand the individual’s attention while performing research tasks. The entire interaction data is uploaded to the online platform and can be analyzed individually or comparatively.

The contribution shows first experiments with the new eye-tracking app from the Berlin-based start-up Oculid, demonstrating how advertising material, online task solving, and a market research questionnaire can be eye-tracked alongside user behaviour.

Results

The contribution will walk through the process of setting up a study, distributing it, and analyzing the data, drawing on several experiments performed by external researchers with the tool. The entire process of set-up, field recruitment, connection to external tools, and analysis will be explained, with all its advantages, insights, and challenges.

Added Value

Smartphone usage is not only growing in quantity; mobile camera technology now outperforms many non-mobile installations. The smartphone BYOD concept may therefore be more than just competitive.



Separating the wheat from the chaff: a combination of passive and declarative data to identify unreliable news media

Denis Bonnay1,2, Philippe Schmitt1,3

1Respondi; 2Université Paris Nanterre; 3Toulouse School of Economics

Relevance & Research Question: Fake news website detection

Hype aside, fake news has grown massive and threatens the proper functioning of our democracies. The detection of fake news has thus become a major focus of research, both in the social media industry and in academia. While most approaches to the issue aim at classifying news items as fake or legitimate, one may also look at the problem in terms of sources’ reliability, aiming at a classification of news emitters as trustworthy or deceptive. Our aim in the present research is to explore the prospects for an automated solution to this problem by trying to predict and extend an existing man-made classification of news sources in France.

Methods & Data: browsing data, random forest, NLP, deep learning

A sample of 3,192 French panelists aged 16 to 85 had their online browsing activity recorded for one year, from November 2019 to October 2020. Additionally, a survey was conducted in May 2020 to gather information about their socio-demographics and degrees of belief in various fake news. On this basis, we use four kinds of predictors: (1) websites’ traffic (mean time spent, etc.), (2) origins of traffic, (3) websites’ audience features, and (4) types of articles read (clustering title embeddings obtained via a fine-tuned BERT language model). Our predictive target is a binary adjusted version of Le Monde’s media classification in which media outlets are either reliable or not (61% vs. 39% of the total sample).

Results:

Predictions are made with a random forest algorithm and K-fold cross-validated with K=10. Combining all sets of variables, we achieve 75.42% accuracy on the test set. The top five predictors are average age, number of pages viewed, total time spent on websites, category of preceding visits, and panelist clusters based on degrees of belief in fake news.
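
As a rough illustration of this modelling step, the sketch below fits a random forest with 10-fold cross-validation in Python/scikit-learn; the feature engineering described above is not reproduced, and the file and column names are hypothetical.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("website_features.csv")  # one row per news website (hypothetical)
X = data.drop(columns=["reliable"])         # predictors (1)-(4) from the abstract
y = data["reliable"]                        # binary Le Monde-based label

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # K-fold cross-validation with K=10
print(f"Mean CV accuracy: {scores.mean():.4f}")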

Added Value: combining passive and declarative data

Combining passive and declarative data is a new standard for online research. In this study, we show the potential of such an approach for fake news detection, which is usually tackled by means of brute-force NLP or pattern-based algorithms.



Measuring smartphone operating system versions in surveys: How to identify who has devices compatible with survey apps

Jim Vine1, Jonathan Burton1, Mick Couper2, Annette Jäckle1

1University of Essex, United Kingdom; 2University of Michigan, USA

Relevance:

Data collection using mobile apps relies on sample members having compatible smartphones, in terms of operating system (OS) and OS version. This potentially introduces selection bias. Measuring OS version is however difficult. In this paper we compare the quality of data on smartphone OS version collected with different methods. This research arose from analyses of the uptake of the coronavirus test & trace app in the UK, which requires smartphones running Android 6.0 and up or iOS 13.5 and up.

Methods:

We use data from the Understanding Society COVID-19 study, a probability sample aged 16+ in the UK. The analyses are based on 10,563 web respondents who reported having an Android or iOS smartphone. We compare three ways of measuring smartphone OS version: i) using the user agent string (UAS), which captures characteristics of the device used to complete the survey, ii) asking respondents to report the make and model of their smartphone and matching that to an external database, and iii) asking respondents to report the OS version of their smartphone (by checking its settings, typing “whatismyos.com” into its browser, or scanning a QR code opening that webpage).
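
As an illustration of approach (i), the sketch below extracts the OS version from a user agent string with regular expressions. This is a simplified, hypothetical parser; production code would rely on a maintained UAS-parsing library, since real user agent strings have many edge cases.

import re

def os_version_from_uas(uas):
    """Return ('Android'|'iOS', major, minor) or None if no smartphone OS found."""
    m = re.search(r"Android (\d+)(?:\.(\d+))?", uas)
    if m:
        return ("Android", int(m.group(1)), int(m.group(2) or 0))
    m = re.search(r"OS (\d+)_(\d+)", uas)  # iOS style, e.g. "CPU iPhone OS 13_5"
    if m:
        return ("iOS", int(m.group(1)), int(m.group(2)))
    return None

# Example: check compatibility with an app requiring Android 6.0+ / iOS 13.5+.
parsed = os_version_from_uas("Mozilla/5.0 (iPhone; CPU iPhone OS 13_5 like Mac OS X)")
if parsed:
    os_name, major, minor = parsed
    compatible = ((os_name == "Android" and (major, minor) >= (6, 0))
                  or (os_name == "iOS" and (major, minor) >= (13, 5)))
    print(os_name, major, minor, "compatible:", compatible)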

Results:

The UAS provided a smartphone OS version for just 58% of respondents, as the rest did not use a smartphone to complete the survey; 5% of the OS versions were too old to use the coronavirus app.

Matching the self-reported smartphone make and model to a database provided an OS version for 88% of respondents; only 2% did not answer the question, but 10% of answers could not be matched to the database; 10% of OS versions were too old for the app.

When asked for the OS version of their smartphone, 66% answered, 31% said don’t know and 3% refused or gave an incomplete answer; 15% reported an OS version that was too old.

Further analyses will examine the reasons respondents gave for not providing the OS version and cross-validate the three measures.

Added Value:

This study provides evidence on how to identify sample members who have smartphones with the required OS version for mobile app-based data collection.

 
4:20 - 5:30 CESTC3: COVID-19 and Crisis Communication
Session Chair: Pirmin Stöckle, University of Mannheim, Germany
 
 

The Mannheim Corona Study - Design, Implementation and Data Quality

Carina Cornesse, Ulrich Krieger

SFB 884, University of Mannheim, Germany

Relevance & Research Question:

The outbreak of COVID-19 has sparked a sudden demand for fast, frequent, and accurate data on the societal impact of the pandemic. To meet this demand quickly and efficiently, within days of the first containment measures in Germany in March 2020, we set up the Mannheim Corona Study (MCS), a rotating panel survey with daily data collection on the basis of the long-standing probability-based online panel infrastructure of the German Internet Panel (GIP). In a team effort, our research group was able to inform political decision makers and the general public with key information to understand the social and economic developments from as early as March 2020 as well as advance social scientific knowledge through in-depth interdisciplinary research.

Methods & Data:

This presentation gives insights into the MCS methodology and study design. We will provide a detailed account of how we adapted the GIP to create the MCS and describe the daily data collection, processing, and communication routines that were the cornerstones of our MCS methodology. In addition, we will provide insights into the necessary preconditions that allowed us to react so quickly and set up the MCS so early in the pandemic. Furthermore, we will discuss the quality of the MCS data in terms of the development of response rates as well as sample representativeness across the course of the MCS study period.

Results:

Our results show how the German Internet Panel could be transformed into an agile measurement tool in times of crisis. Participation rates were stable over the 16 weeks of data collection. Data quality indicators, such as the average absolute relative bias comparing key survey indicators to the German Mikrozensus, show stably low deviations from the benchmark.

Added Value:

In this presentation we demonstrate how an existing research infrastructure can be quickly transformed into an instrument for measuring important societal changes or crisis events.



Tracking and driving behaviour with survey and metered data: The influence of incentives on the uptake of a COVID-19 contact tracing app

Holger Nowak, Myrto Papoutsi

respondi, Germany

Relevance & Research Question:

Tracing the chain of infections is a substantial part of the strategy against SARS-CoV-2. But how is the German Corona Tracing App (Corona-Warn-App, CWA) used? Who are the users? Could uptake be boosted by simply informing the population? Or are monetary incentives more effective? We study these questions by combining survey data with passively metered behavioral data. Passive metering not only measures app usage more accurately but also helps to understand sensitive behaviour that is affected by social desirability.

Methods & Data:

A 100+ day study (June to September 2020): a survey with 2,500 participants and 1,100 participants of the passive tracking panel, which measures usage of the CWA.

3 wave survey:

• Baseline. Random assignment to 2 informational treatments and a control group

• Re-measurement of attitudes and behaviour. Assignment to 3 monetary treatments and a control group

• Last measurement

The control group contains not only surveyed respondents but also part of the metered panel that was not interviewed.

Results:

First, we provide evidence on covariates linked with app usage. We observe higher usage rates among people who are already well informed and adhere to public health guidelines. Furthermore, higher proportions of highly educated, digitally competent, and older people use the app, as do those who report trusting the government. We show that the impact of the information treatments on uptake is negligible, whereas small financial offers increase app usage substantially.

Added Value:

Due to the app’s privacy-by-default approach, individual-level determinants of usage have been difficult to identify. This study provides important behavioral evidence and highlights the advantage of passive data for measuring potentially socially desirable behaviour, as well as complex over-time behaviour that is difficult to report. It also shows how such data can be combined with an experimental design to evaluate the effects of possible policy interventions. While the nature of the online access panel prohibits strong conclusions about overall usage rates in the population of interest (smartphone users whose phones are technically compatible with the tracing app are in any case virtually impossible to sample from), conditional usage rates across different demographic and behavioral groups are informative about app usage.



Are people more likely to listen to experts than authorities during the Covid-19 crisis? The case of crisis communication on Twitter during the Covid-19 pandemic in Germany

Larissa Drescher1, Katja Aue1, Wiebke Schär2, Anne Götz2, Kerstin Dressel2, Jutta Roosen1

1c3 team, Germany; 2sine - Süddeutsches Institut für empirische Sozialforschung e.V. | sine-Institut gGmbH, Germany

Relevance & Research Question:

The worldwide spread of the Covid-19 virus has led to an increased need for information related to the pandemic. Social media plays an important role in the population's search for information.

Both authorities and Covid-19 experts use Twitter to share their own statements and opinions directly with the Twitter community – unfiltered and independently of traditional media. Little is known about the Twitter communication behavior of these actors. This study aims to analyze the characteristics of, and differences between, authorities' and experts' communication about the Covid-19 virus on Twitter.

Methods & Data: The evaluation is carried out using sentiment analysis and quantitative text analysis. Tweets posted between January 2020 and January 2021 by 40 German accounts – experts (n = 18) and public health authorities (n = 22) – are analyzed. For the analysis, 35,645 relevant tweets covering Covid-19 topics were identified. This study is commissioned by the Federal Office for Radiation Protection in Germany.

Results: First findings show that experts (58.6%) have 1.4 times more followers and tweet more often about Covid-19 than authorities (41.4%). Due to a much broader range of topics, authorities tweeted significantly more about non-Covid-19 topics in 2020 than experts did. Another important finding is that the volume of Covid-19 tweets replicates the Covid-19 case curve, including lower Twitter activity during the summer of 2020. Remarkable differences are also revealed in the structural, content, and style elements of crisis-communication tweets. While authorities' Covid-19 tweets are clearly designed to follow the known rules of successful social media communication, with a higher rate of structural elements such as hashtags, URLs, and images, experts’ tweets are much plainer. In contrast, experts address their followers more directly than authorities do, via style elements such as the use of the first or second person. Overall, experts' Covid-19 tweets are far more successful than authorities', as shown by a mean retweet rate seven times that of authorities.

Added Value: The results of this study provide not only insights into risk and crisis communication during the Covid-19 pandemic, but also helpful conclusions for future (health) crisis situations, particularly for communication between authorities and the population.



Targeted communication in weather warnings: An experimental approach

Julia Asseburg1, Nathalie Popovic2

1LINK Institut, Switzerland; 2MeteoSchweiz, Switzerland

Relevance & Research Question: Weather warnings, risk communication

Weather warnings inform the public about potentially dangerous weather events so that people can take precautionary measures to avoid harm and damage. However, weather warnings are often not user-oriented, which leads to poor understanding and low compliance rates. The present study focuses on the question of which elements of a warning message are most important in influencing risk perception and intended behavioural change.

Methods & Data: Vignette experiment, implicit associations, Web survey experiment

Using a single association test in a survey vignette experiment with 2,000 Swiss citizens from all three language regions, we focus on implicit associations that citizens have, or do not have, when they see a warning message with varying elements (physical values, impact information, behavioural recommendations, warning level, and labelling of the warning level). We test for associations with different concepts that play a role in the pre-decisional process of a warning response (e.g. personal relevance, risk perception). The experimental setup allows us to test for causal relationships between the different elements of a warning message and the intended behavioural response. Measuring the implicit associations enables us to better understand the first reactions triggered by the warning elements and how these impact intended behaviour.

Results: Multi-level analyses

Results show that risk and relevance have to be addressed unconsciously for weather warnings to impact the intention to act. The emphasis on behavioural recommendations and potential effects in weather warnings has a wake-up-call character. In a nutshell, people need to know to what extent the weather can affect their well-being and what they can do to protect themselves.

Added Value: Targeted communication to the public

First, by conducting a survey vignette experiment in combination with the single association test, we apply an experimental setup that will open the black box of the perception of targeted communication. Second, the results add direct practical value, as they inform the development of user-oriented weather warnings. Finally, the study contributes to research on risk perception and communication by providing further insight into the cognitive process that underlies the decision to take protective action.

 
4:20 - 5:30 CESTD3: ResearchTech
Session Chair: Stefan Oglesby, data IQ AG, Switzerland
 
 

ResearchTech: what are the implications for the insight industry?

Steve Mast

Delvinia, Canada

Recently, “ResearchTech” or “ResTech” has emerged as a new term in the world of consumer and data insight. Leading experts view it as the “next big thing”. Indeed, there is a new generation of online platforms and tools that are fundamentally changing the relationship between research and decision makers. ResearchTech is expected to boost the agility of the research process, increase speed and massively expand the circle of users of data-based insights. The presentation will give a brief introduction about the current state of ResearchTech, highlight relevant use cases, and talk about current and future implications for marketers and insight professionals.



Leveraging deep language models to predict advertising effectiveness

Christian Scheier

aimpower GmbH, Germany

While advertising testing has become more agile in the past few years, it still takes considerable time and effort to develop and deploy these tests, analyze the results, and derive key learnings on which to act.

This often means that tests are only conducted at the end of creative development. Moreover, many of these tests lack a clear predictive relationship with actual in-market results.

We show that by leveraging recent developments in deep language modelling, it becomes possible to predict actual sales results from just a single open-ended question that respondents answer after having been exposed to the copy. Additional metrics then provide immediate insights into the reasons for a successful or failed concept. By implementing this solution as a SaaS platform, organizations for the first time have the opportunity to evaluate concepts and advertising assets along the entire development process, quickly iterating across versions to optimize ad effectiveness and thus sales.
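
The underlying model is proprietary and not described here; purely as a hypothetical sketch of the general idea, one could embed the open-ended answers with a pretrained language model, aggregate them per ad, and regress in-market results on the resulting vectors.

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.linear_model import Ridge

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def ad_vector(answers):
    # Average the embeddings of all open-ended answers collected for one ad.
    return encoder.encode(answers).mean(axis=0)

# Hypothetical training data: answer lists per ad and observed sales lift.
ads_answers = [["Loved the jingle", "Felt generic to me"],
               ["Very memorable brand moment"]]
sales_lift = np.array([0.8, 2.1])

X = np.vstack([ad_vector(a) for a in ads_answers])
model = Ridge(alpha=1.0).fit(X, sales_lift)
print(model.predict(X))  # predicted sales lift for the training ads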

The solution will be presented live, with (anonymized) insights on how clients (FMCG Top 3) actually use it and the results they have achieved.



Opendata for better customer understanding

Christian Becker

FREESIXTYFIVE, Germany

Opendata is everywhere, and companies need to keep pace with an accelerating speed of change that challenges communication, products, services, and the business model itself.

Research has always helped to analyze data and provide solid insights for strategic planning.

FREESIXTYFIVE developed a SMART INTELLIGENCE framework to deliver interoperability throughout the research stack.

By delivering synchronized opendata we enrich structural research data with real-time insights. Thanks to this “hypercontexting” we are able to develop quick individual market maps and data training models across different industries. Smart Market Intelligence empowers customers to validate new markets, assess innovative product ideas or optimize their existing market activities through a better understanding of their customers and users.

The innovative framework will be illustrated with specific, real-world use cases, including the Gaming industry.



Advent of Emotion AI in Consumer Research

Lava Kumar

Entropik Tech, India

Emotion AI is adding the 3As (Accuracy, Agility, and Actionability) to augment the traditional ways of consumer research. With more than 90% accuracy and computer-vision-based methods, Emotion Insights make it easy for brands to humanize their media, digital, and shopper experiences, thus building a positive emotional connection with their customers and increasing conversions.

Join Lava Kumar, CPO and Founder of Entropik Tech as he talks about:

• Introducing Emotion AI to Consumer Research

• Facial Coding, Eye Tracking, Voice AI, and Brainwave Mapping

• The 3As of Emotion AI: Accuracy, Agility, and Actionability

• Emotion AI in Media, Digital and Shopper Research

 
8:00 - 10:00 CESTVirtual GOR 21 Party
sponsored by mo'web
 
Date: Friday, 10/Sept/2021
11:00 CESTTrack A_1: Track A: Survey Research: Advancements in Online and Mobile Web Surveys
 
11:00 CESTTrack A_2: Track A: Survey Research: Advancements in Online and Mobile Web Surveys
 
11:00 CESTTrack B: Data Science: From Big Data to Smart Data
 
11:00 CESTTrack C: Politics, Public Opinion, and Communication
 
11:00 CESTTrack D: Digital Methods in Applied Research
 
11:00 - 12:00 CESTA4.1: Respondent Behavior and Data Quality I
Session Chair: Florian Keusch, University of Mannheim, Germany
 
 

Satisficing Behavior across Time: Assessing Negative Panel Conditioning Using a Randomized Experiment

Fabienne Kraemer1, Henning Silber1, Bella Struminskaya2, Michael Bosnjak3, Joanna Koßmann3, Bernd Weiß1

1GESIS - Leibniz-Institute for the Social Sciences, Germany; 2Utrecht University, Department of Methodology and Statistics, Netherlands; 3ZPID - Leibniz-Institute for Psychology, Germany

Relevance and Research Question:

Satisficing response behavior (i.e., taking shortcuts in the response process) is a threat to data quality. Previous research provides mixed evidence on whether satisficing increases over time in a panel study, impairing the quality of survey responses in later waves (e.g., Schonlau & Toepoel 2015; Sun et al. 2019). However, these studies were non-experimental, so little is known about what accounts for possible increases. Specifically, past research did not distinguish between the effects of general survey experience (process learning) and familiarity with specific questions (content learning).

Methods and Data:

Participants of a non-probability German online access panel (n=882) were randomly assigned to two groups. The experimental group received target questions in all six panel waves, whereas the control group received these questions only in the last wave. The target questions included six between-subject question design experiments, manipulating (1) the response order, (2) whether the question included a ‘don’t know’ option, and (3) whether respondents received a question in the agree/disagree or the construct-specific response format. Our design, in which all respondents have the same survey experience (process learning), allows us to test the hypothesis that respondents increasingly employ satisficing response strategies when answering identical questions repeatedly (content learning).

Results:

Since the study will not be completed until the end of March 2021, we conducted preliminary analyses using within-subject comparisons of the first three waves of the experimental group. The question design experiments provide evidence of all three forms of satisficing (i.e., primacy effects, acquiescence, and saying ‘don’t know’) in each of the three waves of the study. These response effects have an average magnitude of 10 to 15 percentage points. However, there seems to be no clear pattern of increase or decrease in satisficing over time, disconfirming the content learning hypothesis.

Added value:

Currently, it is unclear how process and content learning affect satisficing response behavior across waves in longitudinal studies. Our findings contribute to understanding whether there are unwanted learning effects when respondents are asked to complete identical survey questions repeatedly – a study design that is critical for monitoring social change.



Consistency in straightlining across waves in the Understanding Society longitudinal survey

Olga Maslovskaya

University of Southampton, United Kingdom

Relevance & Research Question: Straightlining is one of the important indicators of poor data quality. Straightlining can be identified when respondents give identical answers to batteries of attitudinal questions. Previous research suggests that the likelihood of straightlining is higher in the online mode of data collection than in face-to-face interviews, and that it differs by the device respondents use in mixed-device online surveys. As many social surveys nowadays move either to mixed-mode designs with an online mode available for some respondents or to online data collection as a single mode, it is important to address data quality issues in the longitudinal context. When batteries of questions are asked in different waves of longitudinal surveys, it is possible to identify whether some individuals consistently choose straightlining as a response-style behaviour across waves. This paper addresses the research question of whether there is consistency in straightlining behaviour within individuals across waves in the online component of a longitudinal survey and, if so, what the characteristics of these respondents are.
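
As an illustration, one simple operationalization flags a respondent as straightlining when they give identical answers to every item in a battery; the paper's exact measure may differ. The sketch below uses hypothetical data.

import pandas as pd

def straightlined(df, battery):
    # True where a respondent gave the same answer to all items in the battery.
    return df[battery].nunique(axis=1) == 1

# Hypothetical wave data with a five-item attitude battery.
wave = pd.DataFrame({
    "att1": [3, 5, 2], "att2": [3, 4, 2], "att3": [3, 5, 2],
    "att4": [3, 2, 2], "att5": [3, 1, 2],
})
wave["straightliner"] = straightlined(wave, ["att1", "att2", "att3", "att4", "att5"])
print(wave["straightliner"])  # respondents 0 and 2 are flagged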

Methods & Data: The project uses the online components of Understanding Society Survey Waves 8-10. These data provide a unique opportunity to study straightlining over time in an online mixed-device longitudinal survey in the UK context. In Wave 8, around 40% of households responded in the online mode, and in subsequent waves the proportions were even higher. Longitudinal data analysis is used to address the main research question.

Results: Preliminary results are already available; the final results will become available in June 2021.

Added Value: This project addresses an important issue of data quality in longitudinal mixed-device online surveys. Once the individuals who consistently choose straightlining response behaviour across waves are identified, they can be targeted during survey data collection, either through real-time data quality evaluation or by using information about data quality from a previous wave in the current wave. Tailored treatment can then be employed to improve the quality of data from these respondents.



Effects of ‘Simple Language’ on Data Quality in Web Surveys

Irina Bauer, Tanja Kunz, Tobias Gummer

GESIS – Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question:

Comprehending survey questions is an essential step in the cognitive response process that respondents go through when answering questions. Respondents who have difficulties understanding survey questions may not answer at all, drop out of the survey, give random answers, or take shortcuts in the cognitive response process – all of which can decrease data quality. Comprehension problems are especially likely among respondents with low literacy skills. We investigate whether the use of ‘Simple Language’ – clear, concise, and uncomplicated language for survey questions – helps mitigate comprehension problems and thus increases data quality. ‘Simple Language’ is a linguistically simplified version of standard language, characterized by short and succinct sentences with a simple syntax that avoids foreign words, metaphors, and abstract concepts.

Methods & Data:

To investigate the impact of ‘Simple Language’ on data quality, we conducted a 10-minute web survey among 4,000 respondents of an online access panel in Germany in December 2020. Respondents were randomly assigned to a questionnaire in ‘Standard Language’ or to a version of the questionnaire that had been translated into ‘Simple Language’. We compared both groups with respect to various measures of data quality, including item nonresponse, nondifferentiation, and speeding. In addition, we investigated several aspects of respondents’ survey assessment.

Results:

Our findings to date are based on preliminary analyses. We found an effect of language on item nonresponse: Respondents who received the questionnaire in ‘Simple Language’ were more likely to provide substantive answers compared with respondents who received a questionnaire in ‘Standard Language’. The findings regarding other quality indicators seem to be mixed and need further investigation.

Added Value:

The study contributes to a deeper understanding of the benefits of ‘Simple Language’ for question comprehension and data quality in web surveys. In addition, our findings should provide useful insights for improving the survey experience. These insights may be particularly helpful for low literate respondents who are frequently underrepresented in social science surveys.

 
11:00 - 12:00 CESTA4.2: Scale and Question Format
Session Chair: Bella Struminskaya, Utrecht University, Netherlands, The
 
 

Investigating Direction Effects Across Rating Scales with Five and Seven Points in a Probability-based Online Panel

Jan Karem Höhne1, Dagmar Krebs2

1University of Duisburg-Essen, Germany; 2University of Gießen, Germany

Relevance & Research Question: In social science research, survey questions with rating scales are a commonly used method for measuring respondents’ attitudes and opinions. Compared to other rating scale characteristics, rating scale direction and its effects on response behavior have not received much attention in previous research. In addition, a large part of research on scale direction effects has focused solely on differences at the observational level. To contribute to the current state of research, we investigate the size of scale direction effects across five- and seven-point rating scales by analyzing observed and latent response distributions. We also investigate latent means and the equidistance between scale points.

Methods & Data: For this purpose, we conducted a survey experiment in the probability-based German Internet Panel (N = 4,676) in July 2019 and randomly assigned respondents to one out of four experimental groups defined by scale direction (decremental or incremental) and scale length (five- and seven-point). All four experimental groups received identical questions on achievement motivation with end-labeled, vertically aligned scales and no numeric values. We used a single question presentation with one question per page.

Results: The results reveal substantial direction differences between five- and seven-point rating scales. Five-point scales seem to be relatively robust against scale direction effects, whereas seven-point scales seem to be prone to scale direction effects. These findings are supported by both the observed and latent response distributions. However, equidistance between scale points is (somewhat) better for seven- than five-point scales.

Added Value: Our results indicate that researchers should keep the direction of rating scales in mind because it can affect respondents’ response behavior. The same applies to scale length. Overall, there is a trade-off between direction effects and equidistance when it comes to five- and seven-point rating scales.



Serious Tinder Research: Click vs. Swipe mechanism in mobile implicit research

Holger Lütters1, Steffen Schmidt2, Malte Friedrich-Freksa3, Oskar Küsgen4

1HTW Berlin, Germany; 2LINK Marketing Services AG, Switzerland; 3GapFish GmbH, Germany; 4pangea labs GmbH, Germany

Relevance & Research Question:

Implicit Association Testing (IAT) after Greenwald et al. has been established for decades. The first experimental designs, which used the keyboard to track respondents’ answers, are still in practice (see Project Implicit, implicit.harvard.edu). Some companies have transferred the mechanism from the desktop to the mobile environment without specifically adapting it to the opportunities of touch-screen interaction.

The idea of this new approach is to adapt the established swiping mechanism inspired by the dating app Tinder together with a background time measurement as a means of implicit measurement in brand research.

Method & Data:

C.G. Jung’s work on archetypes serves as a framework to measure, on an implicit level, the strength of the brand relationship towards several pharmaceutical vaccine brands related to the fight against COVID-19, using an implicit single association test (SAT).

The online representative sample (n>1,000), drawn from a professional panel in Germany, allows the manipulation of several experimental conditions in the mobile-only survey approach.

The data collection compares the established mechanism of clicking with that of swiping answers (Tinder-style answers). In terms of content, the study deals with COVID-19 vaccination brands.

Results:

The analysis shows differences in the answer patterns of the two technically distinct approaches. The authors discuss the validity of data collection on mobile devices. Additionally, paradata about respondents’ behaviour are discussed, as the swipe approach may be a good option for keeping respondents’ motivation up during an intense interview, resulting in lower cost and effort for the digital researcher.

Added Value:

The study is meant to inspire researchers to adapt their established methodological settings to the world of mobile research. The very serious measurement approach even turns out to be fun for some respondents. In an overfished environment of respondents, this seems to open a door to more sustainable research with less fatigue and a higher willingness to participate. The contribution shows that Serious Tinder Research is more than just a joke (even though it started as a fun experiment).



The effects of the number of items per screen in mixed-device web surveys

Tobias Baier, Marek Fuchs

TU Darmstadt, Germany

Background:

When applying multi-item rating scales in web surveys, a key design choice is the number of items presented on a single screen. Research suggests that it may be preferable to restrict the number of items per screen and instead increase the number of pages (Grady, Greenspan & Liu 2018; Roßmann, Gummer & Silber 2017; Toepoel et al. 2009). In mixed-device web surveys, multi-item rating scales are typically presented in a matrix format on large screens such as PCs and in a vertical item-by-item format on small screens such as smartphones (Revilla, Toninelli & Ochoa 2017). For PC respondents, splitting a matrix over several pages is expected to counteract cognitive shortcuts (satisficing behaviour) through a lower visual load compared with one large matrix on a single screen. Smartphone respondents who receive the item-by-item format do not experience a high visual load even if all items are on a single screen, as only a few items are visible at a time. However, they have to scroll more extensively, which is assumed to produce greater fatigue than presenting fewer items on more screens.

Method:

To investigate the effects of the number of items per screen, we will field a survey among panel members of the non-probability online panel of respondi in the spring of 2021. Respondents will be randomly assigned to a device type to use for survey completion and to one of three experimental conditions that vary the presentation of several rating scales.

Results:

Results will be reported for response times, drop-out rates, item missing data, straightlining, and non-differentiation.

Added value:

This paper contributes to the research on the optimal presentation of rating scales with multiple items in mixed-device web surveys. The results will inform as to whether decreasing the number of items per screen at the expense of more survey pages is beneficial for both the matrix format on a PC and the item-by-item format on a smartphone.

 
11:00 - 12:00 CESTB4: Social Media Data
Session Chair: Stefan Oglesby, data IQ AG, Switzerland
 
 

Accessing in-app social media advertising data: Measuring deployment and success of ads with real participant’s data on smartphones

Qais Kasem1, Ionut Andone1,2, Konrad Blaszkiewicz1,2, Felix Metzger1,2, Isabelle Halscheid1,4, Alexander Markowetz1,3

1Murmuras, Germany; 2University of Bonn, Germany; 3Philipps-Universität Marburg, Germany; 4TH Köln, Germany

Relevance & Research Question:

Ad spending in social media is projected to reach US$110,628m in 2021. In this context, the smartphone is by far the device on which people spend the most time on social media. In Germany, the top social media smartphone apps for 2020 were Instagram (23 min), YouTube (22 min), and Facebook (14 min). However, getting access to real and independent performance data for ads shown to specific target groups is a huge challenge, both technically and from a data privacy point of view.

Methods & Data:

We have built a new method to access in-app social media advertising data and interaction data on smartphones. By voluntarily installing an app for study purposes, participants passively provide information on all in-app advertisements they see and interact with on Facebook, YouTube, and Instagram. To detect and process the data, we use machine learning methods and smartphone-sensing technology. Data are only used for study purposes, in compliance with the GDPR and German Market and Social Research standards. In a first test study with respondi, we looked at 50 Facebook app users who participated for 45 days on average between February and May 2021. We observed over 91,000 Facebook ads in total from more than 8,000 publishers; the top ad publishers were Amazon and Wish.

Results:

Our methods provide granular data about the deployment and success of social media ads from all industries and competitors. They also reveal which target groups are exposed to which ads, e.g. by company and product category. With natural language processing and machine learning algorithms, it is possible to improve ad targeting and ad content based on real-world ad-performance data: what the most successful ads are (e.g. language, text length, emojis), which target group(s) they are served to, and at what frequency. Interaction data from participants (e.g. ad clicks) reveal the viral potential of individual ad campaigns.

Added Value:

Our method offers an easy to use, GDPR-compliant way to analyze real social media ads on smartphones. The app is easy to download and install from the Google Playstore. After installation, it runs in the background without any need for further user-interaction, which minimizes attention bias.



Public attitudes to linking survey and Twitter data

Curtis Jessop1, Natasha Phillips1, Mehul Kotecha1, Tarek Al Baghal2, Luke Sloan3

1NatCen Social Research, United Kingdom; 2Cardiff University, United Kingdom; 3University of Essex, United Kingdom

Keywords: Surveys, Social media, Twitter, Data linkage, Consent, Ethics, Cognitive testing

Relevance & Research Question:

Linking survey and social media data can enhance both. For example, survey data can benefit from additional data covering areas not included in the original questionnaire, while social media data can benefit from survey data’s structure and direction.

A key methodological challenge is collecting informed consent. Striking a balance between providing enough information that consent is ‘informed’ and not overwhelming participants is difficult. In addition, previous research has found consent rates to be low, particularly in web surveys, potentially reducing the usefulness of a linked dataset.

Consulting the public can help to ensure that protocols developed for asking consent are ethical and effective. This study looks at how we can encourage informed consent to link social media data, specifically Twitter, with survey data.

Methods & Data:

This study develops methods previously used for understanding consent to link survey data and administrative records. A total of 25 interviews will be conducted with a purposive sample of British adults using a mixture of cognitive and depth interviewing techniques. Participants will initially be asked to complete a short questionnaire, including a question asking for their consent to link their survey and Twitter data, during which they will be encouraged to ‘think aloud’. Following this, cognitive probes will be used to explore the participants’ decision making process and understanding of the consent question, before opening up into a wider discussion of their attitudes to data linkage of survey and social media data.

Results:

Fieldwork is underway at the time of submission. We expect results to provide insight into people’s understanding of the consent question (and therefore the extent to which any consent decision is informed), and what may be encouraging or discouraging people from consenting.

Added Value:

Findings from this study will help to inform the future design of consent questions, with the goal of improving informed consent rates and therefore data quality. It will also provide evidence of the public acceptability of this approach and how protocols developed for collecting, analysing, archiving and sharing data can best address any concerns.



Estimating Individual Socioeconomic Status of Twitter Users

Yuanmo He, Milena Tsvetkova

The London School of Economics and Political Science, United Kingdom

Relevance & Research Question: Computational social science research on socioeconomic inequality has been constrained by the lack of individual-level socioeconomic status (SES) measures in digital trace data. Even for the most researched social media platform, Twitter, there are few existing studies on estimating the SES of individual users, and most of them have methodological limitations. To fill the gap, we propose an unsupervised learning method that is firmly embedded in sociological theory.

Methods & Data: Following Bourdieu, we argue that the commercial and entertainment brands that Twitter users follow reflect their economic and cultural capital and hence, these followings can be used to infer the users’ SES. Our method parallels an established political science approach to estimate Twitter users’ political ideology from the political actors they follow. We start with the official Twitter accounts of popular brands and employ correspondence analysis to project the brands and their followers onto a linear SES scale. Using this method, we estimate the SES of 3,484,521 Twitter users who follow the Twitter accounts of 342 brands in the United States.
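
As a minimal sketch of this estimation idea, one can build a binary user-by-brand following matrix and extract a single correspondence-analysis dimension on which both users and brands are placed. The example below uses the Python library prince and hypothetical data; it is not the authors' code.

import pandas as pd
import prince  # pip install prince

# Rows: users; columns: brand accounts; 1 = user follows that brand.
follows = pd.DataFrame(
    [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 1, 0, 0]],
    index=["user_a", "user_b", "user_c"],
    columns=["brand_w", "brand_x", "brand_y", "brand_z"],
)

ca = prince.CA(n_components=1).fit(follows)
user_scores = ca.row_coordinates(follows)[0]      # latent SES scale for users
brand_scores = ca.column_coordinates(follows)[0]  # latent SES scale for brands
print(user_scores)
print(brand_scores)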

Results: The results show reasonable correlations between our SES estimates and standard proxies for SES. We validate the measure for 50 common job titles, identifying 61,091 users who state one of the titles in their profile description, and find significant correlations between median estimated SES and income (ρ = 0.668, p < 0.001) and between median estimated SES and occupational class (ρ = 0.653, p < 0.001). We further use audience estimation data from the Facebook Marketing API to verify that the brands’ estimated SES is significantly associated with their audiences’ educational level.

Added Value: Compared to the existing approaches, our method requires less data, fewer steps, and simpler statistical procedures while, at the same time, returns estimates for a larger set of users. The method provides SES estimates on a continuous scale that are operationally easy to use and theoretically interpretable. Social scientists could combine these SES estimates with digital trace data on behaviours, communication patterns, and social interactions to study inequality, health, and political engagement, among other topics.

 
11:00 - 12:00 CESTC4: Web Tracking of News Exposure
Session Chair: Pirmin Stöckle, University of Mannheim, Germany
 
 

Post post-broadcast democracy? News exposure in the age of online intermediaries

Sebastian Stier1, Michael Scharkow2, Frank Mangold3, Johannes Breuer1

1GESIS – Leibniz Institute for the Social Sciences, Germany; 2Johannes Gutenberg University Mainz; 3University of Hohenheim

Relevance & Research Question: Online intermediaries such as social network sites (e.g., Facebook) or search engines (e.g., Google) play an increasingly important role in citizens' information diets. With their algorithmically and socially driven recommender systems, these platforms are assumed to cater to the predispositions of users who are – by and large – not primarily interested in news and politics. Yet recent research indicates that intermediaries also foster incidental, i.e., non-intentional, exposure to news. We therefore ask: do online intermediaries indeed drive citizens away from news? Or do they actually foster – non-political and political – news exposure? And what is the role of personal characteristics such as education and political interest?

Methods & Data: We recruited 7,775 study participants from online access panels with continuous web tracking in six countries: France, Germany, Italy, Spain, the UK, and the US. We combine three months of observed web browsing data with the complementary advantages of surveys of the same participants. A machine learning model trained on the crawled text of newspaper articles is used to automatically identify political news articles.

Results: The results from random-effects within-between models that separate daily variation from stable behavior show that, across countries and personal characteristics, using online intermediaries increases the number of news articles read and the number of news sources consumed. These effects are stable across personal characteristics and countries as well as across political and non-political news.
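
As a sketch of what such a random-effects within-between (REWB) specification can look like, the following R code decomposes intermediary use into a stable person mean and a daily deviation before fitting a mixed model with lme4. All data and variable names are simulated stand-ins, not the study's.

```r
# Illustrative REWB sketch (simulated data; variable names are assumptions).
library(lme4)
library(dplyr)

set.seed(42)
daily <- data.frame(
  id      = rep(1:100, each = 30),   # 100 panelists x 30 days
  country = rep(sample(c("DE", "FR", "IT", "ES", "UK", "US"), 100,
                       replace = TRUE), each = 30),
  intermediary_use = rpois(3000, 4)  # daily visits via intermediaries
)
daily$news_articles <- rpois(3000, 1 + 0.2 * daily$intermediary_use)

rewb <- daily %>%
  group_by(id) %>%
  mutate(intermediary_b = mean(intermediary_use),                 # between-person part
         intermediary_w = intermediary_use - intermediary_b) %>%  # daily within part
  ungroup()

m <- lmer(news_articles ~ intermediary_w + intermediary_b + country + (1 | id),
          data = rewb)
summary(m)
```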

Added Value: The big online platforms counteract societal fragmentation tendencies and news avoidance. As such, the findings have implications for scholarly and popular debates on the dangers to democracy posed by digital high-choice media environments.



Populist Alternative News Use during Election Times in Germany

Ruben Bach, Philipp Müller

University of Mannheim, Germany

Relevance & Research Question: We examine the prevalence and origins of populist alternative news use in Germany and its relationship with voting for populist parties. Empirical evidence on exposure to populist alternative news in Germany is scarce and mostly based on inaccurate self-reported survey data.

Methods & Data: We draw on two combined data sets of web-tracking and survey data, collected during the 2017 German Bundestag campaign (1,523 participants) and the 2019 European Parliament election campaign in Germany (1,009 participants). We investigate the relationships between exposure to populist alternative news and political preferences using two-component count data regression modeling.
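
A two-component (hurdle) count model is one standard way to implement such a specification: a binary component for any exposure at all, and a count component for frequency among the exposed. Below is a hedged R sketch with simulated data and illustrative variable names, using pscl::hurdle rather than the authors' exact implementation.

```r
# Hedged sketch of a hurdle count model (simulated data, illustrative names).
library(pscl)

set.seed(7)
d <- data.frame(
  populist_vote = rbinom(500, 1, 0.15),   # voted for a populist party
  pol_interest  = runif(500)              # political interest
)
# many zeros: most users never visit populist alternative news sites
visits_if_any <- rpois(500, 3) + 1        # positive counts for visitors
d$visits <- ifelse(rbinom(500, 1, 0.1 + 0.3 * d$populist_vote) == 1,
                   visits_if_any, 0)

# component 1: any visit at all (binary); component 2: visit frequency (count)
m <- hurdle(visits ~ populist_vote + pol_interest, data = d, dist = "poisson")
summary(m)
```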

Results: Results indicate that while populist alternative news draw more interest during first-order election campaigns (Bundestagswahl), they do not reach large user groups. Moreover, most users visit their websites rather seldom. Nonetheless, our data suggest alternative news exposure is strongly linked to voting for populist parties. Our data also shed light on the role of platforms in referring users to populist alternative news. About 40% of website visits originated from Facebook alone in both data sets, another third of visits from search engines.

Added Value: We provide novel insights into the prevalence and origins of populist alternative news use in Germany using fine-granular web tracking data. The large share of populist alternative news use originating from social media platforms fuels debates about algorithmic accountability.



Explaining voting intention through online news consumption

François Erner1, Denis Bonnay2

1respondi SAS, France; 2respondi SAS and Université Paris Nanterre, France

Relevance & Research Question:

Political polls are at once questionable and irreplaceable. Election after election, they show their limits, but no other method has yet proven more accurate or reliable.

In this paper, we present an experiment we are conducting around the 2021 German federal election, whose objective is to improve opinion monitoring with web navigation data. More precisely, our goal is to enrich insights and improve predictions about voting intention by combining survey results with news consumption on the internet.

Methods & Data:

For more than five years now, respondi has been combining survey data with passive data. In Germany, we operate a nationally representative panel (the sample size changes every month due to churn and cleaning operations, but we keep it close to n=2,500 over time; it is currently n=2,541) whose members have agreed to equip at least one of their connected devices with software that monitors, among other things, which websites they visit.

The design of the experiment is as follows: we survey these people about their voting intention every week (we plan to conduct eight waves, each collecting around 350 completes), and in parallel we collect all the news (related to the elections or not) they read online (based on our previous observations, around 30k articles per month in Germany).

News articles are classified and summarized using a deep learning language model based on Google’s BERT and fine-tuned for topic detection. We will thus be able to identify patterns of news consumption which are associated with changes in opinion.

Results:

Results will be displayed on a live dashboard powered by Tableau. Obviously, no results are available yet. Our intention is to associate changes in opinion with the content read: which message triggered a change in opinion for which audience?

Added Value:

Ultimately, if this experiment succeeds, it will enable a new type of election monitoring: real-time measurement of opinion change, with granular explanations of those changes.

 
11:00 - 12:00 CEST D4: Panel Discussion "16 Days before the Bundestag Election – The Role of Opinion Polling in Elections"
Session Chair: Holger Geißler, marktforschung.de, Germany

(in German)

Program partner: marktforschung.de

Participants:

Prof. Dr. Carsten Reinemann, LMU München

Dr. Yvonne Schroth, Member of the Board of Forschungsgruppe Wahlen e.V.

Prof. Dr. Oliver Strijbis, SNF Förderungsprofessor at the Institute of Political Science, University of Zurich
 
12:00 - 12:10 CEST Break
 
12:10 - 1:10 CEST Keynote 2
 
 

Analytics at its Limit: How the Pandemic Challenges Data Journalism, Forces New Formats and Reveals Blind Spots

Christina Elmer

Der Spiegel, Germany

For data journalism, the COVID-19 pandemic is both an enormous challenge and an encouragement. Hardly ever before has the analysis of current data sets been so relevant for readers, but at the same time, sources are far from optimal. Much of the data is not collected in the required depth, is made available in unwieldy formats and is, moreover, of only limited value for a comprehensive assessment of the current situation. Data journalists have responded to this with innovative formats, new processes and investigations that shed light into the black box that is the pandemic. In this lecture, these developments will be introduced, illustrated with examples and discussed in a broader context.

 
1:10 - 1:30 CEST Break
 
1:30 - 2:30 CEST A5.1: Respondent Behavior and Data Quality II
Session Chair: Otto Hellwig, respondi/DGOF, Germany
 
 

Looking up answers to political knowledge questions: the use of different instructions and measures for respondent behavior

Tobias Gummer1, Tanja Kunz1, Tobias Rettig2, Jan Karem Höhne3,4

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2University of Mannheim; 3University of Duisburg-Essen; 4RECSM-Universitat Pompeu Fabra

Relevance & Research Question: Measures of political knowledge are crucial in various fields to determine and explain public and political phenomena. Depending on the research question, researchers are interested in capturing declarative memory (knowing information) and/or procedural memory (knowing where and how to find information). In web surveys, respondents can look up information easily, thus confounding a measure of declarative memory with procedural memory. Our study advances existing research on looking up answers to political knowledge questions in two important respects. First, we investigate whether instructions can be used to discourage or even encourage looking up answers. Second, we compare respondents' self-reports of looking up answers with paradata on window-switching behavior.

Methods & Data: We implemented a survey experiment in wave 51 of the probability-based German Internet Panel, which was fielded in January 2021. We used a between-subject design and randomly assigned respondents to four experimental groups. Group 1 (control group) received three political knowledge questions. Group 2 received an additional instruction encouraging them to look up answers. Group 3 received an instruction discouraging them from looking up answers. Group 4 was asked to commit to not looking up answers. We captured lookups via respondents' self-reports, paradata on window switching, and a combined measure integrating self-reports and paradata.
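
The combined measure can be thought of as a simple logical union of the two sources; here is a minimal R sketch with toy indicators (variable names are assumptions, not the study's):

```r
# Toy sketch: a lookup is counted if either the self-report or the
# window-switching paradata indicates one.
lookup_self <- c(0, 1, 0, 0, 1)   # self-reported lookups
lookup_para <- c(0, 1, 1, 0, 0)   # window switches detected in paradata

lookup_combined <- as.integer(lookup_self == 1 | lookup_para == 1)
mean(lookup_combined)             # share of lookups on the combined measure
```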

Results: Preliminary analyses show that providing respondents with instructions significantly affects their behavior. Encouraging instructions resulted in a higher share of lookups compared to the control group. Similarly, discouraging lookups and asking for a commitment reduced the share of lookups compared to the control group. We found these effects across all three measures of looking up answers. Yet we also found significant differences between the three measures, with self-reports indicating the lowest number of lookups and the combined measure indicating the highest.

Added Value: Our study provides evidence on the use of instructions to encourage or discourage respondents from looking up answers to political knowledge questions. Consequently, instructions can be used to reduce bias. Moreover, we provide insights on the use of paradata to supplement self-reported measures of looking up answers.



Better late than not at all? A systematic review on late responding in (web) surveys

Ellen Laupper1, Esther Kaufmann2, Ulf-Dietrich Reips2

1Swiss Federal Institute for Vocational Education and Training SFIVET, Switzerland; 2University of Konstanz

Relevance & Research Question: Using reminders is an established practice in survey methodology to increase response rates. Nevertheless, there is widespread concern that "late respondents" are less motivated to provide high-quality survey data (e.g., more item nonresponse, satisficing). There is evidence that late and early respondents differ in sociodemographic characteristics as well as in relevant study outcomes (e.g., attitudinal or behavioural measures). The continuum resistance model assumes that late respondents are similar to nonrespondents and can hence serve as a proxy for them. Because the last review on time of responding, by Olson (2013), did not address mode differences systematically and did not include web surveys, we provide an up-to-date systematic review. With this review we want to answer the question of whether late responding varies across self-administered survey modes.

Methods & Data: After a comprehensive literature search, our preliminary sample consists of 122 published and non-published studies covering several fields (e.g., health, marketing, political science). We considered studies in English and German from 1980 to 2021. All studies included a comparison between early and late respondents in mail or web surveys and reported differences in sociodemographics, data quality, or study outcomes. For each study, two independent coders collected features of the publication (e.g., year, type of publication) and the study (e.g., sample size, effect sizes, response rate, operationalization of late respondents, number of reminders).

Results: Our systematic review describes late responding in detail in relation to publication and study features. Hence, our review provides results on the relevance of late responding and different study features with a special focus on the survey mode and its impact on data quality.

Added Value: Our review provides deeper insights into which (web) survey practices lead to which consequences in the trade-off between measurement error and nonresponse bias and on the effect of late responding on data quality.

Literature

Olson, K. (2013). Do non-response follow-ups improve or reduce data quality? A review of the existing literature. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(1), 129–145. https://doi.org/10.1111/j.1467-985X.2012.01042.x



The impact of perceived and actual respondent burden on response quality: Findings from a randomized web survey

Tanja Kunz, Tobias Gummer

GESIS - Leibniz-Institute for the Social Sciences, Germany

Relevance & Research Question: Questionnaire length has been identified as a key factor affecting response quality. A larger number of questions and the associated respondent burden are deemed to lower respondents' motivation to thoroughly process the questions. Thus, the respondent burden that accumulates with each additional question respondents have to work through is likely to lower response quality. However, little is known so far about the relationship between actual and perceived respondent burden, how this relationship may change over the course of questionnaire completion, and how response quality is affected depending on the relative position of a question within the questionnaire.

Methods & Data: A web survey was conducted among respondents of an online access panel using a questionnaire of 25–29 minutes in length. The question order was fully randomized, allowing the effects of question position on response quality to be disentangled from the effects of the content, format, and difficulty of individual questions. Among these randomly ordered survey questions, a block of evaluation questions on self-reported burden was asked several times. Thanks to the complete randomization of the survey questions and the repeated evaluation questions, changes in actual and perceived respondent burden over the course of questionnaire completion, and their effect on response quality, could be examined systematically. Several indicators of response quality were taken into account, among others: don't know responses, nondifferentiation, attention check failure, and length of answers to open-ended questions.
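
Two of the indicators named above lend themselves to a compact illustration; the following R fragment (toy data, illustrative names) computes nondifferentiation in a grid as zero variance across items, and the length of open-ended answers:

```r
# Toy sketch of two response-quality indicators (illustrative data).
grid <- data.frame(q1 = c(3, 5, 2), q2 = c(3, 1, 2), q3 = c(3, 4, 2))
nondiff  <- apply(grid, 1, sd) == 0                     # identical answers across grid items
open_len <- nchar(c("ok", "I think that this...", ""))  # length of open-ended answers
data.frame(nondiff, open_len)
```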

Results: We found only minor effects of actual respondent burden on response quality, whereas higher perceived respondent burden is associated with poorer response quality in a variety of different question types.

Added Value: This study provides evidence of how the actual and perceived respondent burden evolves over the course of the questionnaire and how both affect response quality in web surveys. In this respect, the present study contributes to a better understanding of previous evidence on lower data quality in later parts of questionnaires.

 
1:30 - 2:30 CEST A5.2: Survey Invitation Methodology
Session Chair: Florian Keusch, University of Mannheim, Germany
 
 

Comparing SMS and postcard reminders

Joanna Barry, Rachel Williams, Eileen Irvin

Ipsos MORI, United Kingdom

Relevance & Research Question:

The GP Patient Survey (GPPS) is a large-scale, non-incentivised, postal survey with an online completion option. As with other surveys, GPPS has experienced declining response rates and increasing postage costs. A new sampling frame provided access to mobile numbers in 2020, allowing us to experimentally test several push-to-web strategies with multi-mode contact. One experiment tested the feasibility of replacing a postcard reminder with SMS contact.

Methods & Data:

GPPS uses stratified random sampling of all adult patients registered with a GP practice in England. A control group was obtained from the existing sample frame, before selecting an experiment group from the new sample frame using the same criteria. Fieldwork took place simultaneously and tested the following:

1. Control (n=2,257,809) received three mailings with online survey log-in details and paper questionnaires, and a postcard reminder after mailing one.

2. Experiment 1 (n=5,982) received three mailings with online survey log-in details and paper questionnaires, and an SMS after mailing one. Experiment 2 (n=5,978) also received a second SMS after mailing two. Both SMS reminders included unique online survey links.

Results:

Where one SMS replaced the postcard (experiment 1), more participants were pushed online compared with the control (27.2% vs. 19.4%), but the response rate was lower (30.4% vs. 31.9%). Sending two SMS reminders (experiment 2) also pushed participants online (29.2%), with no significant impact on response rate (31.6%).
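
As a back-of-the-envelope check, the reported rates can be compared with a two-sample proportion test in R; the counts below are reconstructed from the published percentages and sample sizes, so they are approximate:

```r
# Approximate counts reconstructed from reported rates and sample sizes:
# control: 31.9% of 2,257,809; experiment 1: 30.4% of 5,982.
responders <- c(round(0.319 * 2257809), round(0.304 * 5982))
invited    <- c(2257809, 5982)
prop.test(responders, invited)   # two-sample test of equal response proportions
```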

Neither demographics nor survey responses were impacted for the experiment group, suggesting no impact on trends. There was some evidence of an impact on data quality: item non-response increased for questions with long response scales among those completing via the SMS link (compared with the GPPS website or the letter URL).

The experiment also provided significant cost savings: SMS is cheaper than postal contact, and maintaining the response rate with more online completes reduced return postage and scanning costs.

Added Value:

Although previous studies have trialled SMS reminders, this study provides a direct comparison between postcard and SMS contact in a large-scale, non-incentivised, general population survey. The results provide insight into the impact on online completion, response rates, trends, non-response bias and cost-effectiveness.



Evaluating probability-based Text message panel survey methodology

Chintan Turakhia, Jennifer Su

SSRS, United States of America

Relevance & Research Question:

With the increasing cost of data collection for phone, mail and in-person modes, the need for robust online data collection methodologies has never been greater. Text message surveys have a unique advantage in conducting short, quick-turnaround surveys in a cost-effective manner. Text message surveys can also be quite effective in reaching harder-to-reach populations. To date, the use of this methodology has been limited due to concerns about low participation rates and the representativeness of text message-based surveys. Also, the majority of text message-based survey research to date has been conducted via opt-in panels. SSRS has launched the first TCPA-compliant, nationally representative, probability-based text message panel. This paper explores the effectiveness of a probability-based text message survey panel as a data collection methodology.

Methods & Data:

Data collection was conducted via an interactive text message survey (as opposed to sending a web survey link via text message). The advantage of this methodology is that the survey can be administered via smartphones as well as other phones, thereby improving coverage. No internet service is required, as the text messages are sent via mobile service. To evaluate the effectiveness of text message-based survey methodology in generating projectable population-based estimates, we conducted a text message survey and a parallel survey fielded via RDD phone.

Results:

In this paper, we provide a demographic and substantive comparison of RDD phone and text message-based survey methodology. Our findings suggest that text message surveys produce very similar results to the time-tested RDD phone methodology.

Added Value:

In addition to methodological guidance on implementing text message surveys, this paper provides best-practice recommendations for their use.



Expansion of an Australian probability-based online panel using ABS, IVR and SMS push-to-web

Benjamin Phillips, Charles Dove, Paul Myers, Dina Neiger

The Social Research Centre, Australia

Life in Australia™ is Australia's only probability-based online panel, in operation since 2017. The panel was initially recruited in 2016 using dual-frame random digit dialling (RDD), topped up in 2018 using cell phone RDD as a single frame, expanded in 2019 using address-based sampling (ABS), and topped up in late 2020 using a combination of ABS, interactive voice response (IVR) calls to cell phones, and SMS push-to-web (i.e., invitations using only SMS). Note that Australia's regulatory regime differs from the TCPA: it allows automated dialling of cell phones and sending SMS without prior consent.

We present our findings with respect to recruitment and profile rates, retention, and completion rates. We also present the demographic profile of panel members and compare it to Census 2016 benchmarks with respect to age, gender, education, and nativity. We supplement our respondent profile findings with results of trials we ran on IVR and SMS as modes of invitation.

The yields from the IVR and SMS push-to-web samples were below those of ABS; however, the costs for IVR and SMS push-to-web were well below those of ABS, and the less expensive modes actually delivered a more desirable panel member profile with respect to age and nativity, though not education. Our research raises interesting questions about the trade-off between bias, cost and face validity in the form of response rates.

This paper contributes to the international body of research on recruitment methods for probability-based online panels (see, e.g., Bertoni 2019; Bilgen, Dennis, and Ganesh 2018; Blom, Gathmann, and Krieger 2015; Bosnjak et al. 2018; Jessop 2018; Knoef and de Vos 2009; Meekins, Fries and Fink 2019; Pedlow and Zhao 2016; Pew Research Center 2015, 2019; Pollard and Baird 2017; Scherpenzeel and Toepoel 2012; Stern 2015; Vaithianathan 2017; Ventura et al. 2017).

 
1:30 - 2:30 CEST B5: Turning Unstructured Data into Insight (with Machine Learning)
Session Chair: Stefan Oglesby, data IQ AG, Switzerland
 
 

The Economics of Superstars: Inequalities of Visibility in the World of Online-Communication

Frank Heublein1, Reimund Homann2

1Beck et al. GmbH, Germany; 2IMWF Institut für Management- und Wirtschaftsforschung GmbH

Relevance & Research Question: In 1981, Sherwin Rosen theorized that some markets showing extreme inequalities in their distributions of income do so due to technologies allowing joint consumption and due to the poor substitutability of high-quality services by low-quality services. In 2021, artificial intelligence allows us to investigate whether this theory also applies to online communication and whether the reasons for inequality are the ones described by Rosen. The research question of the present article is therefore twofold: in a first part, we check whether the superstar phenomenon is also present digitally. In a second step, we examine whether the reasons Rosen gave for the existence of the superstar effect can be confirmed.

Methods & Data: Using a big data technology called "Social Listening", roughly 30 million text fragments regarding more than 5,000 German companies were collected online. Using artificial intelligence, this data was categorized into different event types and different tonalities (negative, neutral, positive). Gini coefficients, as measures of concentration, were then used to get an overview of the inequalities of online communication. After that, regression analysis was conducted to find evidence to support or disprove Rosen's theory.

Results: The data quite clearly show that online communication is characterized by strong inequalities. This statement is valid for the total number of fragments, all five topics discussed, and all tonalities (corrected Gini coefficient > 0.9). Also, there is limited evidence supporting Rosen's theory of superstars (in particular his explanation of the reasons for superstardom) in the world of online communication.

Added Value: The results are important for market researchers and marketing managers as they show the strength of superstardom in online communication. They also demonstrate the validity and, to some degree, the limitations of Rosen's theory. In addition, the study can serve as an example of how big data can be used to empirically test theoretical work. It also hints at the fact that the debate about the meaning of extreme inequalities in online communication still needs to be had.
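
For readers unfamiliar with the measure, a Gini coefficient close to 1 indicates extreme concentration. A worked R example on toy, heavy-tailed mention counts (not the study's data):

```r
# Worked example: Gini coefficient of mention counts across companies,
# computed from the standard formula G = (2 * sum(i * x_(i))) / (n * sum(x)) - (n + 1) / n.
gini <- function(x) {
  x <- sort(x)
  n <- length(x)
  (2 * sum(seq_len(n) * x)) / (n * sum(x)) - (n + 1) / n
}

set.seed(3)
mentions <- rlnorm(5000, meanlog = 2, sdlog = 2)  # heavy-tailed, "superstar"-like
gini(mentions)                                    # values near 1 = extreme concentration
```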



Data Fusion for Better Insights: A medley of Conjoint and Time Series data

Julia Görnandt

SKIM, Germany

Relevance & Research Question: Before making changes to a product portfolio or pricing strategy, the brightest minds in any business put effort into assessing the expected impact of such changes on profit or market share. One method for assessing these changes is conjoint analysis. The resulting simulation tool can identify the optimal product/pricing scenario, which promises to maximize value/volume. However, due to certain limitations of the methodology, conjoint gives only directional information about market share and struggles to consider certain 'real-life' circumstances. On the other hand, time series forecasting can be used to predict market share using past 'real-life' data such as sales, distribution, and promotion. However, due to its dependency on history, this technique also has its shortcomings: it cannot predict changes in the market or to a product that have never happened before. The problem with using each method in isolation is that one cannot rely only on stated preferences or only on historical data to make an accurate prediction of sales. Can the insights be elevated by combining both data sources in one model?

Methods & Data: We show an approach for fusing the key results of conjoint analysis and time series forecasting. We built one model that is fed with the switching matrix and price elasticities from a conjoint study and complemented by time series data on sales, price and distribution. Through parallel optimization, a period-based market simulator engine was built.

Results: We can show that this 'new' simulator is more suitable for planning yearly pricing strategies, since its predictions are more accurate than looking at conjoint or time series data in isolation. By adding historical data, the impact of promotions and seasonality becomes visible, leading to more accurate outcomes and insights.

Added Value: A model that takes the best of both worlds – conjoint results and time series data – provides companies with the possibility to play with all relevant factors in one tool while having a more stable model. As a consequence, business decisions can be made with greater certainty, decreasing the risk of making a wrong decision.



Contextualizing word embeddings with semi-structured interviews

Stefan Knauff

Bielefeld University, Germany

Relevance & Research Question: Within the last decade, research on natural language processing has seen great progress, mainly through the introduction and extension of the word embedding framework. Recently, the introduction of models like BERT has led to even greater improvements in the field (Devlin et al. 2018). However, these advancements come at a cost: word embedding models store the biases present within their training corpora (cf. Bender et al. 2021, Bolukbasi et al. 2016). Other researchers have shown that these biases can also be harnessed to generate valuable, e.g., sociological or psychological, insights (e.g., Garg et al. 2018, Kozlowski et al. 2019, Charlesworth et al. 2021). I argue that there are even greater benefits if the contextualization of word embeddings is grounded in triangulation with other data types. I use word embedding models, contextualized with semi-structured interviews, to analyze how street renaming initiatives are perceived as a form of historical reappraisal of Germany's colonial past.

Methods & Data: For this project, two Skip-gram models were trained: the first on 8.5 years of German national weekly newspaper articles (about 60,000 articles from about 450 issues), the second on approximately 730 million German tweets posted between October 8th, 2018 and August 15th, 2020. Additionally, semi-structured interviews were conducted and used in the method triangulation. The method developed by Kozlowski et al. (2019) was used to project terms within the Skip-gram models onto socially constructed dimensions.
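
The projection method of Kozlowski et al. (2019) can be sketched compactly: a semantic dimension is built from differences between antonym-pair embeddings, and terms are located on it via cosine similarity. The R sketch below uses random toy vectors in place of the trained Skip-gram embeddings, and the word list is illustrative:

```r
# Toy sketch of the Kozlowski et al. (2019) projection: random vectors stand
# in for trained Skip-gram embeddings; word choices are illustrative.
set.seed(11)
emb <- matrix(rnorm(6 * 50), nrow = 6,
              dimnames = list(c("reich", "arm", "gut", "schlecht",
                                "kolonialismus", "strassenumbenennung"), NULL))

unit <- function(v) v / sqrt(sum(v^2))

# semantic dimension from antonym pairs, e.g. rich - poor, good - bad
dim_vec <- unit((emb["reich", ] - emb["arm", ]) +
                (emb["gut", ]  - emb["schlecht", ]))

project <- function(word) sum(unit(emb[word, ]) * dim_vec)  # cosine similarity
project("kolonialismus")        # position of "colonialism" on the dimension
project("strassenumbenennung")  # position of "street renaming"
```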

Results: Similar definitions of how colonialism is understood within the research field can be found in both data types. Most interview participants saw value in street renaming initiatives as a tool to initiate a public discourse about Germany's colonial past and to collectively process and reflect on Germany's colonial heritage. The analysis of the text corpora in conjunction with word embeddings has shown that such a discourse is continuous, but not very prevalent and mostly negatively connoted.

Added Value: Triangulating Skip-gram model analysis with semi-structured interviews offers insights that neither method alone would provide. If both data types are interpreted within a coherent methodology, this enables new research perspectives and insights.

 
1:30 - 2:30 CEST C5: Inequalities and Political Participation
Session Chair: Anna Rysina, Kantar GmbH, Germany
 
 

Representativeness in Research: How Well Do Online Samples Represent People of Color in the US?

Frances M. Barlas, Randall K. Thomas, Beatrice Abiero

Ipsos Public Affairs, United States of America

Relevance & Research Question: In 2020, we saw a broader awakening to the continued systemic racism throughout all aspects of our society and heard renewed calls for racial justice. For the survey and market research industries, this has raised questions about how well our industry does to ensure that our public opinion research captures the full set of diverse voices that make up the United States. These questions were reinforced in the wake of the 2020 election with the scrutiny faced by the polling industry and the role that voters of color played in the election. Given the differential impact of COVID on people of color in the US and the volume of surveys working to understand vaccine hesitancy, the stakes could not be higher for us as an industry to get this right.

Methods & Data: We conducted a study to assess how well online samples represent communities of color and their diversity. While past studies have found lower bias in probability-based online panels compared to opt-in samples (MacInnis et al., 2018; Yeager et al., 2011), there has been little investigation into representativeness among subgroups of interest. In Sept. 2020, we fielded parallel studies on Ipsos' probability-based KnowledgePanel, which is designed to be representative of the US, and on an opt-in nonprobability online sample, with approximately 3,000 completes from each sample source. The questionnaire included a number of measures that could be benchmarked against gold-standard surveys such as the Current Population Survey, the American Community Survey, and the National Health Interview Survey.

Results: We found that, across all race/ethnicity groups, KnowledgePanel had lower bias than the opt-in sample. However, in both sample sources, bias was lowest among white respondents and higher among Black and Hispanic respondents. We highlight areas where online samples appear to underrepresent some of the diversity within communities of color.

Added Value: We provide recommendations to improve representativeness with online samples.



Does context matter? Exploring inequality patterns of youth political participation in Greece

Stefania Kalogeraki

University of Crete, Greece

Relevance & Research Question: The paper aims at exploring inequality patterns in electoral participation and in different modes of non-institutionalized political participation among young adults in Greece. The main research question is whether youth political participation inequality patterns are shaped both by individual-level determinants and by the wider socio-economic conditions prevailing in Greece during the recent recession.

Methods & Data: The data derive from the EU-funded Horizon 2020 research project "Reinventing Democracy in Europe: Youth Doing Politics in Times of Increasing Inequalities" (EURYKA). The analysis uses youth-oversampled CAWI survey data of respondents under the age of 35 in Greece. Binary logistic regressions are used to predict Greek young adults' electoral participation, non-institutionalized protest-oriented participation (including demonstrations, strikes and occupations) and non-institutionalized individualized political participation (including boycotting, buycotting and signing petitions).
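
A minimal R sketch of one such binary logistic regression, on simulated data with illustrative predictors (the actual EURYKA models include further determinants):

```r
# Minimal sketch (simulated data; variable names are assumptions).
set.seed(5)
d <- data.frame(
  voted        = rbinom(800, 1, 0.6),
  social_class = factor(sample(c("working", "middle", "upper"), 800, replace = TRUE)),
  education    = sample(1:5, 800, replace = TRUE),
  age          = sample(18:34, 800, replace = TRUE)
)

m <- glm(voted ~ social_class + education + age, data = d, family = binomial)
summary(m)
exp(coef(m))   # odds ratios
```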

Results: Inequalities in young adults' political engagement are most evident for non-institutionalized individualized acts and less clear for non-institutionalized protest-oriented acts and electoral participation. The non-institutionalized individualized modes of political involvement are not equally widespread across socio-economic groups in the Greek young population. However, social class determinants are less clearly related to young adults' protest-oriented political participation. Young adults from a broad range of social strata engaged in the massive protests, demonstrations and occupation movements that emerged during the recent Greek recession. Similarly, a heterogeneous segment of young adults voted in the parliamentary elections of 2015, a landmark in the political scene, as the bipolar political system that had dominated the country since the restoration of democracy in 1974 collapsed.

Added Value: Research on youth political participation inequalities is of great importance, as young generations represent the emerging political and civic cultures in modern democracies. Political behaviors depend heavily on both the characteristics of individuals and the environments in which they live. Although individual determinants are important in understanding potential inequalities in youth political participation, contextual conditions associated with the recent economic crisis might be decisive in mobilizing a more heterogeneous young population to claim their rights through non-institutionalized protest-oriented acts and electoral politics in Greece.



Mobile Device Dependency in Everyday Life: Internet Use and Outcomes

Grant Blank1, Darja Groselj2

1University of Oxford, United Kingdom; 2University of Ljubljana, Slovenia

Relevance and research question: In the last decade, internet-enabled mobile devices have become nearly universal, domesticated, and habitual. This paper examines how smartphone use influences broader patterns of internet use and outcomes, combining domestication theory with media system dependency theory.

Methods and data: We use the 2019 Oxford Internet Survey (OxIS), a random sample of the British population (N = 1,818) collected using face-to-face in-home interviews. We use principal components analysis to derive three types of dependency on smartphones. The factor scores are used as dependent variables in OLS regressions to identify the characteristics of people who are dependent in each of these three ways. Further regressions show how the three types contribute to the ability to benefit from internet use.
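
A compact R sketch of the two analysis steps, with simulated items standing in for the OxIS smartphone-use measures (all names and effect sizes are assumptions):

```r
# Compact sketch of the two steps (simulated items, illustrative names).
set.seed(9)
items <- matrix(rnorm(1818 * 6), ncol = 6,
                dimnames = list(NULL, paste0("use_item", 1:6)))

pca    <- prcomp(items, scale. = TRUE)   # step 1: principal components
scores <- pca$x[, 1:3]                   # three dependency scores
colnames(scores) <- c("orientation", "play", "escape")

# step 2: OLS of an internet-outcome score on the dependency scores
outcome <- rnorm(1818) + 0.4 * scores[, "orientation"] - 0.2 * scores[, "play"]
m <- lm(outcome ~ orientation + play + escape, data = as.data.frame(scores))
summary(m)
```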

Results: We identify three ways in which people have domesticated smartphones: orientation, play, and escape dependency. Each type of dependency is developed by users with different characteristics. Orientation dependency is characteristic of people who are highly skilled and use their phone for instrumental purposes. Play dependency occurs among people who are less educated. Escape dependency is strong among people who are non-white, live in large households and live in urban areas. All three dependencies are major contributors to the amount and variety of use. All three also shape internet outcomes: orientation dependency has a positive influence, and play and escape dependencies a negative influence, on the extent to which people benefit from internet use.

Added value: The results show that, in addition to demographic and internet-skills variables, the ways in which people incorporate mobile devices into their lives have a strong influence on how they use the entire internet and whether they enjoy its benefits. The negative effects of play and escape dependency demonstrate a digital division in which socially excluded individuals tend to domesticate internet technologies in ways that do not give them certain key benefits of the internet. Depending on the internet for play or escape does not improve their ability to find a job, participate in the political system, save money or find information.

 
1:30 - 2:30 CEST D5: Quality in Online Research
Session Chair: Alexandra Wachenfeld-Schell, GIM Gesellschaft für Innovative Marktforschung mbH, Germany
Session Chair: Cathleen M. Stützer, TU Dresden, Germany

(in German)
 
 

Quality and (Non-)Probability Samples: "On 'Representativeness' and 'Fitness-for-Purpose' in Online Panel Data"

Carina Cornesse

University of Mannheim, Germany

Data from probability-based and non-probability online panel samples are currently omnipresent and have taken on a prominent role in, among other things, research on the consequences of the COVID-19 pandemic. The data from these online panel samples are often described as "(population-)representative" and/or "fit-for-purpose". But what does that actually mean? Under which conditions can these labels be considered accurate or be assumed? And how can "representativeness" and "fitness-for-purpose" actually be measured and communicated? Based on existing theoretical concepts and the current state of empirical evidence, this talk aims to bring new impulses to the discussion of data quality in (non-)probability online panel data and to promote transparent and (partially) standardized communication about data quality.



Quality and Social Media: "Potentials and Challenges of Recruiting Rare Populations for Surveys via Social Media"

Simon Kühne, Zaza Zindel

Bielefeld University, Germany

In many countries and contexts, survey researchers face declining response rates and rising survey costs. Data collection is even more complex and expensive when rare or hard-to-reach population groups are to be surveyed. In these cases, alternative sampling and recruitment procedures are usually required, including non-probability and online convenience samples. A fairly new approach to recruiting rare population groups for online and mobile surveys is advertising on social media. Social media platforms offer relatively low-cost access to a large number of potential respondents and make it possible to identify and target otherwise hard-to-reach population groups. However, this form of recruitment entails a number of challenges, including undercoverage and self-selection, fraud and fake interviews, and problems in weighting the survey data to allow unbiased estimates.

This talk provides insights into the opportunities and hurdles that social media platforms offer for survey research. Two social media samples of rare population groups are presented and discussed. In addition, by comparing a social media sample with a concurrently fielded face-to-face probability survey, the possibilities, adequacy and limits of social media recruitment are assessed relative to traditional sampling procedures.



Quality and Success Measurement: "Attention in Information Systems Success Measurement in Professional Communities of Practice"

Ralf Klamma

RWTH Aachen, Germany

The DeLone and McLean model of information systems success is the dominant theoretical foundation of the literature on this topic. System, service and information quality enter the model as dependent variables. Furthermore, quantitative usage data and qualitative data are used together and in concert in success measurement. For professional communities of practice, as introduced above all by Etienne Wenger, a success model is created by defining success factors and measuring success criteria, with a focus on continuous, long-term and, as far as possible, automatic data collection. To this end, we created MobSOS, a framework and technical platform that allow us to create and maintain success models collaboratively, select success factors from catalogues, instrument them with measurements, visualize results and, if necessary, intervene in the underlying information system and the success modeling based on the analysis. By directing the attention of professional communities of practice to success models and enabling reflection, the foundations are laid for the community's social learning, preserving or even increasing its agency despite changing practices and systems. Examples from real-world practice and ongoing research projects will illustrate the talk.

 
2:30 - 2:40 CEST Break
 
2:40 - 3:00 CEST GOR Award Ceremony
Session Chair: Bella Struminskaya, Utrecht University, The Netherlands

This Year's Award Sponsors:
GOR Best Practice Award 2021 - respondi
GOR Poster Award 2021 - GIM
GOR Thesis Award 2021 - Tivian
DGOF Best Paper Award 2021 - Leibniz Institute for Psychology (ZPID)
 
3:00 - 3:10 CEST Break
 
3:10 - 4:20 CEST A6.1: Social Media Sampling
Session Chair: Otto Hellwig, respondi/DGOF, Germany
 
 

Using Facebook for Comparative Survey Research: Customizing Facebook Tools and Advertisement Content

Anja Neundorf, Aykut Ozturk

University of Glasgow, United Kingdom

Relevance & Research Question: Paid advertisements running on platforms such as Facebook and Instagram offer a unique opportunity for researchers, who need quick and cost-effective access to a pool of online survey participants. However, scholars using Facebook paid advertisements need to pay special attention to the issues of sample biases and cost-effectiveness. Our research explores how Facebook tools and advertisement content can be customized to reach cost-effective and balanced samples across the world.

Methods & Data: In this paper, we present the findings of three online surveys conducted in the United Kingdom, Turkey, and Spain during February and March 2021. In these studies, we explored how two tools offered by Facebook, the choice of campaign objectives and the use of demographic targeting, affected the recruitment process. Campaign objectives affect the menu of optimization strategies available to the advertiser; we compare the performance of three campaign objectives: traffic, reach, and conversion. Facebook also allows researchers to target specific demographic groups with their advertisements; we compare two demographic targeting strategies, targeting several demographic characteristics at once versus targeting only one demographic attribute per advertisement, along with no targeting.

Results: Our studies reveal a series of important findings. First, we were able to collect high-quality samples in each of these countries at low cost. Second, we found that while traffic campaigns produce more clicks on our Facebook advertisements, it is conversion campaigns that recruit higher-quality responses at a lower price. Our study also demonstrated that demographic targeting is necessary to produce balanced samples, although it may reduce overall sample sizes.

Added Value: We believe that our study will help researchers planning to use online surveys for comparative research. Most social scientists conventionally use the traffic objective in their Facebook campaigns; we demonstrate that it is actually conversion campaigns that return cheaper and higher-quality samples. We demonstrate the benefits of demographic targeting and also discuss under what conditions demographic targeting becomes most effective for researchers.



Trolls, bots, and fake interviews in online survey research: Lessons learned from recruitment via social media

Zaza Zindel

Bielefeld University, Germany

Relevance & Research Question: The rise of social media platforms and the increasing proportion of people active on such platforms provides researchers with new opportunities to recruit study participants. Targeted advertisements can be used to quickly and cost-effectively reach large numbers of potential survey participants – even if these are considered rare population members. Although a growing number of researchers use these new methods, so far, the particular danger points for sample validity and data quality associated with the recruitment of participants via social media have largely remained unaddressed. This presentation addresses a problem that regularly arises when recruiting research participants via social media: fake interviews by trolls and bots.

Methods & Data: To address this issue, data from a number of social science surveys – each recruiting rare population members into an online convenience sample with the help of ads on the social media platforms Facebook and Instagram – are compiled. Drawing on previous findings from online survey research (e.g., Teichert et al. 2015; Bauermeister et al. 2012), extended for the specific case of social media-generated samples, the completed surveys were reviewed for evidence of fraud. Fraudulent or at least implausible answers, as well as suspicious information in the metadata, were flagged, and a fraud index was formed for each survey participation.
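
A hedged sketch of how such a rule-based fraud index might be assembled in R; the flags and thresholds below are illustrative assumptions, not the study's actual rules:

```r
# Illustrative fraud-index sketch (toy data; rules are assumptions).
library(dplyr)

surveys <- data.frame(
  duration_sec  = c(45, 600, 700, 30, 520),
  age_reported  = c(16, 34, 29, 111, 42),
  open_text     = c("asdf", "I joined because...", "", "lol", "Long answer ..."),
  same_ip_count = c(7, 1, 1, 9, 1)
)

flagged <- surveys %>%
  mutate(flag_speeding  = duration_sec < 120,                       # implausibly fast
         flag_age       = age_reported < 18 | age_reported > 99,    # implausible age
         flag_gibberish = nchar(open_text) > 0 & !grepl("\\s", open_text),
         flag_dup_ip    = same_ip_count > 3,                        # many completes, one IP
         fraud_index    = flag_speeding + flag_age + flag_gibberish + flag_dup_ip,
         suspicious     = fraud_index >= 2)                         # illustrative cut-off
flagged[, c("fraud_index", "suspicious")]
```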

Results: Preliminary results indicate that more than 20 percent of the completed interviews could be classified as at least suspicious. Particularly in the case of socially polarizing survey topics, there appears to be a comparatively high proportion of people who deliberately provide false information in order to falsify study results.

Added Value: The insights derived from the various social media fieldwork campaigns are condensed into a best-practice guide for handling and minimizing issues due to trolls, bots, and fake interviews in social media-recruited samples. This guide adds to the knowledge of how to improve the quality of survey data generated via social media recruitment.



Using Social Networks to Recruit Health Professionals for a Web Survey

Henning Silber, Christoph Beuthner, Steffen Pötzschke, Bernd Weiß, Jessica Daikeler

GESIS - Leibniz Institute for the Social Sciences, Mannheim, Germany

Relevance & Research Question:

Recruiting respondents by placing ads on social networking sites (SNS) such as Facebook or Instagram is a fairly new, non-probabilistic method that provides cost advantages and offers a larger, more targeted sampling frame than existing convenience access panels employ. Through social networks, hard-to-reach groups such as migrants (Pötzschke & Weiß 2020) or LGBTQ individuals (Kühne 2020) can be reached. However, self-recruitment via ads might lead to systematic sample bias. In our study, we employ SNS advertisements to recruit individuals working in the health care sector into an online survey on SARS-CoV-2. This group is difficult to reach with established recruitment methods due to its small share of the overall population. To test the effects of different targeting strategies, three ad campaign designs are compared experimentally. The subjects of the research are (1) detailed analyses of self-selection bias and (2) the evaluation of different methodological choices within SNS-based recruitment.

Methods & Data:

To test how well health sector workers can be targeted using the database and algorithm provided by Facebook/Instagram, three recruitment variants will be tested (about 500 respondents per variant): Variant 1 targets users specifying the industry "health" in their Facebook/Instagram profile; Variant 2 targets users specifying health as an "interest" in their profile; Variant 3 recruits from the total population as a control group. The control group is a critical reference for testing whether recruitment via profile information is beneficial.

Results:

The study will be fielded in March/April 2021. We will compare the different recruitment strategies and other methodological aspects (e.g., the effect of different pictures in the ads) against each other. Further, we will compare the characteristics of respondents recruited with the different variants against benchmarks from the general population (e.g., gender and age distribution).

Added Value:

The results will add to the sparse empirical evidence and provide recommendations regarding this relatively new methodology. Specifically, three ways of targeting respondents will be compared experimentally. In addition, we will provide evidence on selection bias and compare five different ad versions with respect to their effectiveness in recruiting respondents from the target population.

 
3:10 - 4:20 CEST A6.2: Web Probing and Survey Design
Session Chair: Florian Keusch, University of Mannheim, Germany
 
 

What is the optimal design of multiple probes implemented in web surveys?

Cornelia Neuert, Timo Lenzner

GESIS, Germany

The method of web probing integrates open-ended questions (probes) into online surveys to evaluate survey questions. Multiple probes can be asked either on one subsequent survey page (scrolling design) or on separate subsequent pages (paging design). The first design requires respondents to scroll down the page to see and answer all questions, but the probes are presented together and independently of the survey question. The latter design presents each probe separately, and respondents only see how many and what sorts of probes they will receive by navigating successive survey pages. A third alternative is to implement the probes on the same page as the question being tested (embedded design). This might have the advantage that the probes are directly related to the survey question and the answer process is still available in respondents' memory. On the negative side, it makes the response task more complex and might affect how respondents answer the survey question presented on the same page.

In this paper, we examine whether multiple probes should be presented on the same page as the question being tested, on a subsequent page that requires respondents to scroll down, or on separate, consecutive questionnaire pages.

Based on a sample of 2,200 German panelists from an online access panel, we conducted a web experiment in which we varied both the presentation format and the probe order to investigate which format produces the highest data quality and the lowest drop-out rate. Respondents were randomly assigned to one of three conditions: an embedded design, a paging design, or a scrolling design. The study was fielded in November 2020.

We expect the embedded design and the scrolling design to make the response task more complex, resulting in lower data quality compared to the paging design.

We will use the following data-quality indicators: amount of probe nonresponse, number of uninterpretable answers, number of dropouts, number of words per probe, and survey satisfaction. However, this research is still in progress, and results are therefore not yet available.

The results will provide information on how (multiple) open-ended questions should be implemented to achieve the best possible response quality.



Analysis of Open-text Time Reference Web Probes on a COVID-19 Survey

Kristen L. Cibelli Hibben, Valerie Ryan, Travis Hoppe, Paul Scanlon, Kristen Miller

National Center for Health Statistics

Relevance & Research Question: There is debate about using “since the Coronavirus pandemic began” as a time reference for survey questions. We present an analysis of three open-ended web probes to examine the timeframe respondents had in mind when presented with this phrase, as well as “when the Coronavirus pandemic first began to affect” their lives and why. The following research questions are addressed: How consistently do people understand when “the Coronavirus pandemic began”? To what extent does this align with when the pandemic began affecting their lives? Methodologically, what is the quality of responses to the open-ended probes and how might this differ by key socio-demographics?

Methods & Data: Data are from Round 1 of the Research and Development Survey (RANDS) during COVID-19, developed by researchers at the United States' National Center for Health Statistics (NCHS). The National Opinion Research Center (NORC) at the University of Chicago collected the data on behalf of NCHS from June 9, 2020 to July 6, 2020 using its AmeriSpeak® Panel. AmeriSpeak® is a probability-based panel representative of the US adult English-speaking, non-institutionalized household population. The data for all three probes are open text. A rules-based machine learning approach was developed to automate the data cleaning for the two probes about timeframes. In combination with hand review, topic modeling and other computer-assisted approaches were used to examine the content and quality of responses to the third probe.
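
As an illustration of what a rules-based cleaning step for the timeframe probes might look like, the following R function normalizes a few common answer patterns to a year-month value. The rules are invented for this sketch and are not NCHS's actual pipeline:

```r
# Hypothetical rules-based normalization of open-text timeframe answers.
answers <- c("March 2020", "mid-march 2020", "03/2020", "last year", "2020")

norm_month <- function(x) {
  x <- tolower(x)
  months <- tolower(month.name)
  # month named in words, e.g. "march"
  m <- match(regmatches(x, regexpr(paste(months, collapse = "|"), x)), months)
  # four-digit year, e.g. "2020"
  y <- regmatches(x, regexpr("(19|20)[0-9]{2}", x))
  # numeric month before a slash, e.g. "03/2020"
  num <- regmatches(x, regexpr("\\b(0?[1-9]|1[0-2])(?=/)", x, perl = TRUE))
  if (length(m) == 0 && length(num) == 1) m <- as.integer(num)
  if (length(y) == 1 && length(m) == 1) sprintf("%s-%02d", y, m) else NA
}

sapply(answers, norm_month)  # "last year" and bare "2020" remain unresolved (NA)
```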

Results: Results show respondents do not have a uniform understanding of when the pandemic began, and there is little alignment between when people think the pandemic began and when it began affecting their lives. Preliminary data quality findings indicate most respondents gave valid answers to the two date probes, but a wider range in response quality and more variation among key population subgroups are observed for the third probe.

Added Value: This analysis sheds light on use of the phrase “since the Coronavirus pandemic began” as a time reference and helps us understand when and how the pandemic began affecting peoples’ lives. Methodologically, we implemented new and innovative data science approaches for the analysis of open-ended web probes.



Reducing Respondent Burden with Efficient Survey Invitation Design

Hafsteinn Einarsson, Alexandru Cernat, Natalie Shlomo

University of Manchester, United Kingdom

Relevance & Research Questions:

The increasing costs of data collection and the issue of non-response in social surveys have led to a proliferation of mixed-mode and self-administered web surveys. In this context, understanding how the design and content of survey invitations influence propensities to participate could prove beneficial to survey organisations. Reducing respondent burden with efficient invitation design may increase the number of early responders, increase the number of overall responses, and reduce non-response bias.

Methods & Data:

This study implemented a randomised experiment in which two design features thought to be associated with respondent burden were randomly manipulated: the length of the invitation text and the location of the survey link. The experiment was carried out in a sequential mixed-mode survey among young adults (18–35 years old) in Iceland.
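
Such a 2x2 design is typically analyzed with a logistic model that includes the interaction of the two factors; here is a minimal R sketch on simulated data (names and effect sizes are assumptions, not the study's):

```r
# Minimal sketch: logistic model for the 2x2 invitation experiment.
set.seed(21)
d <- expand.grid(length = c("short", "long"),
                 link   = c("middle", "bottom"))[rep(1:4, each = 500), ]
p <- with(d, 0.25 + 0.04 * (length == "short") + 0.03 * (link == "middle"))
d$responded <- rbinom(nrow(d), 1, p)   # simulated participation outcomes

m <- glm(responded ~ length * link, data = d, family = binomial)
summary(m)
```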

Results:

Results show that participants were more likely to participate in the initial web survey when they received shorter invitation letters and when the survey link was in the middle of the letter, although further contacts via other modes mitigated these differences for the full survey results. Additionally, short letters with links in the middle performed well compared to other letter types in terms of non-response bias and mean squared error for those characteristics available in the National Register.

Added Value:

These findings indicate that the concept of respondent burden can be extended to mailed invitations to web surveys. Design choices for survey invitations, such as the length and the placement of participation instructions, can affect propensities to respond to the web survey, resulting in cost savings for survey organisations.



Recruitment to a probability-based panel: question positioning, staggering information, and allowing people to say they’re ‘not sure’

Curtis Jessop, Marta Mezzanzanica

NatCen, United Kingdom

Key words: Surveys, Online panels, Recruitment

Relevance & Research Question:

The recruitment stage is a key step in the set-up of a probability-based panel study: a lower recruitment rate risks introducing bias and limits what subsequent interventions to minimise non-response can achieve. This paper looks at how the positioning of the recruitment question relative to the offer of an incentive for participating in the recruitment survey, as well as staggering information about joining a panel and allowing participants to say they are 'not sure', affect recruitment and participation rates.

Methods & Data:

A split-sample experiment was implemented in the 2020 British Social Attitudes survey, a probability-based push-to-web survey in which participants were invited to join the NatCen Panel. Of the 3,964 participants, a random half were asked whether they would like to join the panel immediately before being asked what type of incentive they would like, and the other half were asked immediately after.

In addition, a random half were presented with all the information about joining the panel up-front, while the other half were presented with basic information but given the option to ask for more. The latter group was then provided with more information and asked again, but was allowed to say they were still unsure.

Results:

There was no significant difference in the proportion of people agreeing to join the panel, or taking part in the first panel survey, by the positioning of the recruitment question. In contrast, participants who were allowed to say they were 'not sure' were more likely to agree to join the panel, although this difference was no longer significant when looking at the proportion that took part in the first survey wave.

Added Value:

Findings from this study will inform the future design of recruitment questions for panel studies. More generally, the study provides evidence on the use of an 'unsure' option in consent questions, and on how moving away from a binary, 'in the moment' approach might affect data collection.

 
3:10 - 4:20 CEST A6.3: Voice Recording in Surveys
Session Chair: Bella Struminskaya, Utrecht University, The Netherlands
 
 

Willingness to provide voice-recordings in the LISS panel

Katharina Meitinger1, Matthias Schonlau2

1Utrecht University, Netherlands; 2University of Waterloo, Canada

Relevance & Research Question: Technological advancements now allow exploring the potential of voice recordings for open-ended questions in smartphone surveys (e.g., Revilla & Couper 2019). Voice recordings may also be useful for web surveys covering the general population. However, it is unclear whether, and which, respondents prefer to provide voice recordings, and which respondents prefer to type responses to open-ended questions.

Methods & Data: We report on an experiment that was implemented in the LISS panel in December 2020. Respondents were randomly assigned to a voice-recording-only group, a text-recording-only group, or a group in which they could choose between voice and text recording. We will report who shows a preference for voice recordings and which factors influence these preferences (e.g., perception of the anonymity of the data, potential bystanders during data collection).
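As an illustration only (the data below are simulated, not LISS results), the three-arm assignment and the tabulation of channel choices in the choice condition might look like this:

    import random
    from collections import Counter

    random.seed(7)  # hypothetical seed
    ARMS = ["voice_only", "text_only", "choice"]

    # randomly assign (hypothetical) panel members to the three conditions
    assignment = {pid: random.choice(ARMS) for pid in range(30)}

    # within the choice arm, record which channel each respondent selected
    picked = {pid: random.choice(["voice", "text"])  # stand-in for real choices
              for pid, arm in assignment.items() if arm == "choice"}
    print(Counter(assignment.values()), Counter(picked.values()))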

Results: Preliminary analyses indicate that respondents show strong preferences for providing written rather than voice recordings. We expect that respondents who are concerned about the anonymity of their data and who had bystanders present during data collection are even less willing to provide voice recordings.

Added Value: This research provides important insights into whether voice recording is a viable alternative for collecting responses to open-ended questions in general social surveys. The results also reveal factors that need to be addressed to increase respondents' willingness to provide such data.



Audio and voice inputs in mobile surveys: Who prefers these communication channels, and why?

Timo Lenzner1, Jan Karem Höhne2,3

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2University of Duisburg-Essen, Germany; 3Universitat Pompeu Fabra, Research and Expertise Centre for Survey Methodology, Barcelona, Spain

Relevance & Research Question: Technological advancements and changes in online survey participation pave the way for new ways of data collection. In particular, the increasing share of smartphone participation in online surveys invites a reconsideration of prevailing communication channels, both to make the communication between researchers and respondents more natural and to collect high-quality data. For example, if respondents participate in online surveys via a smartphone, it is possible to employ pre-recorded audio files and allow respondents to have the questions read out loud to them (audio channel). Moreover, in this survey setting, respondents' answers can be collected using the voice recording function of smartphones (voice channel). So far, there is a lack of information on whether respondents are willing to adopt this kind of change in communication channels. In this study, we therefore investigate respondents' willingness to participate in online surveys with a smartphone, to have the survey questions read aloud, and to give oral answers via voice input functions.

Methods & Data: We conducted a survey with 2,146 respondents recruited from an online access panel. Respondents received two willingness questions – one on the audio channel and one on the voice channel – each followed up by an open-ended question asking for the reasons for their (non)willingness to use these communication channels. The study was fielded in Germany in November 2020.

Results: The data are still being analyzed. The results will be reported as follows: we first analyze how many respondents reported being (un)willing to use the audio and/or voice channel when answering a survey. Next, we examine the reasons they provided for their (non)willingness. Finally, we examine which respondent characteristics (e.g., gender, age, educational level, professional qualification, usage of internet-enabled devices, self-reported internet and smartphone skills, and affinity for technology) are associated with higher levels of willingness.
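A minimal sketch of the kind of model that could relate willingness to respondent characteristics; the variable names and the simulated data are assumptions, not the study's analysis code:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    # hypothetical respondent characteristics
    age = rng.integers(18, 75, n)
    skill = rng.integers(1, 6, n)  # self-rated smartphone skill, 1-5
    # simulated willingness outcome with an assumed relationship
    p = 1 / (1 + np.exp(-(1.5 - 0.04 * age + 0.3 * skill)))
    willing = rng.binomial(1, p)

    df = pd.DataFrame({"willing_voice": willing, "age": age, "skill": skill})
    fit = smf.logit("willing_voice ~ age + skill", data=df).fit(disp=0)
    print(fit.params)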

Added Value: This study adds to the scarce literature on respondents' (non)willingness to answer surveys using the audio play and voice recording functions of their smartphones. To our knowledge, it is the first study to examine the reasons for respondents' (non)willingness by means of open-ended questions.



Effect of Explicit Voice-to-Text Instructions on Unit Nonresponse and Measurement Errors in a General Population Web Survey

Z. Tuba Suzer-Gurtekin, Yingjia Fu, Peter Sparks, Richard Curtin

University of Michigan, United States of America

Relevance & Research Question: Among web survey design principles, one of the most frequently cited considerations is reducing respondent burden. This consideration stems largely from the self-administered nature of web surveys and their reliance on respondent-owned technology. Reduced respondent burden is often hypothesized to be related to lower nonresponse and measurement errors. One way to operationalize reducing respondent burden is to adapt technology widely used for other tasks to survey taking. Digital voice assistants are one such widely used technology, and their adaptation has the potential to improve nonresponse and measurement quality in web survey data: the Pew Research Center reports that 42% of U.S. adults use digital voice assistants on their smartphones (Pew Research Center, 2021).

Methods & Data: This study presents results from a randomized experiment in which respondents in the experimental arm were told that they could use voice-to-text instead of typing in 6 open-ended follow-ups. This instruction was presented only in the smartphone layout of an address-based-sample web survey of the U.S. adult population.

Results: We will report (1) completion rates by device and initiation type (typing, QR code, email link), (2) item nonresponse rates, (3) codeable and noncodeable response rates, and (4) the mean number of words in open-ended responses, each by experimental arm.
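A minimal sketch, with made-up field names and toy data, of how such rates might be tabulated by experimental arm:

    import pandas as pd

    df = pd.DataFrame({
        "arm":            ["voice_to_text", "typing_only", "voice_to_text", "typing_only"],
        "completed":      [1, 1, 0, 1],
        "items_answered": [6, 4, 0, 6],  # out of 6 open-ended follow-ups
    })

    summary = df.groupby("arm").agg(
        completion_rate=("completed", "mean"),
        item_nonresponse_rate=("items_answered", lambda s: 1 - s.mean() / 6),
    )
    print(summary)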

Added Value: Our monthly data since January 2017 show an increase in completion rates on smartphones, and this study will serve as a baseline for further understanding the general population's survey-taking behavior on smartphones.

 
3:10 - 4:20 CEST A6.4: Representativity in Online Panels
Session Chair: Ines Schaurer, City of Mannheim, Germany
 
 

Investigating self-selection bias of online surveys on COVID-19 pandemic-related outcomes and health characteristics

Bernd Weiß

GESIS - Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question: The coronavirus SARS-CoV-2 outbreak has stimulated numerous online surveys that are mainly based on online convenience samples where participants select themselves. The results are, nevertheless, often generalized to the general population. Based upon a probability-based sample that includes online and mail-mode respondents, we will tackle the following research questions assuming that the sample of online respondents mimics respondents of an online convenience survey: (1) Do online (CAWI) respondents systematically differ from offline (PAPI) respondents with respect to COVID-19-related outcomes (e.g., pandemic-related attitudes or behavior) and health characteristics (e.g., preconditions, risk group)? (2) Do internet users (in the CAWI and the PAPI mode) systematically differ from non-internet users with respect to COVID-19-related outcomes and health characteristics?

Methods & Data: The analyses utilize data from the German GESIS Panel, a probability-based mixed-mode access panel that includes about 5,000 online and mail-mode respondents. Upon recruitment, respondents' preferred mode, i.e., CAWI or PAPI, was determined via a sequential mixed-mode design. The GESIS Panel was among the first surveys in Germany to collect data on the coronavirus outbreak, beginning in March 2020. Since then, five additional waves have been fielded, allowing cross-sectional and longitudinal comparisons between the two survey modes (CAWI vs. PAPI) and groups (internet vs. non-internet users), respectively. Statistical analyses address mode and group comparisons regarding COVID-19-related outcomes such as pandemic-related attitudes or behavior, as well as health characteristics.
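As an illustration only (the counts below are invented, not GESIS Panel results), a CAWI vs. PAPI comparison on a binary health characteristic could be run as a simple contingency-table test:

    from scipy.stats import chi2_contingency

    #                risk group: yes,   no
    table = [[310, 2190],   # CAWI respondents (hypothetical counts)
             [480, 1520]]   # PAPI respondents (hypothetical counts)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.4f}")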

Results: Preliminary analyses reveal only small differences between the two modes/groups with respect to some behavioral and attitudinal pandemic-related outcomes. However, larger systematic mode differences can be reported for health characteristics (e.g., "belong to a risk group"). Further analyses will focus on differences between internet and non-internet users.

Added Value: With a focus on the current COVID-19 pandemic, the results of this study add to the existing literature that cautions against the use of self-selected online surveys for population inference and policy measures.



Relationships between variables in probability-based and nonprobability online panels

Carina Cornesse, Tobias Rettig, Annelies G. Blom

University of Mannheim, Germany

Relevance & Research Question:

Commercial nonprobability online panels have grown in popularity in recent years due to their relatively low cost and easy availability. However, a number of studies have shown that probability-based surveys lead to more accurate univariate estimates than nonprobability surveys. Some researchers claim that while they do not produce accurate univariate estimates, nonprobability surveys are “fit for purpose” when conducting bivariate and multivariate analyses. Very little research to date has investigated these claims, which is an important gap we aim to fill with this study.

Methods & Data:

We investigate the accuracy of bivariate and multivariate estimates in probability-based and nonprobability online panels using data from a large-scale comparison study that included data collection in two academic probability-based online panels and eight commercial nonprobability online panels in Germany with identical questionnaires and field periods. For each of the online panels, we calculate bivariate associations as well as multivariate models and compare the results to the expected outcomes based on theory and gold-standard benchmarks, examining whether the direction and statistical significance of the coefficients accurately reflect the expected outcomes.
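A minimal sketch, with simulated stand-in data and assumed variable names (turnout, age, education), of how one panel's coefficient directions and significance could be checked against an expected pattern:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # expected coefficient directions from theory/benchmarks (assumed for illustration)
    EXPECTED_SIGN = {"age": +1, "education": +1}

    def check_panel(df, expected=EXPECTED_SIGN):
        # fit a turnout model on one panel's data, then compare sign and p-value
        fit = smf.logit("turnout ~ age + education", data=df).fit(disp=0)
        return {var: {"right_direction": (fit.params[var] > 0) == (sign > 0),
                      "significant": fit.pvalues[var] < 0.05}
                for var, sign in expected.items()}

    # simulated stand-in for one panel's respondents
    rng = np.random.default_rng(1)
    n = 1000
    age = rng.integers(18, 80, n)
    education = rng.integers(0, 4, n)
    p = 1 / (1 + np.exp(-(-2.5 + 0.03 * age + 0.4 * education)))
    panel = pd.DataFrame({"turnout": rng.binomial(1, p),
                          "age": age, "education": education})
    print(check_panel(panel))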

Results:

Preliminary results on key political variables (e.g., voter turnout) indicate high variability in the findings obtained from the different online panels. While the results from some panels are mostly in line with expectations based on theory and findings from gold-standard survey benchmarks, others diverge considerably. For example, contrary to expectations, some panel results indicate that older people are less likely to vote conservative than younger people. Further analyses will extend these comparisons to health-related items (subjective health, BMI) and psychological indicators (Big 5, need for cognition).

Added Value:

Research on the accuracy of bivariate and multivariate estimates in probability-based and nonprobability online panels is so far very sparse. However, the growing popularity of online panels in general, and nonprobability online access panels in particular, warrants deeper investigation into the accuracy of the results obtained from these panels and into the question of whether nonprobability panels are indeed "fit for purpose" for such analyses.



Sampling in Online Surveys in Latin America: Assessing Matching vs. "Black Box" Approaches

Oscar Castorena1, Noam Lupu1, Maitagorri H Schade2, Elizabeth J Zechmeister1

1Vanderbilt University; 2Agora Verkehrswende

Relevance & Research Question: Online surveys, a comparatively low-cost and low-effort medium, have become more and more common in international survey research projects as internet access continues to expand. At the same time, conventional probabilistic sample design is often impossible when utilizing commercial online panels. Especially in regions with comparatively low internet penetration, this poses the question of how well nonprobabilistic approaches can approximate best practice offline methodologies, and what a best practice for online sampling should look like when parts of the population are excluded by default from the sampling frame.

Methods & Data: For this study, we investigated one well-established approach to generating as-good-as-possible nonprobability samples from online panels, sample matching, in three Latin American countries. In each country, we collected samples of at least 1,000 responses each through the standard commercial "black box" approach as well as an original sample-matching approach. This experiment-based approach permits a comparison of matched samples to samples resulting from a panel provider's standard approach, as well as to census extracts and representative population surveys. To assess the quality of each sample, we compute mean absolute errors for the categories of benchmark questions and of standard demographic indicators, calculated between samples and reference populations.
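Assuming the error measure is the average absolute gap between a sample's category shares and the reference shares, a minimal sketch with invented shares:

    def mean_absolute_error(sample_shares, benchmark_shares):
        # average absolute gap across the benchmark's categories
        return sum(abs(sample_shares[c] - benchmark_shares[c])
                   for c in benchmark_shares) / len(benchmark_shares)

    census   = {"urban": 0.78, "rural": 0.22}   # hypothetical benchmark shares
    matched  = {"urban": 0.75, "rural": 0.25}   # matched sample (illustrative)
    standard = {"urban": 0.88, "rural": 0.12}   # provider's standard sample (illustrative)

    print(mean_absolute_error(matched, census))    # ~0.03
    print(mean_absolute_error(standard, census))   # ~0.10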

Results: The results show that the sample-matching approach reproduces benchmark questions not employed in the sample design better than the standard approach does. The paper concludes by discussing the benefits and drawbacks of choosing a custom sampling approach over a standard one.

Added Value: We demonstrate that fully transparent and reproducible sampling approaches are possible, if not yet common, in nonprobabilistic commercial online surveys, and that they can measurably improve the quality of online samples. We also illuminate the practical drawbacks of deploying such a custom-made sampling method, adding a useful reference for those wishing to apply such an "outside the black box" approach to drawing samples from online panels provided to the survey research community by commercial firms.

 
4:20 - 5:00 CEST Farewell Drinks
 

 