Session Chair: Ines Schaurer, City of Mannheim, Germany
Presentations
Participation of household panel members in daily burst measurement using a mobile app
Annette Jäckle1, Jonathan Burton1, Mick Couper2, Brienna Perelli-Harris3, Jim Vine1
1University of Essex, United Kingdom; 2University of Michigan, USA; 3University of Southampton, United Kingdom
Relevance:
Mobile applications offer exciting new opportunities to collect data, either passively using inbuilt sensors or actively, with respondents entering data into an app. However, participation rates in general population studies using mobile apps have to date been very low, ranging between 10 and 20 percent. In this paper we experimentally test the effects of different protocols for implementing mobile apps on participation rates and bias.
Methods:
We used the Understanding Society Innovation Panel, a probability sample of households in Great Britain in which all household members aged 16+ are interviewed annually. During the 2020 annual interview, respondents were asked to download an app and use it every evening for 14 days to answer questions about their experiences and wellbeing that day. We experimentally varied: (i) at what point in the annual interview respondents were asked to participate in the wellbeing study (early vs. late), (ii) the length of the daily questionnaire (2 vs. 10 minutes), (iii) the incentive offered for the annual interview (ranging from £10 to £30), and (iv) the incentive for completing the app study (in addition to £1 per day: no bonus; a £10 bonus for completing all 14 days; or a £2.50 bonus on each of four randomly chosen days).
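The abstract does not specify the randomization scheme, so purely as a rough illustration of the crossed design, the sketch below independently assigns each respondent to one level of each of the four factors. The three annual-interview incentive amounts are illustrative points within the stated £10 to £30 range, and all identifiers are hypothetical.

```python
import random

# Factors and levels as described in the abstract. The three annual-interview
# incentive levels are illustrative points within the stated £10 to £30 range,
# and independent per-respondent assignment is an assumption: the abstract
# does not describe the actual randomization scheme.
FACTORS = {
    "invitation_timing": ["early", "late"],
    "daily_questionnaire_minutes": [2, 10],
    "annual_interview_incentive_gbp": [10, 20, 30],
    "app_study_bonus": ["none", "gbp10_all_days", "gbp2.50_four_random_days"],
}

def assign_conditions(respondent_ids, seed=2020):
    """Independently assign each respondent one level of every factor."""
    rng = random.Random(seed)
    return {
        rid: {factor: rng.choice(levels) for factor, levels in FACTORS.items()}
        for rid in respondent_ids
    }

print(assign_conditions(["R0001", "R0002"]))
```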
Results:
Of the 2,270 Innovation Panel respondents, 978 (43%) used the app at least once. The length of the daily questionnaire, the incentive for the annual interview, and the incentives for the app study had no effect on whether respondents downloaded the app during the interview, whether they used the app at least once, or the number of days they used the app. However, respondents who were invited to the app study early in the annual interview were 8 percentage points more likely to participate than those invited late (47% vs. 39%, p<0.001), and respondents who completed the annual interview online were 28 percentage points more likely to participate than those who completed it by telephone (48% vs. 20%, p<0.001). Further analyses will examine the reasons for non-participation and the resulting biases.
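The abstract reports neither per-arm sample sizes nor the test behind the p-values. As a hypothetical re-check of the early-vs-late comparison (47% vs. 39%), the sketch below runs a two-proportion z-test assuming an even split of the 2,270 respondents across the two arms; the counts are back-calculated from the reported percentages.

```python
from statsmodels.stats.proportion import proportions_ztest

# Assumed even split of the 2,270 respondents across the early/late arms;
# participation counts are back-calculated from the reported 47% and 39%.
n_early, n_late = 1135, 1135
participants = [round(0.47 * n_early), round(0.39 * n_late)]

z, p = proportions_ztest(count=participants, nobs=[n_early, n_late])
print(f"z = {z:.2f}, p = {p:.5f}")  # p well below 0.001, consistent with the abstract
```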
Value:
This study provides empirically based guidance on best practice for data collection using mobile apps.
App-Diaries – What works, what doesn’t? Results from an in-depth pretest for the German Time-Use-Survey
Daniel Knapp, Johannes Volk, Karen Blanke
Federal Statistical Office Germany (Destatis)
Relevance & Research Question:
The last official German Time-Use-Survey (TUS), conducted in 2012/2013, was based mainly on paper mode. To modernize the German TUS for 2022, two new modes were added: an app and a web instrument. As the literature on how to design specific elements of a diary-based TUS app is still scarce, our goal was to derive best-practice guidelines on what works and what doesn't when designing and implementing such an app diary (e.g. whether and how to implement hierarchical vs. open-text activity search functionality).
Methods & Data:
Results are based on an in-depth qualitative pretest with 30 test persons in Germany. Test persons were asked to (1) complete a detailed time-use diary in the app for two days, (2) document first impressions, issues, and bugs in a short questionnaire, and (3) participate in individual follow-up cognitive interviews. Combining these data allowed us to evaluate various functionalities and implementations in detail.
Results:
Final results of the pretest are still being analysed and will be submitted at a later date. The presentation will also include a brief overview of the upcoming federal German Time-Use-Survey 2022 and its transformation towards an "Online First" design.
Added Value:
This study offers new insights for the still-scarce literature on designing a diary-based time-use app in the context of the harmonized European Time-Use-Survey, focusing on specific elements of such an app and proposing best-practice guidelines on several detailed aspects, such as app structure, diary overview, and activity search functionality.
Using text analytics to identify safeguarding concerns within free-text comments
Relevance & Research Question:
Ipsos MORI conducts the Adult Inpatient and Maternity surveys on behalf of the Care Quality Commission (CQC). Both surveys collect patient feedback on recent healthcare experiences via a mixture of multiple-choice and free-text questions. Because the unstructured free-text comments may disclose harm, every comment is manually reviewed and flagged to indicate whether it discloses a safeguarding concern. Flagged comments are escalated to the CQC for investigation. We piloted a machine-learning approach to make this process more efficient.
Methods & Data:
IBM SPSS Modeler was used to construct a model in several stages, with the aim of separating safeguarding comments (which require review and escalation) from non-safeguarding comments (which may require only spot-checking of a random sample). An illustrative code sketch of the staged workflow follows the list below.
1. 2019 Adult Inpatient and Maternity pilot comments (n=9,862) that had previously been manually reviewed for safeguarding issues were used to train the model to identify potential safeguarding comments. The model flagged a relatively small pool of comments as potential safeguarding concerns.
2. The model output was compared with the previous manual review to assess accuracy. Where the model failed to identify safeguarding comments correctly, a qualitative review was conducted to identify how the model should be revised to increase accuracy.
3. 2019 Adult Inpatient and Maternity mainstage comments (n=60,754) were analysed by the model. This sample was independent of the pilot sample, providing a check that the model's accuracy generalised beyond the comments it was trained on.
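As an illustrative analogue of this three-stage workflow (the authors used IBM SPSS Modeler, not the pipeline shown here), the sketch below trains a TF-IDF and logistic-regression classifier on labelled comments, checks it against the manual review, and then scores an independent set. All comments and labels are invented placeholders, not survey data.

```python
# Illustrative analogue of the staged workflow above; TF-IDF + logistic
# regression in scikit-learn is a substitution, not the authors' method,
# and the toy comments/labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Stage 1: train on manually reviewed pilot comments (1 = safeguarding concern).
pilot_comments = [
    "staff were friendly and the ward was clean",
    "I was left in pain overnight and nobody came when I called",
    "food was cold but the nurses were helpful",
    "a member of staff shouted at me and I felt unsafe",
]
pilot_labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(pilot_comments, pilot_labels)

# Stage 2: compare model output with the manual review to assess accuracy;
# misclassified safeguarding comments would prompt a qualitative review.
print(classification_report(pilot_labels, model.predict(pilot_comments), zero_division=0))

# Stage 3: score an independent set of mainstage comments; only comments
# predicted as safeguarding go forward for manual review and escalation.
mainstage_comments = ["the doctor ignored my concerns and I was discharged too early"]
flags = model.predict(mainstage_comments)
needs_review = [c for c, f in zip(mainstage_comments, flags) if f == 1]
```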
Results:
On average, the model identified 44% of comments as non-safeguarding with high accuracy. Given the scale of the surveys (around 61,000 mainstage comments in 2019), this could equate to roughly 27,000 fewer comments needing manual review each year. This would deliver cost savings and enable safeguarding comments to be escalated to the CQC more quickly. We are currently exploring how the model will be used for the 2020/2021 surveys.
Added Value:
Text analytics uses machine learning to help translate large volumes of unstructured text into structured data. This is an innovative application of the approach that has delivered substantial efficiencies and could be developed and implemented on other surveys.