
Consumer Journeys in Becoming Privacy Conscious


Jim Hudson, Ph.D.

Meta

Author’s Note: Product teams at Facebook rely on research along with other external factors to design and build products. This article discusses research conducted by Facebook's Research Team to better understand consumers’ privacy experiences.

Abstract

To better understand how consumers respond to privacy-threatening situations, we interviewed consumers who had recently attempted to make conscious changes to their messaging behavior in response to a perceived privacy threat.

Looking across these privacy journeys, we highlight commonalities in participants’ experiences, the behavioral and technological changes they made, and the stumbling blocks that left them with lingering questions.

We discuss implications for teams that are building and shaping private messaging services.

Before digging into this article, try a thought experiment. Ask yourself these questions:

1. Do I feel safe right now? Do I feel like someone might misuse information about what I’m doing to cause me harm?

Hopefully, you’re in a place where you feel perfectly safe and you’re not at risk of someone misusing information about you. Now, ask yourself one more question:

2. Was I thinking about my personal safety or privacy before I started reading this article?

Most likely, you weren’t. Over my time studying safety and privacy, I’ve come to see them as latent topics: people often aren’t conscious of their thoughts and feelings about safety and privacy unless something triggers them to assess these issues consciously. In this research, we wanted to better understand the lived journeys people go through when they’re triggered to think consciously about safety and privacy in relation to messaging apps and then take action as a result. Insights into these journeys are critical to understanding how companies like Facebook can best support consumers’ privacy needs for messaging features.

What we did

We interviewed consumers who reported (a) experiencing a triggering event that brought private messaging to the forefront of their minds and (b) actively attempting to change how they message others in order to address these concerns. We conducted this research in the US during the summer of 2020. At that time, people were protesting across the country in support of the Black Lives Matter movement, and a contentious presidential election was entering its final months.

Despite this social context, we faced challenges finding participants who had recently experienced a conscious private messaging journey. As research on the privacy paradox might predict (e.g., Kokolakis, 2017), we found that many people had general privacy concerns, but few could point to specific actions that they had taken as a result of those concerns. In the end, we spoke with 17 US consumers who shared concrete journeys around how they re-evaluated their messaging activities based on specific triggering events. We didn’t limit this research to experiences with Facebook products, and participants spoke about a wide variety of messaging apps during the interviews. For additional details about our sample and methods, see the Appendix at the bottom of this article.

[Figure: Private messaging journeys]

Triggering Event

In our interviews, the specific triggering events varied widely. Some participants cited the enactment of security laws in Hong Kong and the Philippines as potentially threatening family and friends located in those regions. Some participants cited personal involvement in protests against the US government. Some spoke about breaches of trust with their closest contacts. And some spoke about coincidental ad targeting that raised questions about the privacy of their messages.

Despite the variety of triggers, all participants reported an underlying uneasiness regarding their online privacy prior to the triggering event. For these participants, there was a baseline of doubt, and the trigger served to push them over the edge to take action. Privacy triggers generally highlighted some form of potential harm, underscoring the relationship between privacy and safety concerns. In all cases, the next steps after experiencing the trigger involved self-education and changing specific behaviors to address the perceived threats.

Self-education

Unsurprisingly, self-education relied heavily on Google searches. Participants frequently started with fairly generic searches, which became increasingly targeted as they learned more.

Social circles mattered as well. Participants often sought out that one friend who knew more about the underlying technology. This friend could cut through the technical jargon, assess the merits of various technical arguments, and provide recommendations. Because participants trusted this friend’s knowledge of the technical space, they could accept those recommendations without needing to analyze them deeply.

Finally, participants who experienced privacy triggers that were related to social causes (e.g., Black Lives Matter) often learned about ways to increase their messaging privacy from the leaders of those causes. They typically followed those leaders on social media platforms, so they saw privacy and safety recommendations when those leaders posted them.

Benefits and limitations of end-to-end encryption

During their self-education journeys, almost all participants encountered the concept of end-to-end encryption, a technical feature prominently discussed in relation to private messaging. Further probing during our interviews made clear that, unlike the majority of the US population, these participants accurately understood end-to-end encryption at a conceptual level. They understood it, and they wanted it to be part of their messaging experiences. They pointed to the lack of end-to-end encryption in certain messaging apps as a compelling reason to switch to a different messaging platform.

Despite the perceived benefits of end-to-end encryption, participants were also quick to point out that they didn’t consider it sufficient for their privacy needs. When participants critically assessed the ways their messages might be used to harm them, they understood that messages in transit are less likely to be inappropriately accessed than stored messages. They understood that message recipients could take action, intentionally or not, that could leak message content outside of the sender’s control. They also understood that human error and limited tech literacy presented additional privacy risks.
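
To make that protection boundary concrete, here is a minimal sketch of the idea participants grasped, written in Python with the PyNaCl library. The library choice and the variable names are illustrative assumptions for this article, not a description of how any particular messaging app is built. Only the two devices hold private keys, so a relaying server sees nothing but ciphertext; once the recipient decrypts the message, though, the plaintext sits on their device and is outside the sender’s control.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each device generates its own key pair; private keys never leave the device.
alice_device = PrivateKey.generate()
bob_device = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_device, bob_device.public_key).encrypt(b"meet at 6pm")

# A messaging server would only ever relay `ciphertext`; without a private key,
# it cannot recover the plaintext in transit.

# Bob decrypts on his own device with his private key and Alice's public key.
plaintext = Box(bob_device, alice_device.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 6pm"

# From here, the decrypted message can be screenshotted, forwarded, or backed up
# unencrypted, which is the residual risk participants described.
```

The last comment is the crux of what participants told us: encryption protects the message in transit, not what happens at the endpoints.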

Change and advocacy

Unfortunately, participants’ journeys to private messaging weren’t always smooth, and few felt truly comfortable with where the journey took them. While many participants wanted to switch to different messaging apps that they perceived as more secure than what they had been using, they found that their friends held them back. Since these friends hadn’t experienced privacy triggers themselves, they were reluctant to change apps. Friends who did download a participant’s app of choice were slow to respond on it, making the experience frustrating for our participants.

Additionally, participants found it challenging to understand and compare each messaging app’s privacy features and commitments. Sometimes this was because different apps positioned similar features in different ways. Other times, participants felt they had to wade through complex legalese to understand how data might be used, which led several of them to assume negative, adversarial intent when approaching messaging apps’ privacy commitments. Compounding this, many privacy and security features are practically invisible to users (e.g., data encryption or spam suppression algorithms). The average consumer needs to place a certain amount of trust in their messaging app provider, because they might not personally have the technical tools to verify the app’s privacy claims.

Furthermore, these highly privacy-conscious users, who were actively seeking ways to make their messaging experience more secure, admitted that they often fell back on less secure approaches simply because those approaches were more convenient. Two-factor authentication, for example, offers valuable security and privacy improvements, but it can become an unacceptable barrier in a messaging context where the messages are perceived to be quite innocuous.

Opportunities for Continued Privacy Improvements

In this study, we saw that some people will actively seek to adjust their behaviors when privacy is a salient concept with perceived risk. However, the challenges faced by these motivated individuals offer opportunities to continue innovating and improving privacy for all of our users:

Table stakes security. Participants in this study struggled to get their family and friends to join them on their privacy-focused journey. This suggests that top-notch safety and security (e.g., end-to-end encryption, two-factor authentication, robust blocking and reporting mechanisms, etc.) should be table stakes for all messaging apps, so consumers can easily take advantage of built-in protections while exercising individual, personal control as appropriate. As in previous studies (e.g., Hepler & Rutherford, 2021), we continue to see a need for industry-wide privacy and safety standards.

Ongoing Innovation. In this study, we saw that participants needed a range of features working in multi-faceted ways to protect their privacy. This suggests that, as a community, we must plan for continued investment in privacy and safety features. There are no silver bullets, and even if there were, they’d lose their shine as technology, online behaviors, and potential threats evolve.

Questioning permanence. Permanence remains an issue for consumers and offers the privacy community an opportunity to develop more solutions. Even when they had done nothing wrong, participants in this study feared that their messages might somehow surface in unexpected and potentially harmful ways in the future. To meet user privacy needs, we need to be thoughtful about when information needs to be stored and when it should disappear; a simple retention sketch follows this list.
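
As one way to ground that idea, the hypothetical Python sketch below attaches an expiry time to each stored message and purges anything past it. The StoredMessage type, the 24-hour default, and the function names are assumptions made purely for illustration; they do not describe any existing product’s retention logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical default: stored messages disappear after 24 hours.
DEFAULT_TTL = timedelta(hours=24)

@dataclass
class StoredMessage:
    body: str
    expires_at: datetime

def store(body: str, ttl: timedelta = DEFAULT_TTL) -> StoredMessage:
    """Keep a message only as long as the retention window allows."""
    return StoredMessage(body=body, expires_at=datetime.now(timezone.utc) + ttl)

def purge_expired(inbox: list[StoredMessage]) -> list[StoredMessage]:
    """Drop anything whose retention window has passed."""
    now = datetime.now(timezone.utc)
    return [m for m in inbox if m.expires_at > now]
```

The hard product questions sit around a sketch like this, not inside it: which conversations should default to disappearing, who controls the window, and how that choice is communicated so that users like our participants can trust it.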

In these users’ stories, their privacy concerns represented a variety of potential harms that might arise if communications were revealed to third parties. While these harms aren’t salient for the average messaging user on a typical day, they are real for some consumers of every messaging app at some point, and they have the potential to arise for any consumer. As a community developing new technologies, we have a responsibility to thoughtfully address these concerns in both broader product strategies and day-to-day product decisions.

Appendix: Research Methods

In our recruiting, we focused our inclusion criteria on self-stated privacy awareness (i.e., “On a scale from 1 to 5 how aware are you about privacy and security concerns when communicating in messaging apps?”) and self-reporting of an event triggering behavioral change (i.e., “Would you say that you have intentionally changed or tried to change how you use messaging apps due to privacy and security concerns in the last few months?”). When we started the interviews, however, we discovered that few of our recruited participants had actually experienced an event that caused them to take concrete action to change their messaging behaviors. As a result, these participants had not taken any journeys that we could explore in the interviews.

Knowing that we wanted to focus on understanding concrete customer journeys based on triggering privacy experiences, we had to pivot our recruiting. We had our recruiting agency collect more information about potential participants’ triggering experiences (i.e., “How have you changed your use of messaging apps due to privacy and security concerns? Could you tell me in a couple of sentences what you changed / tried to change? Which factors have led to your changed awareness of privacy and security issues in messaging apps?”), and we selected only participants who could point to a concrete trigger and specific actions to respond to that trigger. As long as the trigger was specific and concrete, we did not place any limitations on the type of triggering event.

For the interview protocol, we sequenced the conversation around four key themes. In the first section, we acknowledged the triggering event that participants shared during the recruitment process, and sought additional details about what exactly happened and what specific harms they foresaw based on the triggering event. In the second section, we asked them to compare their messaging behaviors today with their messaging behaviors before the triggering event. We sought to understand what had changed and why. In the third section, we asked participants to recount the transition from their previous behaviors into their current behavior. We probed to understand both successful and unsuccessful attempts to change behavior. Finally, we wrapped up with a discussion about how they were feeling about the changes they had made and what they imagined in the future for their messaging activities.

In the analysis, we identified commonalities in the sequence of experiences between participants. For each stage of the journey, we focused on documenting the participants’ goals, actions, and mindset.

References

Hepler, J., & Rutherford, M. (2021). Privacy concerns are similar across different apps. TTC Labs.

Kokolakis, S. (2017). Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & Security, 64, 122-134.


Jim Hudson, Ph.D.

UX Researcher, Meta
