Consumers may be willing to part with data and personal information but, like an increasing array of regulators, they are far more sensitive about how their details are used and by whom. As a result, companies need a coherent plan to even stand a chance of staying in the game.
Not all trust is created equal. It’s shaped by history, personal experience, reputation, media commentary and responses to crisis, as well as by the influence of friends, family and social networks, national culture and generational outlook. Trust takes a lifetime to build and a moment to breach, often with long-term or irreparable damage.
In years gone by, corporate scandals might emerge once or twice a year. Headlines around the world would converge on scandals ranging from financial mis-selling to phone hacking, improprieties among politicians to unauthorized market trading and more. The intent and integrity of public-facing institutions would come under scrutiny as an exceptional moment in time. Yet scandals now occur with ever greater force and pervasiveness.
Fake news, hacking, corruption and malfeasance have become part of everyday life, to the point of being almost inescapable in daily discourse and in our morning news and social media digests. Increasingly, scandals have extended far beyond mere political, economic or environmental misdemeanor to something arguably more pertinent and more real to us all: our personal data and a very personal breach of trust. And while data is being touted as an asset, it also has the potential to become a significant liability if not handled in the right way.
As technology has permeated seemingly all aspects of our lives, so too have the data trails that accompany it and offer such potential value to the organizations that can harness, decipher or capitalize on them. Whereas consumers were once blissfully unaware of the breadth and depth of data available to or held by organizations about them, beyond mere transactions, recent events have given the issue much greater prominence.
Moreover, concerns don’t just reside with the data trails we create as consumers with those organizations and institutions to which we knowingly impart our information. As companies are increasingly able to purchase our personal information and data from third-party sources, often collected, aggregated and sold without our knowledge or explicit consent, we’re facing a wider set of privacy and regulatory questions.
So, should companies be purchasing from third parties at all? If they do, how do they manage permissions and consent for data that wasn’t gifted to them in the first place? What are the requirements to disclose the acquisition of such information to consumers? And what does each of these questions mean for the ability to create value and personalized experiences, even with the best and most honorable intentions?
We see this as a transparency gap: the difference between the art of what’s possible with data science and analytics, and how aware consumers are of it. Historically, most of us might have been blissfully ignorant of the nature and scope of the data about us that institutions, governments and organizations capture, mine and analyze. But now this transparency gap is closing.
In the past, we might have been primarily concerned with the increase in the volume or breadth of our data being created, whereas public focus is now shifting to how much of our data has been digitized, aggregated, tracked and monetized by organizations and governments, without our full knowledge or permission. And as this transparency gap narrows further, and more information enters the public domain, companies are at risk of increased exposure and vulnerability.
Trust under fire
Consider the assault on public trust and confidence over the last 10 years alone, and just a few of the more prominent incidents. The 2008 financial crisis, banks betting against their own clients and the Occupy Wall Street movement that followed. The 2013 horsemeat scandal across Europe and the infant milk incident in China. Product recalls ranging from pharmaceuticals to food to cars.
The falsified diesel engine emissions testing scandal. Countless cyberattacks and data thefts from businesses of every kind. Controversial data collection and sharing practices. Leaks containing extensive personal and biometric data of millions of citizens. Social media manipulation, fake news and alleged foreign interference in the 2016 US election and the 2016 UK Brexit vote. Bots, fake social media profiles, inflated follower counts and the opaque world of social influencers.
Unsurprisingly, our survey revealed a general level of disquiet about how data could be accessed, used or abused online. Some 38 percent of consumers said they felt high levels of anxiety about unauthorized tracking of their online habits by companies, governments or criminals; almost half (48 percent) reported high anxiety at the prospect of their financial, medical or other personal information being hacked online; and 51 percent said the same about identity theft. Concern about identity theft was highest in China and Brazil, at 62 percent and 68 percent respectively, and markedly lower among UK and Canadian consumers, at 37 percent and 39 percent.
In spite of this general level of anxiety, consumers are relatively resilient when parting with their data — for now. More than 75 percent of consumers surveyed were generally happy to part with some level of personal information in exchange for greater personalization, better products and services, better security or better value.
While this resilience suggests consumers are generally more aware of companies using their data and are indeed willing to continue to share it for some form of value exchange, we would contend that the average consumer is unaware of just how extensive their digital footprint and personal data trail really are.
The contract: explicit or implicit?
Companies and executives need to be mindful of both the implicit and explicit expectations of customers when it comes to their data. Complacency is simply too big a risk to take when every organization is just one tweet or one news cycle away from being the next hacking victim, or having questionable or opaque data practices exposed.
As our research shows, consumers are generally comfortable trusting their information to an organization or institution with which they have a relationship. But the implicit contract is that this data goes no further. It’s fine for a company to which we knowingly provide our data to use it in ways we expect or acknowledge, but it’s simply not acceptable for that data to be misused, manipulated, shared, sold or exposed. This could be, and often is, viewed as a clear breach of trust.
The problem with an implicit contract, however, is that it can be something of a gray area. Do consumers truly understand the totality of personal, financial, transactional and behavioral data an organization holds on them? Do they truly understand where and how companies can collect third-party data and how their digital footprint can follow them around online? Do they understand and accept how different aspects of data and personal information can be compiled to build a picture of their life, and predict or influence their future behavior and choices?
Our research shows they aren’t entirely happy about it. In France, for example, consumers weren’t clear what was being done with their data, but were hopeful it was being used responsibly. In the US, our survey respondents were specific about not wanting companies to sell their data indiscriminately. And nobody likes the idea of companies tracking or listening to them via their mobile devices.
To illustrate the sheer extent of our data trails, comprising both short-wave and long-wave signals, we have mapped the data that we each create on a day-to-day basis, as well as the data created as a result of key life events.
This article is excerpted from the second edition of Me, my life, my wallet, an exploration of the multidimensional customer. For the complete report, visit kpmg.com/knowyourcustomer.