According to SAP research, when the benefits and value of collecting data are made clear to consumers, more than 70% of us are willing to share our data with brands. But if that trust is broken – if brands are found to have collected data without our knowledge – 79% will never use that brand again.
Over the past few years, high-profile cases such as that of Cambridge Analytica have led regulators to investigate big tech firms and their relationship with consumer data. And we’ve seen laws passed that mean organisations collecting data for business purposes need to comply with strict rules on privacy – laws that include Europe’s General Data Protection Regulation and the California Consumer Privacy Act.
For Jamie Barnard, compliance with regulation is not enough. “Compliance is a baseline, protecting people’s fundamental human rights on the one hand and shielding companies from the sharp end of the law on the other,” he told Google’s Vice-President of Global Client & Agency Solutions, Pedro Pina, in an interview for Think with Google.
“But it’s of limited value in the court of public opinion: if people think a company’s data practices are unethical, then a demonstration of legal compliance will not protect their reputation. This is why brands have to set and follow a code of ethics,” he says.
It’s time for courage, not just compliance
Companies are waking up to the idea that they have a moral responsibility as well as a legal one – a responsibility to respect people’s rights and expectations.
As Chair of the Data Ethics Board at the World Federation of Advertisers (WFA), Jamie set about creating the world’s first guide for brands on data ethics – an industry-wide code for marketers that looks to “design a digital future that enhances people’s lives and protects them in equal measure”. The guide is based on four key principles.
When you download a pizza delivery or ride-hailing app and agree to its terms, the app learns a lot about you: the fact that you have the latest iPhone, for example, who you bank with, your network of friends and family, where you live and possibly where you work too.
With every interaction, the algorithm gets to know you a little better; it won’t be long before it learns where you like to go out to eat and when you tend to get home. When you hail a cab, it knows it’s 2:00 am. With access to other data, it could work out that you’re in a dangerous part of the city, far from home and in the rain.
It is the company’s approach to data ethics that will determine how it chooses to use this information. On the one hand, if the company’s priority is safety, it could use this data to protect you (making sure you are picked up first). On the other hand, it could raise its charges instead, since the algorithm knows that, in situations like these, you are statistically more likely to accept them.
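To make that fork in the road concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the signals, the weights, and the function names are illustrative assumptions, not any real ride-hailing system – but it shows how one and the same risk score can feed either a safety-first dispatch policy or a surge-pricing policy:

```python
from dataclasses import dataclass


@dataclass
class RideRequest:
    hour: int          # hour of day, 0-23
    area_risk: float   # 0.0 (safe) to 1.0 (dangerous) - a hypothetical score
    raining: bool


def vulnerability_score(req: RideRequest) -> float:
    """Combine signals into a single 0-1 score (illustrative weights)."""
    late_night = 1.0 if req.hour < 5 or req.hour >= 23 else 0.0
    rain = 1.0 if req.raining else 0.0
    return 0.5 * req.area_risk + 0.3 * late_night + 0.2 * rain


def safety_first_priority(req: RideRequest) -> float:
    # Ethical choice: the more vulnerable the rider, the sooner they are picked up.
    return vulnerability_score(req)


def surge_price(req: RideRequest, base_fare: float) -> float:
    # The same score could instead inflate the fare, exploiting riders
    # who are statistically more likely to accept a higher price.
    return base_fare * (1.0 + vulnerability_score(req))
```

The point of the sketch is that the data and the model are identical in both functions; only the company’s choice of objective differs.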
In this regard, the guide’s first principle is that data usage should respect the people behind the data. It urges companies to strive to understand the interests of all parties and use consumer data to improve people’s lives.
Data has equal capacity for good and evil. It can be used to promote inclusivity, create diversity and eliminate bias but it can also be used to exclude, divide and stigmatise. Segregating people into custom audiences for precision and performance marketing, for example, can avoid showing hamburger ads to vegans or casino games to children.
However, it can be unethical too, such as a landlord restricting certain ethnic groups from seeing its ads, or an employer limiting job opportunities to a male audience.
It’s also a double-edged sword – blocking ad targeting by sexual orientation stops people from excluding the LGBT+ community, but it also stops people offering their services directly to the LGBT+ community.
Fairness should see data usage aim to be inclusive, acknowledge diversity and eliminate bias rather than dividing groups. The second of the guide’s four principles encourages brands to examine their data sets, mindsets and governance and see avoiding harmful discrimination as a shared responsibility between advertisers, platforms and publishers.
People are increasingly demanding to be shown how their data is being used and how it is being looked after. They want to know that their personal data is in safe hands, and that organisations have put in place mechanisms to protect their information.
The Consumer Goods Forum Futerra study describes Gen Z as the ‘honest generation’ who don’t expect brands to be perfect but expect them to be truthful. Expectations like this are driving greater demand for openness as companies are held to account not just for their use of data, but for their suppliers’ and partners’ use too.
“We should think of ethics in the same way we think about sportsmanship. A good sport has scruples – they will do the right thing even if it requires sacrifice. And building trust is also a team sport,” Jamie says. “We are all accountable for collecting and using data in a safe, ethical, and transparent manner,” he told the Think with Google newsletter. “Embedding data ethics across the industry requires commitment, co-operation, and responsible leadership from advertisers, technology platforms, publishers, developers, and tech vendors alike. If we work together, we all win together.”
Being clear about what data is collected, how it is processed, and what the social and ethical consequences of that processing are is key to transparency.
“Some years ago, I downloaded a social media app. On registering, a pop-up appeared asking for access to my contacts,” explains Jamie. “Ordinarily, alarm bells would ring, but they made their intentions crystal clear — my contacts would be encrypted and only used to connect me to existing friends using the app. Then my data would be deleted permanently. This was all the reassurance I needed. I’ve never forgotten their respect for privacy and their open, transparent approach,” he says.
Transparency is also acutely important in the field of AI. In some cases, machine-learnt decisions can have a profound impact on a person’s life (diagnosing disease, securing credit or gaining employment), so it is imperative that AI’s decision-making rationale can be interrogated.
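As a toy illustration of what “interrogable” can mean in practice – the model, features, and weights below are entirely hypothetical, not any real credit-scoring system – a simple linear model lets each feature’s contribution to a decision be read off directly:

```python
# Hypothetical linear credit-scoring model whose decision can be
# decomposed into per-feature contributions for interrogation.
WEIGHTS = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}
BIAS = -0.2
THRESHOLD = 0.0


def explain_decision(applicant: dict) -> dict:
    """Return the decision together with the rationale behind it."""
    # Each feature's share of the score explains *why* the model decided as it did.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": score,
        "contributions": contributions,
    }
```

A declined applicant could then be told, for example, that a high debt ratio was the dominant negative contribution – exactly the kind of rationale that opaque models cannot offer without extra explainability tooling.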
From data first to people first: “In a world where privacy has become a byword for the exact opposite, data ethics allows us to breathe the fresh air of higher purpose and think, act and behave in ways that revive trust in data and technology,” says Jamie. “The guide’s four principles offer a framework to encourage brands and organisations to move from data first to people first, and create a digital age we’re all happy to be part of.”