Balancing the risks and benefits of data has become part of everyday life, for individuals and businesses alike. While the “data explosion” has done wonders for personalization, it has also opened the doors to exploitation, discrimination, and invasions of privacy.
When looking at data from a corporate governance perspective, it is easy to forget the faces behind the data. Every “data subject” is a person who likely has no idea how to navigate their ever-changing data rights. Even if they’ve read up on the subject, most people lack the time and knowledge to exercise their right to delete for the hundreds (and sometimes thousands) of companies that have their data.
The Federal Reserve Bank of San Francisco (SF Fed) has published a report on the role of the individual in the data ecosystem. The report considers how giving individuals “control” over their data will not be enough to address the full spectrum of issues in the data ecosystem. ADCG sat down with the report’s author, Kaitlin Asrow, who is a Fintech Policy Advisor at the SF Fed, to talk about some of the paper’s key takeaways.
The Burden Should Not Be on the Consumer
Data rights, as defined by most current legislation, are all about giving consumers agency and offering them an active role in deciding what happens to their data. The SF Fed has structured data rights into several tiers. Certain rights, like data protection and consent revocation, are foundational. The upper tier includes rights such as granular consent and compensation for data, generally tied to data activities that are high risk, have intellectual property at stake, or lack a legitimate purpose. The right to delete would fall somewhere in the middle.
In fact, Asrow’s paper argues that giving consumers too much control is actually a burden, warning against laws that rely on rights like data portability and deletion, because these rights call on individuals to be proactive: “When we’re implementing policies, we have to be conscious of not giving too much responsibility to consumers. Even notice-and-choice policies put a large burden on consumers to read those consent policies and understand what they mean,” Asrow says. “And then, there isn’t actually an opportunity to take much action besides not using the service.”
The Failure of Informed Consent
Consent has become a pillar of recent governance legislation. In order to collect or store data, companies must obtain consent.
“A changing landscape that gives individuals more direct agency over data seems like a positive thing,” says Asrow. “With issues like consent, we need to avoid overburdening the individual by expecting them to make detailed trade-offs around governance that they have no insight into.”
As a replacement for consent requirements for all data collection and usage, Asrow proposes a concept known as legitimate purpose. Data activities would have a “legitimate purpose” under two conditions:
- they are necessary for the product or service, and
- they pose no threat to the individual.
Ideally, this would apply to any data activity and would be at the center of any data protection framework. It’s already a small aspect of some data privacy laws like CCPA and GDPR. But what type of activity would be deemed necessary? For one, any action that is essential to fulfill customers’ expectations of a product or service. Activities required by law would also create a legitimate purpose. Beyond that, in order to monetize consumer data or use it to build new products, entities would need to communicate the purpose for doing so to the consumer.
This is different from GDPR’s “legitimate interest” requirement, which focuses on the needs of the data collector. Legitimate purpose, in this sense, shifts the focus to the safety of the individual. Under a legitimate purpose requirement, consent alone would not be a lawful basis for data activity. This takes the burden for data protection off of the individual.
Under GDPR, an activity like selling consumer data could qualify as serving a legitimate interest. But because selling customer data isn’t integral to the product itself, it would not be lawful under legitimate purpose. Using data for activities like fraud prevention, on the other hand, would qualify, since they protect the individual.
Instead of weighing the consumer’s needs against the collector’s, centering policy around consumer interests removes the need for active consent, and avoids burdening consumers with decisions they aren’t prepared to make.
Do Individuals Own Their Data?
A lot of data privacy discourse is centered around the idea that individuals should “own” their data.
After all, companies are already treating data as an asset, cashing in on its processing, collection and general usage. But if it’s such a valuable commodity, why aren’t its “owners” sharing in the profits?
Data is not a physical object, which makes it hard to enforce the rights traditionally associated with ownership. Additionally, ownership implies a level of management and care that is not yet feasible for data at the individual level. Advances in technology could solve that problem, but for now, we must deal with what’s in front of us.
Asrow’s paper, which takes a neutral stance on the role of individuals versus entities in data protection and data rights, acknowledges that there are no easy answers to the intersecting policy questions created by the data ecosystem. At the same time, it contends that treating data “primarily as a commodity, or in monetary terms, could have negative consequences for other policy goals.”
The paper instead suggests taking a multidisciplinary approach to policy-making: rather than treating data rights, data protection, and data ownership as separate questions, policymakers should focus on the best outcome for everyone involved.