Use of AI in Data Privacy
Facing a myriad of international, national, and state privacy regulations, many organizations are turning to artificial intelligence (AI) to help achieve compliance with laws including Europe’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), which expanded the CCPA, and legislation in several other states, such as Virginia and Colorado.
VentureBeat reports that organizations are using on-site search platforms to sort through consumers’ internet history rather than relying on the personal consumer data they collect, use, and transfer, allowing those organizations to better preserve confidentiality.
AI website search technology can also enhance consumers’ search experience, helping them reach their online objective more easily by “dynamically curating both search results and category pages for each online visitor based on their unique search, browsing, and purchase history.”
Though AI reportedly offers many other benefits, such as more effective consumer targeting and a curated consumer experience, there are several regulatory hurdles to consider when using these technologies.
According to this report, on June 16, 2022, the Digital Charter Implementation Act (DCIA or “Bill C-27”) was introduced in Canada. The DCIA will enact the Consumer Privacy Protection Act (CPPA), Personal Information and Data Protection Tribunal Act (PIDPTA), and Artificial Intelligence and Data Act (AIDA).
The CPPA reportedly will “govern the protection of personal information of individuals while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities[.]”
The draft CPPA contains provisions governing the collection, use, and disclosure of consumers’ personal information and dictates the purposes for which it is appropriate to engage in those activities. These include the accountability of a data controller; organizations’ responsibility to implement and maintain a “privacy management program”; and record-keeping, consent, retention, and transfer requirements. Additionally, the draft CPPA outlines parameters for the use of AI.
Under the draft CPPA, if an organization uses “automated decision systems” to make predictions, recommendations, or decisions that could significantly impact an individual, the organization must provide that individual with an explanation of the outcome produced by the system and the principal factors that led to that outcome.
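A hypothetical sketch of what surfacing the “principal factors” behind an automated decision might look like in practice. The model, feature names, and weights here are illustrative assumptions, not anything prescribed by the draft CPPA; the point is only that each factor’s contribution to the outcome can be computed and ranked for disclosure.

```python
# Hypothetical sketch: ranking the principal factors behind an automated
# decision. The linear model and all weights/values below are invented
# for illustration; the draft CPPA does not prescribe any method.

def explain_decision(weights, features):
    """Return the decision score and factors ranked by contribution size."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Illustrative credit-style decision
weights = {"income": 0.5, "late_payments": -1.2, "tenure_years": 0.3}
applicant = {"income": 2.0, "late_payments": 3.0, "tenure_years": 4.0}

score, factors = explain_decision(weights, applicant)
print(f"score = {score:.1f}")  # a negative score denotes an adverse outcome
for name, contribution in factors:
    print(f"  {name}: {contribution:+.1f}")
```

Ranking by absolute contribution lets the organization tell the individual which inputs mattered most, which is the substance of the explanation duty described above.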
The draft CPPA also addresses an organization’s collection, use, or disclosure of “de-identified information.” The draft defines de-identification as “the process of ensuring that information does not identify an individual or could not be used in reasonably foreseeable circumstances, alone or in combination with other information, to identify an individual.” Under these draft provisions, an organization would be permitted to use an individual’s personal information, without their knowledge or consent, to conduct internal operations, so long as the information is de-identified before it is used.
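A minimal sketch of one common de-identification step, dropping direct identifiers and replacing them with a salted hash, offered only to make the concept concrete. The field names, the salt, and the choice of which fields count as direct identifiers are assumptions for illustration; a salted hash is pseudonymization, and meeting the draft’s “reasonably foreseeable circumstances” standard would require a fuller re-identification risk assessment.

```python
# Hypothetical de-identification sketch. Field names, the salt, and the
# identifier list are illustrative assumptions, not draft-CPPA rules.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def de_identify(record, salt):
    """Drop direct identifiers and replace them with a salted hash token."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:16]
    cleaned["token"] = token  # stable pseudonym usable for internal joins
    return cleaned

record = {"name": "Ana", "email": "ana@example.com", "phone": "555-0100",
          "province": "ON", "purchases": 7}
safe = de_identify(record, salt="rotate-me-regularly")
print(safe)
```

The token stays stable across records for the same individual, which supports the internal operations the draft contemplates, while the direct identifiers never leave the intake step.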
Although these provisions governing the use of AI remain in draft form, organizations should consider the implications of using these technologies: non-compliance with the CPPA could result in a penalty of up to $10 million or 3 percent of gross global revenue, whichever is higher, as well as a potential private lawsuit.
Likewise, the AIDA will govern the use of AI by regulating “international and interprovincial trade and commerce in artificial intelligence systems.” It will achieve this objective by establishing generally applicable requirements for the design and development of these systems and for their permissible and prohibited uses.
Specifically, under the AIDA, an organization that uses an AI system must establish measures to “identify, assess, and mitigate” the risks that potentially biased output poses to consumers. As with the CPPA, maintaining an AI system that does not comply with the AIDA’s requirements could result in a fine of up to $10 million or 3 percent of gross global revenue, whichever is higher.
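A hypothetical sketch of one way an organization might begin to *identify* biased output, as the AIDA’s risk-management duty contemplates: comparing favorable-outcome rates across groups, a simple demographic-parity check. The sample outcomes and group labels are invented for illustration; the AIDA does not mandate this or any particular fairness metric.

```python
# Hypothetical bias-identification sketch: a demographic-parity check.
# The outcome data and group labels are illustrative assumptions only;
# the AIDA does not prescribe a specific fairness metric.

def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, decision 1 = favorable."""
    totals, favorable = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + decision
    return {g: favorable[g] / totals[g] for g in totals}

outcomes = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
            ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(outcomes)
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity = {disparity:.2f}")
```

A large gap between group rates flags output for further assessment and, if confirmed, mitigation, mirroring the identify/assess/mitigate sequence the statute describes.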
In addition to these Canadian regulatory efforts surrounding AI, this article analyzes the Federal Trade Commission’s (FTC) recent advance notice of proposed rulemaking intended to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” Meanwhile, the European Parliament has discussed legislation to govern the concept of “fairness” and bias mitigation in the AI context.
As such, although the use of AI offers many benefits to organizations across many industries, it also raises many regulatory considerations.