Week Ending December 28

ICO Warns SolarWinds Victims to Report Breaches

The United Kingdom's Information Commissioner's Office (ICO) has issued a warning to organizations compromised by the SolarWinds breach. The breach, which was carried out by Russian hackers, affected more than 18,000 organizations worldwide. The ICO requires UK data controllers subject to the NIS regulations to report…
H5, an ediscovery technology company, recently convened a panel of experts to discuss the intersection of artificial intelligence technology, data privacy and governance.
JD Supra reports that the panel featured Nia M. Jenkins, Senior Associate General Counsel, Data, Technology, Digital Health & Cybersecurity at Optum (UnitedHealth Group); Eric Pender, Engagement Manager at H5; Kimberly Pack, Associate General Counsel, Compliance at Anheuser-Busch; and was moderated by H5’s Managing Director, Corporate Segment, Sheila Mackay.
The panel sought to understand how compliance, governance, and cybersecurity intersect, and how AI can help or hurt those objectives. The panelists discussed the importance of building relationships across teams to manage data challenges; learning about the technology and data management needs of other teams; building artificial intelligence into workflows while understanding its inherent risks; and pursuing ongoing professional development and training through engagement with experts and thought leaders.
The key takeaways reveal a common thread: the entangled nature of cybersecurity, data management, and governance, and the importance of gaining clear insight into an organization's needs and practices before adopting artificial intelligence: “Jennifer Beckage highlighted the need for an organization to develop a plan, build a strong team, and understand the type and quality of the data it collects before adopting AI. Businesses have to address data retention, cybersecurity, intellectual property, and many other potential risks before taking full advantage of AI technology.”
But assessing and addressing risk isn’t the finish line; it’s just the beginning of an ongoing education in governance. On a leadership level, that means keeping up with thought leaders such as law firms and professional organizations, and even connecting with privacy professionals on LinkedIn. That knowledge must then be passed down throughout the organization, says Kimberly Pack: “Really try to tailor the training so that it makes sense for people. Also, try to have tools and infographics, so people can use it, pass it along. Record all your trainings because everyone’s not going to show up.”
Once these educational and knowledge-seeking workflows are in place, it’s time to assess where AI can be a benefit or a risk to an organization. It may not be the right tool for every department, but where it fits, it can help an organization remain competitive and meet consumer expectations. Each department and use case must be assessed individually, and compliance professionals should remember that AI can solve just as many problems as it creates.
That said, AI can be remarkably useful for navigating the intersection of data privacy, governance, and cybersecurity. Eric Pender identified the increasingly common use of AI in cybersecurity breach response, document protection and privilege control, and identifying sensitive personally identifiable information (PII). AI makes sense in these cases because it delivers efficiency and accuracy on tight timelines.