According to a study from a leading global consulting firm, less than a year after many of the new Gen AI tools were launched, one-third of organizations say they’re using Gen AI regularly in at least one business function. But only 21 percent of these organizations say they have established policies governing employees’ use of Gen AI technologies, and only 38 percent say they have processes in place to mitigate cybersecurity risks associated with Gen AI.
Just as regulators are trying to stay ahead of Gen AI – learning from their years of playing catch-up with rapidly evolving privacy issues – organizations are also racing to plug the AI governance vacuum. Research from the International Association of Privacy Professionals (IAPP) reveals one promising approach: More than half of companies surveyed said they’re building new AI governance approaches on top of existing privacy programs.
Privacy professionals are not only well positioned to assume oversight of AI governance; they are already organically taking on the function. We see this happening in-house across organizations, as well as in industry groups such as the Privacy + Security Forum run by Professors Dan Solove and Paul Schwartz and the IAPP, both of which host conferences on the topic.
This organic takeover of AI governance by privacy professionals is happening largely because privacy and AI share core principles – consent, fairness and transparency among them – and because the frameworks built for privacy compliance map readily onto the new and proposed AI regulations, directives and Executive Orders in the EU, U.S., Canada and other countries. With these expanded responsibilities, is it time to change the title of the Chief Privacy Officer?
Making the Case for CPO Oversight of AI
There are five key reasons why AI governance should fall under the auspices of the Chief Privacy Officer and their staff of professionals.
Ethical AI Development
CPOs are well versed in navigating complex ethical and compliance landscapes, particularly issues of bias, fairness and transparency – all of which are inherent issues in the use of AI. CPOs can ensure that privacy principles are integrated into AI development, and that AI models and algorithms adhere to data privacy regulations and are ethically sound.
Data Protection
Without data, there is no AI. And without data protection, there is no possibility of responsible use of AI or regulatory compliance. CPOs are experienced in data handling, ensuring that data is collected, processed and stored securely and in compliance with relevant laws. When AI systems process data, the CPO can ensure the organization’s data protection policies are extended to AI applications, reducing the risk of data breaches or privacy violations.
Regulatory Compliance
Privacy regulations impose strict rules on the collection and processing of personal data, and regulators are now turning their attention to AI regulation. The EU has proposed the Artificial Intelligence (AI) Act to strengthen rules around data quality, transparency, human oversight, and accountability, and in the U.S., President Biden issued an Executive Order that establishes standards for AI safety and security, protects individual privacy, and promotes responsible innovation. CPOs are highly skilled at navigating complex privacy regulations, a skill that transfers to management of AI regulations.
Risk Management
AI introduces new risks to an organization, ranging from model bias that can lead to discrimination, to unauthorized access to sensitive data through AI-driven processes. CPOs are experts in assessing and mitigating risks associated with data, and their involvement in AI governance is useful in developing a comprehensive risk management approach that includes data privacy considerations.
Accountability and Transparency
CPOs are well placed to establish clear accountability for AI systems, similar to the accountability they build into privacy programs. They can create guidelines and policies that outline how AI should be used and monitored within an organization, ensuring the decision-making processes of AI systems are transparent and well documented.
When CPOs are involved in AI governance, it demonstrates an organization’s commitment to protecting customer data and privacy, building and maintaining trust with customers and regulators.
Toward a New Inclusive Title
With privacy, data protection and now AI gaining greater visibility at the most senior levels of organizations – and at the board level, in many companies – Chief Privacy Officers can play a crucial role in bridging the gap between privacy and AI governance. A new, more appropriate title for this position might be Chief Information Officer, Chief Trust Officer, Chief Ethics Officer or Chief Governance Officer.
The expertise of privacy leaders in data protection, ethics, compliance, risk management and accountability positions them as the most appropriate candidates to ensure responsible and ethical AI development and use. By giving oversight of AI governance to CPOs – with a new, more inclusive title – organizations can navigate the complex landscape of AI while safeguarding the privacy of their customers and adhering to regulatory requirements, building a trustworthy and sustainable AI-powered future.
What are your thoughts about a new title for the CPO? I’d love to hear your ideas. Connect with me on LinkedIn and let’s keep the discussion going.