AI was a hot topic at this year’s International Association of Privacy Professionals’ (“IAPP”) Global Privacy Summit, with sessions ranging from fine-tuning AI models and algorithms on real-world data to best practices in AI governance. The IAPP’s Summit offered privacy professionals insights from policy makers, tech companies and start-ups, authors, and entrepreneurs.

On AI governance, sessions covered how companies can assess AI models for alignment with company policies. To support compliance with emerging frameworks and enforcement regimes such as the recently enacted EU AI Act, the Colorado AI Act, FTC enforcement and guidance, and the NIST AI Risk Management Framework, companies can consider the takeaways below.

  • Early cross-functional engagement can help mitigate risks. Companies should coordinate across privacy, legal, product, and engineering teams early in the development and/or procurement process to support more deliberate and informed decision-making. Leadership buy-in can make cross-functional collaboration all the more effective.
  • Transparency. Many regulatory frameworks highlight transparency, explainability, and user choice. Evaluating how these concepts are addressed in current practices may support compliance and user engagement goals.
  • Education. Being clear with employees about the organization’s policies on AI, and the associated risks, can reduce the frequency and severity of AI-related incidents, especially with respect to intellectual property and data privacy. AI-enabled technologies require a deeper operational and technical understanding than typical software-based solutions. While human error is impossible to avoid entirely, it can be mitigated with proper training.
  • Organization and awareness. The key to effective AI governance is awareness of the company’s digital infrastructure. Organizations must know what data and digital assets they have, where those data and assets are stored, whether each is (or should be) firewalled from AI, and how AI might enter and explore the environment. Inventorying AI inputs not only supports legal compliance but also helps ensure accuracy in AI outputs.
  • Impact. Many regulatory frameworks encourage organizations to examine how AI-driven tools might affect individuals, particularly where personalization or automated decision-making is involved. This is especially important where organizations deploy AI agents that autonomously take actions on their behalf.
  • Do it again. And again. And again. AI monitoring must be an ongoing and iterative process. Human monitoring is ongoing as well; files within the digital environment can and do easily end up on the wrong side of firewalls and permissioning. Organizations should consider automating this review as AI agents scale within the environment.
  • AI compliance requires ongoing attention. As legal requirements related to AI continue to evolve, organizations should monitor developments and assess their relevance based on specific use cases and jurisdictions. Even as AI regulations and their attendant penalties multiply, organizations should also remember existing penalties that could be triggered in, for example, the data privacy context.

What companies can start doing

  1. Effective AI governance relies on early and ongoing collaboration across departments, supported by leadership, and requires continuous, automated monitoring to manage risks as AI systems evolve.
  2. Transparency requirements vary across U.S. state laws. Ensuring transparency, educating employees, and maintaining awareness of digital assets are key to building trust, supporting compliance, and reducing incidents related to AI use.
  3. Organizations must proactively monitor and adapt to rapidly changing AI regulations, regularly reassessing their compliance strategies to address new legal requirements and enforcement actions.