Blog

The Next Chapter of AI Governance: IAPP AI Governance Global North America 2025 Recap

September 26, 2025
3 min. Read
Nishant Shah
Head of Product, AI

At IAPP AI Governance Global North America 2025, the tone around AI governance has shifted significantly. Enterprises came with concrete AI pain points: shadow AI creating unmanaged risk, regulatory pressure from the EU AI Act, and growing concern over AI agents. Leaders know they must show progress fast, but many are still struggling to align teams and tools to operationalize governance.

Speaking Session Highlights

Our session, “AI Behaves, Intent Evolves: Rethinking Governance in the Age of Agentic AI,” drew a packed room. The audience’s poll responses and engagement revealed just how widespread these AI governance challenges have become.

  • 90% of the audience said Generative AI was already in production, and more than 70% confirmed AI agents were being actively adopted. AI governance must keep pace. 
  • Lineage as evidence. The idea of data lineage as core evidence for governing AI responsibly resonated strongly with attendees.
  • Our 10-Stage Agentic AI Governance Playbook was requested by more than 50 attendees, clear proof that leaders are hungry for practical guidance.

What We Heard from the Crowd

Traffic was diverse across the board: CISOs, privacy leaders, and GRC professionals. But one blind spot cut across every role: how AI touches privileged data and how that data flows into SaaS vendors and AI models. 

Everyone came ready to share pain points and ask hard questions about how to govern AI in practice. The excitement wasn’t just about seeing new features; it was about finding ways to operationalize AI governance live.

Out of these rich conversations, a few themes stood out:

  • Comprehensive AI Visibility: Leaders admitted they don’t have a clear picture of their full AI footprint, often relying on spreadsheets, procurement forms, or quarterly reviews, while shadow AI goes unnoticed, creating regulatory and reputational risk. 
  • Third-Party AI Risk: SaaS vendors are switching on AI features overnight, and teams are embedding external models into workflows. Leaders want to know what privileged data is flowing into these systems.
  • Regulatory Readiness: With the EU AI Act moving toward enforcement, teams said they need to demonstrate ongoing oversight rather than scramble at audit time. One attendee put it plainly: “We don’t want compliance to be an annual fire drill; it has to be continuous.”
  • Practical Execution Guidance: Beyond tools, attendees wanted battle-tested operating models or real examples of what works in practice, not theory.
  • Cross-Functional Alignment: Security, engineering, data science, and product teams must align with privacy, legal, and GRC. Without this alignment, surprises surface only once systems are live.

Where Relyance Fits In

Enterprises in Boston told us they don’t need another dashboard; they need AI governance that delivers on the three most critical jobs. 

Here’s how Relyance can help: 

  1. See the Entire AI Footprint: Relyance builds an always-live inventory across code, pipelines, cloud, SSO, and SaaS vendors, surfacing shadow AI automatically with ownership and lineage and giving governance teams a single source of truth. 
  2. Control Third-Party AI Risk: From SaaS tools to embedded GPT models, Relyance illuminates data flows end-to-end and continuously evaluates risk against policies, stopping unapproved third-party AI before violations occur. 
  3. Prove Continuous AI Compliance: Relyance maps systems to the EU AI Act, ISO 42001, and the NIST AI RMF in real time, flagging gaps with recommended actions and exportable evidence.

The biggest lesson from IAPP AI Governance Global North America 2025: AI governance is no longer optional. With shadow AI rising, enterprises need to act fast.

At Relyance AI, we’re proud to help meet that challenge by making AI governance continuous, evidence-driven, and real. Contact us for a personalized discussion.
