There’s often no single factor at fault when severe weather strikes. Similarly, many forces are driving the storm of AI development workflows, tools, and software now sweeping through modern organizations.
The sudden storm of demands that AI places on data governance is thrusting data leaders into the role of emergency managers. Many are now in over their heads, scrambling to triage damage, manage resources, and prevent cascading failures.

To orient those facing that challenge, the sections below outline the core forces behind today’s AI surge, unpack the most pressing governance challenges organizations are now confronting, and identify practical strategies data leaders can use to regain clarity, coordination, and control.
Key challenges for data governance for AI in modern organizations
AI adoption is surging—experts project it to grow at a compound annual rate of 35.9% between 2025 and 2030. As a result, many organizations are rapidly embedding AI technologies into their core workflows.
But that speed of adoption often outpaces the systems and processes that should govern it. As a result, data leaders now face a host of new challenges that traditional data governance frameworks were never designed to handle.
Here are the most pressing of these challenges:
Shadow AI and ungoverned implementation
Shadow IT—the unsanctioned, unmonitored use of hardware, software, cloud services, or applications—pre-dates AI’s current commoditization by decades. But just as the growing affordability and accessibility of personal computers and software began creating issues for tech leaders in the late ’80s and early ’90s, shadow AI is producing strikingly similar issues for data leaders today.
This problem is now even more insidious: organizations are quietly (and often innocently) adopting tools that leverage AI-powered self-learning models and predictive analytics, which employees can easily embed in approved platforms and internal workflows. Because this functionality is increasingly native to those tools, it rarely produces the outliers and anomalies that typically tip tech leaders off that employees are using products or solutions outside their purview.
For data leaders who are responsible for their organization’s data governance efforts, this means that the shadow AI that’s proliferating throughout their organizations may actively be influencing critical business processes and data assets without essential validation, oversight, and documentation. This increases risks related to data quality issues, data protection, security breaches, and regulatory compliance efforts.
Explainability and transparency deficits
Unlike traditional software and systems, many AI systems are developed through iterative training and reinforcement learning. As a result, these AI tools often function as “black boxes” that are technically and functionally opaque, not just to outsiders but to the tools’ creators as well.
This opacity creates tangible challenges for data leaders, especially as they work to navigate increasingly sophisticated compliance landscapes. In addition to the audit and bias issues it presents, the inability of AI designers and users to explain the decisions that their AI-driven systems go on to make can significantly undermine stakeholder trust.
Data quality and metadata management complexities
Somewhat ironically, while AI can degrade data quality across an organization, the systems themselves depend on massive volumes of clean, structured, and well-documented data to function properly. That tension puts data leaders in a tough spot: the more they embed AI into everyday workflows, the more pressure there is to produce high-quality, context-rich data on demand. And when internal sources fall short, as they so often do, teams start reaching for third-party datasets, often without looping in governance.
Modern AI training sets can easily include terabytes of semi-structured or unstructured data that teams pull from outside vendors. At that scale, it becomes exponentially harder to confirm that no sensitive data or inappropriate material is hiding in the mix. But once something problematic, like personally identifiable information, is baked into a trained model, it’s almost impossible to detect through standard audits or basic security reviews.
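To make that risk concrete, here’s a minimal sketch of the kind of pre-ingestion screening a governance team might run against a sample of third-party training records before they ever reach a model. The regex patterns, field names, and sample records are illustrative assumptions only; a real program would rely on a vetted PII-detection library and locale-specific rules.

```python
import re

# Illustrative regex patterns for common PII; real programs would use a
# vetted detection library and locale-specific rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the names of any PII patterns found in a single record."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def audit_sample(records: list[str], max_flags: int = 20) -> list[tuple[int, list[str]]]:
    """Scan a sample of candidate training records and report which ones look risky."""
    flagged = []
    for i, record in enumerate(records):
        hits = scan_record(record)
        if hits:
            flagged.append((i, hits))
        if len(flagged) >= max_flags:
            break  # enough evidence to pause ingestion and escalate
    return flagged

if __name__ == "__main__":
    sample = [
        "Customer asked about pricing tiers.",
        "Reach me at jane.doe@example.com or 555-867-5309.",
    ]
    print(audit_sample(sample))  # -> [(1, ['email', 'phone'])]
```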
Operational drift and model risk
Finally, AI systems introduce unique operational risks, such as model performance degradation and gaps in monitoring infrastructure, that traditional governance frameworks may be ill-equipped to handle.
When they’re unmanaged (or unmanageable), AI systems can also exhibit model drift. This occurs when real-world conditions, like shifts in user behavior, market or regulatory changes, and seasonal or temporal patterns, cause production data to diverge from the data an AI system was trained on. Without proper governance, these data incongruencies can degrade performance and reliability, especially for predictive models that teams use for tasks like capacity planning or fraud detection.
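As an illustration of what basic drift monitoring can look like, the sketch below compares a numeric feature’s training distribution against recent production values using SciPy’s two-sample Kolmogorov-Smirnov test (assuming SciPy is available). The feature, sample values, and threshold are placeholders; production monitoring would cover many features and typically use purpose-built tooling.

```python
from scipy.stats import ks_2samp  # SciPy's two-sample Kolmogorov-Smirnov test

def feature_drifted(train_values, live_values, p_threshold: float = 0.01) -> bool:
    """Flag drift when the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

# Hypothetical example: daily transaction amounts used by a fraud model
training_amounts = [42.0, 18.5, 99.9, 12.0, 57.3, 23.1, 88.0, 31.4]
recent_amounts = [410.0, 380.5, 455.9, 512.0, 497.3, 433.1]

if feature_drifted(training_amounts, recent_amounts):
    print("Drift detected: trigger review, retraining, or rollback per policy.")
```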
Additionally, many organizations lack clearly defined policies that dictate who in the organization is responsible for verifying, validating, or intervening when tools like large language models (LLMs) fail (assuming that leadership is aware that their teams are using them in the first place).
In the face of these escalating challenges, it’s clear that data leaders can’t treat governing AI as an add-on to legacy frameworks. Instead, they need to ensure that their governance efforts are effective for AI, which requires new thinking, practical guidance, and a flexible, forward-looking approach to data change management.
5 ways to adapt data governance best practices for AI
Inevitably, there will be some organizations that decide they need to create entirely new governance programs to manage AI separately. But in many cases, the more urgent need is to ensure that existing data governance practices can handle AI’s demands: its scale, opacity, speed, and unpredictability.
The five steps below are practical entry points for doing exactly that:
- Double down on organizational culture and change management
Despite the new challenges that AI use is creating for data leaders, one cardinal aspect of data governance remains unchanged: an organization’s data literacy will largely determine whether it succeeds or fails at modernizing key pillars of data quality management. And data governance is no exception.
According to Deloitte’s Q4 2024 findings, organizations that invest in organizational readiness—specifically in data, governance, compliance, and workforce capability—are far better positioned to realize success from their AI efforts, regardless of how quickly the technology evolves.
For data leaders, this points to the need for a cultural assessment that surfaces the organization’s current level of data maturity, risk tolerance, and readiness for change. Based on what they find, it may be necessary to launch pilot programs that show data governance’s value in real AI workflows.
To reinforce that momentum, leaders should establish clear feedback loops to capture employee concerns, surface blind spots, and spotlight where employees are using AI without proper oversight. And to keep governance efforts ahead of the curve rather than chasing it, regular surveys and behavioral assessments can help leaders track how well the culture is evolving.
Success indicators:
- Executive sponsorship and visible commitment to responsible AI are publicly sustained and reinforced.
- Transparent communication about AI’s impact on jobs and processes is normalized across departments.
- Ongoing data and AI literacy programs for all staff are maintained, resourced, and well-attended.
- Employee feedback and engagement in governance are demonstrably effective and visibly shaping outcomes.
- Begin prioritizing governance efficacy over expansiveness
When reassessing their approach to data governance, data leaders should also ensure that their strategy hasn’t overindexed on the breadth of their efforts at the expense of specificity and effectiveness.
This is a risk, mainly due to AI models’ complexity—their ability to make decisions, along with their potential to act unpredictably, creates unique governance challenges that one-size-fits-all data management approaches aren’t capable of fully addressing.
Data leaders can ensure that their governance efforts are as effective as they are expansive by implementing RACI matrices for all governance activities. Doing so creates an accountability foundation that eliminates ambiguity across teams, business functions, and departments.
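For illustration, here’s a minimal sketch of how a RACI matrix (responsible, accountable, consulted, informed) for governance activities might be encoded and sanity-checked. The activities and team names are hypothetical placeholders, not a prescribed structure.

```python
# Hypothetical RACI assignments for a few governance activities.
# Each activity maps a role to an R/A/C/I code; names are placeholders.
raci = {
    "approve_new_training_dataset": {
        "data_engineering": "R",
        "data_governance_lead": "A",
        "legal": "C",
        "business_owner": "I",
    },
    "review_llm_incident": {
        "ml_platform_team": "R",
        "ciso": "A",
        "data_governance_lead": "C",
    },
}

def validate_raci(matrix: dict[str, dict[str, str]]) -> list[str]:
    """Check that every activity has exactly one Accountable and at least one Responsible."""
    issues = []
    for activity, assignments in matrix.items():
        codes = list(assignments.values())
        if codes.count("A") != 1:
            issues.append(f"{activity}: must have exactly one 'A' (found {codes.count('A')})")
        if codes.count("R") < 1:
            issues.append(f"{activity}: needs at least one 'R'")
    return issues

print(validate_raci(raci) or "RACI matrix passes basic checks")
```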
From there, leaders should establish governance metrics that track both compliance and business value, then build regular reviews around those metrics to adapt as necessary. For this purpose, outcome-based models are often more effective than top-heavy approaches that slow innovation or discourage teams from being open about the tools they rely on. After all, governance should enable progress, not punish it.
Success indicators:
- Centralized governance bodies with clear authority are established and consistently consulted.
- Hybrid models that balance central policy with local execution are implemented and scaled.
- Role-based access control is actively enforced to safeguard data access, protect sensitive systems, and enforce policy.
- Escalation and exception management processes are defined, known, and successfully used.
- Expand framework and policy foundations
In addition to ensuring that governance on the whole is more accountable, data leaders must also contend with the fact that their existing governance frameworks won’t inherently be capable of adequately addressing the unique challenges that AI systems create. What’s more, legacy frameworks and data governance policies may lack provisions for handling the unstructured data, real-time decision making, and black box functionality that are inherent in many of today’s AI models.
Therefore, data leaders need to challenge data governance norms by expanding governance’s domain to include model behavior, system architecture, and algorithmic accountability for any AI that their organizations use. Doing so may require a gap analysis using AI-specific governance and risk management frameworks like ISO/IEC 42001 or the NIST AI Risk Management Framework, followed by phased updates that prioritize the most critical gaps first.
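A gap analysis can start as simply as a scored checklist. The sketch below illustrates the idea; the control names loosely paraphrase common AI-governance themes and are not verbatim ISO/IEC 42001 or NIST AI RMF requirements.

```python
# Illustrative control checklist; items paraphrase common AI-governance themes
# rather than quoting any specific standard.
controls = {
    "ai_system_inventory_maintained": True,
    "model_risk_classification_defined": False,
    "training_data_provenance_documented": True,
    "bias_and_drift_monitoring_in_place": False,
    "incident_response_covers_ai_failures": False,
}

def gap_report(checklist: dict[str, bool]) -> list[str]:
    """Return the controls that are not yet satisfied, i.e., the governance gaps."""
    return [name for name, satisfied in checklist.items() if not satisfied]

gaps = gap_report(controls)
print(f"{len(gaps)} gaps to prioritize:")
for gap in gaps:
    print(f"  - {gap}")
```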
Leaders should also develop backward compatibility strategies that avoid disrupting existing data operations. From there, they can establish a forward-looking roadmap and continuous review process that evolves alongside AI use while reinforcing core principles like data integrity across increasingly complex environments.
Success indicators:
- Frameworks are regularly audited and updated to address AI-specific risks like bias, drift, and explainability.
- Data classification has been expanded to include unstructured and sensitive data across systems.
- A centralized data catalog has been integrated to support discovery, lineage, and compliance tracking.
- Data lineage tracking and model documentation are complete, current, and accessible.
- Compliance is actively mapped to current and emerging AI regulations.
- Establish specialized controls for LLMs and generative AI
Unlike conventional software, LLMs offer no reliable way to delete specific outputs or “unlearn” proprietary information. And increasingly, malicious users, curious employees, and automated prompts are circumventing their internal guardrails using tactics like prompt injection, adversarial inputs, and context window manipulation.
These methods can cause LLMs to bypass intended safety mechanisms, generate unauthorized or harmful outputs, or inadvertently disclose sensitive information, all of which pose significant data security risks. Therefore, data leaders should continuously monitor generative AI outputs to detect drift and bias, in addition to harmful content generation.
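One lightweight way to operationalize that monitoring is an output gate that screens generated text before it reaches users, blocking obvious leaks and routing borderline cases to a human reviewer. The rules below are illustrative stand-ins for the moderation models and policy engines a production system would layer on top; the phrases and patterns are assumptions, not a recommended policy.

```python
import re

# Illustrative screening rules; production systems would layer vetted
# moderation models and policy engines on top of simple checks like these.
BLOCKED_PHRASES = ("internal use only", "confidential", "api_key")
PII_EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def screen_output(generated_text: str) -> str:
    """Decide whether a model response can be released, must be blocked,
    or should be routed to a human reviewer."""
    lowered = generated_text.lower()
    if PII_EMAIL.search(generated_text):
        return "block"          # never release detected PII
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "human_review"   # queue for a reviewer before release
    return "release"

print(screen_output("Here is the summary you asked for."))        # release
print(screen_output("Contact the lead at j.smith@example.com."))  # block
```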
As a baseline, leaders should also conduct red team exercises at least annually, and ideally every six months, to regularly stress test their governance processes’ and frameworks’ ability to mitigate AI-specific threats. As part of adopting and implementing the findings from these exercises, they should regularly examine and update incident response procedures to ensure that their organizations are truly prepared to react to generative AI failures or misuse, particularly in high-risk areas like data privacy and content integrity.
Success indicators:
- Risk assessments for generative AI—such as prompt injection and data leakage—are conducted regularly.
- Policies for content generation, IP, and output monitoring are documented and enforced.
- Human-in-the-loop and output filtering controls are implemented and demonstrably effective.
- Data provenance for generated content is consistently tracked and validated.
- Normalize continuous auditing, monitoring, and process evolution
The greatest challenge that organizational AI use presents for data leaders stems more from the rate at which the technology is evolving than from any one facet of its existing functionality.
Since the AI landscape shows no signs of slowing down, the new normal for data leaders must include more regular audits and updates to ensure that governance remains relevant and effective, minimizing risk exposure while maintaining optimal data quality. For many leaders, this will require establishing governance dashboards that provide real-time visibility into compliance status and policy effectiveness. The information these dashboards provide can then inform regular governance communications that keep all stakeholders informed of updates and changes within the organization.
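The underlying metrics for such a dashboard can start small. Here’s a minimal sketch that rolls up coverage figures (ownership, lineage, classification) across a hypothetical dataset inventory; the record fields and dataset names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    # Hypothetical per-dataset governance attributes
    name: str
    has_owner: bool
    lineage_documented: bool
    classified: bool

def compliance_snapshot(datasets: list[DatasetRecord]) -> dict[str, float]:
    """Roll up simple coverage metrics a governance dashboard might display."""
    total = len(datasets) or 1
    return {
        "ownership_coverage": sum(d.has_owner for d in datasets) / total,
        "lineage_coverage": sum(d.lineage_documented for d in datasets) / total,
        "classification_coverage": sum(d.classified for d in datasets) / total,
    }

inventory = [
    DatasetRecord("customer_events", True, True, True),
    DatasetRecord("vendor_training_corpus", True, False, False),
    DatasetRecord("support_transcripts", False, False, True),
]
print(compliance_snapshot(inventory))
```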
To complement this more robust, real-time stream of communication, data leaders may also need to implement versioning systems for governance documents that can track changes over time, in addition to developing change management procedures that ensure smooth transitions when they need to update governance policies.
Success indicators:
- A consistent schedule of ongoing governance reviews and audits is followed across all teams.
- New technology solutions and regulatory developments are monitored and acted upon.
- Training and best practices are updated regularly as threats and opportunities evolve.
- Governance efforts are benchmarked against industry peers and standards.
- Tools that support data discovery are actively used to improve visibility and identify gaps in AI model training data.
While no single approach represents a full solution on its own, together these practices provide a durable foundation for adapting governance to the realities of AI in the enterprise. And as the pace of AI integration accelerates, data leaders who embed these capabilities now will be far better equipped to meet tomorrow’s challenges with confidence, not chaos.
Weathering future storms: Strengthening data governance for AI at scale
The current explosion in AI development is less a passing trend and more a sustained storm system that disrupts established norms concerning workflows, accountability, and risk. As traditional governance frameworks strain under that surge, data leaders are increasingly finding themselves in crisis mode and scrambling to adapt in real time, often without the tools or clarity they need to stay ahead.
This is precisely why a growing number of forward-looking leaders are exploring shift-left data thinking—a mindset that embeds quality, ownership, and governance upstream in the form of data contracts long before problems can take root. In practice, shift-left data doesn’t discard existing frameworks. It instead strengthens them by helping organizations anticipate risks, streamline oversight, and scale more effectively.
Fortunately, it’s never been easier to dive into this conversation. To start, read The Shift Left Data Manifesto by Gable CEO and co-founder Chad Sanderson. It outlines the key principles, cultural shifts, and practical strategies you need to build resilient data governance from the ground up, so you can be ready for whatever the next AI-fueled storm brings.