It's been over a decade since BCBS 239 was published. Only 2 of 31 G-SIBs have been assessed as fully compliant. If your bank is one of the other 29, or one of the hundreds of large institutions facing equivalent scrutiny from US and European supervisors, you already know the status quo isn't working. The question is why, and what to do differently.

The regulatory frame: this isn't optional, and it isn't foreign

US banks sometimes treat BCBS 239 as a Basel Committee aspiration, important but non-binding. That's a misreading. The substance of BCBS 239 is already embedded in US domestic law and supervisory practice. The labels are different; the expectations are not.

OCC Heightened Standards (12 CFR 30, Appendix D) apply to every OCC-supervised bank with $50 billion or more in total assets. Section II.J requires "data architecture and information technology infrastructure that support the covered bank's risk aggregation and reporting needs during normal times and during times of stress" (§II.J.1). That's Principles 2–6 of BCBS 239 restated in domestic regulatory language. Your OCC examiner evaluates your risk data aggregation against Appendix D, and the bar is rising.

The Fed's LFI Rating Framework (SR 19-3) covers data governance under the Governance & Controls component. A poor rating restricts activities (mergers, capital distributions, new products) until the deficiency is remediated. The Fed doesn't cite BCBS 239 by name, but the substance is identical: can you aggregate risk data accurately, completely, and on time, under stress?

The ECB has been the most transparent supervisor about what good looks like in practice. Their published guides and expectations are the best public articulation of what US examiners are also evaluating, which is why the ECB sources cited throughout this page are directly useful even for US-only institutions. Think of the ECB's RDARR Guide as the most detailed playbook available for meeting expectations your domestic supervisor already holds.

The consequences are structural and lasting

Supervisory consequences for risk data failures aren't fines you pay and move on from. They reshape what your institution can do, for years.

Wells Fargo's 2018 Fed asset cap is the clearest illustration: a $1.95 trillion ceiling imposed for broad risk management framework failures, including deficiencies in data and controls. The cap lasted over seven years, constraining growth across every business line. Wells Fargo closed 13 consent orders between 2019 and mid-2025 and still had enforcement actions outstanding. When the cap was finally lifted in June 2025, the remediation effort had consumed the better part of a decade of institutional focus.

This is why supervisors scrutinize remediation plans so carefully. A plan built on vague milestones, untested self-assessments, and recurring date slips signals that the institution hasn't internalized the scope of the problem. And the regulatory response, as Wells Fargo demonstrated, is to constrain the institution until the underlying capability is genuinely rebuilt.

Why progress keeps stalling

The instinct is to blame execution: missed deadlines, under-resourced programs, competing priorities. There's truth in all of that. But supervisors are describing something more fundamental.

The Basel Committee's November 2023 stocktake found "a material gap between reported self-assessment ratings and the Committee's view". Banks are telling supervisors they're further along than they actually are, and supervisors know it. That gap isn't a communication problem. It's a measurement problem, and it's corrosive: if your maturity scorecards aren't grounded in testable evidence, every decision built on them rests on a false picture.

The ECB has been even more direct. In March 2024, Sharon Donnery and Frank Elderson wrote that "adequate RDARR capabilities… are still the exception" and that "many banks have failed to fully address the weaknesses identified". By February 2025, the ECB's supervisory newsletter catalogued the recurring failure modes: many banks lack proper recent or regular gap analyses, repeatedly fail to provide credible target end dates, and still rely too heavily on weakly controlled manual workarounds.

Three root causes show up again and again.

The scope is wrong. Most programs focus on the reporting layer: the dashboards, the risk reports, the aggregation engines. But BCBS 239 applies to the entire data aggregation process, from capture to final output. The ECB has stated plainly that many frameworks do not cover the entire data aggregation process from data capture to final reporting. The OCC's Appendix D makes the same point domestically: the required infrastructure must support aggregation and reporting "needs," plural, end-to-end. If you've optimized your downstream reporting while leaving upstream sourcing, transformations, and controls out of scope, you've built compliance on a partial foundation. Supervisors will find the gaps.

Governance runs ahead of engineering. BCBS 239 requires both. Principle 1 demands strong governance arrangements: "A bank's risk data aggregation capabilities and risk reporting practices should be subject to strong governance arrangements, strong internal controls and independent validation and review" (p. 8). But Principle 3 is equally non-negotiable: "A bank should be able to generate accurate and reliable risk data to meet normal/stress reporting accuracy requirements on a largely automated basis…" (p. 12). Many programs matured the committee structure, the policy library, and the attestation process while the actual data pipelines still depend on manual handoffs, email-driven reconciliations, and spreadsheet bridges. Governance without automation is a castle built on sand.

The problems are structural, not tactical. Industry analysis from BearingPoint traces the pattern back to the Basel Committee's 2018 review, which found "serious deficiencies in RDARR practices" that "remained unresolved" years later. Fragmented data ownership, incompatible taxonomies across business lines, duplicated controls that no one trusts. These aren't project management failures. They're architecture failures. You can't sprint your way out of them.

Who needs to be in the room

If BCBS 239 lives in the CDO's office and nowhere else, it will fail. The OCC, Fed, and ECB have all made multi-level accountability explicit, and building the right coalition is half the battle.

The board and C-suite own trajectory, not just oversight. The ECB RDARR Guide states that "the management body is responsible for approving and overseeing the implementation of the risk data aggregation and risk reporting framework" (p. 10). In practice, the ECB has asked banks to appoint a specific board member to monitor and report regularly, backed by a clear action plan containing intermediate and measurable milestones. The February 2025 newsletter makes the point concrete: 105 banks are now covered in the Management Report process, and management body members personally sign RDARR metrics. On the US side, the OCC's Heightened Standards require the board to "ensure" the risk governance framework is appropriately designed and effectively implemented, including the data infrastructure that supports it. This isn't delegable anymore. You need a board sponsor who understands what they're signing.

CDO/CRO and risk-data leadership translate principles into enforceable standards. Your job is to turn supervisory expectations into operating rules: data definitions, control taxonomies, lineage requirements, quality thresholds, and remediation SLAs. Deloitte's 2024 analysis emphasizes that BCBS 239 compliance is a firm-wide responsibility and that expectations evolve over time. Static control libraries won't cut it. Standards must be continuously recalibrated as supervisory expectations sharpen and your architecture changes.

Engineering and architecture teams own the "largely automated" outcome. This is where many programs go wrong: treating engineering as a downstream implementer rather than a core compliance function. When supervisors flag weakly controlled manual workarounds and incomplete process coverage, they're describing engineering problems. If your platform teams can't provide stable, controlled, traceable data movement and transformation, no amount of governance structure will compensate.

Internal audit and independent validation challenge and verify. The ECB has been explicit that internal audit departments should carry out a review. The RDARR Guide goes further, calling for a dedicated independent validation function:

"Institutions should have an independent validation function to review and challenge the adequacy and effectiveness of their risk data aggregation capabilities and risk reporting practices." (ECB RDARR Guide, p. 23)

US examiners expect the same. The OCC's Heightened Standards require independent risk management to provide "an objective assessment" of data quality and aggregation capabilities. This function should test declared maturity against reproducible evidence (lineage depth, control effectiveness, timeliness, reconciliation outcomes), not just policy conformance. If your independent validation is reviewing documents instead of testing systems, it's not independent validation.

What actually works

The programs that make real progress share a few characteristics. None of them are surprising. All of them are hard.

They define scope end-to-end before phasing delivery. Start from the critical risk reports and trace backward: what systems source this data, what transformations touch it, what controls gate it, where does manual intervention happen? Prioritize by materiality (capital, liquidity, credit concentrations, counterparty risk) but preserve the end-to-end design principle even when phasing. This directly addresses the ECB's concern about incomplete process coverage and the OCC's requirement for infrastructure that supports aggregation "needs" across the board. It gives you a defensible story when any supervisor asks for status by risk domain.
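The backward trace described above can be sketched as a simple breadth-first walk over an upstream-dependency map. Everything here is illustrative: the system names and the shape of the map are hypothetical, not a prescribed model.

```python
from collections import deque

# Hypothetical upstream-dependency map: node -> list of (upstream node, is_manual_step).
# System names are illustrative, not any real bank's inventory.
UPSTREAM = {
    "liquidity_report": [("aggregation_engine", False)],
    "aggregation_engine": [("positions_db", False), ("fx_rates_sheet", True)],
    "positions_db": [("trade_capture", False)],
    "fx_rates_sheet": [],   # manual spreadsheet bridge, no lineage beyond it
    "trade_capture": [],
}

def trace_scope(report):
    """Walk backward from a critical report to every upstream system,
    collecting the full in-scope set and any manual touchpoints."""
    in_scope, manual_steps = set(), []
    queue = deque([report])
    while queue:
        node = queue.popleft()
        if node in in_scope:
            continue
        in_scope.add(node)
        for upstream, is_manual in UPSTREAM.get(node, []):
            if is_manual:
                manual_steps.append((node, upstream))
            queue.append(upstream)
    return in_scope, manual_steps

scope, manual = trace_scope("liquidity_report")
```

The in-scope set becomes the defensible answer to "what is covered," and the manual-touchpoint list becomes the seed of the workaround catalogue discussed later.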

They build milestone plans that supervisors find credible. The ECB has specifically flagged repeated failure to provide credible target end dates. US examiners are equally unimpressed by aspirational timelines. A credible plan has a baseline of current-state control effectiveness and automation rates, milestone outcomes expressed as observable capabilities (not activity counts), dependency maps that acknowledge cross-team and cross-platform constraints, and explicit entry/exit criteria for each phase. When a date slips (and dates will slip), show the reason, the impact, and the compensating controls. Rebaselining without transparent rationale erodes credibility fast. The Wells Fargo experience is a reminder: regulators will constrain your institution for as long as it takes if your remediation execution doesn't match your remediation narrative.

They make gap analysis a quarterly operating discipline, not an annual exercise. Reassess principle alignment. Compare self-ratings with independently validated evidence. Test lineage completeness and control effectiveness. Update the remediation backlog. Escalate blockers through executive governance. The key is recency plus evidence. "Last assessed 12 months ago" is increasingly indefensible.
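One concrete form of the self-rating versus evidence comparison is a quarterly diff between self-assessed principle ratings and independently validated ones, flagging any principle where the self-rating runs ahead. The ratings below are hypothetical.

```python
# Hypothetical principle-level ratings (1-5 scale): self-assessed vs. what the
# independent validation function can actually evidence.
self_assessed = {"Principle 3": 4, "Principle 4": 4, "Principle 6": 3}
validated = {"Principle 3": 2, "Principle 4": 4, "Principle 6": 3}

# Flag every principle where the self-rating runs ahead of the evidence.
inflated = {p: (self_assessed[p], validated[p])
            for p in self_assessed if self_assessed[p] > validated[p]}
```

Anything in the flagged set goes straight to the remediation backlog and the executive escalation path, with the evidence gap attached.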

They treat data lineage as a control system. The ECB RDARR Guide sets a high bar: "Institutions should establish and maintain comprehensive data lineage at data attribute level…" (p. 16) and "Data lineage should be end-to-end…" (p. 17). Attribute-level, end-to-end lineage isn't documentation. It's infrastructure. It powers break detection, impact analysis, control attestation, and change governance. If your lineage lives in static spreadsheets or slide decks, it will not survive supervisory challenge.
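At attribute level, lineage becomes a graph you can query. A minimal sketch of the impact-analysis use case, with hypothetical attribute names: given a change to one upstream attribute, walk the lineage edges forward to every affected report field.

```python
# Hypothetical attribute-level lineage edges: (source_attr, derived_attr).
# Identifiers are illustrative only.
EDGES = [
    ("trade_capture.notional", "positions.exposure"),
    ("positions.exposure", "credit_report.concentration"),
    ("positions.exposure", "capital_report.rwa_input"),
    ("ref_data.lei", "credit_report.counterparty_id"),
]

def downstream_impact(changed_attr):
    """Forward-walk the lineage graph to find every derived attribute
    (and hence report field) affected by a change to one upstream attribute."""
    affected = set()
    frontier = {changed_attr}
    while frontier:
        nxt = set()
        for src, dst in EDGES:
            if src in frontier and dst not in affected:
                affected.add(dst)
                nxt.add(dst)
        frontier = nxt
    return affected

impacted = downstream_impact("trade_capture.notional")
```

The same graph, queried in the other direction, answers the supervisor's question "where does this report field come from" without a scramble through spreadsheets.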

They automate first. Because Principle 3 requires a "largely automated basis," every remediation item should be evaluated for whether it reduces manual dependency. Remove email-and-spreadsheet handoffs. Enforce standardized data contracts between producers and consumers. Add automated quality checks with threshold-based alerts. Integrate reconciliation and exception workflows. Log evidence for the audit trail. And formally catalogue every remaining manual workaround with an owner, a risk rating, an expiry date, and a replacement plan.
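The workaround catalogue described above is straightforward to make machine-checkable, so that expired workarounds surface automatically rather than lingering. A minimal sketch with hypothetical entries:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ManualWorkaround:
    # Illustrative fields mirroring the catalogue described above.
    name: str
    owner: str
    risk_rating: str          # e.g. "high" / "medium" / "low"
    expiry: date
    replacement_plan: str

def overdue(catalogue, today):
    """Flag workarounds past their expiry date for escalation."""
    return [w for w in catalogue if w.expiry < today]

catalogue = [
    ManualWorkaround("fx-rates spreadsheet bridge", "market-data team",
                     "high", date(2025, 6, 30), "automated rates feed"),
    ManualWorkaround("email recon sign-off", "finance ops",
                     "medium", date(2026, 3, 31), "workflow tool"),
]
stale = overdue(catalogue, date(2025, 9, 1))
```

An expired entry with no replacement delivered is exactly the kind of observable, dated fact that belongs in executive escalation rather than a narrative status slide.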

They embed RDARR controls into everyday change processes. This is how compliance shifts from a remediation project to a durable operating model. Architecture review gates require lineage and control impact assessments. Release governance includes control testing for affected data attributes. Model and reporting changes trigger upstream dependency checks. Incident management classifies RDARR control failures and tracks recurrence. When BCBS work is detached from how the bank actually delivers change, it atrophies.
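An architecture or release gate of the kind described can be expressed as a simple pre-release check: block the change unless every touched attribute has a current lineage record and a passing control test. A hypothetical sketch, not any specific tool's API:

```python
def release_gate(changed_attrs, control_results, lineage_coverage):
    """Approve a change only if every touched data attribute has a current
    lineage record and a passing control test; otherwise return the reasons."""
    reasons = []
    for attr in changed_attrs:
        if attr not in lineage_coverage:
            reasons.append(f"{attr}: no current lineage record")
        if not control_results.get(attr, False):
            reasons.append(f"{attr}: control test missing or failing")
    return (not reasons), reasons

# A change touching two attributes, one of them untracked and uncontrolled.
ok, why = release_gate(
    changed_attrs=["positions.exposure", "ref_data.lei"],
    control_results={"positions.exposure": True},
    lineage_coverage={"positions.exposure"},
)
```

Wired into release governance, the reasons list doubles as the audit evidence that the gate actually fired.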

The patterns to watch for

If you're leading this program, you'll recognize these traps. Your bank may already be in one.

"We completed the catalog" ≠ compliance. A populated data catalog is useful, but supervisors are testing whether controls are effective in production and whether lineage is complete at attribute level across the full chain. Cataloging is a starting point, not a finish line.

Maturity averages hide critical gaps. High scores in selected domains can mask low control coverage elsewhere. Aggregate dashboards that average across risk domains will dilute the exceptions that actually matter to supervisors.
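The dilution effect is easy to see with numbers. With hypothetical maturity scores on a 1-5 scale, the average looks comfortable while one domain sits far below the bar, which is exactly the exception a supervisor will probe:

```python
# Illustrative maturity scores by risk domain (1-5 scale, hypothetical).
scores = {"credit": 4.2, "market": 4.5, "liquidity": 4.1, "counterparty": 1.8}

average = sum(scores.values()) / len(scores)   # looks healthy in aggregate
weakest = min(scores, key=scores.get)          # what supervisors probe

# Report the exception alongside the mean, not the mean alone.
summary = {"average": round(average, 2),
           "weakest_domain": weakest,
           "weakest_score": scores[weakest]}
```

A 3.65 average with a 1.8 domain buried inside it is a different conversation with your examiner than either number alone.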

Self-assessment inflation is now a known pattern. The Basel Committee named it. If your internal ratings don't have evidence-based scoring and independent challenge, assume they're optimistic.

Governance without engineering uplift is a dead end. Committees, policies, and reporting packs do not satisfy Principle 3 if the underlying process still depends on weakly controlled manual steps. Governance structure and platform modernization have to move together.

The path forward

As the ECB put it: "Now is a particularly opportune moment for them to be investing". US supervisors are saying the same thing through MRAs, MRIAs, and enforcement actions. The window for voluntary action is narrowing on both sides of the Atlantic.

The banks that will get there are the ones that reset accountability at the board level, baseline their actual state with evidence rather than self-assessments, define scope from data capture through final reporting, and execute with an automation-first bias. For the institution that gets it right, BCBS 239 compliance becomes a competitive advantage in decision speed, risk resilience, and regulatory trust.