How can artificial intelligence be scaled responsibly in the financial sector without compromising regulatory certainty and organizational stability? In their article, Dagmar-Elena Markworth and Markus Trost share selected insights from their conversations with Christian Rhino, CIO of the Private Bank of Deutsche Bank AG.
Whether at a global universal bank, a specialized asset manager, or an ambitious fintech, generative artificial intelligence is increasingly emerging as a core building block of the next wave of transformation in the financial sector.
Strategically, the topic has long since moved beyond the stage of isolated experiments. The central question today is how AI can be used to create sustainable and responsible value over time. Operationally, however, a different picture emerges: when it comes to large-scale implementation, restraint still prevails. The current study “State of Generative AI in the Financial Services Industry” by the Deloitte AI Institute confirms this tension: high expectations and numerous pilot projects, but so far only limited impact at scale.
From Vision to Impact
Sixty-nine percent of the financial institutions surveyed expect generative AI to substantially change their organization within the next one to three years. This makes the sector more optimistic than many other industries. At the same time, actual usage often remains fragmented. In many institutions, AI is still understood as a collection of isolated use cases rather than as a structural transformation of value creation, decision-making logic, and organizational design.
Christian Rhino, CIO of the Private Bank of Deutsche Bank AG, confirms that the bottleneck is rarely technological. In his view, the central obstacles lie much deeper, within the organizations themselves.
The real challenges for AI are certainly internal processes, organizational structures, and culture.
Complex decision-making pathways, silos, regulation, and a deeply ingrained security mindset make it difficult to embed new technologies consistently into day-to-day operations.
Large financial institutions in particular therefore face the task not only of introducing AI from a technical perspective, but of fundamentally rethinking their operating models, responsibilities, and leadership logic.
Technology Follows Strategy, Not the Other Way Around
One surprising finding of the Deloitte study is that generative AI is already more advanced in marketing, sales, and customer service than in many traditional core functions. Around 15 percent of financial services providers are already using AI at scale in these areas, for example for personalized customer communication, intelligent offer logic, or call-center applications.
These areas benefit from clearly defined use cases and fast feedback loops. Yet Rhino also warns against a common misconception.
AI is often seen as a universal solution, regardless of the actual problem. But success does not come from using as many AI tools as possible. It comes from first understanding which problem needs to be solved and why.
This brings into focus a fundamental question that is often neglected in transformation programs: not every challenge is an AI problem, and not every AI application automatically creates value.
From this perspective, priorities become clear. Only a precise understanding of the business and operational problem determines whether an AI solution makes sense at all, and if so, which one. Technology is a means to an end, not the starting point.
The decisive factors are clean data architectures, powerful data lakes, and secure, proprietary AI environments, along with the ability to operate these systems responsibly.
The Human Factor
Just as critical as technology and data is the human factor. The Deloitte study shows that only 21 percent of financial institutions feel well positioned in terms of AI talent. The shortage of qualified profiles is only part of the problem. Equally important is the question of the conditions under which employees and leaders are actually willing to use AI and to take responsibility for it.
In principle, there is no lack of interest or openness. Curiosity about AI is high in many institutions, both among employees and at management level. The restraint stems instead from uncertainty: unclear regulatory guardrails, inconsistent governance structures, and missing technical and data foundations make it difficult to push AI applications forward with sufficient decision confidence.
This uncertainty is particularly pronounced at the individual level. In highly regulated organizations, potential compliance risks are not abstract but personally relevant. A lack of model explainability, unclear data provenance, or methodological gray areas can have real consequences, from negative performance reviews and bonus implications to labor law issues. The perceived personal downside therefore often materializes sooner and more tangibly than the potential benefits of an AI application. This encourages defensive behavior and reinforces the tendency to confine innovation to pilot projects.
At the same time, AI offers a unique lever. As a comparatively democratic technology, it can be used by individuals as well as by entire organizations. The prerequisite is a clearly defined governance framework and deliberately created spaces for experimentation. Such controlled sandboxes make it possible to test use cases, gather experience, and systematically assess their potential before scaling them up or consciously discarding them.
Leadership plays a central role here. When boards, executive committees, and supervisory boards gain hands-on experience with AI at an early stage, this builds not only technical understanding but also trust. Active engagement reduces abstraction, accelerates learning, and increases decision confidence, including in compliance and audit matters. Dagmar-Elena Markworth captures this succinctly.
Whether AI can actually deliver impact depends less on the technology than on leadership, on whether responsibility is clearly assigned and consciously assumed.
For Odgers, this leads to a clear conclusion. What is needed are leaders who combine technological competence with regulatory judgment and who create security through clear frameworks, defined processes, and documented responsibilities.
A Practical Example: How the ECB Scales AI Responsibly
That artificial intelligence can also be deployed successfully and responsibly in highly regulated environments is demonstrated by the European Central Bank. With Athena, the ECB has developed its own natural language processing system to specifically support supervision and regulation. More than 1,000 supervisors can use it to analyze over five million documents from the Single Supervisory Mechanism, ranging from inspection reports and bank disclosures to external sources.
Athena classifies documents, identifies thematic focal points, detects trends, and enables consistent sentiment and risk analysis across institutions. Supervisors gain faster insight into where anomalies, emerging risks, or deviations are developing, both at the level of individual banks and across the sector. Analyses that were previously time-consuming and fragmented can now be carried out far more efficiently and comparably.
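The ECB has not published Athena’s internals, so the following is purely a minimal sketch of the kind of thematic document classification described here, using a standard TF-IDF and logistic regression pipeline from scikit-learn. The corpus, topic labels, and document texts are invented for illustration.

```python
# Illustrative only: the ECB has not disclosed Athena's architecture.
# A minimal sketch of thematic classification over supervisory documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical toy corpus standing in for inspection reports and disclosures.
documents = [
    "Credit risk provisions increased following loan book deterioration.",
    "Liquidity coverage ratio remains well above the regulatory minimum.",
    "IT outage affected payment processing for several hours.",
    "Non-performing loans rose in the commercial real estate portfolio.",
]
labels = ["credit_risk", "liquidity", "operational_risk", "credit_risk"]

# Fit a simple classifier that assigns each document a thematic label.
classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
classifier.fit(documents, labels)

# Classify a new document and show the per-topic confidence,
# so a reviewer can see why it was assigned to a theme.
new_doc = ["Provisioning for defaulted exposures was revised upward."]
for topic, prob in zip(classifier.classes_, classifier.predict_proba(new_doc)[0]):
    print(f"{topic}: {prob:.2f}")
print("Predicted topic:", classifier.predict(new_doc)[0])
```

At supervisory scale, the real system would of course rest on far larger training corpora and more capable language models; the sketch only shows the basic shape of turning raw documents into comparable thematic signals.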
What is remarkable here is less the technology itself than the operating model behind it.
The ECB deliberately follows a human-in-the-loop approach. The AI makes suggestions and identifies patterns, but assessment and decision-making remain with humans. Experts review contexts, interpret results, and provide feedback, which in turn flows back into the models. AI thus becomes not a replacement for professional expertise, but an amplifier of it.
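How the ECB implements this feedback loop is not public. As a minimal sketch of the pattern itself, assuming an entirely hypothetical review workflow with invented names and identifiers, the logic might look like this in Python:

```python
# Illustrative human-in-the-loop sketch, not the ECB's implementation:
# the model only proposes, a named reviewer decides, and every decision
# is logged so corrections can flow back into the next retraining cycle.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    document_id: str
    proposed_label: str
    confidence: float

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def add(self, suggestion: Suggestion, final_label: str, reviewer: str) -> None:
        # Store the model proposal next to the human decision; corrected
        # records are the raw material for refreshing the training data.
        self.records.append({
            "document_id": suggestion.document_id,
            "model_label": suggestion.proposed_label,
            "final_label": final_label,
            "reviewer": reviewer,
            "was_corrected": final_label != suggestion.proposed_label,
        })

def review(suggestion: Suggestion, human_label: str, reviewer: str,
           log: FeedbackLog) -> str:
    # The proposal is never applied automatically: whatever the model's
    # confidence, the human decision is the one that takes effect.
    log.add(suggestion, human_label, reviewer)
    return human_label

log = FeedbackLog()
suggestion = Suggestion("SSM-DOC-0815", "credit_risk", confidence=0.74)
final = review(suggestion, human_label="operational_risk",
               reviewer="supervisor_a", log=log)
print(f"Applied label: {final}; corrections logged: "
      f"{sum(r['was_corrected'] for r in log.records)}")
```

The design choice the sketch highlights is that accountability stays with the reviewer, while every confirmation or correction is captured as structured feedback rather than lost in ad hoc edits.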
At the same time, the ECB is making targeted investments in technological foundations. Proprietary computing capacity, secure environments, and close cooperation with other central banks, such as the Bank of England and the U.S. Federal Reserve, make it possible to develop best practices and jointly advance regulatory standards. Supervision thus becomes not only more efficient, but also more consistent and proactive.
That this approach is gaining recognition is underscored by a recent accolade. The ECB was named one of the winners of the Capital Best of AI Awards. Markus Trost puts this into perspective.
The ECB’s success shows that AI can be used not only safely but also effectively in highly regulated organizations when governance, technology, and leadership are consistently thought through together.
The Athena case therefore illustrates what matters most when deploying AI in financial services: clear responsibilities, robust data structures, early involvement of control and audit functions, and leadership that is willing to use new technologies itself rather than merely regulate them.
Conclusion: AI Is Not a Technology Issue, It Is a Leadership Task
In the end, it is not the performance of the models that determines the success of AI in financial services, but the people. Leaders must lead by example. They need to understand customer needs, define future products and business models, and above all set clear objectives for their employees. This also means developing the right talent and clearly allocating responsibilities. Organizations and their people must be willing to change if AI is to deliver its full impact.