by Clint Boulton

Deutsche Bank CIO shows how to do software right in the digital era

Feature
Jun 18, 2018
Digital Transformation, Financial Services Industry, Software Development

Frédéric Véron extols the virtues of smart software execution, deploying machine learning and sound principles to ensure the bank’s software operates optimally in a sector where downtime can be deadly.

Most CIOs can talk all day about how they’re leveraging machine learning, blockchain and other emerging technologies to differentiate their businesses. But IT leaders rarely discuss how they’re improving the performance of their software, a critical requirement in the digital era, says Frédéric Véron, who doubles as CIO and head of safety and soundness for Deutsche Bank.

In fact, Véron says, many IT departments aren’t aware of the state of their software, let alone whether end users are actually using it. Ensuring robust software from inception to deployment is Véron’s chief remit at Deutsche Bank, which he joined from Fannie Mae last August. For many CIOs, Véron’s IT production role may not rate highly. But he basks in it.

“We need to know our software much better than ever before,” Véron told CIO.com at the MIT Sloan CIO Symposium last month. “At the end of the day if the CIO doesn’t run the system, the CIO doesn’t have much chance to see tomorrow in that organization.” A hack is one thing, but bad code can also bring down a network. Véron noted how France’s trains were recently down for 36 hours because of a single, faulty line of code in the software controlling the transit system.

Bad software can cripple companies

The last thing any bank wants is a systems outage, but such an episode would come at a tough time for Deutsche Bank, which posted $612 million in losses in 2017. In April, the German bank fired CEO John Cryan after only three years on the job for taking too long to revamp the company. The bank, which has pledged to cut 7,000 jobs, recently said it will likely report another quarter of declining revenue through June.

Véron employs several tactics to bolster software quality and hedge against outages. Chief among these is a risk-scoring system that uses 60 parameters — including incident history, associated problem tickets and information about the team managing the software — to analyze an application from the infrastructure layer all the way to its user interface.

The technology, which uses machine learning, calculates whether there is a greater than 60 percent chance that a piece of software will suffer an outage within a month. The proprietary tool also predicts the potential risks to an application when changes are made to it. “A lot of IT shops should invest more in predictive analytics” for such tasks, Véron says.
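
Deutsche Bank hasn’t published the internals of its proprietary tool, but the general pattern (train a classifier on historical application data, then flag anything whose predicted outage probability crosses 60 percent) can be sketched briefly. In the minimal Python sketch below, the feature names, synthetic data, and the choice of scikit-learn’s gradient-boosted classifier are all illustrative assumptions, not the bank’s actual design:

```python
# Minimal sketch of an outage risk-scoring model, loosely patterned on the
# approach described above. Feature names, data, threshold, and model choice
# are illustrative assumptions, not Deutsche Bank's actual design.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic training data: each row is one application, each column one risk
# parameter (the real system reportedly uses about 60 of them).
FEATURES = ["incidents_90d", "open_problem_tickets", "change_volume_30d",
            "team_turnover_pct", "infra_alerts_30d"]
X = rng.random((500, len(FEATURES)))
# Label: 1 if the application suffered an outage within the following month.
y = (X[:, 0] + X[:, 2] + rng.normal(0, 0.3, 500) > 1.2).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def outage_risk(app_features):
    """Return the predicted probability of an outage in the next month."""
    return model.predict_proba(np.asarray(app_features).reshape(1, -1))[0, 1]

# Flag applications above the 60 percent threshold mentioned in the article.
candidate = [0.9, 0.4, 0.8, 0.2, 0.7]
p = outage_risk(candidate)
if p > 0.60:
    print(f"High risk: {p:.0%} chance of an outage within a month")
```

In a production version of this idea, the labels would come from the bank’s actual incident history and the feature vector from the 60 parameters Véron describes, rather than synthetic data.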

As the bank moves further into agile development, Véron regularly measures the readiness of new software releases, and rejects changes that don’t score higher in quality than the previous version. Véron will also acquire software that monitors and analyzes application health, feeding this information into the bank’s risk-scoring analytics.
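
The article doesn’t say how that readiness score is computed. As a rough illustration of the gating rule itself (reject any change that doesn’t score higher in quality than the version already in production), here is a sketch with made-up score components and weights:

```python
# Hypothetical release gate: a new version is rejected unless its composite
# quality score beats the version currently in production. The components
# and weights here are illustrative, not the bank's actual metrics.
from dataclasses import dataclass

@dataclass
class ReleaseScore:
    test_pass_rate: float    # fraction of tests passing, 0..1
    defect_density: float    # defects per KLOC, lower is better
    perf_regression: float   # fraction of benchmarks that regressed

    def composite(self) -> float:
        return (self.test_pass_rate
                - 0.1 * self.defect_density
                - 0.5 * self.perf_regression)

def gate(new: ReleaseScore, current: ReleaseScore) -> bool:
    """Allow the release only if quality strictly improves."""
    return new.composite() > current.composite()

current = ReleaseScore(0.97, 1.2, 0.00)
candidate = ReleaseScore(0.98, 0.9, 0.02)
print("ship" if gate(candidate, current) else "reject")
```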

Two-minute fire drill

To help his team better respond to inevitable incidents, Véron is looking to hire IT staff with military and first-responder backgrounds to manage crisis situations during software outages. Such hires would inject a sense of urgency into his team, which could help cut recovery time from 20 minutes down to two.

Some CIOs might look askance at this hair-on-fire approach to IT management. But consider this horror story, published by The Atlantic last September, about an impending “software apocalypse” in which poorly written code, not nation-state hackers, could cripple critical infrastructure, grinding societies to a halt.

Indeed, Véron said that while universities teach students how to develop software and manage projects, higher education largely ignores IT production. Calling it a “second-class citizen” job that’s been left behind, Véron said, “We’re not putting enough investment in optimizing this.”

How to ensure high-quality software performance

Ultimately, Véron says there are three keys to ensuring high-quality software performance.

Be hyper-aware

Being hyper-aware requires CIOs to know how end users are using their software: which features they are or aren’t using, what they use them for, and whether they’re using the software for purposes beyond its original intent. CIOs must continuously take baseline metrics to understand normal operating conditions for a system, all the way down to the customer level. “It’s about making sure the systems you have run and are working with the business so they understand their accountability on those systems,” said Véron.
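
Véron doesn’t describe the bank’s monitoring stack, but one hedged illustration of continuously taking baseline metrics is to learn a rolling baseline for a single metric and flag readings that deviate from it. In the sketch below, the metric, window size, and deviation threshold are all assumptions:

```python
# Bare-bones baselining sketch: learn normal operating conditions from a
# rolling window of a metric (here, request latency in ms) and flag readings
# that deviate sharply. Metric, window, and threshold are assumptions.
from collections import deque
from statistics import mean, stdev

class Baseline:
    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # rolling window of readings
        self.threshold = threshold           # allowed deviation, in std devs

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it deviates from the baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # wait until a minimal baseline exists
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > self.threshold * sigma
        self.samples.append(value)
        return anomalous

latency = Baseline()
for reading in [102, 98, 101, 99, 100, 97, 103, 100, 99, 101, 250]:
    if latency.observe(reading):
        print(f"Deviation from baseline: {reading} ms")
```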

Ensure operational readiness

To ensure a system is operationally ready, Véron advises CIOs to get everyone involved earlier in the planning process. “All the different procedures that will be necessary to maintain the system in production mode need to be thought through ahead of time,” Véron said.

Fail fast, adapt fast

Adopting agile and DevOps, which are designed to allow iterative flexibility in software development and closer collaboration with customers, lets CIOs fail fast with minimum viable products and adapt accordingly. At Fannie Mae, Véron helped move the company’s software development from 5 percent to 85 percent agile in two years.

Ultimately, for CIOs raving about their innovation plans, Véron cautions that those still struggling to stabilize systems will not last long enough to innovate for, and become trusted partners to, their businesses. “If you’re still struggling with system stability, you’re not going to go much further,” Véron said. “Without software we don’t have much.”