Intelligent Automation (IA) is rapidly transforming the way organizations handle everything from finance and accounting to operations and human resources. Basic Robotic Process Automation (RPA) and enhanced process solutions such as artificial intelligence (AI) unlock the potential to do things faster, better, and cheaper than before, and these technologies have become fairly easy to obtain and deploy, with quick ROI, according to Martin Sokalski, Principal, Emerging Technology Risk at KPMG.
However, expansive adoption of and reliance on IA, along with the pursuit of deeper cognitive capabilities, have also brought new risk and governance considerations, especially as automation is introduced across organizational functions subject to higher compliance or operational scrutiny.
“We’re not seeing people take advantage of economies of scale; they’re not effectively governing or fully optimizing those multiple instances of automation across the organization through a cohesive strategy,” says Sokalski. “Automation creates a lot of efficiency, cost take-out, and opportunities, but you’re not realizing all the benefits when you stand up redundant environments and don’t leverage best practices through governance.”
Certainly, no one wants wild or rogue bots causing financial losses or operational failures that compromise critical processes. To take full advantage of the opportunities of IA and avoid the pitfalls, there are a variety of enterprise risk considerations to keep in mind when implementing either basic or enhanced RPA, or getting deeper into more complex cognitive tools:
- Well-defined risk appetite and compliance requirements. Without a clear understanding of risk appetite and tolerance, as well as compliance requirements, it is difficult to establish a proper risk management and governance program that will effectively identify, evaluate, monitor, manage, and mitigate risk.
- Identity management and security. There are a wide variety of “how” questions to answer when it comes to data security and access. How are you considering bots in the context of identity, authentication, and access provisioning? How do you incorporate controls, segregation of duties, traceability, and accountability? How do you assign, change, monitor, and remove bot access, and which systems can bots actually reach?
- Controls integration, logging, and traceability. A lack of controls around your automation program can prevent you from meeting security, privacy, and compliance requirements. Being able to demonstrate what the bots have actually done is also important. But often there isn’t a good way of doing that, since today’s platforms have varying levels of capability and maturity around identity and access management, segregation of duties, and capturing activities, logs, and audit trails.
- Business continuity. Bots can perform a mind-boggling array of tasks and activities without any breaks. However, inconsistent developer skills and training, as well as a lack of change management processes and other controls, are just a few factors that could create an unstable bot environment and an increased bot failure rate. What happens if there is a disruption or failure? How do you maintain business continuity and the availability of your mission-critical business processes?
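The identity and traceability questions above can be made concrete with a small sketch. This is purely illustrative, not any vendor's API: the `BotIdentity`, `AuditTrail`, and `perform` names are hypothetical, and the sketch assumes each bot is treated as a first-class identity with explicitly provisioned system access, conflicting duties it must not combine, and an audit entry for every attempted action.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BotIdentity:
    bot_id: str
    allowed_systems: set   # systems this bot identity is provisioned to access
    duties: set            # duties assigned to it (for segregation checks)

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def log(self, bot_id: str, system: str, action: str, outcome: str):
        # Capture who (which bot), where, what, when, and the result.
        self.entries.append({
            "bot": bot_id, "system": system, "action": action,
            "outcome": outcome, "at": datetime.now(timezone.utc).isoformat(),
        })

# Pairs of duties that no single bot identity should hold together.
CONFLICTS = {frozenset({"create_expense", "approve_expense"})}

def check_segregation(bot: BotIdentity) -> bool:
    # A bot passes only if it holds no conflicting duty pair in full.
    return not any(pair <= bot.duties for pair in CONFLICTS)

def perform(bot: BotIdentity, system: str, action: str, trail: AuditTrail) -> bool:
    # Deny access outside the bot's provisioned scope, but log every attempt
    # either way, so the trail shows what the bot actually tried to do.
    allowed = system in bot.allowed_systems
    trail.log(bot.bot_id, system, action, "allowed" if allowed else "denied")
    return allowed
```

In a real deployment the audit trail would go to an append-only store and access scopes would come from the enterprise identity provider, rather than being hard-coded per bot.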
Designing a Risk and Governance Function That Works
A well-designed risk and governance function is key to implementing IA effectively: it guides and informs the overall automation program's strategy, delivery, and operations. “We view risk and governance to be really at the heart of it all,” says Sokalski.
The governance function needs to influence the automation strategy, including which platforms will be chosen, which functional stakeholders will steer the program, which use cases will be approved, and how they will be prioritized. “You have to think about what the roadmap is for growth and maturity of the program,” he says.
Once that’s defined, the organization needs to determine how to deliver the automated solutions. Governance and risk come into play by providing training and tools to enable automation developers, whether in a central function or in different functional groups.
Controls integration has to be considered early in the process: For example, will the bots be performing key reconciliations? Providing approvals for expenses? Depending on what the automation bot is designed to do, segregation of duties, identity and access management, edit checks, logging and monitoring, approval requirements, and notifications will need to be fully vetted and considered. “Not every bot needs to be created equal,” Sokalski cautions. “Some bots will be basic bots, and others will perform complex and mission-critical activities or be subject to compliance reporting requirements.”
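The point that not every bot needs to be created equal can be pictured as a simple control-tiering rule: every bot gets a baseline, and higher-criticality tiers add mandatory controls. The tier names and control labels below are illustrative assumptions, not a standard taxonomy.

```python
# Baseline controls that apply to every bot, regardless of tier.
BASE_CONTROLS = {"access_review", "activity_logging"}

# Additional controls mandated per criticality tier (hypothetical labels).
TIER_CONTROLS = {
    "basic": set(),
    "key_control": {"segregation_of_duties", "edit_checks", "approval_workflow"},
    "compliance": {"segregation_of_duties", "edit_checks", "approval_workflow",
                   "independent_monitoring", "audit_trail_retention"},
}

def required_controls(tier: str) -> set:
    # Every bot gets the baseline; higher tiers layer on mandatory controls.
    return BASE_CONTROLS | TIER_CONTROLS[tier]
```

The design choice is that tiers only ever add controls, so a bot reclassified upward (say, from basic to compliance-relevant) can never lose a control it already had.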
Within operations, he recommends organizations deploy key risk indicators (KRIs) in addition to the typical key performance indicators (KPIs). “You need to get data back from the bots that are out there and use it to monitor and optimize risk,” he explains. “There needs to be a mechanism to report that back into the governance function to enable effective risk management (or acceptance).”
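One way to picture that KRI feedback loop is a minimal sketch that derives a failure-rate KRI from bot run telemetry and flags bots breaching an agreed threshold for reporting back to the governance function. The field names and the 5% threshold are illustrative assumptions, not part of any stated methodology.

```python
def failure_rate(runs):
    """Failure-rate KRI for one bot; runs is a list of dicts like
    {"status": "ok"} or {"status": "failed"} from run telemetry."""
    if not runs:
        return 0.0
    failed = sum(1 for r in runs if r["status"] == "failed")
    return failed / len(runs)

def kri_report(runs_by_bot, threshold=0.05):
    # Return only the bots whose failure rate exceeds the agreed
    # risk appetite, ready for escalation to governance stakeholders.
    return {
        bot: rate
        for bot, runs in runs_by_bot.items()
        if (rate := failure_rate(runs)) > threshold
    }
```

For example, a bot with 2 failures across 20 runs (10%) would be flagged, while a bot with no failures would not appear in the report at all.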
Getting Started: Plan, Build, Run
To effectively integrate risk and governance into the overall automation program, organizations should incorporate a risk and governance workstream into the broader program’s plan, build, and run implementation approach. In the planning stage, that means defining the risk appetite and establishing risk management guidelines to identify, evaluate, and manage risk. “You need to be thinking about what could go wrong,” says Sokalski. “As you look at the pipeline of automation opportunities, there needs to be a mechanism in place to set the ground rules for how to go about automation.”
When it comes to the “build” process, a key recommendation is standing up or fully operationalizing a governance function. “Often we see our clients diving into deep water, which is great for taking immediate advantage of ROI opportunities, but they don’t always consider risk and governance and then they may find themselves in trouble,” he explains. One client, for example, ended up far along their automation journey with hundreds of automation solutions across different functions. Not until very late in the process did they address what this meant from a risk, controls and compliance perspective. “We assessed and uncovered a lot of issues,” he says. “They ended up having to basically disrupt the program because those considerations weren’t thought about early in the process.”
Finally, the “run” aspect is about supporting the ongoing execution of the automation program through a risk lens. This is a key function and objective of governance, and it can exist in different forms, including a Center of Excellence (CoE). Tactically, that means operationalizing a CoE; managing people and change; monitoring and optimizing the program and individual bots; utilizing risk templates, registers, and toolkits as part of program operation; and reporting on compliance metrics and KRIs to key stakeholders.
Future Challenges for Risk Controls in Automation
“I am encouraged that a lot of organizations ask questions about risk, governance, controls, and compliance,” says Sokalski, who adds that as the technology and capabilities continue to mature, the trend will move toward more focus and discipline around risk governance and oversight. Risk Management and Internal Audit functions (as second and third lines of defense) have been raising their hands, asking what automated processes and bots mean from a risk, governance, controls, and compliance perspective. There have also been positive developments in the first line of defense, which is trying to address these concerns proactively.
However, as companies explore the art of the possible and identify new use cases enabled by cognitive technologies, artificial intelligence, or machine learning, the risk and governance picture becomes more complex and challenging, he says, explaining that it is harder to incorporate traditional controls into the process and expect the same level of assurance. The risk landscape and the ‘what could go wrong’ considerations apply differently, given the inherent black-box nature of AI algorithms as they constantly learn and evolve. Organizations cannot necessarily rely on traditional means of assurance, which drives the need to expand capabilities and methodologies.
It’s all about how the organization maintains control, Sokalski concludes: “How do we control the automation, how do we govern it, how do we bring it all together as we continue to innovate and realize the benefits of the art of the possible?”