AI presents governance questions for NZ IT leaders

by Sarah Putt
May 26, 2021
Artificial Intelligence | Chatbots | IT Governance

Issues include bias in recruitment, lack of transparency in algorithmic management, customer understanding, and inadvertently creating new cyberattack vectors.

Credit: Getty Images

Automation and AI may be in their infancy in enterprise IT, but their use is expanding across New Zealand organisations, from human resources to customer service. So how should the technology that now does the work people previously did be governed, and what should it be called?

AI tools more prevalent in NZ organisations

In its 2021 CIO Agenda survey, analyst firm Gartner notes that in New Zealand and Australia, robotic process automation (RPA) is present in 27% of the organisations it surveyed, with a further 20% expecting to deploy this technology in the next 12 months. Meanwhile, “multi-experience development platforms”, a term that includes chatbots, are present in 12% of organisations, with a further 17% expecting to deploy them in the coming year.

Gartner analyst Homan Farahmand describes software robots as “digital labourers” that either impersonate humans or independently automate manual tasks. “With their own characteristics, software robots represent a new type of digital worker entity that introduces new identity, governance, and administration requirements,” he says.

Bias and algorithmic management issues for IT governance

In their report “The impact of artificial intelligence on jobs and work in New Zealand”, Otago University academics Colin Gavaghan, Alistair Knott, and James Maclaurin raise several issues with AI, chatbots, and cobots (collaborative robots) in the workplace, from using AI in recruitment to organising the workforce, the latter a practice known as ‘algorithmic management’.

Bias in AI, which can come about when the tools are trained on historic employment data, is a key concern, especially in recruitment. AI tools can be effective in reducing a vast number of job applications to those that are most relevant to the role, but they have also been shown in overseas studies to reinforce racial and gender inequalities because they dismiss applications that don’t conform to stereotypes. Meanwhile, concerns about algorithmic management arise when it is used to make decisions about human workers, such as deployment, shift allocation, promotion, disciplinary action, or dismissal.

The report highlights examples of this occurring in the gig economy, with the authors noting issues with platform-based delivery companies such as Uber, although algorithmic management is being used in other areas of the workforce, too. The report looks at the idea of a ‘competence alliance’, first mooted by researcher Michele Loi, whereby people who use AI in HR activities have some understanding of how the algorithms work and have “the adequate level of trust combined with critical attitudes.”

The report recommends that employers make task allocation algorithms explainable: “Workers should be able to ask why they have been allocated particular jobs or shifts and to receive a meaningful answer.”
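The report's explainability recommendation can be made concrete in code. The sketch below, with hypothetical names and factors, shows one way a task-allocation system might record a human-readable reason alongside every decision, so a worker asking "why this shift?" gets a meaningful answer rather than an opaque score:

```python
from dataclasses import dataclass, field

@dataclass
class ShiftAssignment:
    worker: str
    shift: str
    reasons: list = field(default_factory=list)  # human-readable factors behind the decision

def allocate_shift(worker: str, shift: str, factors: dict) -> ShiftAssignment:
    """Allocate a shift and record why, so the worker can later query the decision."""
    reasons = [f"{name}: {value}" for name, value in factors.items()]
    return ShiftAssignment(worker=worker, shift=shift, reasons=reasons)

# Hypothetical example: each factor that influenced the allocation is stored in plain language.
assignment = allocate_shift(
    "A. Worker", "Sat 08:00-16:00",
    {"availability": "worker listed Saturday mornings as available",
     "skills": "certified for front-desk duties",
     "fairness": "fewest weekend shifts this month"},
)
```

The point is not the allocation logic itself but the audit trail: every decision carries its reasons, which is what makes a "meaningful answer" possible.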

Just as IT leaders are now involved in multidisciplinary groups in their organisations that govern risk, compliance, cybersecurity, and privacy issues, so too might AI become an area of interest. “We support the suggestion to form internal ethics committees, which would conduct impact assessments prior to the deployment of AI/algorithmic tools,” the Otago report authors note.

The Otago University report recommended that “when transitions occur between humans and chatbots, service providers should be transparent about how and when these will take place and what information will be passed between them.”

The authors recommend that mandatory ‘bot disclosure’ be introduced into New Zealand, noting that it would be a “relatively simple rule to devise and implement” compared to other requirements being mooted, such as AI transparency and the testing for bias.
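Bot disclosure and transparent handover are simple to implement in practice, which supports the authors' point that the rule would be easy to devise. A minimal sketch (hypothetical function and message wording) of a chatbot that discloses its non-human status up front and announces what information passes to a human agent on transfer:

```python
def start_session(bot_name: str) -> str:
    # Mandatory 'bot disclosure': state up front that this is software, not a person.
    return (f"Hi, I'm {bot_name}, an automated assistant. "
            "I'm a software robot, not a person.")

def hand_over(bot_name: str, agent_name: str, shared_fields: list) -> str:
    # Transparent transition: say who takes over and what information is passed on.
    passed = ", ".join(shared_fields)
    return (f"{bot_name} here: I'm transferring you to {agent_name}, a human agent. "
            f"They will see the following from our chat: {passed}.")
```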

Smoothing the path using relatable chatbot names

The governance requirements for introducing digital labourers into the workforce start with what to call the robot, and whether it should be given a human name. It’s a question that Virtual Vet Nurse managing director Steve Merchant is considering, as his company provides chatbots to vet clinics in New Zealand, Australia, and South Africa.

A qualified veterinarian, Merchant saw a gap in the market for providing vet practices with technology such as chatbots, which might previously have been beyond their reach. The technology provides generic answers to enquiries commonly made to vet clinics. When a vet clinic signs up, it must supply basic details such as its name, address, email contacts, and after-hours information, and then pick a name for the chatbot. This last task is, as Gartner points out, as much about branding as functionality.

Gartner recommends that organisations should create a distinctive character for the chatbot that customers can relate to and which expresses the ‘voice’ of the brand: “The importance of personality cannot be overstated. Gartner analysts find that clients put increasing importance on personality as their chatbot and virtual assistant development matures. This has got to the point where some high-end agencies and teams have hired dialogue writers out of the TV and gaming industries.”

Merchant says that in the majority of cases vet practices opt for female human names, although the monikers Victor, Roger, and Bruce have also been used. Bruce, for example, is a male chatbot that’s been adopted by an all-women vet practice. “Some of the names are a bit of a running joke, to encourage a bit of fun,” he says.

Merchant’s view is that it’s important for the information presented to be accurate, and that it’s made clear that customers are interacting with a software robot, not a person. He recently ran a poll on LinkedIn and, although a small sample, respondents were almost evenly split between those who thought it was okay to give a robot a human name and those who didn’t.

The risks of robots: security holes

Robots should also be considered a potential security risk in the same way that human behaviour, such as clicking on a rogue email, can cause a cybersecurity incident. While consideration needs to be given to managing human employees who may be prone to phishing emails, software robots can also land a business in trouble if there isn’t proper governance, says Gartner’s Farahmand:

Without proper identity and access management controls, software robot identities (as programmable objects) may introduce new vulnerabilities and expose organizations to new cyberattack vectors. Software robots can be easily commissioned, cloned, and decommissioned, making their identity life cycles potentially more challenging to manage.

He says the key challenge is to ensure clear access control policies for software robots, which limit their credentials to specific tasks, processes, systems, and environments. “When software robots operate as subjects, their identities should be managed like humans’ identities. That is, technical professionals should manage robots’ identities similar to people’s identities, but with consideration for the characteristics that are unique to software robots.”
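The access-control approach Farahmand describes can be sketched as a simple policy check. The example below is hypothetical (robot IDs, task names, and system names are invented) and shows the core idea: a robot identity's credentials are scoped to specific tasks and systems, and anything unrecognised, including a cloned or decommissioned robot, is denied by default:

```python
# Hypothetical access-control policy for software robot identities:
# each robot's credentials are limited to named tasks and target systems only.
ROBOT_POLICIES = {
    "invoice-bot-01": {
        "tasks": {"read_invoice", "post_payment"},
        "systems": {"erp-prod"},
    },
}

def is_allowed(robot_id: str, task: str, system: str) -> bool:
    """Deny by default: unknown, cloned, or decommissioned robots get no access."""
    policy = ROBOT_POLICIES.get(robot_id)
    if policy is None:
        return False
    return task in policy["tasks"] and system in policy["systems"]
```

Because robot identities can be "easily commissioned, cloned, and decommissioned", a deny-by-default lookup like this means a rogue copy with no policy entry simply gets nothing.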