Managing ethics and privacy in the age of AI

Many Australian organisations are experimenting with artificial intelligence and machine learning technologies to gain new insights from their information to create an advantage over their competitors. This is vital in a global economy that is being severely disrupted by the COVID-19 pandemic.

But there is a darker side to AI that organisations often do not consider: the legal and ethical implications of their activities. Last year, the Australian government released eight AI Ethics Principles covering human, social and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

Australian technology executives gathered recently to discuss the ethics and privacy principles they have in place as they roll out artificial intelligence technologies. The discussion was sponsored by Adobe.

Jennifer Mulveny, director of government relations Asia-Pacific at Adobe, says the technology company’s values and foundations are built on providing trustworthy and innovative solutions. As Adobe’s technology becomes more sophisticated, products and features have the potential to impact customers and society in profound ways.

“We know, and reiterate to our customers globally, that it is important to not only focus on what we can do with technology but what we should do. We are committed to ensuring that our technology benefits society, which we refer to as ‘digital citizenship,’” she says.

As part of this, Adobe has formed a cross-functional global artificial intelligence ethics committee to help navigate complex AI issues. The committee spans research, engineering, marketing, product development, policy and legal teams internally to ensure the organisation reviews its data use and AI development from all perspectives and disciplines.

“The main principles that we focus on are responsibility, accountability and transparency for AI and we allocate our resources accordingly,” says Mulveny. “Many of Adobe’s customers have frameworks in place or the outlines of frameworks and look to us to supplement them and take them to the next step based on our global experience.”

Other customers are at very early stages and know they need to be cautious and rely on Adobe’s guidance, she says.

“As we do in our own organisation, we encourage privacy impact assessments so that ‘privacy by design’ becomes integral to internal processes. Sometimes that requires a culture change which we have navigated here at Adobe.”

“Getting lawyers and engineers in a room to discuss the impact of technology on customers and how critical decisions should be made isn’t easy, but it’s a process we have been dedicated to for quite some time. This collaboration is critical for achieving the right outcome for customers and society. Taking the long view is extremely important,” she says.

Chris Rathborne, manager, business information and technology at Frankston City Council, says the organisation is using artificial intelligence to identify and classify buildings, structures and natural data such as trees. These are used by the council geospatial information services staff to map out locations of all assets and key data in this space.

“We don’t yet classify personal information and the opportunity to map swimming pools in people’s houses was presented,” he says.

“Our team felt this overstepped the mark and started to map things that were personal to a customer and not something we should seek to find out without consent.”

Rathborne says there will come a day when this type of AI becomes ubiquitous; however, if the council maintains the principle that this information is personal, then consent is required.

“We are exploring opportunities around customer service but none of these yet deal with specific customer data and use AI to direct customers to a specific service the council provides,” he says.

Fred Lusk, chief information officer, ICT at Justice Health &amp; Forensic Mental Health Network, says his organisation is turning to internet-of-things (IoT) technologies as a way of connecting patients with the scarce resources of medical providers.

AI allows the organisation to digest and examine the information gathered and develop insights that will hopefully improve patient outcomes, he says.

“We attempt to avoid the ‘creepy line’ of data analytics by developing hypotheses with an understanding of what information is likely to provide validation of those theories. We know that other information is available but if it isn’t considered to be germane to analysis, it is largely ignored,” he says.

Putting people first

As most companies know by now, trust takes a long time to build but is very easy to break, says Adobe’s Mulveny.

She says transparency for customers is paramount. Telling customers what you are going to do with their data, then doing what you say you are going to do with no surprises, is a pretty good rule to follow, she says.

“With artificial intelligence there can be unintentional biases because AI is based on data that is based on human behaviour. Understanding the bias and focusing on putting people first, recognising the potential impact on life events such as employment, housing, credit ratings and healthcare, is essential.”
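Bias checks of the kind Mulveny describes are often operationalised with simple fairness metrics. As a purely hypothetical illustration (not a description of Adobe’s methods, and using made-up approval data), the sketch below computes the demographic-parity gap: the difference in positive-decision rates between groups, where 0.0 means every group is approved at the same rate.

```python
def selection_rate(decisions, groups, group):
    """Fraction of positive (1) decisions received by one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates across all groups."""
    rates = {g: selection_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions (1 = approved) for two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved 75% of the time, group "b" only 25%,
# so the gap of 0.50 flags a disparity worth investigating.
print(f"selection-rate gap: {demographic_parity_gap(decisions, groups):.2f}")
```

A large gap does not by itself prove unfairness, but it is the kind of signal that prompts the human review the panellists advocate.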

“We advise our customers, just as we do, to take a principled and ethically sound approach to ensure that company plans stay aligned with intended outcomes,” she says.

As standard practice, Frankston City Council has a privacy officer and a freedom of information officer who work together. The council has policies regarding the use of, and access to, any personal information, which are overseen by this group, Rathborne says.

“The model enables staff to bring the team into any operations and processes to review and ensure that we comply with both the organisation’s policies and customer values. Protecting customer data is always the first and foremost priority,” he says.

Justice Health &amp; Forensic Mental Health Network’s Lusk adds that the organisation is highly regulated in terms of privacy.

“The protection of the patient’s privacy is a cultural truism. We have policies, procedures and legislation that require this protection and we certainly adhere to those standards. But the success of these measures is attributable to the culture of the workplace,” he says.

Investing to reduce risk

It can be a challenge to determine if the right amount of time and money is being spent on artificial intelligence and machine learning projects to ensure everything is being done to reduce data privacy and security risks.

Frankston City Council’s Rathborne says investment is based on the activity, and if AI and ML are part of that and can be used, then they are something the organisation would seek to include.

“We are not in the process yet of specifically targeting AI/ML for investment just for the sake of it. Careful expenditure of council funds is taken seriously and given the due focus to ensure the expenditure is right. We would include AI and ML if there is a cost saving or significant benefit to our customers,” he says.

Meanwhile, Lusk adds that with every new initiative and existing capability, the organisation designs projects with cyber security and patient privacy as fundamental requirements.

“As in any enterprise, requirements drive solutions and solutions drive cost and time to execute. Those factors are considered when making decisions on whether or not to pursue proposed initiatives,” he says.

Copyright © 2020 IDG Communications, Inc.