As AI is increasingly used in decision-making, how can we ensure those decisions are free from bias? Here are some of the things we learned from our webinar.
Artificial intelligence (AI) is very much about automation and processes, says Dr Emre Kazim, co-founder and COO of Holistic AI. It offers huge cost, efficiency and scale gains over human decision-making. Retail banking, insurance and HR technology are among the sectors leading the adoption of AI. In the built environment, AI is at an early stage and on the cusp of being exploited on a much larger scale, he adds. Increasingly, companies need to automate to remain competitive; however, automation also carries existential risks, says Emre Kazim.
One risk is bias. Bias isn’t always bad (for example, when it helps consumers tailor shopping choices), but it can lead to harmful or negative impacts. Three major types of bias exist: systemic, computational and statistical, and human cognitive bias, explains Reva Schwartz, Research Scientist at the Information Technology Laboratory of the National Institute of Standards and Technology (NIST).
Systemic bias results from institutional practices that in turn come from societal norms, for example, gender-based expectations.
Computational and statistical bias gets the most attention in AI because it's really about sampling factors, says Reva Schwartz.
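To make the idea of sampling-driven bias concrete, here is a minimal sketch (not taken from the webinar) that compares group proportions in a training sample against a reference population; the group names and figures are purely illustrative assumptions.

```python
# A hypothetical sampling check: compare each group's share of the training
# sample with its share of the reference population to spot representation skew.
from collections import Counter

def representation_skew(sample_groups, population_shares):
    """Return each group's sample share minus its population share."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {
        group: round(counts.get(group, 0) / total - share, 3)
        for group, share in population_shares.items()
    }

# Illustrative data only: the sample over-represents group "A".
sample = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
population = {"A": 0.50, "B": 0.30, "C": 0.20}
print(representation_skew(sample, population))
# {'A': 0.2, 'B': -0.1, 'C': -0.1} -> group A is over-sampled by 20 points
```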
Human cognitive bias is usually implicit or unconscious and relates to how humans take in information and make decisions. As Reva Schwartz points out: ‘human cognitive bias doesn't just happen at the very end of the system with the person who is making decisions based on the output. There are humans along the entire AI lifecycle.’ Problem formulation, the decision to employ AI, dataset selection and curation, validation choices and optimisation are all functions performed by humans. ‘While humans are the ultimate opaque system, the problem is that the speed and scale with which AI can perpetuate bias is much worse’, says Reva Schwartz. The criminal justice system, employment and lending are some of the key areas where there is potential for disparate impact when AI is biased.
Dr Emre Kazim, Co-founder and COO of Holistic AI
‘The massive opportunity is not necessarily in creating a utopia of algorithm adoption, but actually the use of algorithms or processes that we are able to analyse and surface assumptions to ensure they are used safely and ethically,’ says Emre Kazim. This will allow us to see, he adds, for example, whether one demographic is being disproportionately sentenced or job candidates are overwhelmingly male.
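As a hedged illustration of the kind of analysis Emre Kazim describes – surfacing whether outcomes differ by demographic group – the sketch below computes selection rates and an adverse-impact ratio per group. The four-fifths (80%) screening heuristic and all of the data are assumptions for illustration, not something specified in the webinar.

```python
# Hypothetical disparate-impact screen over an algorithm's outcomes.
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented hiring outcomes: (candidates selected, candidates assessed).
outcomes = {"men": (120, 400), "women": (45, 300)}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```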
The unassailable combination is not AI or humans in isolation, but AI and humans together. The concept of meaningful human involvement in the loop is likely to be reflected in downstream AI regulation, says John Buyers, Partner – Head of AI, Machine Learning at law firm Osborne Clarke. Having that common-sense kill switch, self-check or appeal process, moderated by humans, is going to be absolutely essential going forward, he adds. Strong governance is needed to determine what the human’s role in the loop is – providing novel insights or system oversight, comments Reva Schwartz.
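One simple way to picture the ‘kill switch, self-check or appeal process’ John Buyers mentions is a routing rule that sends low-confidence or appealed decisions to a human reviewer rather than applying them automatically. The sketch below is a hypothetical illustration of that pattern; the confidence threshold and field names are assumptions.

```python
# Hypothetical human-in-the-loop gate for automated decisions.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # e.g. "approve" / "decline"
    confidence: float  # model's confidence in the outcome, 0.0-1.0
    appealed: bool = False

def route(decision: Decision, threshold: float = 0.9) -> str:
    """Decide whether a model output can be applied automatically."""
    if decision.appealed or decision.confidence < threshold:
        return "human_review"   # meaningful human involvement in the loop
    return "auto_apply"

print(route(Decision("decline", 0.72)))        # human_review
print(route(Decision("approve", 0.97)))        # auto_apply
print(route(Decision("approve", 0.97, True)))  # human_review (appeal)
```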
AI bias could potentially breach the GDPR’s fairness principle or the UK’s Equality Act (and similar legislation elsewhere) where discrimination has occurred. The Equality Act is technologically neutral: whether a human or a machine makes the decision is irrelevant, meaning service providers could find themselves liable. Furthermore, under the EU’s forthcoming Artificial Intelligence Act, which focuses predominantly on data governance, the prevailing standards as drafted are ‘absolute’ – i.e. 100% accuracy and zero bias.
In the real world, an individual will find it difficult to establish whether they are the victim of a biased AI decision, and instances would have to be widely publicised before people realise they have been disadvantaged, explains John Buyers.
The concept of explainability is central to whether AI should be used or not. Whether an algorithm needs to be explainable at a non-scientific level depends on the tasks being carried out. Algorithms used in laboratory research are unlikely to need to be, but algorithms used in the criminal justice system or for lending or hiring decisions need to be explainable to ensure decisions are fair and free from bias, states Emre Kazim. This can create a dilemma in terms of how much of the algorithm is open to public scrutiny, especially in proprietary systems. When making stringent demands that a system remain private, companies should bear in mind that this may come at the cost of explainability, he adds.
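As a rough illustration of explainability at a non-scientific level, the sketch below decomposes a simple linear scoring model into per-feature contributions that could be reported alongside a decision. The feature names and weights are invented for illustration and are not drawn from the webinar.

```python
# Hypothetical per-feature explanation of a linear scoring model.
def explain_linear_decision(weights, features, bias=0.0):
    """Return per-feature contributions and the total score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return contributions, score

weights = {"years_experience": 0.6, "test_score": 0.3, "referral": 0.8}
applicant = {"years_experience": 4, "test_score": 7.5, "referral": 1}

contribs, score = explain_linear_decision(weights, applicant, bias=-5.0)
print(f"score = {score:.2f}")
for name, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")   # largest contributions first
```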
‘The last decade was defined by data, data governance and data legislation; this decade is going to be defined by AI,’ says Emre Kazim. The first half will be about putting good processes in place to manage the governance and technical risks, with codification of technical standards coming towards the second half of the decade, he adds.
US non-regulatory agency the National Institute of Standards and Technology (NIST) has developed a voluntary AI risk management framework to help organisations minimise AI risks, including bias. The seven characteristics that NIST identified as making an AI system trustworthy are: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
The framework aims to enhance organisational culture and practices, so companies focus on risk across the enterprise and get better at measuring, assessing and managing risk.
AI as a technology is becoming decentralised; it is moving away from large data centres to edge computing, meaning AI will become ever more ubiquitous, says John Buyers. As far as the built environment is concerned, the big challenge, which he hopes will be solved in the next five years, is to achieve standards interoperability.