In March 2016, Microsoft introduced Tay, an AI chatbot, on Twitter. The aim was to improve the bot's communication skills through interaction with users of the social media platform. After only one day in a human community, Tay had mutated into a Hitler-loving, incest-promoting, "Bush did 9/11"-proclaiming liability.
In the intervening years, AI systems have become even more powerful. On the one hand, this opens up incredible opportunities to use technology to improve the way we live. Just think of AI's potential to reduce pollution or improve diagnostics for preventable diseases.
On the other hand, the applications could be disastrous if the ethical impacts of algorithms and AI decision-making are not carefully thought through. As Microsoft learned, because AI systems are trained on data gathered from humans, there is always a risk that human biases will be reflected in a machine's decision-making.
Artificial intelligence technologies are not ethical or unethical per se. The real issue is the manner in which businesses use them, which should never undermine human ethical values. An example can be found in the US criminal justice system, which is increasingly turning to AI to ease the burden of managing a large caseload. After investigating the criminal risk assessment system used in sentencing and parole hearings across the US, the New York-based non-profit ProPublica argued that it misrepresented the recidivism risks of convicts, suggesting a systematic racial bias in the risk estimates.
It is essential that companies know the risks, impacts and side effects that new technologies might have on their business and stakeholders. To this end, we have identified the founding values that form the cornerstone of an ethical framework for ARTIFICIAL intelligence in business: Accuracy, Respect of privacy, Transparency, Interpretability, Fairness, Integrity, Control, Impact, Accountability and Learning.
These values and principles are intended to guide decision-making, which is key to promoting ethical behaviour. Asking questions such as "Do we understand how these systems work?", "Are we in control of this technology?" and "Have the risks of its usage been considered?" before adopting a new AI technology can help companies minimise the ethical risks of AI and maximise its benefits.
We, as users and designers of AI, have the power to decide what the future will look like. We can’t blame the machines for any unethical impact they might have on society. Business, in particular, has an important role to play. Tackling the ethical implications of AI’s use is a complex field and will require a multi-stakeholder approach.
But there are measures that individual organisations can adopt to minimise the risk of ethical lapses due to the improper use of AI. Two examples are introducing “ethics tests” that measure how AI systems respond to situations presenting ethical dilemmas, and training people to use AI systems efficiently, effectively and ethically.
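To make the idea of an “ethics test” concrete, here is a minimal sketch in Python of one possible check: comparing a risk model’s false positive rates across demographic groups, the kind of disparity ProPublica reported. The data, group labels and the ten-percentage-point tolerance are illustrative assumptions, not an established standard or part of the framework described above.

```python
# Illustrative sketch only: a toy "ethics test" that fails a binary risk
# classifier if its false positive rate differs too much between groups.
# The records, group names and max_gap threshold are assumptions made for
# the example, not an established standard.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    false_pos = defaultdict(int)   # flagged high-risk but did not reoffend
    negatives = defaultdict(int)   # everyone who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in negatives.items() if n}

def ethics_test(records, max_gap=0.10):
    """Pass only if false positive rates across groups stay within max_gap."""
    rates = false_positive_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates

# Toy data: among people who did not reoffend, group B is flagged
# high-risk far more often than group A (50% vs 20%), so the test fails.
sample = ([("A", True, False)] * 2 + [("A", False, False)] * 8
          + [("B", True, False)] * 5 + [("B", False, False)] * 5)
passed, rates = ethics_test(sample)
print(f"passed={passed}, false positive rates={rates}")
```

A check like this would run before deployment and at regular intervals afterwards, since a model’s behaviour can drift as the data it sees changes.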
The choices we make today will have significant consequences for the future of AI. The ethical values that shape our society don’t change because of technological developments, but their practical application might. Business, policymakers and the public alike have a responsibility to ensure that AI systems are used to serve the interests of humanity, and not the other way around.