In AI we Trust? Why we Need to Talk about Ethics and Governance (part 1 of 2)

The increase in AI adoption brings higher risks of data bias and misinformation. With the sheer volume of data that today’s digital economy is expected to produce, the consequences can be far-reaching. We examine some of them.

Advances in the performance and capability of Artificial Intelligence (AI) algorithms have led to a significant increase in adoption in recent years. A February 2021 report by IDC estimates that worldwide revenues from AI will grow by 16.4% in 2021 to US$327 billion. Furthermore, AI adoption is becoming increasingly widespread rather than concentrated within a small number of organisations. With increased adoption of AI has come an associated increase in risk, specifically around the ethical use of AI. This has led to the development of regional, industry and organisational policies and guidelines on the subject.

In this article we explore what Ethical AI is and why it is important, highlight notable cases in the news, and look at why ethical AI is such a challenging problem to solve.

What is Ethical AI?

The English word ethics is derived from the Greek word êthos meaning “character or moral nature”. The study of ethics or moral philosophy involves systematising, defending and recommending concepts of right and wrong behaviour. 

While some academics and philosophers may argue that ethics can be extended to the realm of animals, ethics is generally considered a human concern. As the systems we develop become increasingly sophisticated, and in some cases autonomous, we remain ethically responsible for those systems. This includes systems based on AI and ML.

Ethical AI is a multi-disciplinary effort to design and build AI systems that are fair and improve our lives.

Why is Ethical AI Important?

Ethical AI systems should be designed with careful consideration of their fairness, accountability, transparency and impact on people and the world.

Advances in AI have meant that we have moved from building systems that make decisions based on human-defined rules, to systems trained on data. When a system behaves according to rules defined by humans, the ethical implications of each rule tend to be more transparent, and each rule represents a conscious decision made by at least the designer and, one would hope, the developer. This makes it easier to trace the link between a rule and an unethical outcome.

With the introduction of ML and Deep Learning (DL), it is now possible to build AI systems that have no ethical considerations at all. An unconstrained AI system will be optimised purely for its objective. For example, a system designed to approve loans may unfairly penalise particular demographics that are underrepresented in the training data. This clearly has a negative impact on members of those demographics, and potentially on the provider of the service. It may also place the provider in violation of organisational or industry guidelines, or in some cases even the law.
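As a concrete illustration, the following sketch (our own toy example, not drawn from any real lender; all features, thresholds and group labels are invented) trains a loan-approval model on data where one group is heavily underrepresented, then measures how often creditworthy applicants in each group are actually approved:

```python
# A minimal, hypothetical sketch of how a loan-approval model trained on
# data that underrepresents one group can penalise that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, repay_threshold):
    """Applicants whose income exceeds the group's threshold repay their loans."""
    income = rng.normal(50, 10, n).reshape(-1, 1)
    repaid = (income.ravel() > repay_threshold).astype(int)
    return income, repaid

# Group A dominates the training data; group B is underrepresented and has a
# different income-to-repayment relationship that the model never really learns.
X_a, y_a = make_group(5000, repay_threshold=50)
X_b, y_b = make_group(100, repay_threshold=40)

model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Equal-opportunity check: what fraction of *creditworthy* applicants in each
# group does the model actually approve?
tpr_a = model.predict(X_a)[y_a == 1].mean()
tpr_b = model.predict(X_b)[y_b == 1].mean()
print(f"approval rate for creditworthy applicants in group A: {tpr_a:.2f}")
print(f"approval rate for creditworthy applicants in group B: {tpr_b:.2f}")
```

Because group B’s income-to-repayment relationship is barely represented in the training data, the model effectively learns group A’s pattern and denies many creditworthy group B applicants, even though nothing in the code is explicitly unfair.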

Examples of AI and data bias in the news

AI regularly features positively in the news, from its use in driver-assistance systems, to screening for cancer in radiology images, to advances in protein folding. However, AI has received its fair share of negative press, whether due to overly inflated expectations or as a result of unethical outcomes. We consider three examples below:

Robo-Firing

In April 2021, six drivers in the Netherlands were reportedly unfairly terminated by “algorithmic means”. The ensuing legal challenge, supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE), was brought under Article 22 of the European Union’s General Data Protection Regulation (GDPR). The article is designed to protect individuals against purely automated decisions with a legal or significant impact.

The investigation focussed on two main concerns: first, that individuals were apparently dismissed without the decision being reviewed by a human; and second, the use of facial recognition in Uber’s real-time ID system. Earlier in the year, the ADCU had challenged Uber’s use of facial recognition technology over concerns about its accuracy, citing a 2018 MIT study showing that facial recognition systems had been prone to error rates as high as 20% for people of colour and performed less well on women of all ethnicities.

Following the legal case, Uber was ordered to pay the dismissed drivers compensation.

Insurance Fraud

In May 2021, US insurance company Lemonade retracted a statement from its corporate Twitter account about how it was using AI to scan customers’ faces for hints of fraud using “non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process”. Some Twitter users drew parallels with phrenology to illustrate the absurdity and unfairness of using a physical characteristic to determine behaviour. Similar concerns have been raised about an EU-funded project designed to speed up immigration checks with an AI lie detector based on facial recognition.

Credit

When Apple introduced the Apple Card, users noticed that women were offered less credit than men, and the AI system used to determine credit limits for the card was believed to discriminate against women. An independent third party later confirmed that Apple’s credit-card-issuing partner had not used gender in its models, but the author of the article went on to state that “machine learning systems can often develop biases even when a protected class variable is absent”.
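That quote is worth unpacking: a model that is never shown a protected attribute can still learn it from correlated features. The toy sketch below (our own illustration, with entirely invented features, labels and numbers, not the Apple Card system) trains a credit-limit model without the protected attribute, yet still reproduces a historical bias through a proxy feature:

```python
# A hypothetical sketch of proxy bias: the protected attribute is excluded
# from training, but a correlated feature carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 10_000

protected = rng.integers(0, 2, n)            # protected attribute (0/1), NOT a feature
proxy = protected + rng.normal(0, 0.3, n)    # feature strongly correlated with it
income = rng.normal(60, 15, n)               # legitimate feature

# Invented historical labels are biased: group 1 received lower limits
high_limit = ((income - 10 * protected + rng.normal(0, 5, n)) > 55).astype(int)

# Train WITHOUT the protected attribute, using only income and the proxy
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, high_limit)
pred = model.predict(X)

print(f"high-limit rate, group 0: {pred[protected == 0].mean():.2f}")
print(f"high-limit rate, group 1: {pred[protected == 1].mean():.2f}")
```

Even though the protected attribute never appears in the training features, the model leans on the proxy to fit the biased labels, so the disparity in predicted credit limits persists. Simply dropping a protected column is not enough to make a system fair.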

These examples show that, like all technologies and tools, AI can provide great value but can also produce unethical results. So why is it so hard to build ethical systems?

In part 2 of this blog post, we explore the challenges in ensuring ethical AI systems and some ways that these can be overcome.

Find out more

More information on emerging data and machine learning-enabled trends, along with working prototypes and comprehensive, accessible guides, is available at Cloudera’s Fast Forward Labs.

Daniel Hand
Field CTO, Cloudera APJ