In AI we trust? Why we Need to Talk About Ethics and Governance (part 2 of 2)

Increasing AI adoption means a higher risk of data bias and misinformation. What are the challenges in building ethical AI systems that organisations should be aware of?

In part 1 of this blog post, we discussed the need to be mindful of data bias and the consequences that follow when certain parameters are skewed. Surely there are ways to comb through the data and stop these risks from spiralling out of control? To do that, we need to get to the root of the problem.

In 2019, the Gradient Institute published a white paper outlining the practical challenges of ethical AI. It identified four main categories: capturing intent, system design, human judgement and oversight, and regulation. We briefly summarise each challenge below.

Capturing Intent

An AI system trained on data has no context outside of that data. It has no moral compass and no frame of reference for what is fair unless we define one. Designers therefore need to explicitly and carefully construct a representation of the intent motivating the design of the system. This involves identifying, quantifying and measuring ethical considerations while balancing them against performance objectives.

System Design

Systems should be designed with bias, causality and uncertainty in mind.

Bias should be identified and, where possible, reduced or eliminated from data sets. As we saw in the earlier credit example, mishandling “protected features” such as gender can actually make a system more biased. The Gradient Institute’s white paper shares a powerful example: omitting gender when screening candidates for roles may unfairly assess a female applicant who has taken time off to raise a family. And even when protected features are removed, they can often be inferred from proxy features. For example, the education history used to train an interview-screening model often encodes gender information.
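One way to check for such leakage is to train a probe model that tries to predict the removed protected attribute from the remaining features: if it succeeds much better than chance, proxies are present. The sketch below is an illustrative audit technique rather than anything prescribed by the white paper; the DataFrame `df`, its binary `gender` column and the scikit-learn probe are all assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def proxy_leakage_auc(df: pd.DataFrame, protected: str = "gender") -> float:
    """Probe: predict the protected attribute from all other features.
    An AUC well above 0.5 means proxies for it remain in the data."""
    X = pd.get_dummies(df.drop(columns=[protected]))
    y = pd.factorize(df[protected])[0]   # encode as 0/1 (assumes a binary attribute)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    probe = RandomForestClassifier(n_estimators=100, random_state=0)
    probe.fit(X_tr, y_tr)
    return roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
```

An AUC near 0.5 suggests the remaining features carry little information about the protected attribute; an AUC near 1.0 means the attribute is effectively still in the data.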

Bias, however, is not just a data problem. As discussed in this article, model design can also be a source of bias. Even something as simple as the choice of loss function can change the bias of a trained model.
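As a toy illustration of that point, the sketch below fits the same logistic regression twice on synthetic, imbalanced data, once with the plain log-loss and once with `class_weight="balanced"` (which reweights the loss), then compares the rate of positive predictions each model gives to two groups. The data and groups are entirely synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000
group = rng.integers(0, 2, n)                       # synthetic binary group label
X = np.column_stack([rng.normal(group, 1.0, n),     # feature correlated with group
                     rng.normal(0.0, 1.0, n)])
y = (X[:, 0] + rng.normal(0.0, 1.0, n) > 1.5).astype(int)   # imbalanced labels

def positive_rate_by_group(model):
    """Share of positive predictions each group receives."""
    preds = model.predict(X)
    return {int(g): round(float(preds[group == g].mean()), 3) for g in (0, 1)}

plain = LogisticRegression(max_iter=1_000).fit(X, y)
balanced = LogisticRegression(max_iter=1_000, class_weight="balanced").fit(X, y)
print("plain log-loss:     ", positive_rate_by_group(plain))
print("reweighted log-loss:", positive_rate_by_group(balanced))
```

Nothing about the data changed between the two runs; only the loss weighting did, yet the groups end up with different rates of positive outcomes.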

Causality versus correlation is another context-sensitive problem to solve. The cause-and-effect relationships within and around a system need to be modelled to ensure there are no adverse effects on adjacent systems. For example, consider an AI system used to prioritise patients admitted to hospital. If the model does not account for the causal effect of the doctors’ judgement embedded in its training data (asthma sufferers, for instance, may show good outcomes precisely because doctors prioritise them), it can incorrectly predict the risk profile of some patients once that judgement is removed.
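A toy simulation (not the paper’s example, and with made-up numbers) can make the failure mode concrete: if doctors already treat high-risk patients aggressively, those patients can show the best observed outcomes, and a purely correlational model would learn to deprioritise exactly the people who need care.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
high_risk = rng.random(n) < 0.2          # underlying medical condition
treated = high_risk                      # doctors prioritise high-risk patients
# Bad-outcome probability: the condition raises it, treatment lowers it sharply.
p_bad = 0.05 + 0.30 * high_risk - 0.32 * (high_risk & treated)
bad_outcome = rng.random(n) < p_bad

print("observed bad outcomes, high-risk:", round(bad_outcome[high_risk].mean(), 3))
print("observed bad outcomes, low-risk: ", round(bad_outcome[~high_risk].mean(), 3))
# High-risk patients *appear* safer (~0.03 vs ~0.05) only because they were
# treated; remove the doctors' intervention and the correlation breaks down.
```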

Uncertainty is a measure of our confidence in the predictions made by a system. Systems with the greatest levels of uncertainty require the greatest human oversight.
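As a minimal sketch of quantifying that confidence, one common choice is the entropy of a classifier’s predicted class probabilities; the probabilities below are made up for illustration.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """probs: (n_samples, n_classes) class probabilities, e.g. from predict_proba.
    Higher entropy means the model is less certain about its prediction."""
    p = np.clip(probs, 1e-12, 1.0)       # guard against log(0)
    return -(p * np.log(p)).sum(axis=1)

probs = np.array([[0.98, 0.02],          # confident: low entropy
                  [0.55, 0.45]])         # uncertain: flag for human oversight
print(predictive_entropy(probs))         # ~[0.098, 0.688]
```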

Human Judgement & Oversight

AI systems can make decisions consistently and reliably when trained on good-quality data. They are not constrained by many of the limitations that we humans have: they do not tire, are unaffected by their environment, and can scale to volumes of data and levels of complexity far in excess of what we can handle. However, as impressive as AI systems are, they lack the emotional intelligence of even a newborn child and cannot deal with exceptional circumstances. The most effective systems are those that intelligently combine human judgement with AI, taking into account model drift, confidence intervals and impact, as well as the level of governance.

  • Model Drift

There are a number of metrics that can be used to measure the performance of a system, including accuracy, precision and F-score, to name only three. Which measure of performance we choose depends on the nature of the problem. Tracking key metrics and statistical distributions over time, and alerting humans when either drifts significantly, helps ensure that systems remain performant and fair.
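One common way to detect such distributional drift (among several) is the Population Stability Index, which compares a reference sample from training time against recent production data. A minimal sketch, using a synthetic shifted distribution and the usual rule-of-thumb alert threshold:

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf        # catch values outside the range
    ref = np.histogram(reference, bins=edges)[0] / len(reference)
    cur = np.histogram(current, bins=edges)[0] / len(current)
    ref, cur = np.clip(ref, 1e-6, None), np.clip(cur, 1e-6, None)   # avoid log(0)
    return float(np.sum((cur - ref) * np.log(cur / ref)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)   # reference distribution
live_scores = rng.normal(0.4, 1.2, 10_000)       # recent production data, shifted
drift = psi(training_scores, live_scores)
if drift > 0.2:                                  # common rule-of-thumb threshold
    print(f"PSI = {drift:.3f}: alert a human to review the model")
```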

  • Impact and confidence intervals

AI systems are being used for an increasingly wide array of applications, several of which we have already covered in this article. Some applications, such as determining whether to dismiss an employee, are clearly so important that they are now regulated. Others, such as e-book recommendations, are clearly less so.

In addition to impact, we need to consider the level of confidence in predictions. Predictions with low confidence and high impact should receive the greatest levels of human oversight. The ability to track and alert on such scenarios, and to efficiently bring a human into the loop, is a valuable capability.
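A minimal sketch of such a triage rule is shown below; the `Prediction` type, the impact labels and the 0.8 confidence floor are all illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float   # e.g. max class probability, 0..1
    impact: str         # "low" | "high", set by the application owner

def route(pred: Prediction, confidence_floor: float = 0.8) -> str:
    """Send low-confidence, high-impact predictions to a human reviewer."""
    if pred.impact == "high" and pred.confidence < confidence_floor:
        return "human_review"
    return "auto_approve"

print(route(Prediction("dismiss_flag", confidence=0.62, impact="high")))  # human_review
print(route(Prediction("ebook_rec", confidence=0.62, impact="low")))      # auto_approve
```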

  • Governance

Where and how data scientists and engineers fit into an organisational structure may vary. Some organisations favour a centralised model, some a distributed model with those skills being part of cross-functional teams. In either case, there is significant value and reduced risk in developing centralised governance to ensure best practices are being followed. This includes guidance on algorithms, testing, quality control and reusable artefacts. Another function of a centralised governance capability is to perform quality control spot-checks and assess model performance and suitability based on prior data and problems. This often requires strong data governance, management and lineage controls together with mature ML operational practices.

Regulation

We saw in an earlier example how Article 22 of the GDPR prohibits certain decisions from being fully automated without explainability. Consequently, organisations should be able to reliably reproduce outcomes or recommendations from historical data and maintain strong controls over data management.
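One building block for that kind of reproducibility is an audit record that pins the model version and fingerprints the exact input used for each decision. A minimal sketch, with an entirely illustrative schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, features: dict, prediction) -> dict:
    """Record enough context to reproduce an automated decision later.
    Field names are illustrative, not a standard schema."""
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                    # pinned model artefact
        "input_sha256": hashlib.sha256(payload).hexdigest(),  # input fingerprint
        "features": features,
        "prediction": prediction,
    }

record = audit_record("credit-risk-2.3.1", {"income": 52000, "tenure": 4}, "approve")
print(json.dumps(record, indent=2))
```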

Organisations can of course wait for regulations to be enforced upon them, or better still, take a proactive approach working in cross-functional teams with regulators to develop new standards.

The union of organisational, industry and country or regional regulations will form the basis of governance efforts across the entire data lifecycle. This covers everything from what data is collected, to how it is transformed and used, by whom and for what purpose, through to when it is finally purged.

Developing a strong internal capability and understanding of regulation and accreditation while working with similar technology business partners will help ensure that organisations can both influence and quickly respond to regulatory change. 

Summary

Parts 1 and 2 of this blog post have provided a brief introduction to ethical AI, along with the key challenges and considerations in achieving it. These are essential aspects of an organisation’s AI strategy, ensuring that AI systems benefit everyone equitably.

Find out more

Cloudera’s Fast Forward Labs provides comprehensive, accessible guides to emerging data and machine learning trends, along with working prototypes. Find out more at Cloudera’s Fast Forward Labs.

Daniel Hand
Field CTO, Cloudera APJ
