Ethical AI: Is AI bias an opportunity to make humans better?
Business leaders are increasingly concerned about the ethics of artificial intelligence (AI). But AI only does what it’s programmed to do. So, is it really a problem that AI highlights biases, or is it an opportunity to recognise our own assumptions and create a fairer society?
In recent years, both the potential of AI in the HR function and its ethical implications have grown. For example, it’s now possible to program an AI to analyse hundreds of video interviews and detect emotional expression in seconds. But it wouldn’t necessarily understand that emotional expression is cultural, or that disabilities can affect it.
Too often, organisations focus on optimising towards a business goal – such as accelerating the analysis of videos in the example above – and think too little about the ethical and moral implications – such as misreading candidates whose culture or disability shapes how they express themselves. As a result, programmers often inadvertently build their own, and their stakeholders’, biases and assumptions into AI. And because AI analyses such vast amounts of data, the programs end up amplifying these biases.
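To make that amplification concrete, here’s a minimal, hypothetical sketch in Python: a screening model trained on historical hiring decisions that already contain an unfair penalty learns to re-apply that penalty to everyone it scores. All the data, features and numbers below are invented for illustration.

```python
# Hypothetical sketch: a screening model trained on biased historical
# decisions reproduces that bias at scale. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: one genuine skill signal and one irrelevant
# demographic attribute (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical hiring labels: partly skill, partly an unfair penalty
# applied to group 1 by past human decision-makers.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns a negative weight on the demographic column, so it
# re-applies the historical penalty to every candidate it scores.
print("weight on demographic feature:", model.coef_[0][1])
```

A handful of biased human decisions affects a handful of candidates; the same pattern, learned by a model and applied automatically, affects every candidate who passes through the system.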
This, in turn, increases the risk of reputational damage and lawsuits, which puts many organisations off investing in AI. Yet AI is advancing whether we like it or not. And if we take a different perspective, it’s possible to minimise biases rather than amplify them. We can make the best use of AI technology in HR, hiring and managing people in a way that’s more efficient for organisations and fairer for employees.
Take a different perspective on AI ethics
When AI shows bias, it’s accurately pointing at us and telling us where our problems lie. It’s telling us about institutional biases absorbed through the data organisations generate, and about human bias encoded in the development of AI algorithms.
So, instead of avoiding using AI in case it goes ‘wrong’, or blindly using it without ethical due diligence, organisations have an opportunity to use AI to show where to focus to become fairer and more equal employers. AI can point to targeted actions that cost less and are more effective than big, broad diversity and inclusion initiatives.
In our experience, acting on this opportunity requires organisations to bring together diverse teams, keep inequalities front of mind when applying AI to business problems, and use the outputs of AI to adjust organisational policies.
1. Involve diverse people
Today, the pool of developers working on AI isn’t very diverse, with men holding most roles (something we’re starting to address through free coding courses and other events). There are also long-standing issues with the diversity of organisational leadership, meaning the stakeholders involved in AI decision-making are also, typically, not very diverse. This makes the introduction of unconscious biases more likely at every stage, from identifying a problem through to deploying the AI.
A good example is the Amazon AI hiring tool, which is quite often misreported in the mainstream media as an AI system which taught itself that male candidates were better than female ones (see Reuters and BBC articles). In fact, Amazon employees made several decisions that led to the introduction of unconscious biases.
They could have minimised the introduction of these biases by involving demographically and intellectually diverse minds throughout the process. For example, women who have faced adversity in the hiring process will be more alert to gendered red flags, and privacy experts can explain how digital footprints and metadata reveal identity.
But minimising doesn’t mean eliminating. Even when involving diverse people, the unconscious biases of our organisations and societies will shine through the power of AI. And that presents an opportunity. For Amazon, the biases in its AI exposed how hiring managers had unconsciously treated details such as membership of a women’s chess club or graduation from an all-women’s college as markers of lower employability.
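One way to surface this kind of problem is to inspect what a screening model has actually learned. The sketch below is hypothetical – a toy corpus and a simple bag-of-words classifier, not Amazon’s system – but it shows how a gendered term can end up carrying one of the most negative weights:

```python
# Hypothetical sketch: inspect a CV-screening model's learned weights
# to surface gendered proxy terms. The tiny corpus is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of chess club, computer science degree",
    "captain of women's chess club, computer science degree",
    "software engineering internship, hackathon winner",
    "women's college graduate, software engineering internship",
]
screened_in = [1, 0, 1, 0]  # biased historical decisions

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, screened_in)

# Rank terms by learned weight: strongly negative terms that encode
# gender rather than skill are candidates for removal or review.
for term, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{weight:+.2f}  {term}")
```

A term like ‘women’ ranking among the most negative weights is exactly the kind of signal a diverse review team should pick up and investigate.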
Acting on the issues AI highlights will, again, require a diverse team of skilled people. You need diverse data scientists to analyse the AI outputs in a representative way. You need open conversations with the whole organisation to get to the root of the inequality issues. And you need a diverse team of experts to update policies and processes that will shift the culture away from unconscious biases.
2. Make exploring inequalities part of applying AI to business problems
AI practitioners should expect there to be evidence of unconscious biases hidden in the data – after all, the data is just a reflection of our biased society. So, they should come up with a set of inequality statements to test through exploratory analysis.
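As a hypothetical illustration, one such statement – ‘promotion rates are independent of gender’ – could be tested with a simple selection-rate comparison using the four-fifths rule of thumb. The data frame and column names below are invented:

```python
# Hypothetical sketch: test one inequality statement against HR data
# by comparing selection rates between groups. Data is invented.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "promoted": [0,   0,   1,   0,   1,   1,   0,   1,   0,   1],
})

rates = df.groupby("gender")["promoted"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb for adverse impact
    print("Possible adverse impact - investigate this statement further.")
```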
To take another example, when reviewing feedback as part of performance evaluation, analysis could look to understand the adversity faced by women and other under-represented groups. Sentiment analysis could compare positive and negative comments in relation to diversity, and Natural Language Processing could then look to determine the quality of evidence given to support those comments. Programmers could then update the AI model for performance evaluation by removing or adjusting weightings for any features associated with discrimination.
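A minimal sketch of that first step, assuming NLTK’s VADER analyser as the sentiment model and invented feedback data, might compare the average sentiment of performance comments across groups:

```python
# Hypothetical sketch: compare the sentiment of performance-review
# comments across demographic groups with NLTK's VADER analyser.
# The comments and group labels below are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

feedback = [
    ("group_a", "Consistently delivers excellent, well-evidenced work."),
    ("group_a", "Strong leadership on the migration project."),
    ("group_b", "Can come across as abrasive in meetings."),
    ("group_b", "Good output, but needs to soften her communication style."),
]

# Average VADER compound score (-1 very negative, +1 very positive)
# per group.
scores = {}
for group, comment in feedback:
    scores.setdefault(group, []).append(sia.polarity_scores(comment)["compound"])

for group, values in scores.items():
    print(group, sum(values) / len(values))
```

In practice, the groups would come from HR records and any gap would need proper statistical testing; a raw average over a handful of comments proves nothing on its own.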
As well as improving the AI, practitioners could feed evidence of any unconscious biases back into manual processes for inclusion and diversity teams to review. People managers could then update policies and objectives, and facilitate learning and development where necessary.
3. Use AI insights to adjust corporate policies and company values
It’s important to recognise that using AI to detect unconscious biases isn’t a substitute for getting to the root of inequality issues. For example, more than 50 organisations use the so-called #MeTooBot, the NexLP AI platform, to detect bullying and sexual harassment in company documents, emails and chat. Anything the AI flags goes to a lawyer or HR manager to investigate. As noted in the AI@Work report, using AI as a surveillance tool like this avoids getting to the root of the issue, risks losing employee confidence and encourages workers to find workarounds to cheat the system.
It’s important not to use AI to try to solve inequalities directly, but it can bring to light systemic biases that are at odds with inclusion and diversity policies. So, an organisation’s purpose and values must underpin all use of AI, especially when identifying inequalities. Only with that focus can an organisation be in a position to use the insight gleaned from AI to create the working environment and opportunities it aspires to.