How to audit AI tools for bias
Rachael Brassey, global lead for people and change at PA Consulting, authored an article for HR Dive explaining that HR leaders must conduct bias audits of their AI-driven workforce management tools to manage and mitigate potential bias, both for ethical reasons and to comply with regulations.
In recent years, many companies have turned to artificial intelligence-driven workforce management tools to streamline and improve people management. While AI-driven tools are revolutionizing how workforces are built and managed across all industries, there is growing concern about the potential biases embedded in their algorithms.
To ensure fairness, companies should take proactive steps to conduct bias audits of their AI-driven workforce management tools to manage and mitigate potential biases for ethical reasons and to comply with emerging regulations.
Because AI decision-making is derived from data created by humans, bias is often embedded in the algorithms. As a result, hiring practices driven by AI will most likely reflect the same biases that arise when humans make hiring decisions.
If the algorithms are trained with biased data or are not rigorously tested, they can perpetuate and even amplify existing biases. This can inadvertently result in discriminatory workforce management practices that exclude groups based on ethnicity, race, gender, religion, age, disability or sexual orientation.
Regulations emerge
Compounding concerns about potential bias, HR professionals must also adhere to new, complex HR laws and regulations governing the use of AI.
Most recently, New York City proposed rules providing guidance on its regulation that prohibits employers from using automated employment selection tools unless specific bias audit and notice requirements are met. Employers and employment agencies may not use an automated employment decision tool to screen candidates or employees unless: (1) the tool has undergone a bias audit no more than one year prior to its use; (2) a summary of the most recent bias audit has been made publicly available; and (3) notice of the tool's use, and an opportunity to request an alternative selection process, has been provided to each employee or candidate residing in the city. With the proposed rule, New York City joined Illinois, Maryland and several other jurisdictions in efforts to regulate AI in the workplace and decrease hiring and promotion bias.
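To make concrete what such a bias audit computes, here is a minimal sketch of an impact-ratio calculation: each demographic category's selection rate divided by the rate of the most-selected category. The counts are hypothetical, and flagging ratios below 0.8 reflects the EEOC's four-fifths rule of thumb rather than a threshold set by the New York City rule.

```python
# Illustrative impact-ratio calculation for an automated screening tool.
# Counts are hypothetical; a real audit uses actual historical selection data.

applicants = {
    # category: (applicants screened, applicants selected)
    "Group A": (400, 120),
    "Group B": (300, 60),
    "Group C": (250, 70),
}

# Selection rate per category = selected / screened.
rates = {g: sel / total for g, (total, sel) in applicants.items()}

# Impact ratio = a category's selection rate divided by the highest rate.
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "  <-- below 0.8 (four-fifths rule of thumb)" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

In this hypothetical data, Group B's selection rate (0.20) is only two-thirds of Group A's (0.30), so its impact ratio would be flagged for closer review.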
Implement a bias audit
While regulations are critical to help companies manage bias when using AI-driven tools, HR professionals have an individual, ethical responsibility to conduct and/or participate in periodic bias audits that evaluate and analyze potential biases in the design, development and implementation of AI-driven workforce management processes.
The outputs from these audits will be critical to identifying and mitigating any discriminatory biases that may inadvertently influence the selection and evaluation of job candidates. Before conducting bias audits, however, it's important for HR professionals to understand the different types of bias and their potential impact on hiring and promotion outcomes; for example, sampling bias arises when training data underrepresents a group, while proxy bias arises when a seemingly neutral variable, such as a ZIP code, correlates with a protected attribute. To ensure fairness and meet emerging regulatory requirements, companies can take the following steps to develop and perform bias audits.
- Establish objectives to derive the best results from the bias audit. Objectives may include identifying potential bias in AI-driven hiring tools and workforce management processes; assessing the impact of bias on hiring and promotional outcomes; and measuring progress in reducing bias over time. Having well-defined objectives will ensure the audit is focused and actionable.
- Collaborate with diversity experts and with data scientists experienced in algorithmic fairness. These experts can provide valuable insights and guidance throughout the design of the bias audit, including assessment of the tool's design, implementation and potential sources of bias, while also suggesting strategies to address and mitigate identified biases.
- Assess the algorithmic components. The assessment requires a deep understanding of the underlying technology and the ability to scrutinize the algorithms to identify unintended biases in the scoring or decision-making process.
- Analyze the data used to train the AI-driven hiring tools for potential biases that may have been inadvertently incorporated, such as under- or overrepresentation of certain demographic groups (a representation check is sketched after this list).
- Evaluate the impact on underrepresented groups to determine whether certain demographic groups face adverse outcomes or experience systemic disadvantages because of the tool's recommendations or decisions. This evaluation should consider potential disparities in outcomes, for instance via impact ratios of the kind sketched above, to ensure the tool does not perpetuate existing inequities or widen the representation gap.
- Establish an ongoing, iterative testing and validation process to refine AI-driven workforce management tools. An ongoing assessment will help identify and address any new biases that may arise as the tool evolves or as the hiring landscape changes. Regularly monitor and measure the tool’s performance and adjust accordingly.
- Implement mitigation strategies to reduce potential biases (a reweighting sketch follows this list). These may include modifying training data to ensure more diverse and representative samples, re-evaluating the weighting and importance of algorithmic factors, and implementing post-processing techniques to calibrate decision outcomes. Finally, regularly evaluate the effectiveness of the mitigation strategies to ensure they produce the desired results.
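As an illustration of the training-data analysis step above, the following sketch compares each group's share of a training set against a reference share. The field name, records and reference figures are all placeholder assumptions for illustration; in practice the reference shares might come from the relevant labor market or applicant pool.

```python
from collections import Counter

# Hypothetical training records; in practice these come from historical hiring data.
training_records = [
    {"gender": "female"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"},
]

# Assumed reference shares, e.g. from the relevant labor market.
reference = {"female": 0.47, "male": 0.53}

counts = Counter(r["gender"] for r in training_records)
n = sum(counts.values())
for group, expected in reference.items():
    observed = counts.get(group, 0) / n
    status = "under" if observed < expected else "over"
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} reference "
          f"({status}-represented)")
```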
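And as one example of the mitigation step, the sketch below reweights training records inversely to their group's frequency so that each group contributes equally in aggregate during retraining. This is only one of several possible mitigation techniques, and the data is again hypothetical.

```python
from collections import Counter

# Hypothetical labeled training records.
records = [
    {"group": "female"}, {"group": "male"}, {"group": "male"},
    {"group": "male"}, {"group": "female"}, {"group": "male"},
]

counts = Counter(r["group"] for r in records)
n, k = len(records), len(counts)

# Weight each record inversely to its group's frequency, so every group
# contributes equally in aggregate (each group's weights sum to n / k).
weights = [n / (k * counts[r["group"]]) for r in records]

for record, weight in zip(records, weights):
    print(record["group"], round(weight, 2))

# Many training libraries accept such weights via a sample_weight
# parameter when the model is refit.
```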
Because hiring and promotion decisions are driven by AI algorithms derived from human-created data, companies have an ethical responsibility to conduct and/or participate in periodic bias audits to ensure governance guardrails are in place.
By conducting bias audits, companies take proactive steps to reduce the potential for unfair and inequitable workforce management decisions: assessing algorithms and training data, and evaluating the impact on underrepresented groups. By prioritizing diversity, fairness, and ongoing testing and validation, companies can work toward AI-driven workforce management processes that are designed to be free from bias, help create more diverse and inclusive workforces, and comply with emerging regulations.