Embedding human governance in AI frameworks to maintain ethical standards
This article was first published in Counter Terror Business
The UK’s Integrated Review of Security, Defence, Development and Foreign Policy set out the country’s ambition to be a global leader in artificial intelligence (AI) and data technologies. This was welcome news for the defence and security sector, which already relies on data to inform strategies, insights and operations, a fact GCHQ says will only become more apparent as AI becomes more able to analyse large, complex and distinct data sets.
But law enforcement and national security are high-risk contexts for the use of AI. Flawed or unethical AI could result in the unjust denial of individual freedoms and human rights, and erode public trust.
While frameworks such as the OECD Principles on Artificial Intelligence and the Alan Turing Institute's guidelines for responsible AI are strengthening the rules governing the defence and security sector's use of data, organisations can do more. By rooting their own ethical frameworks in the human context of AI deployments, they can better ensure their AI deployments remain ethical in practice.
Keep humans in the analytical loop
Establishing the scope of an AI application is key to ethical frameworks in defence and security. Regulators, academics and the public broadly agree that the scope of AI shouldn't go so far as to replace human decision making in defence and security.
In counter terrorism, for example, RUSI has outlined how AI would be inaccurate if used to identify potential terrorists, concluding that ‘human judgement is essential when making assessments regarding behavioural intent or changes in an individual’s psychological state’. And in criminal justice, most people oppose automated decision making because of the likelihood of unintended bias, for example where algorithms are trained on data that reflects historic inequalities in society.
Without human analysis, the use of AI risks inaccuracy and a loss of trust. So AI should be a tool to aid human decision making, not a replacement for it. Humans should process and validate outputs against other sources of intelligence, while understanding the potential limitations of AI-derived findings. Keeping humans in the analytical loop should be a cornerstone of an ethics framework.
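To make this concrete, the sketch below shows one minimal way a human validation gate could sit between an AI output and any decision. The class names, confidence threshold and corroboration rule are illustrative assumptions, not features of any operational system.

```python
from dataclasses import dataclass, field

@dataclass
class ModelOutput:
    """A single AI-derived finding, awaiting human validation."""
    subject_id: str
    finding: str
    confidence: float                      # model's own score, 0-1
    caveats: list = field(default_factory=list)

@dataclass
class AnalystDecision:
    """The human analyst's assessment of an AI-derived finding."""
    accepted: bool
    corroborating_sources: list            # other intelligence consulted
    rationale: str

def prepare_for_review(output: ModelOutput) -> ModelOutput:
    """Attach standard caveats so the analyst sees the finding's limitations."""
    if output.confidence < 0.7:            # illustrative threshold only
        output.caveats.append("Low model confidence; treat as a lead only.")
    output.caveats.append("AI-derived; requires corroboration before any action.")
    return output

def actionable(decision: AnalystDecision) -> bool:
    """A finding becomes actionable only after a human has accepted it and
    validated it against at least one independent source of intelligence."""
    return decision.accepted and len(decision.corroborating_sources) >= 1
```

The point of the pattern is that the model never triggers an action directly: every output carries its caveats to a named analyst, and the decision record captures both the corroborating sources and the human rationale.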
Embed human governance structures to support ethical AI
An ethical framework for AI relies on human governance structures and oversight. Organisations should consider the line of sight from requirements setting, through development, to operational outputs. A diverse, skilled and knowledgeable panel should have visibility of all stages so it can consider factors such as the proportionality of the data used and the potential for undesirable consequences. The panel should include people who understand the development of the AI and the data involved, the legal and policy context, and the potential real-world effects of proposed developments. And it should provide expert, unbiased challenge and diversity of thought.
Keeping this line of sight will maintain ethical principles throughout the lifecycle of development and deployment. Doing this properly will involve workforce upskilling and careful navigation of sensitivities and silos. For example, you could involve data scientists in the planning of operational deployments, or train operational managers to build their understanding of the AI technologies deployed. This might require new governance structures, roles and responsibilities to complement existing compliance structures.
Defence and security organisations can learn a lot from the major tech companies, many of which have well-established human oversight of potential AI deployments. In recent years, Google has scaled back or blocked several AI developments after judging the ethical risks to be too high.
Balance AI and existing data frameworks
The defence and security sector will need to consider and address tensions between AI and existing compliance, data protection and privacy frameworks. But this isn't unique to defence and security. For example, a leading pharmaceutical company found that strict data retention limits would restrict the availability of the data needed for AI training and oversight. The company acknowledged these tensions and developed privacy and ethics principles specifically for AI.
Assuring the quality of data sets should be a key part of any AI ethics framework, as higher quality inputs lead to more accurate and trustworthy outputs. In defence and security, data might be biased towards a specific context that makes it unsuitable for the proposed use, or it might be incomplete or contain errors, either of which could lead to misleading outputs. These problems are harder to detect and correct than in other sectors because security exemptions might mean data subjects are unable to exercise rights that would be available to them elsewhere, such as the right to rectify inaccurate data. Similarly, defence and security organisations can't always implement effective feedback loops between the users of data and data subjects.
So organisations should put mitigations in place for potential quality issues. These could be business processes (such as including data scientists in decision making for operational deployments) or technical measures (such as using statistical testing to identify biases in the training data).
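As one example of the kind of statistical testing mentioned above, the sketch below uses a chi-squared test of independence to flag whether a training label is distributed unevenly across groups. The column names, threshold and escalation step are assumptions for illustration; the appropriate test will depend on the data and the deployment.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def check_label_balance(df: pd.DataFrame, group_col: str, label_col: str,
                        alpha: float = 0.05) -> dict:
    """Chi-squared test of independence between a group attribute and the
    training label. A small p-value signals that label rates differ across
    groups, which may indicate bias inherited from historical data."""
    contingency = pd.crosstab(df[group_col], df[label_col])
    chi2, p_value, dof, _ = chi2_contingency(contingency)
    return {
        "chi2": chi2,
        "p_value": p_value,
        "degrees_of_freedom": dof,
        "flag_for_review": p_value < alpha,   # escalate to the governance panel, don't auto-correct
    }

# Illustrative use with hypothetical column names:
# training_data = pd.read_csv("training_set.csv")
# result = check_label_balance(training_data, group_col="region", label_col="flagged")
# if result["flag_for_review"]:
#     print("Label rates differ significantly across regions; refer to the review panel.")
```

A test like this doesn't decide whether the data is fit for use; it surfaces a pattern the governance panel then has to explain, correct or accept with documented justification.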
Defence and security organisations must embed human governance in AI frameworks
By creating an ethical framework for AI that prizes human analysis, embeds human governance structures and addresses the sector's particular compliance challenges, defence and security organisations can harness the power of data and deliver accurate outputs while maintaining public trust and upholding the high ethical standards on which the UK prides itself.