Beware the black box – a case for better algorithmic governance

By Rob Gear, PA digital expert

We live in a world increasingly run by algorithms, yet people are often unaware of the extent to which these algorithms influence their lives. If you’ve recently applied for a loan, the chances are that the decision to grant it or not was taken by an algorithm. Traffic signals seem to be taking slightly longer to change? That is probably an algorithm too, working away in the background to optimise and balance traffic flow across the city.

We can already see how the properties of algorithms can be exploited for malicious purposes. In 2013, US$136 billion was wiped off global equity markets when a tweet from a hacked Associated Press (AP) Twitter account suggested there had been an attack on the White House in which the President had been injured. A human would have spotted the tweet as fake fairly quickly, thanks to linguistic and stylistic differences from AP norms, but within minutes it had been picked up by trading algorithms that scan a range of news sources, seeking out nuggets of information that will yield competitive trading positions. Things snowballed as the algorithmic black boxes rapidly amplified false panic across the entire market.
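
To make the failure mode concrete, here is a minimal Python sketch of a keyword-driven news signal. The terms, weights and threshold are invented for illustration, and real trading systems are vastly more sophisticated, but the structural flaw is the same: nothing in the loop asks whether the headline is plausible.

```python
# A minimal sketch, assuming a keyword-driven signal with no plausibility
# check. PANIC_TERMS, score_headline and trading_signal are invented names,
# not any real trading system's API.

PANIC_TERMS = {"explosion": -0.8, "attack": -0.7, "injured": -0.5}

def score_headline(headline: str) -> float:
    """Crude sentiment score: sum the weights of panic-laden keywords."""
    text = headline.lower()
    return sum(weight for term, weight in PANIC_TERMS.items() if term in text)

def trading_signal(headline: str, sell_threshold: float = -1.0) -> str:
    # A human might notice phrasing uncharacteristic of AP; this function
    # reacts to keywords alone, in milliseconds.
    return "SELL" if score_headline(headline) <= sell_threshold else "HOLD"

fake_tweet = "Breaking: Two Explosions in the White House and Barack Obama is injured"
print(trading_signal(fake_tweet))  # -> "SELL", triggered by a fabricated tweet
```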

Financial trading is perhaps one of the highest-profile users of algorithms. Although the volume of algorithmic high-frequency trading has declined since its peak in 2009-2010, when it accounted for more than 60% of all US equities traded by volume, it is likely to evolve further, with more sophisticated algorithms integrated into more trading systems and a correspondingly reduced role for humans.

But what if the human factors linger on inside the algorithmic black boxes? Since the majority of today’s algorithms are created by people (even if in the future we may see algorithms creating other algorithms!), it is possible that the assumptions and biases of their creators are embedded within the code. The reason algorithms are often referred to as “black boxes” is that it is not apparent to the casual observer (or sometimes even to an investigator with considerable skill) exactly how the algorithm works.

Given this lack of transparency, it is possible to envisage scenarios where algorithms take decisions that would be deemed inappropriate, biased or unethical if taken by people. For example, an organisation might grant or refuse a loan or medical insurance based purely on opaque decisions and assumptions baked into an algorithm at the point of creation. Those decisions might draw on information about an individual that they would not want in the public domain, or may not even know exists there.
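
A toy example helps show how such a bias can hide in plain sight. The scoring rules and “risky postcode” list below are pure invention, not any real lender’s model, but they illustrate how a developer’s assumption becomes an invisible, decisive input:

```python
# A minimal sketch of a loan decision with a creator's assumption baked in.
# All rules, weights and postcodes are invented for illustration.

RISKY_POSTCODES = {"E15", "B8"}  # assumption hard-coded by the developer

def loan_decision(income: float, debt: float, postcode: str) -> str:
    score = 0.0
    score += 2.0 if income > 30_000 else -1.0
    score -= 1.5 if debt > 0.4 * income else 0.0
    # Embedded bias: the postcode may proxy for protected characteristics,
    # yet the applicant only ever sees the final outcome.
    if postcode.split()[0] in RISKY_POSTCODES:
        score -= 2.0
    return "approved" if score > 0 else "declined"

print(loan_decision(income=28_000, debt=5_000, postcode="E15 4QZ"))  # declined
```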

We may also see regional variations in culture and attitudes clouding things even further. Some principles are likely to be consistent around the globe, such as the importance of preserving human life (built into the algorithms being developed to operate vehicles). However, an algorithmic assessment of an individual’s security risk developed by an American firm might differ from one developed by a German firm. If the algorithm found a purchase history and browser searches showing an interest in assault rifles, that might be read as a green light in one country and a flashing red light in another.
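
The same divergence can be expressed as nothing more than a configuration table. The weights below are invented for illustration; the point is that two deployments of the “same” algorithm can reach opposite conclusions from identical evidence:

```python
# Hypothetical region-dependent risk weights; values are invented to show
# how cultural assumptions can live in configuration, not just in code.

REGIONAL_WEIGHTS = {
    "US": {"assault_rifle_interest": 0.5},   # read as a lawful hobby
    "DE": {"assault_rifle_interest": -3.0},  # read as a serious red flag
}

def security_risk(signals: dict, region: str) -> float:
    weights = REGIONAL_WEIGHTS[region]
    return sum(w for signal, w in weights.items() if signals.get(signal))

evidence = {"assault_rifle_interest": True}
print(security_risk(evidence, "US"))  # 0.5  -> green light
print(security_risk(evidence, "DE"))  # -3.0 -> flashing red light
```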

There is also a risk of an emergent “statistical morality”, where algorithms reject or punish behaviours that diverge from what has been defined and codified as the norm. Might we see a future where algorithms squeeze out the views of dissenters, the eccentric, or the lone innovator who wishes to challenge the status quo? Would it be fair to penalise ‘outliers’ simply for deviating from the norm, and who should define that norm in the first place?
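
Statistical morality can emerge from something as mundane as a z-score test. In the hypothetical sketch below (all data invented), the sixth person is flagged purely for being unusual, with no notion of whether the deviation is fraud, eccentricity or innovation:

```python
# A sketch of "statistical morality": behaviour beyond a few standard
# deviations from the mean is automatically flagged, whatever its cause.

from statistics import mean, stdev

def flag_outliers(values: list, z_cutoff: float = 2.0) -> list:
    mu, sigma = mean(values), stdev(values)
    # Anyone beyond the cutoff is rejected purely for being unusual.
    return [abs(v - mu) / sigma > z_cutoff for v in values]

spending = [100, 105, 98, 102, 97, 300]  # the last person is simply different
print(flag_outliers(spending))  # [False, False, False, False, False, True]
```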

There are no clear answers. We are living through a great experiment in the impact of automation and digital technologies, and we face a growing risk of algorithms outpacing their control environments. To mitigate this, we must apply the principles of good governance. Governments and organisations should be proactive in improving their understanding of the algorithms that increasingly underpin our day-to-day business and lives. Their functions should be explicit and transparent and, most importantly, subject to rigorous oversight. If we do not work to understand what goes on inside the black box, we should not be surprised when the computer says “no”.
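
What might such oversight look like in practice? One possible primitive, sketched here as an assumption rather than an established standard, is an audit log that records every algorithmic decision alongside the inputs and human-readable reason codes that produced it, so that someone can later ask why the computer said “no”:

```python
# A hypothetical decision-audit record; names and fields are illustrative,
# not a reference to any existing governance framework or library.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system: str
    inputs: dict
    outcome: str
    reasons: list  # human-readable reason codes for later review
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log = []

def record_decision(system: str, inputs: dict, outcome: str, reasons: list) -> None:
    audit_log.append(DecisionRecord(system, inputs, outcome, reasons))

record_decision(
    system="loan-scoring-v2",
    inputs={"income": 28_000, "postcode": "E15"},
    outcome="declined",
    reasons=["income_below_threshold", "postcode_risk_weighting"],
)
```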

