“I’m sorry Dave…I can’t do that” – making sure chatbots and AI don’t ‘go rogue’

By Alastair McAulay, PA IT and cloud strategy expert, and Fred Johnsen, PA AI and automation expert

Back in 1968, Stanley Kubrick's sprawling masterpiece '2001: A Space Odyssey' was one of the first movies to give a computer a leading role. HAL (shift each letter of 'IBM' one step backwards) is the artificially intelligent computer that manages the spaceship in the best interests of the mission.

A key point in the plot is when one of the crew needs to override the computer so they can survive. “I'm sorry Dave…I can't do that” is HAL's chilling reply. The automaton has encountered a situation no-one anticipated. And the logical AI response contradicts the ethical response.

Science fiction nonsense? I'm afraid not.

When Microsoft unleashed its self-learning chatbot 'Tay' onto Twitter in 2016, it was spouting racial and sexual abuse within 24 hours, thoroughly corrupted by the way people use Twitter. Autonomic systems can't be left to their own devices: as they train themselves, they need supervising.

You need to think about the data they'll use for training, and the goals you set them. Much as you ease new hires into the business, you have to introduce autonomic systems gradually: a security monitoring system needs to learn the patterns of ordinary user behaviour during the day, then in the evening, and finally over the weekend before it can be trusted to flag potentially errant behaviour.
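As a rough illustration of that gradual trust-building, the sketch below (all names hypothetical, not from any specific product) learns a separate activity baseline for daytime, evening and weekend periods, and refuses to pass judgement until it has seen enough history:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

MIN_SAMPLES = 28  # e.g. four weeks of observations per period before trusting it

def period_of(ts: datetime) -> str:
    """Bucket a timestamp into the coarse periods described above."""
    if ts.weekday() >= 5:
        return "weekend"
    return "daytime" if 8 <= ts.hour < 18 else "evening"

class BaselineMonitor:
    def __init__(self, threshold: float = 3.0):
        self.history = defaultdict(list)  # period -> observed event counts
        self.threshold = threshold        # deviations (in std-devs) that count as errant

    def observe(self, ts: datetime, event_count: int) -> None:
        """Record normal activity so the baseline keeps learning."""
        self.history[period_of(ts)].append(event_count)

    def is_errant(self, ts: datetime, event_count: int) -> bool | None:
        samples = self.history[period_of(ts)]
        if len(samples) < MIN_SAMPLES:
            return None  # still learning: stay in shadow mode, don't alert yet
        mu, sigma = mean(samples), stdev(samples)
        return sigma > 0 and abs(event_count - mu) > self.threshold * sigma
```

In production you'd persist the history and raise alerts rather than return a flag, but the shape is the point: the system observes first and judges later.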

The problem is that the artificial intelligence built into the system isn't formally specified as a set of logical steps. Many autonomic systems train themselves by analysing huge amounts of data to discern patterns, and build their own rules from those patterns.


CIOs need to make sure people in the organisation understand how AI systems are being used in the business. They need to be confident not only that the AI systems are bound by a defined code of conduct, but that they're capable of adhering to it. That means:

  • testing autonomic IT systems thoroughly enough to earn the trust of employees and customers
  • managing expectations – no matter how well you test the system, there will always be some flawed decisions as the AI tries to infer the best course of action, so people must always be able to override or intervene (a minimal override pattern is sketched after this list)
  • auditing systems regularly – to make sure the reasoning behind decisions, and the reliance on different data sources, remain relevant over the long term
  • building rules into systems – to make sure the AI uses data fairly and ethically and complies with regulation. You need to manage security risks when combining and making decisions on data from different sources. For example, you'd need to check your AI isn't using information about employees' religion or gender to make decisions (see the second sketch after this list).
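On the override point, a common pattern is a confidence floor: any decision the model is unsure about goes to a person, so automation never has the last word. A minimal sketch, with hypothetical names and an arbitrary 0.9 floor:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float

def decide(model: Callable[[dict], Decision],
           case: dict,
           ask_human: Callable[[dict, Decision], str],
           floor: float = 0.9) -> str:
    proposal = model(case)
    if proposal.confidence < floor:
        # Below the floor: a person decides, with the AI's proposal visible to them.
        return ask_human(case, proposal)
    # Above the floor: act automatically, but keep the proposal for the audit trail.
    return proposal.action

# Example wiring: a stub model whose low confidence forces escalation.
decide(lambda c: Decision("approve", 0.62), {"id": 1},
       ask_human=lambda c, p: "escalated to duty manager")
```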
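On the rules point, the simplest enforceable version is to strip protected attributes before any model sees the data. A minimal sketch, assuming a pandas DataFrame and hypothetical column names:

```python
import pandas as pd

# Hypothetical list of attributes the AI must never base decisions on.
PROTECTED = {"religion", "gender", "ethnicity", "date_of_birth"}

def prepare_features(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected attributes before training or scoring, and record the fact."""
    leaked = PROTECTED & set(df.columns)
    if leaked:
        # In practice this would go to the audit log, not stdout.
        print(f"Removing protected attributes: {sorted(leaked)}")
        df = df.drop(columns=sorted(leaked))
    return df
```

Dropping the columns isn't sufficient on its own, since proxies such as postcode can leak the same information; that's one reason the regular auditing above matters.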

The ethical behaviour of a company will be heavily influenced by the design of the autonomic algorithms it delegates to automate business processes. Microsoft's 'Tay' chatbot went rogue in an obvious way, but other AI might misbehave far more subtly. You need people in the IT team with emotional intelligence and customer empathy, rather than only the traditional process orientation, to provide appropriate governance and oversight of AI systems.

Today's CIOs should start working out whether their organisation has the right mix of emotional, philosophical and technical skills to assess the ethical implications of AI-automated systems. That should help get the balance right between decisions made by people and decisions made by machines, and avoid “Computer says no” at a potentially catastrophic moment.

