AI and automation: how far is too far?

By Rob Gear, PA digital expert

And so the debate rages on. Artificial intelligence (AI) is an existential threat according to Stephen Hawking and Elon Musk, but a source of great potential benefit according to Eric Horvitz. So far, so confusing!

Last November, PA’s annual innovation event sought to shed some light on the issue of AI and automation and posed the question of “how much is too much?” We ran a survey on the night to gauge the attitudes of attendees towards an increasingly automated world by offering some hypothetical scenarios. Respondents then indicated their acceptance ‘now’, ‘in the future’ or ‘never’. Here is a brief summary of the results:

Would it be acceptable to let your children go to school in a self-driving bus?

An emphatic ‘all aboard’ for self-driving public transport, with 90% of respondents indicating they’d be happy to trust the school run to robots now or in the future. Could this be influenced by widespread reporting of the accident-free miles clocked up by Google’s self-driving car?

Would it be acceptable for an automaton to conduct first round job interviews?

Some organisations are already automating first round job interviews by gathering responses to questions via phone and video to save time and increase efficiency. The responses are currently reviewed by a human, but will we see an automaton asking the questions and assessing the candidates’ responses in the future? Our respondents were broadly split three ways: 38% were happy to see this today, 32% happy to see this in the future, and 30% never wishing to see machines solely deciding who to hire.

Would it be acceptable for streets to be monitored by drones?

Perhaps reflecting how normalised public surveillance has become in the UK, 39% were happy to see drones over the streets now, with a further 45% agreeing to drone monitoring in the future. This suggests the majority are comfortable with the concept but need further assurance on the safety and maturity of the technology. 

Would it be acceptable to have robots fight wars?

Our survey group was split down the middle on the issue of automated and robotic warfare. 50% said never, against 32% who were happy to see robots on the battlefield now, and 18% in the future. This polarised response may result from the complex web of ethical, moral and value judgements that surrounds attitudes to conflict. 

Would it be acceptable to have automated systems make triage decisions on initial medical treatment?

Just 15% of those surveyed would never be comfortable with an automated triage diagnosis. 35% would be happy with this in the future, but, perhaps most interestingly, 35% would feel comfortable with this today despite little widespread automation at the moment. Machine learning-based diagnosis will continue to improve in scope and accuracy, and we may soon see automated triage coming to A&E through technology such as this concept vest.

Would it be acceptable to have open heart surgery from a robot with a marginally higher success rate than a human surgeon?

17% of respondents prefer the human touch when it comes to heart surgery, saying never to a robotic surgeon. 31% would be happy to let the robot operate now, with the remaining 52% expecting this to be acceptable in the future. 

For some types of surgery today, autonomous instruments can perform certain actions with much smoother, feedback-controlled motions than a human hand. Despite the willingness of our sample to trust machines now, robots remain expensive. Before the most complex operations are fully automated, we are more likely to see a collaborative relationship between surgeons and robots.

Would it be acceptable for a machine to play with your emotions?

Despite this already happening (perhaps reflected in the 17% who felt it is acceptable now), this question drew the greatest negative response, with 61% of respondents feeling that emotional manipulation by machines is never acceptable.

So what does our survey tell us? We should start by stating that our respondents were a relatively small sample of 56 senior decision makers, mainly working in IT, innovation, and other technology leadership roles, and therefore not representative of the views of the general population. We can, however, draw out some themes from our data that we hope to explore and test further over the coming months.

People are more comfortable with automating tasks involving manual dexterity, skill and precision rather than emotions, values and intuition. Thus, automating driving or performing an operation is deemed acceptable, whereas automating warfare and manipulating emotions are not. This suggests an opportunity for a greater degree of human-machine symbiosis in the future, with humans working alongside machines to maximise their relative strengths.

AI has not yet demonstrated its fitness for purpose. For the majority of questions, a greater proportion of respondents were accepting of automation in the future – a reflection of the uncertainty and nervousness around the technology. Confidence can be built through a combination of rigorous testing and safety assurance, and by establishing the necessary regulatory and governance frameworks. It is also important to recognise that a major mishap with AI or other automating technologies may trigger a backlash and impede take-up, just as the accident at Fukushima has constrained the pursuit of nuclear technologies in some countries.

People like to be in control. Of all the scenarios posed in our questions, it was the notion that machine intelligences might toy with our emotions that drew the greatest negative response. Although it could be argued that our emotions are already being manipulated by all kinds of technologies, such as advertising and mass media, the idea that machine intelligence may play with our emotions is unacceptable to 61% of respondents.

Our survey was just a bit of fun, but from the responses it is clear that AI and automation technologies will pose complex legal, ethical and psychological questions as they evolve. It is vital that we have this debate so we can maximise the enormous benefits of AI and automation technologies whilst minimising the considerable risks.

Find out more about our work in digital transformation.

Contact the digital transformation team
