By ROB GEAR, PA DIGITAL expert
Our governance and legal frameworks are being stretched and challenged in the modern digital world. Many of these frameworks were put in place a long time ago, in an era when products were more tangible and physical, and the boundaries of organisations and states were rather more clearly defined. This disconnect between technological advances and the governance frameworks around them repeatedly creates situations where organisations can indulge in behaviours that are not illegal – but may not necessarily be seen as doing the right thing.
I have written previously on the growing risk of algorithmic ‘black boxes’ and the need for better governance in this area. This is perhaps even more pressing as we move towards more widespread adoption of AI technologies that don’t come with any kind of ethical frame of reference. I also highlighted the risk of poor or discriminatory assumptions becoming embedded in algorithms that touch increasing areas of our lives, and noted that such algorithms are often created by individuals working in isolation, without any kind of oversight.
Are you the product?
We also see some companies using data in unexpected ways or exhibiting behaviours that would be frowned upon in other sectors. Take for example Facebook’s experiment to influence the emotions of its users by controlling what appeared in their feeds. If a medical company had conducted this kind of psychological experiment at scale, without any of the normal controls or evaluation of the potential consequences, it would likely have lost its licence. It comes back to the saying that if you are not paying for the service, you are the product.
A key part of the business model for the internet giants is providing services that learn all about you so that this information can be sold to other service providers. It is rarely transparent to the consumer exactly how much information is being held and how it is being used. Organisations may often be privy to information that their customers would be reluctant to share with their banker, GP, or even their family. For me, this model is open to misuse. Perhaps a better starting point for organisations to adopt is ‘just because you can, it doesn’t mean you should’.
Another area where an informed ethical perspective might shine a valuable light is the debate over privacy versus protection. This debate often plays out behind closed doors, and it could be argued that citizens should be afforded greater transparency in how their data is being used by governments and corporations alike.
Top down AND bottom up
It is evident that many of our existing legal frameworks and regulations are in desperate need of reform and are simply not fit for purpose in a digital age. This reform will need to be approached top down, and it is encouraging to see initiatives such as the EU’s recently announced Ethics Advisory Group and the forthcoming EU Data Protection Regulation – discussed in my colleague Stephen Bailey’s blog. The work to regulate and compel companies to think through the ethical consequences of their actions will be long and challenging – it’s not easy to put the genie back into the bottle.
I believe there is an opportunity for forward-thinking organisations to seize the initiative and start to make changes from the bottom up. One approach might be to establish an ethics committee to look across every aspect of what the organisation does, from the way it conducts its tax affairs to the assumptions and possible consequences resulting from the algorithms it develops.
The emphasis should shift from a myopic focus on shareholder return to a focus on enhancing the wellbeing of customers, employees and wider society. Companies that adopt this approach could expect to be rewarded with increased customer retention, a happier workforce, a culture of honesty and loyalty, and greater brand trust. There should also be a reduced risk of poor decision making resulting from faulty or inappropriate assumptions in the analysis of data.
As a final observation, we are probably not going to see a Chief Ethics Officer, since the abbreviation might cause confusion with the Chief Executive Officer. But then, should it not be the obligation of highly remunerated CEOs to lead the ethical and moral direction of their organisations? Trust is critical in the internet age, and it amounts to more than just providing services safely and securely – it is also about behaviours. Perhaps establishing an ethics committee would be a good first step?