On 9th November 2022, techUK and Centre for Data Ethics and Innovation (CDEI) board member Jessica Lennard, John Bowman of IBM, and Phillip James of Eversheds Sutherland discussed the key considerations for C-suite leaders in establishing digital ethics in their businesses, particularly in the context of artificial intelligence (AI)*. Read the key messages and watch the full video below.
1. If you're a business using data or AI (or planning to) and you think digital ethics isn't for you, it is
The creation of the World Wide Web in 1989 has had an unprecedented impact on society. But how many businesses were prepared for the scale of transformation this would bring? Twenty years later, how many were ready for the global impact of GDPR? And how many lost out to competition or fell foul of regulation because they failed to anticipate these seismic changes in the digital world?
As we transition to Web 3.0, cutting-edge technologies such as AI are becoming ever more integral to business and to our daily lives. It is often suggested that the eventual impact of AI may be no less transformative than that of the internet itself. As with all things digital, however, ethics is key. Public consciousness and understanding of potential harms have grown rapidly in recent years, particularly amongst regulators and politicians. For those with sufficient understanding of the issues, the debate around the need for ethics is over: innovation and growth must be pursued responsibly.
2. Where do business leaders stand on digital ethics today?
Building an ethical digital future is the shared responsibility of many stakeholders (including, of course, governments and regulators), but how well are businesses prepared to play their part? Are C-suites and Boards sufficiently informed and skilled to lead the development and implementation of suitable ethics frameworks? The answer is a mixed one. IBM research last year found that although more than half of organisations had publicly committed to AI ethics principles, less than 25% had operationalised them. An enormous 94% of global C-Suite executives surveyed by Accenture said they were struggling to do just that.
There are grounds for optimism, however. According to IBM, only 20% of CEOs were prepared to act on AI ethics in 2018, but this had risen to 79% three years later. This may be a response to consumer or shareholder pressure, or a desire to get ahead of regulation. It may be to keep pace with competitors - McKinsey research indicates that leaders in digital trust are more likely to see revenue and EBIT growth of at least 10% p.a. It may simply be because they believe it's the right thing to do. Whatever the motivation for implementing digital ethics, the direction of travel is clear. Businesses which thrive will be those which are able to look up from the cycle of quarterly results into a data and AI-driven future and understand that ethics are non-negotiable.
3. It's easier said than done
Common themes emerge from companies struggling with implementation. Those for whom data and AI are increasingly important, but not yet core business, often realise retrospectively that they lack the in-house skills to translate their principles into BAU processes such as product development, risk management, and hiring.

But digital ethics cannot be the sole province of either technologists or privacy lawyers. Teams building, using, or procuring AI; corporate governance and compliance teams (including legal, policy, standards, risk, and audit); strategists; business leaders; and HR all require specialist expertise or support to turn good intentions into reality. Companies wishing to get ahead should be considering how to make themselves attractive to this pool of talent. This expertise may well not yet exist within the C-suite itself, an issue which needs addressing through training and greater exposure to the issues.

Both techUK and CDEI believe in the potential of an AI Assurance ecosystem, and CDEI is developing a roadmap to a world-class industry here in the UK. This would provide an evolving repository of information, tools, and frameworks to facilitate the practical application, monitoring, and validation of ethics. The benefits of investing and engaging in these activities are evident. Businesses which can clearly express both their principles and their processes (in particular, where accountability and responsibility sit within them) will be in a position to lead on responsible innovation.
4. This should not mean starting from scratch
An AI Assurance ecosystem will include innovative, technology-driven tools and techniques, but it will also leverage tried and tested practices, for example auditing and impact assessments. Similarly, aspects of digital ethics may feel nebulous to the C-suite, but implementation relies on many models and processes with which they are already familiar, for example risk management, product safety, governance frameworks, compliance audits, and senior managerial responsibility. Concepts such as AI fairness and explainability require a great deal of ongoing thought and exploration, but frameworks are being created to enable their effective implementation. Businesses should seek to build on their existing values and ethical commitments, translating these into a digital context and leveraging core aspects of assessment, governance, and monitoring which are rapidly evolving to suit the world of data and AI.
5. Legal and regulatory frameworks are starting to emerge
A final, and entirely justifiable, challenge cited by C-suite executives is the lack of regulatory clarity and certainty: "we would innovate more if we knew the rules". Although ethics is traditionally concerned with what should or should not be done (as opposed to what is legally permissible), AI is an area where ethics is being written into regulation in an uncharacteristically explicit manner. Twelve months ago, 193 countries adopted the first-ever global agreement on AI ethics, and around the world (notably in the EU) new legal and regulatory frameworks are emerging to embed both principles and processes. Here in the UK, regulators including the Bank of England, the Financial Conduct Authority, and the ICO are working hard to offer regulatory certainty through new frameworks, guidance, and industry engagement. One of the key benefits for businesses of the CDEI's planned AI Assurance Ecosystem is that it will assist in demonstrating regulatory compliance and alignment with evolving rules and standards.
*It is recognised that AI ethics is a subset of digital ethics. This discussion focused more on AI ethics, given the expertise of the panellists.