As AI adoption rapidly increases, it's critical that AI ethics progress from abstract theories to concrete practices.
AI adoption is growing rapidly. It's widely recognised across the sector as '…an opportunity and an inevitability…'. AI is embedded in everyday life, business, government, medicine and more. It presents an enormous opportunity to turn data into insights and spark better decision-making. Customers, consumers of services and citizens expect to use and engage with generative AI.
With heightened AI use comes heightened risk, in areas ranging from data responsibility to inclusion and algorithmic accountability. Today, many organisations struggle to operationalise AI with confidence and respond to changing regulations. Only by embedding ethical principles into AI applications and processes can we build systems based on trust.
At a recent joint Justice and Emergency Services and industry event, Megan Lee Devlin of the Cabinet Office co-hosted a round-table discussion with law enforcement and central government agencies.
During the event, they discussed their perspectives on the significance of ethics in the realm of AI and how they are integrating it into their respective organisations. After the fireside chat, the 40 participants in the room were encouraged to continue their discussions at their respective tables.
Some of the discussion points were:
- Watertight - it is important that we have confidence in generative AI.
- Consumer education - we want consumers to be aware of how the technology got to where it is and how to control it.
- AI is moving into everyday tools such as SharePoint, where everyone can experiment and build a basic understanding.
- The EU AI Act - it is important for organisations to put their own governance structures and interventions in place so that they are ready for regulation that comes down the line.
- People are more afraid of the liability of using AI than of the technology itself.
- You need to guardrail the data, which requires education for developers.
- There is a huge need for data infrastructure and data curation to be in place before AI projects begin.
Some of the questions raised during the session:
- What are our greatest hopes for generative AI?
  - It can mitigate risks.
  - It can benefit the customer experience.
  - It can benefit the public sector.
- How can we use AI in the development of products?
- How do we deploy AI safely across an organisation?
- How do we experiment with and test AI while moving forward?
- Do we think we're ready for it?
- How do we adapt to support a growing UK economy while remaining safe?
- How do we monitor AI on an ongoing basis?
- How do we ensure we are capturing quality data?
- Where does liability sit?
Technology must be transparent and explainable. Organisations must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms' recommendations.
On the true cost of non-compliance, IDC notes: "The potential costs of non-compliance are staggering and extend far beyond simple fines. Organisations lose an average of £4.85 million in revenue due to a single non-compliance event. But this is only the tip of the iceberg - the financial impact goes far beyond your bottom line."
The programme also recently hosted a roundtable with the Office of the Police Chief Scientific Advisor on the use of generative AI and large language models in policing. This gave suppliers a fantastic opportunity to share their knowledge and expertise in this space, including the challenges policing must consider.
We have now entered the world of AI-augmented work and decision-making across all functional areas of a business, often starting with front-office customer engagement and extending to the back office.
AI, machine learning and natural language processing are changing both organisations and threat actors around the globe, across multiple industry sectors. AI disrupters will scale AI initiatives, drive better customer engagement across a mixture of outcomes, and achieve faster rates of innovation, higher competitiveness, higher margins and superior employee experiences.
Organisations across Justice and Emergency Services must evaluate their vision and transform their people, processes, technology, business models, and data readiness to unleash the power of AI and thrive in the digital era.
While many organisations have taken the first step and defined AI principles, translating these into practice is far from easy, especially with few standards or regulations currently available to guide them. Advice from the Cabinet Office CDDO is recognised as meaningful, and it continues to be revisited, with updates planned for regular publication.
Responsible AI implementation continues to be a major challenge. Taking a systematic approach from the start while addressing the associated challenges in parallel can be beneficial. A systematic approach requires proven tools, frameworks, and methodologies, enabling organisations to move from principles to practice with confidence and supporting the professionalisation of AI.
Most TechUK members are on the same journey, either shaping their own products and services or helping customers start their AI journey. Close collaboration between industry and the sector can be supported through the next term of the Justice and Emergency Services Committee, to align with emerging public sector strategy, shape good practice, share resources and ensure responsible AI implementations deliver efficiencies while maintaining trust and confidence in their application.
Spencer Newsham - IBM