Responsible Technology Adoption Unit (RTA) publishes Responsible AI in Recruitment guidance

From: techUK
Published: Tue Mar 26 2024


techUK outlines the key aspects of DSIT's latest guidance on Responsible AI in Recruitment.

The Responsible Technology Adoption Unit (RTA), formerly the Centre for Data Ethics and Innovation (CDEI), within the Department for Science, Innovation and Technology (DSIT), yesterday published new guidance on the responsible use of AI in recruitment. The guidance follows the publication of the UK's AI governance framework and the government's response to the AI White Paper, and responds to DSIT research that identified a need for clearer guidance to help HR and recruitment organisations adopt AI responsibly.

Below is a summary of the guidance and the information it provides.

The guidance explains how AI assurance techniques can help organisations operationalise the government's five AI regulatory principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The guidance outlines the processes, metrics and frameworks for putting these principles into practice throughout the adoption process, from procurement through deployment to live operation. It covers a wide spectrum of assurance mechanisms, from risk assessments and performance testing to training and upskilling, and live-operation feedback.

Across the four stages of the adoption process (pre-procurement, procurement, pre-deployment and live operation), the guidance sets out the considerations organisations should weigh and the corresponding assurance mechanisms.

The section on 'Pre-procurement', for example, prompts businesses to consider the purpose of an AI system. It includes relevant questions to ask in this respect and signposts mechanisms to support this, such as creating an AI governance framework and conducting an impact assessment.

For each mechanism, the guidance describes what it is and how it works, and provides resources and templates to support firms in undertaking it.

Taking impact assessments as an example, the guidance notes different types of impact assessment, such as algorithmic impact assessments and equality impact assessments, describes what one might look like in the case of procuring an automated job description tool, and links to supporting resources. It also notes the AI principles this mechanism can support: safety, security and robustness; appropriate transparency and explainability; fairness; and accountability and governance.

This information is provided for each consideration and assurance mechanism, whilst an annex covers referenced AI recruitment tools and their associated risks.

What does the Responsible Technology Adoption Unit's (RTA) Responsible AI in Recruitment Guidance mean for members?

The guidance provides organisations seeking to procure and adopt AI recruitment tools with a toolbox of considerations and assurance mechanisms through which to evaluate the implications of particular applications and mitigate potential risks.

By signposting key interventions at every stage of the process, this could prove helpful to businesses considering adoption that lack the resources, capacity or expertise to navigate the attendant opportunities and risks.

However, as the guidance notes, there is no one-size-fits-all approach to AI assurance, and some organisations may only have the resources to focus on the areas of highest risk. This is welcome recognition, though businesses would benefit from more insight into where those areas of highest risk may lie. The rollout of, and business engagement around, this guidance will be key.

techUK will continue to engage with the RTA and DSIT in this area and welcomes members' feedback on the guidance.
