News reaction: UK AI regulation consultation comes to a close

29 June 2023

Last week, the UK Government’s consultation on its new AI White Paper, which sets out a proposed ‘pro-innovation’ approach to regulating AI, came to a close.

With plans in place to regulate by sector, we broadly support the UK Government’s endeavours to create a common framework for AI governance that is context-driven and delivers on its ambitions to promote innovation, while keeping the UK and its allies safe and secure.

Here, Chief Scientist Chris Anley lays out some of the key points from NCC Group’s submission to the consultation, highlighting where we believe the Government’s plans could be strengthened to build trust in AI technologies and cement the UK’s position as a global leader in this field:

  • The Government’s proposed definition of AI should be updated to ensure that organisations cannot evade regulation. The current definition could create a loophole where organisations design their technologies to be either not adaptive or not autonomous, and thereby not considered true AI under the Government’s framework.
  • A Government-led consumer labelling scheme, backed up by independent third-party product validation, would enable end-users to more confidently use AI technologies, knowing steps have been taken to reduce associated risks. For higher-risk products, we believe that third-party product validation should be mandated.
  • Flexibility, agility and periodic regulatory and legislative reviews should be built in from the outset to keep pace with technological and societal developments.
  • As regulators assume a greater role in overseeing the use of AI, their powers, resources and capabilities should be strengthened accordingly.
  • There remains a significant shortage of the skills we need to develop AI frameworks, and assure systems’ safety, security and privacy. If the UK wants to be a global leader in AI, the Government must focus investment on developing the skills we need to make its regime a success.
  • End-users and consumers should be empowered to make informed decisions about the AI systems they use through greater transparency about where and how AI technologies are being deployed. There should also be clear routes for redress where things go wrong, potentially including a new AI Ethics Ombudsman.
  • If the Government wants to ensure that UK languages, religious outlooks, values and cultural references are protected, while also minimising the risk of adopting biases seen elsewhere in the world, steps must be taken to make UK datasets more readily available for use in AI.
  • The Government’s regulatory principles should more explicitly tackle the risks associated with bias that we see in some AI systems. Removing or reducing existing inherent biases, while balancing data privacy needs and taking steps to ensure that social issues are not exacerbated, will be crucial. Steps should include: vetting data supply chains; ensuring datasets are representative and appropriate for the jurisdiction in which they are used (taking into account the diversity of the development team, whose unconscious bias may affect the data); and establishing clear reporting processes.
  • The drafting, approval and implementation of technical standards that underpin the UK’s regulatory framework will be critical.
  • Steps must be taken to ensure that all high-risk AI systems are effectively governed. There is a risk with the proposed sectoral approach that some systems slip through the net and go unregulated.

The UK Government will consider responses and set out its final approach in the next few months, ahead of a Global AI Safety Summit it is due to host in December.

We’ll continue to support the development of the regulatory framework by sharing our expertise and insights from operating at the ‘coalface’ of cyber security, safety and resilience.

Contact

NCC Group Press Office

All media enquiries relating to NCC Group plc.

press@nccgroup.com

+44 7721577574