Global AI Safety Summit 2023: Three key considerations for governments

26 October 2023

Next week, the UK will bring together world leaders to discuss the safe use of artificial intelligence (AI) and help to build an international consensus behind future regulation at the first ever Global AI Safety Summit.

With preparations gearing up ahead of the Summit, the UK Government this week shared the discussion paper that will frame the event, placing cyber capabilities and risks front and centre of the deliberations. UK Prime Minister Rishi Sunak also gave a speech to mark the launch of the paper and announce a new AI Safety Institute, confirming that while regulation has a key role to play, the UK will “not rush to regulate”.

Informed by our recent research, parliamentary evidence, and participation in pre-Summit events organised by the UK Government, NCC Group shares the top three things the Government should be considering when it comes to cybersecurity and AI.

There is no evidence of the existential risk, yet. But that doesn’t mean we shouldn’t be alive to it.

In our work and our research, we have yet to see any evidence of a so-called ‘cybergeddon’ – a runaway, wide-scale disruption as a result of AI-enabled cyberattacks. Across our engagement with the pre-Summit events, this assessment was largely echoed by representatives from across the cyber and tech ecosystem.

However, while evidence today does not point to this outcome, we do not know the unknown unknowns. It is therefore critical that developments are closely monitored by the new AI Safety Institute announced by the Prime Minister today. Indeed, there may come a point where we cease regulating AI as a product and start treating it as an offensive capability, looking at mechanisms such as export restrictions and treaties.

Cybersecurity is the prerequisite of safety.

As highlighted in our recent whitepaper – and reiterated in the assessment published today by the UK intelligence agencies – AI systems are subject to a range of AI-specific cyber vulnerabilities, such as data leaks, data poisoning, backdoors, and remote code execution issues.

These vulnerabilities in turn expose AI systems to many of the risks being considered at next week’s Summit, including bias, breaches of data privacy, and unsafe operation.
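
To make one of these vulnerability classes concrete, the sketch below is a minimal, hypothetical Python illustration of data poisoning (it is not drawn from our whitepaper): an attacker who can write to a training pipeline flips a fraction of the labels, and the resulting model quietly degrades even though the training code itself is never touched.

```python
# A minimal, hypothetical sketch of label-flipping data poisoning.
# All names and figures here are illustrative, not NCC Group code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification task standing in for a real training set.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression().fit(X_train, y_train)

# An attacker with write access to the training pipeline silently
# flips 30% of the training labels.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

print(f"clean accuracy:    {clean_model.score(X_test, y_test):.2f}")
print(f"poisoned accuracy: {poisoned_model.score(X_test, y_test):.2f}")
```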

Governments must therefore not only consider the cybersecurity risks in themselves, but also view cyber resilience as the critical underpinning for the safe deployment of AI more broadly. Ultimately, we cannot make assurances about the safety and soundness of a system that we do not control, so we must ensure that we build AI systems that are ‘Secure-by-Design’ and continuously and independently assured against appropriate cybersecurity standards.

Guardrails enable innovation.

The UK Prime Minister and his Government have stated that frontier AI developers “should not mark their own homework”, arguing that “Responsible Capability Scaling” – which defines risk thresholds, specifies mitigations, and implements robust governance – is needed. We see similar positions taken by other governments globally.

Developers and users of AI need regulatory clarity on the guardrails they are operating within. This will not only enable the safe deployment of AI but will help to unleash innovation by building trust and confidence (and investment) in these systems.

While it is true that we must take the time to ensure any regulation is well-crafted – built with flexibility and global alignment in mind – governments must prioritise establishing their frameworks and equipping regulators with the skills they need to oversee the safe deployment of this fast-evolving technology. The longer we delay, the longer we operate without clarity on the guardrails – which is not beneficial to users, developers, or society as a whole.

Here, NCC Group CTO Sian John comments: “AI holds incredible promise, but we must keep in mind the need for cybersecurity in this rapidly evolving landscape. While AI offers endless opportunities for UK industries, we must make sure that AI practices are adopted responsibly. The UK has a unique opportunity to drive forward the safe use of AI to open up possibilities for businesses across different sectors.

“Rishi Sunak’s speech on AI signified the need for the government, regulators, and industry to collaborate to make AI safe. As AI continues to develop at pace, we need to nurture the right skills and resources to keep up with change. The announcement of the world’s first AI Safety Institute, to be based in the UK, is a significant step towards facilitating the responsible and secure use of technology in the future and will put the UK at the forefront of technological development.”

From prompts to privacy, security to safety – it’s time to set a strategy for the AI era. In our new whitepaper we introduce AI’s reinvention of cybersecurity, setting a baseline of understanding of key AI concepts, threats, and opportunities for business decision makers and policymakers, to support their thinking and strategies in this fast-paced, exciting new technological era.

Click here to download it now: Whitepaper | Cyber Resilience in the Age of Artificial Intelligence - NCC Group

Contact

NCC Group Press Office

All media enquiries relating to NCC Group plc.

press@nccgroup.com

+44 7721577574