AI Standards Hub Global Summit

April 30, 2025

As AI continues to evolve at an unprecedented pace, ensuring its safe, reliable, and ethical deployment has become a top priority for governments, industries, and regulatory bodies. 

In March 2025, the AI Standards Hub Global Summit took place in London. The two-day Summit was organised in partnership with the Organisation for Economic Co-operation and Development, the Office of the United Nations High Commissioner for Human Rights, and the Partnership on AI.

It brought together a diverse range of global stakeholders from the international AI ecosystem to examine the current state of AI standardisation and the changing role of standards in relation to AI governance frameworks and emerging regulation.

Day 1:
AI standards
The Summit began with an introduction from David Cuckow, Director of Digital ICT at BSI, who highlighted that AI standards are essential to supporting society responsibly, and that they need to ensure quality and consistency while promoting fairness, safety and innovation.

Public benefit of AI
Baroness Maggie Jones, Minister at the Department for Science, Innovation and Technology, focused on the importance of AI for growth and the department’s target to increase compute power twentyfold by 2030. By achieving this, the public will feel the benefit in public services, healthcare and schools, for example through personalised plans.

AI Act
The AI Act has been in the spotlight since it was introduced. Roberto Viola, Director General for Digital at the European Commission, stated that industry feedback was being listened to. He emphasised that the Commission wants AI to be used in every part of society, but that it needs to be monitored for safety reasons.

AI Standards Hub
The AI Standards Hub is a partnership between the Alan Turing Institute, the National Physical Laboratory and the British Standards Institution, supported by the Department for Science, Innovation and Technology (DSIT) and the Office for AI (OAI). It is dedicated to the evolving international field of standardisation for AI technologies.
Florian Ostmann from the Alan Turing Institute spoke about the Hub’s mission to advance responsible AI and to unlock standards as governance tools and innovation enablers.

Fostering international collaboration
Jerry Sheehan from the Organisation for Economic Co-operation and Development (OECD) highlighted that while AI innovation is accelerating, AI governance is still in its early days.

He explained that the OECD’s mission is to foster international collaboration around AI governance, but stressed that there is no one-size-fits-all approach across countries and that we need to foster compatibility.

Standardisation and regulation
A panel discussion also explored the relationship between standardisation and regulation. The session was moderated by Florian Ostmann, and the panellists were:

  • Karine Perset, Head of AI and Emerging Technologies, OECD
  • Tatiana Evas, Legal and Policy Officer, EU AI Office
  • David Leslie, Director of Ethics and Responsible Innovation Research, Alan Turing Institute
  • Jungwook Kim, Executive Director, Korea Development Institute

Topics explored in the conversation included the EU AI Act and the role of international standards as guidance on how to develop AI and meet legal obligations. New laws are also being adopted around the world, such as South Korea’s AI Act.

Other key points included the distinction between technical standards and guidance on values, and the need for AI standards to be sociotechnical.

The day also featured addresses from Peggy Hicks and Volker Türk, both from the UN Office of the High Commissioner for Human Rights.
Their speeches focused on the importance of integrating human rights into standards and the need to reduce entry barriers so that human rights experts can take part in standards development.

Finally, representatives from Korea, the Kenya Bureau of Standards and the Brazilian Software Association, along with Monica Okoth, took part in a panel discussion exploring global cooperation in standards development.

Day 2:
While day 1 focused on global collaboration and human rights, day 2 had a more technical focus.

Ana Alania from the Alan Turing Institute moderated a panel discussion with Maria Liakata, Elliot Jones, Adam Leon Smith and Lauriane Aufrant. Key points included that international standards must incorporate risk and quality evaluation.

Other panel sessions covered evaluating and ensuring foundation model safety, as well as standardisation and the future of AI assurance.

Conclusion
The AI Standards Hub Global Summit 2025 served as a powerful reminder that the future of artificial intelligence depends not only on technological breakthroughs, but also on shared responsibility, collaboration, and robust governance.
The energy and insight from the Summit emphasised that the work ahead is complex, but the global commitment to safe and ethical AI is strong.
Through continued dialogue, cross-country collaboration, and alignment between standards and regulation, we can build an AI future that is fair, trustworthy, and sustainable.

Author: Julia Latif

Julia ensures seamless day-to-day operations as Business Support for Inclusioneering. With a career that has shaped a diverse skill set in entrepreneurship, Julia’s mission is to empower and connect businesswomen, especially from ethnic minorities. Julia is also founder of Effect UK, a support network for businesswomen from ethnic minorities and diverse nationalities.
