For too long, the primary arbiters of an AI system’s quality have been the very companies that built it. That’s like letting students mark their own exams. To move forward, we need a common yardstick, a shared understanding of what “good” looks like. Without it, we’re just flying blind, hoping for the best while preparing for headlines about the worst.
What Are We Even Measuring? The Role of Performance Metrics
When people talk about judging an AI, they usually think of accuracy. Did it get the right answer? But that’s only a small part of the picture. True AI performance metrics go far deeper, probing for weaknesses that aren’t immediately obvious. We’re talking about robustness (does it fall apart when faced with an unexpected question?), fairness (is it biased against certain groups?), and security (can it be easily tricked or hijacked?).
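To make that concrete, here is a minimal Python sketch of what scoring a system on more than accuracy could look like. The model, the perturbation, and the group data are all toy stand-ins invented for this example, not part of any real evaluation suite.

```python
# Minimal sketch: scoring a hypothetical model on accuracy, robustness and fairness.
# The "model" below is a placeholder; a real evaluation would call an actual system.

def model(question: str) -> str:
    # Toy model: answers "yes" to anything mentioning "safe", otherwise "no".
    return "yes" if "safe" in question.lower() else "no"

def accuracy(examples):
    # Fraction of examples where the model's answer matches the expected one.
    return sum(model(q) == a for q, a in examples) / len(examples)

def robustness(examples, perturb):
    # Accuracy on perturbed inputs: does the model hold up when questions change shape?
    return accuracy([(perturb(q), a) for q, a in examples])

def fairness_gap(examples_by_group):
    # Largest difference in accuracy between any two groups of examples.
    scores = {group: accuracy(examples) for group, examples in examples_by_group.items()}
    return max(scores.values()) - min(scores.values())

if __name__ == "__main__":
    # Toy evaluation data, invented purely for illustration.
    examples = [("Is this system safe?", "yes"), ("Is this system risky?", "no")]
    shout = lambda q: q.upper() + "!!!"  # crude perturbation for the robustness check
    groups = {"group_a": examples, "group_b": [("Is it SAFE here?", "yes")]}

    print("accuracy:    ", accuracy(examples))
    print("robustness:  ", robustness(examples, shout))
    print("fairness gap:", fairness_gap(groups))
```

Real evaluation suites are far richer than this, but the shape is the same: several distinct scores, not a single pass/fail mark.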
Think of it like an MOT for a car. The test doesn’t just check if the engine starts; it examines the brakes, the tyres, the emissions, and the structural integrity. It’s a holistic assessment designed to ensure the vehicle is safe for public roads. We are now at the stage where we need a similar, standardised inspection for advanced AI systems before they are unleashed on the public.
This is why a set of agreed-upon metrics is so vital. It moves the conversation beyond a company’s polished marketing claims and into the realm of empirical, verifiable evidence.
Creating the AI Premier League with Technology Benchmarking
Once we agree on what to measure, the next step is deciding how to compare the results. This is the essence of technology benchmarking: creating a level playing field where different models are tested against the same set of challenges, allowing us to build a sort of “Premier League” table for AI. Is Model A genuinely better at creative writing than Model B, or does it just have a better PR team?
Effective benchmarking provides clarity. It helps developers understand where their models excel and where they fail, driving meaningful improvement. For businesses, it provides a basis for making informed procurement decisions. And for regulators, it provides the data needed to oversee this powerful technology, which is a cornerstone of effective global AI governance.
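As a rough illustration of that level playing field (and not any real benchmark harness), the sketch below runs a set of hypothetical models over the same fixed task list and sorts them into a league table; every model and task name is invented.

```python
# Sketch of a shared benchmark: every model faces the identical task set,
# and the results are sorted into a simple league table. All names are invented.

def run_benchmark(models, tasks):
    table = []
    for name, model in models.items():
        # Each model answers the same questions; its score is the fraction it gets right.
        score = sum(model(question) == expected for question, expected in tasks) / len(tasks)
        table.append((name, score))
    # Highest score first, like a league table.
    return sorted(table, key=lambda row: row[1], reverse=True)

if __name__ == "__main__":
    tasks = [("2 + 2", "4"), ("capital of France", "Paris")]
    models = {
        "model_a": lambda q: {"2 + 2": "4", "capital of France": "Paris"}.get(q, ""),
        "model_b": lambda q: "4",  # a weaker model that only ever answers "4"
    }
    for rank, (name, score) in enumerate(run_benchmark(models, tasks), start=1):
        print(f"{rank}. {name}: {score:.0%}")
```

The crucial point is the shared task set: change the questions per model and the table tells you nothing.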
Getting the Band Together: Global AI Governance in Action
The reality is that AI is a global phenomenon. A model developed in California can be deployed in Kenya in seconds. As Adam Beaumont, the head of the UK’s AI Safety Institute, rightly pointed out in a recent government release, “Advanced AI systems are being developed and deployed globally, so our approach to evaluating them has to be global too.”
This is the logic behind the newly rebranded International Network for Advanced AI Measurement, Evaluation and Science. It brings together key global players—including Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the UK, and the US—to share knowledge and align their approaches. The group, which first came together in November 2024, is shifting its focus towards creating a robust science of AI evaluation.
Let’s be clear: this isn’t just a friendly science fair. This is geopolitics. The nations that successfully define the AI evaluation standards will have a significant say in the direction of the technology for years to come. By coordinating efforts, these countries aim to establish a unified front, ensuring that the rules of the road are written by a broad coalition rather than a single actor. The UK’s Department for Science, Innovation and Technology (DSIT) has even taken on a Network Coordinator role, signalling a clear intent to be at the heart of this global conversation.
From Theory to Practice: Measurement Best Practices
So, how do we perform these evaluations rigorously? It’s a significant challenge. The most advanced systems exhibit what are known as “emergent capabilities”—abilities that weren’t explicitly programmed into them. This makes their behaviour difficult to predict.
Establishing measurement best practices means moving beyond static, academic benchmarks that can be “gamed” and towards more dynamic, adversarial testing. This involves:
– Red Teaming: Actively trying to trick or break the AI to find its weak spots.
– Real-World Simulations: Testing the model in complex scenarios that mirror its intended use.
– Auditing for Bias: Systematically checking for and quantifying any harmful biases in the AI’s output (a rough sketch of such a check appears below).
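To give a flavour of that third item, here is a hypothetical bias audit that asks the same question about different groups and flags the model if its behaviour diverges; the model, prompts, and threshold are all made up for illustration.

```python
# Hypothetical bias audit: send prompts that differ only in the group mentioned
# and measure whether the model's behaviour differs. Everything here is a toy stand-in.

def model(prompt: str) -> str:
    # Placeholder model with a deliberately planted bias, for demonstration only.
    return "declined" if "group_b" in prompt else "approved"

def audit_bias(template: str, groups, threshold: float = 0.1):
    # Approval rate per group, for prompts that are identical apart from the group name.
    rates = {}
    for group in groups:
        responses = [model(template.format(group=group)) for _ in range(20)]
        rates[group] = responses.count("approved") / len(responses)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold  # flag if the gap exceeds the tolerance

if __name__ == "__main__":
    rates, gap, flagged = audit_bias(
        "Should this loan application from {group} be approved?",
        groups=["group_a", "group_b"],
    )
    print("approval rates:", rates)
    print("gap:", gap, "| flagged as biased:", flagged)
```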
Sharing the findings and methodologies from these tests across the international network is crucial. What one country’s safety institute discovers can help every other member build better, safer systems.
The Foundation of It All: Public Trust
Why does any of this technical and political wrangling matter? It boils down to one word: trust. As Kanishka Narayan, a Director at DSIT, stated, “Trust in AI isn’t a choice – it’s a necessity.” He is absolutely right. Without public trust, AI will never reach its potential. Consumers will reject it, businesses will be too risk-averse to adopt it, and regulators will be forced to impose crippling restrictions.
Transparent and rigorous AI evaluation standards are the most direct path to building that trust. When people know that an independent, globally recognised body has scrutinised a system and deemed it safe and effective, they are far more likely to embrace it. This trust, as Narayan also noted, is what will “unlock its benefits for everyone.”
The Future is Standardised
Looking ahead, the alignment of these best practices is not just about safety; it’s about fostering innovation. A fragmented world with dozens of different AI evaluation regimes would be a nightmare for developers, creating trade barriers and slowing down progress. A common set of standards acts as a market-maker, creating a single, enormous playing field where the best ideas can compete and win.
We are likely moving towards a future where AI models will come with a safety and capability rating, much like the energy efficiency labels on home appliances. An “A++” rated model might be certified for use in critical infrastructure, while a “C” rated model might be limited to less sensitive applications. This framework of AI evaluation standards will become the bedrock of the entire AI economy.
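Purely as a thought experiment, here is what such a mapping could look like in code. The score bands, ratings, and permitted uses below are invented, not drawn from any existing scheme.

```python
# Hypothetical mapping from a composite evaluation score (0-100) to an
# appliance-style label and a permitted deployment tier. All thresholds are invented.

RATING_BANDS = [
    (95, "A++", "critical infrastructure"),
    (85, "A",   "regulated commercial use"),
    (70, "B",   "general commercial use"),
    (0,  "C",   "low-risk applications only"),
]

def label(score: float):
    # Return the first band whose cutoff the score meets or exceeds.
    for cutoff, rating, permitted_use in RATING_BANDS:
        if score >= cutoff:
            return rating, permitted_use
    return "unrated", "not approved"

if __name__ == "__main__":
    for score in (97, 88, 73, 40):
        rating, use = label(score)
        print(f"score {score}: rated {rating}, permitted for {use}")
```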
The work being done by the International Network for Advanced AI Measurement, Evaluation and Science may seem abstract, but it is some of the most important work happening in technology today. Its members aren’t just writing rules; they are building the foundations for a future where we can harness the incredible power of AI safely and confidently.
What do you think? Is a global, standardised approach the right way forward, or will it stifle the very innovation it seeks to enable? Let me know your thoughts below.


