
‘The stakes are high’: Global AI safety report highlights risks

    Created by almost 100 AI experts, the report gives a scientific understanding of what risks advanced AI might pose to society.

    In 2023, the UK hosted the world’s first major AI Safety Summit to promote greater international collaboration on the emerging technology.

    From that came the Bletchley Declaration, which marked an agreement between some of the world’s major powers to work together in regulating AI.

    The countries present – which included Ireland – also agreed to support the creation of an international scientific report on the safety of advanced AI.

    Now, following an interim report in May 2024, which was presented at the AI Seoul Summit, the full International AI Safety Report has been published.

    Ciarán Seoighe, deputy CEO of Research Ireland, is listed as the expert for Ireland – though he said he worked as a conduit for connecting Ireland’s AI experts to the report. Specifically, he called out University College Cork professor Barry O’Sullivan, University College Dublin assistant professor Susan Leavy and Dublin City University professor and former director of the Insight Centre for Data Analytics Alan Smeaton.

    Speaking to SiliconRepublic.com, Seoighe said that while the report does acknowledge the potential benefits of AI, its job was to identify and focus on the risks that AI poses.

    “The focus was really to go and say, ‘well, what can go wrong?’ It is not intended to be a policy document, per se. So, it doesn’t prescribe what policymakers can do or should do. What it’s doing is outlining what the risks are in a clearly structured, scientific manner, so the policymakers can use this as a guide and then decide what rules and regulations to apply.”

    Chaired by Prof Yoshua Bengio of the Mila – Quebec AI Institute, the report is the work of 96 international AI experts from across 30 countries as well as the OECD, the EU and the UN, who worked to establish “an internationally shared scientific understanding of risks from advanced AI”.

    The report focused on general-purpose AI, ie, systems that are able to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and other such tasks.

    The experts aimed to summarise scientific evidence around three core questions: What can general-purpose AI do? What are the risks associated with general-purpose AI? And what mitigation techniques are there against these risks?

    The biggest risk factors

    The global report outlines several risks, from malicious use to malfunctions and systemic risks. Malicious use risks range from harming individuals through fake content such as scams, to manipulating public opinion – something that became particularly obvious through various elections in 2024 – to cyber offences and chemical attacks.

    Risks from malfunctions range from reliability issues such as hallucinations, to bias within the systems – which can amplify racial, cultural, gender and other such biases – to a potential loss of control.

    Systemic risks refer to the wider societal impact the evolution of AI can have, such as growing concerns around the labour market and loss of jobs, the environmental impact from the energy required to power these models and risks to privacy and copyright infringement.

    Seoighe said Ireland-based experts were particularly vocal on the importance of including the impact of AI on fundamental human rights.

    “This is something that’s important and that should be called out,” he said, adding that Ireland had flagged objections in relation to this in earlier drafts. “And then, in fairness, the team in the UK, they address that in the subsequent versions.

    “We just wanted to make sure that there’s a reference to the impact on the individuals and our individual human rights in the report. But that was only one small part. There’s a lot of input from the Irish team and Susan Leavy is an AI ethicist, and actually, she became a part of the core writing group.”

    Ongoing questions

    The experts said the report is essential for improving the collective understanding of AI and its potential risks. However, the report also highlighted disagreements around several questions.

    Shedding light on these disagreements, Seoighe said the real challenge is trying to understand the pace at which some of the big AI developments will happen and by extension, how soon certain risks could become a reality.

    “Some people think that the risk of us losing control of AI, for example … some people think that could happen sooner rather than later, and others think that could be decades away,” he said.

    “It’s hard to know, you’re looking at something that has really gone exponential in the last year or two … If it continues exponentially, what would that mean? What could it do? And of course, if it continued exponentially at the current rate for a couple of years, it would soon use more power than the entire planet can create. So that’s obviously not going to happen, but it’s the disruptors that can continue to happen.”

    DeepSeek is just one example of a disruptor in a sector that was itself already seen as a disruption. When ChatGPT burst into the mainstream in November 2022, the word disruption was bandied about for months in relation to similar generative AI models.

    But while GPT-4 cost in the region of $100m to train, Chinese AI model DeepSeek has ruffled feathers on the global AI stage, disrupting the disruptors at a fraction of the cost.

    “What we’re seeing is there are risks in cybersecurity, there are risks of bad actors using this thing in different ways, or there are risks of us losing control of it in different ways, but we just don’t know to what extent or how quickly that might happen. And you know, that’s where a lot of the disagreement was,” said Seoighe.

    Where are we now?

    One of the biggest concerns around AI safety is the vested interests either fighting regulation, claiming it stifles innovation, or making a case for self-regulation. All of these discussions are tough to parse when many models are already very much in use.

    Seoighe offered hope regarding this, quoting Stuart J Russell, a leading researcher in AI. “He uses the line: ‘our job is to make safe AI, not make AI safe’. And by that, what he means is design it with safety built in from the outset, rather than build and design it, and then later try to control it,” said Seoighe.

    “Imagine AI is like flight. [Russell’s] view is, we’re very much at that stage where we’re the Wright brothers. We are not wandering around in jets yet. We’re at the very, very early stages. So it is still time to regulate, to control, to set guidelines and policies around this thing again, while not stifling innovation and creativity, but recognising that we want to keep people safe, that we want to recognise the risks of it.”

    Seoighe added that while the report did have feedback from a variety of groups outside the scientific community, including industry groups and government groups, the point of the report was really to focus on the AI risks “from a purely scientific perspective” in order to give a transparent view of the knowns and unknowns so policymakers can think about how to handle them.

    In the report itself, the experts said the goal is to “help the international community to move towards greater consensus about general-purpose AI and mitigate its risks more effectively” so that people can safely experience its benefits.

    “The stakes are high. We look forward to continuing this effort.”

