The UK Lists Top Nightmare AI Scenarios Ahead of Its Big Tech Summit


Deadly bioweapons, automated cybersecurity attacks, powerful AI models escaping human control. Those are just some of the potential threats posed by artificial intelligence, according to a new UK government report. It was released to help set the agenda for an international summit on AI safety to be hosted by the UK next week. The report was compiled with input from leading AI companies such as Google’s DeepMind unit and multiple UK government departments, including intelligence agencies.

Joe White, the UK’s technology envoy to the US, says the summit provides an opportunity to bring countries and leading AI companies together to better understand the risks posed by the technology. Managing the potential downsides of algorithms will require old-fashioned organic collaboration, says White, who helped plan next week’s summit. “These aren’t machine-to-human challenges,” White says. “These are human-to-human challenges.”

UK prime minister Rishi Sunak will make a speech tomorrow about how, while AI opens up opportunities to advance humanity, it’s important to be honest about the new risks it creates for future generations.

The UK’s AI Safety Summit will take place on November 1 and 2 and will mostly focus on the ways people can misuse or lose control of advanced forms of AI. Some AI experts and executives in the UK have criticized the event’s focus, saying the government should prioritize more near-term concerns, such as helping the UK compete with global AI leaders like the US and China.

Some AI experts have warned that a recent uptick in discussion about far-off AI scenarios, including the possibility of human extinction, could distract regulators and the public from more immediate problems, such as biased algorithms or AI technology strengthening already dominant companies.

The UK report released today considers the national security implications of large language models, the AI technology behind ChatGPT. White says UK intelligence agencies are working with the Frontier AI Task Force, a UK government expert group, to explore scenarios like what could happen if bad actors combined a large language model with secret government documents. One doomy possibility discussed in the report suggests a large language model that accelerates scientific discovery could also boost projects trying to create biological weapons.

This July, Dario Amodei, CEO of AI startup Anthropic, told members of the US Senate that within the next two or three years it could be possible for a language model to suggest how to carry out large-scale biological weapons attacks. But White says the report is a high-level document that is not intended to “serve as a shopping list of all the bad things that can be done.”

In addition to UK government agencies, the report released today was reviewed by a panel including policy and ethics experts from Google’s DeepMind AI lab, which began as a London AI startup and was acquired by the search company in 2014, and Hugging Face, a startup developing open source AI software.

Yoshua Bengio, one of three “godfathers of AI” who won the Turing Award, the highest honor in computing, for machine-learning techniques central to the current AI boom, was also consulted. Bengio recently said his optimism about the technology he helped foster has soured and that a new “humanity defense” organization is needed to help keep AI in check.


