Artificial intelligence companies and governments should dedicate at least one-third of their research and development funding to ensuring the safety and ethical use of AI systems, according to a paper released by top AI researchers. The paper, published a week ahead of the international AI Safety Summit in London, outlines measures that governments and companies should take to address AI risks.
The authors, including three Turing Award winners, a Nobel laureate, and numerous AI academics, emphasized the need for democratic oversight and legal liability for harms caused by AI systems. They highlighted the rapid progress of AI technology, stressing the urgency for adequate precautions and regulations.
Contributors to the paper include prominent figures such as Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, and Yuval Noah Harari. Despite warnings from academics and CEOs about AI risks, comprehensive regulations addressing AI safety are still lacking, with the European Union's legislation pending approval over unresolved issues.
The paper urges swift action, warning that oversight and regulatory measures must keep pace with the rapid advancement of AI technology.