A recent report commissioned by the U.S. State Department has exposed significant safety concerns voiced by employees at leading artificial intelligence labs, including OpenAI, Google, and Mark Zuckerberg's Meta. The report highlights the lack of adequate safeguards and the potential national security risks posed by advanced AI systems.
TIME reports that according to the government-commissioned report, authored by employees of Gladstone AI, some of the world's top AI researchers harbor grave concerns about the safety measures and incentives driving their organizations. The authors conducted interviews with over 200 experts, including employees from pioneering AI labs such as OpenAI, Google DeepMind, Meta, and Anthropic, all of which are actively pursuing the development of artificial general intelligence (AGI), a hypothetical technology capable of performing most…