Calculating AI Safety: A Call for Responsibility
Max Tegmark argues that AI companies must rigorously calculate the probability that Artificial Superintelligence will escape human control, a figure he calls the Compton constant, before building such systems. Recent discussions among AI experts underline the need to make safety a priority as the technology evolves.
The AI Maker
3/2/2026 · 2 min read


As artificial intelligence continues to evolve, the conversation around its safety has become increasingly urgent. Max Tegmark, a prominent figure in AI safety, recently argued that AI companies should replicate the kind of risk calculation carried out before Robert Oppenheimer's Trinity test of the first atomic bomb. His assertion? AI companies must rigorously assess the risks associated with their creations, particularly when it comes to Artificial Superintelligence (ASI).
In a paper co-authored with students at the Massachusetts Institute of Technology (MIT), Tegmark introduced the concept of the "Compton constant": the probability that a superintelligent AI escapes human control, named after the calculation made by physicist Arthur Compton before the nuclear test. In 1945, Compton put the odds of a runaway fusion reaction at "slightly less" than one in three million, a risk deemed low enough to proceed with the test. Tegmark urges today's AI developers to bring the same level of diligence to assessing the risks of their technologies.
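To make that comparison concrete, here is a minimal sketch in Python of how an estimated Compton constant might be checked against the kind of acceptability threshold Compton used in 1945. The numbers and function names are purely illustrative assumptions, not figures or methods from Tegmark's paper.

```python
# Illustrative sketch: compare an estimated "Compton constant" (the probability
# that a superintelligent AI escapes human control) against the kind of bound
# Arthur Compton used in 1945 (slightly less than 1 in 3,000,000).
# All values and names below are assumptions for illustration only.

COMPTON_1945_THRESHOLD = 1 / 3_000_000  # Compton's 1945 acceptability bound


def is_acceptable(estimated_escape_probability: float,
                  threshold: float = COMPTON_1945_THRESHOLD) -> bool:
    """Return True if the estimated loss-of-control probability falls below the threshold."""
    return estimated_escape_probability < threshold


# Example: even a 0.1% estimated chance of losing control misses that
# historical bar by several orders of magnitude.
estimate = 0.001
print(f"Estimate {estimate:.1e} acceptable? {is_acceptable(estimate)}")
```

The point of the sketch is simply that optimism is not a number: whatever probability a developer claims, it can be stated explicitly and compared against an agreed threshold.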
It’s not sufficient for companies to feel optimistic about their AI systems; they need to provide concrete calculations to back their confidence. Tegmark argues that a consensus on the Compton constant among AI firms could foster the political will necessary to establish global safety standards for AI development. As the AI landscape evolves, this collaborative approach could be crucial in ensuring that the advancements we make do not come at the expense of safety.
Tegmark, who is also a co-founder of the Future of Life Institute, has been an advocate for responsible AI development. In 2023, he helped produce an open letter calling for a pause in the development of powerful AI systems, which garnered over 33,000 signatures, including those of high-profile figures like Elon Musk and Steve Wozniak. This letter highlighted the frantic pace at which AI labs were racing to deploy increasingly powerful systems that are difficult to understand and control.
Recently, Tegmark participated in discussions with AI experts and industry professionals, resulting in the Singapore Consensus on Global AI Safety Research Priorities. This report outlines three key areas for prioritization: measuring the impact of current and future AI systems, defining desired AI behavior, and managing AI systems effectively. These guidelines could serve as a framework for advancing AI technology while ensuring safety remains a top priority.
While the road ahead may seem daunting, Tegmark believes that the recent governmental AI summit in Paris marked a turning point. With renewed international collaboration in AI safety, there is hope that we can navigate the challenges posed by these powerful systems responsibly. As the future of AI unfolds, it’s essential that we prioritize safety and accountability, ensuring that our journey into this new technological frontier is both innovative and secure.
