Meta says some AGI systems are too risky to release

by Bella Baker


Ever since AI burst onto the scene, its creators have kept a lead foot on the gas. But according to a new policy document, Meta may slow or even stop development of AGI systems it deems “high risk” or “critical risk.”

AGI, an AI system that can do anything a human can, is something Meta CEO Mark Zuckerberg has promised to one day make openly available. But in a document titled “Frontier AI Framework,” Meta concedes that some highly capable AI systems won’t be released publicly because they could be too risky.

The framework “focuses on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons.”

“By prioritizing these areas, we can work to protect national security while promoting innovation. Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems,” a press release about the document reads.


For example, the framework calls for identifying “potential catastrophic outcomes related to cyber, chemical and biological risks that we strive to prevent.” Meta says it conducts “threat modeling exercises to anticipate how different actors might seek to misuse frontier AI to produce those catastrophic outcomes” and has “processes in place to keep risks within acceptable levels.”

If the company determines that the risks are too high, it will keep the system internal instead of allowing public access.

“While the focus of this Framework is on our efforts to anticipate and mitigate risks of catastrophic outcomes, it is important to emphasize that the reason to develop advanced AI systems in the first place is because of the tremendous potential for benefits to society from those technologies,” the document reads.

Still, it looks like Zuckerberg is hitting the brakes, at least for now, on AGI’s fast track to the future.




