
Meta says it may halt development of AI systems it deems too risky

Meta CEO Mark Zuckerberg has pledged to one day make artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can, openly available. In a new policy document, however, Meta suggests there are certain scenarios in which it may not release a highly capable AI system it has developed internally.

The document, which Meta calls its Frontier AI Framework, identifies two categories of AI systems the company considers too dangerous to release: “high-risk” and “critical-risk” systems.

According to Meta, both “high-risk” and “critical-risk” systems are capable of aiding chemical, biological, and cyberattacks; the distinction is that “critical-risk” systems could produce a “catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context.” High-risk systems, by comparison, might make an attack easier to carry out, but not as reliably or consistently as a critical-risk system.

What sort of attacks are we talking about here? Meta offers a few examples, like the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons.”

Meta acknowledges that the list of potential catastrophes in its document is far from exhaustive, but says it covers those the company considers “the most urgent” and the most likely to arise as a direct result of deploying a powerful AI system.

Surprisingly, the document states that Meta classifies system risk not according to any single empirical test, but based on the input of internal and external researchers, whose assessments are then reviewed by “senior-level decision-makers.” Why? According to Meta, the science of evaluation is not “sufficiently robust as to provide definitive quantitative metrics” for determining how risky a system is.

If a system is deemed high-risk, Meta says it will limit internal access to the system and will not release it until mitigations are in place to “reduce risk to moderate levels.” If a system is deemed critical-risk, Meta says it will halt development until the system can be made less dangerous, and will apply unspecified security protections to keep the system from being exfiltrated.

Meta’s Frontier AI Framework, which the company says will evolve with the changing AI landscape and which it had previously committed to publishing ahead of this month’s France AI Action Summit, appears to be a response to criticism of the company’s “open” approach to system development.



