
OpenAI sets up "Preparedness" early-warning team: the board of directors has the right to block the release of new AI models


CTOnews.com, December 19 — OpenAI, the developer of ChatGPT, recently announced the formation of a new "Preparedness" team to monitor the potential threats posed by its technology and to prevent it from falling into the wrong hands or even being used to make chemical and biological weapons.

The team, led by MIT artificial intelligence professor Aleksander Madry, will recruit AI researchers, computer scientists, national security experts, and policy experts to continuously monitor and test the technology OpenAI develops and to warn the company at any sign of danger.

On Monday, OpenAI released a set of guidelines called the Preparedness Framework, stressing that they are still in a testing phase.

The Preparedness team will send a monthly report to a new internal safety advisory group, which will analyze it and submit recommendations to OpenAI chief executive Sam Altman and the board of directors. Altman and other company executives can decide whether to release a new AI system based on these reports, but the board has the right to reverse that decision.

The Preparedness team will repeatedly evaluate OpenAI's most advanced, unreleased AI models and rate them on four levels of perceived risk, from lowest to highest: "low", "medium", "high", and "critical". Under the new guidelines, OpenAI will only launch models rated "low" or "medium".

OpenAI's Preparedness team sits between two existing teams: the "Safety Systems" team, which is responsible for issues such as eliminating racial bias in AI systems, and the "Superalignment" team, which studies how to ensure AI does not harm humans in hypothetical future scenarios where it surpasses human intelligence.

CTOnews.com noted that the popularity of ChatGPT and the rapid development of generative AI have triggered heated debate in the tech community about the technology's potential dangers. Prominent AI experts from OpenAI, Google, and Microsoft warned this year that the technology could pose an existential threat to humanity on a par with pandemics or nuclear weapons. Other AI researchers argue that focusing on these distant, enormous risks overlooks the harm AI technology is already causing. Some AI business leaders, meanwhile, believe concerns about the risks are exaggerated and that companies should press ahead with developing the technology for the benefit of society, and profit from doing so.

OpenAI has taken a middle position in this debate. Chief executive Sam Altman acknowledges the technology's serious long-term risks but also calls for attention to solving existing problems. He believes regulation should not make it harder for smaller companies to compete in the AI field. At the same time, he has pushed the company to commercialize its technology and raise funds to speed up development.

Madry is a senior AI researcher who ran MIT's Center for Deployable Machine Learning and co-led the MIT AI Policy Forum. He joined OpenAI this year, resigned along with a handful of other OpenAI leaders when Altman was fired by the board, and returned to the company five days later when Altman was reinstated. OpenAI is governed by a non-profit board whose mission is to advance AI and ensure it benefits all of humanity. After Altman's reinstatement, the three board members who had fired him resigned, and the organization is in the process of selecting new board members.

Despite the upheaval in its leadership, Madry said he still believes OpenAI's board takes the risks of AI seriously.

In addition to AI talent, OpenAI's Preparedness team will recruit experts in areas such as national security to help the company understand how to deal with major risks. Madry said the team has begun reaching out to agencies such as the U.S. National Nuclear Security Administration to ensure the company can properly study the risks of AI.

One of the team's priorities is to monitor when and how OpenAI's technology could instruct people to carry out computer intrusions or build dangerous chemical, biological, and nuclear weapons, beyond what can be found online through ordinary research. Madry is looking for people who "will think deeply: 'How can I break these rules? How can I be the smartest villain?'"

OpenAI said in a blog post on Monday that it would also allow "qualified independent third parties" from outside OpenAI to test its technology.

Madry said he agrees with neither the "doomers" who fear AI surpassing human intelligence nor the "accelerationists" who want to remove all obstacles to AI development.

"I really believe that it is a very simple way to separate development from inhibition," he said. "AI has great potential, but we also need to work to ensure that these potentials are realized and minimize negative impacts."
