Here is the latest news about AI:
[title] OpenAI's new model released September 12: OpenAI o1 (compiled by WaytoAGI)
[heading1] OpenAI o1-preview
[heading2] Safety
To match the new capabilities of these models, we've bolstered our safety work, internal governance, and federal government collaboration. This includes rigorous testing and evaluations using our [Preparedness Framework](https://cdn.openai.com/openai-preparedness-framework-beta.pdf), best-in-class red teaming, and board-level review processes, including by our Safety & Security Committee.
To advance our commitment to AI safety, we recently formalized agreements with the U.S. and U.K. AI Safety Institutes. We've begun operationalizing these agreements, including granting the institutes early access to a research version of this model. This was an important first step in our partnership, helping to establish a process for research, evaluation, and testing of future models prior to and following their public release.
[title] AI Executive Order signed by Biden, 2023.10.30
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
- Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and set an example for the private sector and governments around the world.