The gaps between China and the US in the communication of AI technology are as follows:
Note that the materials provided do not explicitly address China's specific situation in these areas; the points above are based on the source material's analysis of the capability gaps regulators in general face in relation to AI.
expertise. Our research[151] has highlighted different levels of capability among regulators when it comes to understanding AI and addressing its unique characteristics. Our engagement has also elicited a wide range of views on the capabilities regulators require to address AI risks and on the best way for regulators to acquire these.

103. We identified potential capability gaps among many, but not all, regulators, primarily in relation to:

- AI expertise. Particularly:
  - Technical expertise in AI technology.[152] For example, on how AI is being used to deliver products and services and on the development, use and applicability of technical standards.[153]
  - Expertise on how AI use cases interact across multiple regulatory regimes.
  - Market intelligence on how AI technologies are being used to disrupt existing business models, both in terms of the potential opportunities and risks that can impact regulatory objectives.
- Organisational capacity. A regulator's ability to:
  - Effectively adapt to the emergence of AI use cases and applications, and assimilate and share this knowledge throughout the organisation.
  - Work with organisations that provide assurance techniques (e.g. assurance service providers) and develop technical standards (i.e. standards development organisations), to identify relevant tools and embed them into the regulatory framework and best practice.
  - Work across regulators to share knowledge and cooperate in the regulation of AI use cases that interact across multiple regulatory regimes.

Footnotes:
- Any attempt by a regulator to enforce a principle beyond its existing remit and powers may be legally challenged on the basis of going beyond its legal authority.
- [151] Including but not limited to Common Regulatory Capacity for AI, The Alan Turing Institute, 2022.
- [152] There is evidence that this is predominantly a recruitment problem. Regulators are trying to recruit but often cannot find the right candidates as they are competing for a limited supply of suitable candidates.
- [153] Evidence showed that technical standards expertise varies across regulators. MHRA regularly uses and designates standards to clarify legal requirements, provide presumptive conformity and demonstrate the state of the art. Other regulators
[title] The AI Executive Order signed by Biden_2023.10.30

As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months, engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan's leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India's leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

The actions that President Biden directed today are vital steps forward in the U.S.'s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.

For more on the Biden-Harris Administration's work to advance AI, and for opportunities to join the Federal AI workforce, visit [AI.gov](https://ai.gov/).
- Reflected stakeholder feedback by expanding on concepts such as robustness and governance. We have also considered the results of public engagement research that highlighted an expectation for principles such as transparency, fairness and accountability to be included within an AI governance framework.[91]
- Merged the safety principle with security and robustness, given the significant overlap between these concepts.
- Better reflected concepts of accountability and responsibility.
- Refined each principle's definition and rationale.

Principle: Safety, Security and Robustness

Definition and explanation: AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. Regulators may need to introduce measures for regulated entities to ensure that AI systems are technically secure and function reliably as intended throughout their entire life cycle.

Rationale for the principle: The breadth of possible uses for AI and its capacity to autonomously develop new capabilities and functions mean that AI can have a significant impact on safety and security. Safety-related risks are more apparent in certain domains, such as health or critical infrastructure, but they can materialise in many areas. Safety will be a core consideration for some regulators and more marginal for others. However, it will be important for all regulators to assess the likelihood that AI could pose a risk to safety in their sector or