AI governance spans multiple dimensions; the relevant material is summarized below:
[title] The AI Executive Order signed by President Biden, 2023-10-30
As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months, engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan's leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India's leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations. The actions that President Biden directed today are vital steps forward in the U.S.'s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation. For more on the Biden-Harris Administration's work to advance AI, and for opportunities to join the Federal AI workforce, visit [AI.gov](https://ai.gov/).
[title] A pro-innovation approach to AI regulation
embedded in the broader regulatory considerations as regulators and AI life cycle actors are expected to comply with the UK's data protection framework.
guidance in a way that is coordinated and coherent with the activities of other regulators. Regulators' implementation of this principle may require the corresponding AI life cycle actors to regularly test or carry out due diligence on the functioning, resilience and security of a system.93 Regulators may also need to consider technical standards addressing safety, robustness and security to benchmark the safe and robust performance of AI systems and to provide AI life cycle actors with guidance for implementing this principle in their remit.
Principle: Appropriate transparency and explainability
Definition and explanation: AI systems should be appropriately transparent and explainable. Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used). Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system.94
An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles (e.g. to identify accountability). An appropriate degree of transparency and explainability should be proportionate to the risk(s) presented by an AI system. Regulators may need to look for ways to support and encourage relevant life cycle actors to implement appropriate transparency measures, for example
Public expectations for AI governance (transparency, fairness and accountability), Centre for Data Ethics and Innovation, 2023.
Principles for the security of machine learning, National Cyber Security Centre, 2022.
[title] A pro-innovation approach to AI regulation
However, AI can increase the risk of unfair bias or discrimination across a range of indicators or characteristics. This could undermine public trust in AI.
Product safety laws ensure that goods manufactured and placed on the market in the UK are safe. Product-specific legislation (such as for electrical and electronic equipment,56 medical devices,57 and toys58) may apply to some products that include integrated AI. However, safety risks specific to AI technologies should be monitored closely. As the capability and adoption of AI increases, it may pose new and substantial risks that are unaddressed by existing rules.
Consumer rights law59 may protect consumers where they have entered into a sales contract for AI-based products and services. Certain contract terms (for example, that goods are of satisfactory quality, fit for a particular purpose, and as described) are relevant to consumer contracts. Similarly, businesses are prohibited from including certain terms in consumer contracts. Tort law provides a complementary regime that may provide redress where a civil wrong has caused harm. It is not yet clear whether consumer rights law will provide the right level of protection in the context of products that include integrated AI or services based on
Global Innovation Index 2022, GII 2022; Global Indicators of Regulatory Governance, World Bank, 2023.
Demand for AI skills in jobs, OECD Science, Technology and Industry Working Papers, 2021.
The protected characteristics are age, disability, gender reassignment, marriage and civil partnership, race, religion or belief, sex, and sexual orientation.
Article 5(1)(a), Principles relating to processing of personal data, HM Government, 2016.
Electrical Equipment (Safety) Regulations, HM Government, 2016.
Medical Devices Regulations, HM Government, 2002.
Toys (Safety) Regulations, HM Government, 2011.