missing opportunities to do so.[84] Regulators told us that AI risk assessments should include the failure to exploit AI capabilities. For example, there can be a significant opportunity cost related to not having access to AI in safety-critical operations, from heavy industry[85] to personal healthcare (see box 1.1). Sensitivity to context will allow the framework to respond to the level of risk in a proportionate manner and avoid stifling innovation or missing opportunities to capitalise on the social benefits made available by AI.
guidance in a way that is coordinated and coherent with the activities of other regulators. Regulators' implementation of this principle may require the corresponding AI life cycle actors to regularly test or carry out due diligence on the functioning, resilience and security of a system.[93] Regulators may also need to consider technical standards addressing safety, robustness and security to benchmark the safe and robust performance of AI systems and to provide AI life cycle actors with guidance for implementing this principle in their remit.

**Principle: Appropriate transparency and explainability**

Definition and explanation

AI systems should be appropriately transparent and explainable. Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used). Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system.[94]

An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles (e.g. to identify accountability). An appropriate degree of transparency and explainability should be proportionate to the risk(s) presented by an AI system.

Regulators may need to look for ways to support and encourage relevant life cycle actors to implement appropriate transparency measures, for example
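To make the transparency information above concrete, the sketch below shows one way an AI life cycle actor might record how, when, and for which purposes a system is used. This is a minimal illustration only: the field names and example values are invented for this sketch and are not drawn from any regulator's schema or guidance.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyRecord:
    # Hypothetical record of the information the principle calls for:
    # how, when, and for which purposes an AI system is being used.
    system_name: str
    purpose: str               # why the system is deployed
    deployment_context: str    # where and when it is used
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    accountable_party: str = ""  # supports the accountability principle

    def to_json(self) -> str:
        # Serialise so the record can be published or shared with a regulator.
        return json.dumps(asdict(self), indent=2)

record = TransparencyRecord(
    system_name="triage-assistant",
    purpose="prioritise incoming support tickets",
    deployment_context="customer-service queue, business hours",
    inputs=["ticket text", "customer tier"],
    outputs=["priority score"],
    accountable_party="ops-team@example.com",
)
print(record.to_json())
```

A structured record like this could give a regulator the "sufficient information about AI systems and their associated inputs and outputs" that the principle describes, proportionate to the risk of the system.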
- Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals' privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
- Evaluate how agencies collect and use commercially available information, including information they procure from data brokers, and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
- Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans' data.

**Advancing Equity and Civil Rights**

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the [Blueprint for an AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) and issuing an [Executive Order directing agencies to combat algorithmic discrimination](https://www.whitehouse.gov/briefing-room/statements-releases/2023/02/16/fact-sheet-president-biden-signs-executive-order-to-strengthen-racial-equity-and-support-for-underserved-communities-across-the-federal-government/), while enforcing existing authorities to protect people's rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:

- Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
- Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
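The privacy-preserving techniques mentioned above are not specified in the fact sheet; one well-known example from the research literature is the Laplace mechanism of differential privacy, which answers aggregate queries while masking any one individual's contribution. The sketch below is illustrative only: the dataset, predicate, and epsilon value are invented for this example, and it is not drawn from any federal guideline.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) variate,
    # using only the standard library.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    # A count query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so Laplace(1/epsilon) noise gives
    # epsilon-differential privacy for this single query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical data: ages in a small survey.
ages = [34, 29, 41, 58, 23, 45, 62, 37]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of people aged 40+: {noisy:.2f}")
```

Smaller epsilon values add more noise and hence give stronger privacy at the cost of accuracy, which mirrors the proportionality trade-off the guidelines for evaluating such techniques would need to address.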