The following is relevant content from "A pro-innovation approach to AI regulation":
In Annex A: Implementation, the paper states that where a decision has a legal or similarly significant effect on an individual, regulators will need to consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties. AI systems should comply with regulatory requirements relating to the vulnerability of individuals within specific regulatory domains, and regulators will need to consider, pursuant to their existing powers and remits, how the use of AI systems may alter individuals' vulnerability. Regulators should also consider the role of available technical standards addressing AI fairness, bias mitigation and ethical considerations (e.g. ISO/IEC TR 24027:2021, ISO/IEC 12791*, ISO/IEC TR 24368:2022) to clarify regulatory guidance and support the implementation of risk treatment measures.
On accountability and governance, the paper anticipates that regulators will need to determine who is accountable for compliance with existing regulation and the principles. In the initial stages of implementation, regulators might provide guidance on how to demonstrate accountability. In the medium to long term, government may issue additional guidance on how accountability applies to specific actors within the ecosystem, and provide guidance on governance mechanisms, including, potentially, activities in scope of appropriate risk management and governance processes (including reporting duties).
The paper also provides illustrative examples of AI systems. Natural language processing in customer service chatbots exhibits adaptivity and autonomy: trained on huge datasets to identify statistical patterns in ordinary human speech, such a system may become more personalised as it learns from each new experience, but it may also unintentionally incorporate inaccurate or misleading information. Automated healthcare triage systems can predict patient conditions and generate information from the analysis of medical datasets, patient records and real-time health data, but they too carry potential risks.
When AI systems are not sufficiently explainable, AI suppliers and users risk inadvertently breaking laws, infringing rights, causing harm and compromising the security of AI systems. AI systems should display levels of explainability that are appropriate to their context.
On the fairness principle, AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system's use, outcomes and the application of relevant law. Regulators may need to develop and publish relevant descriptions and illustrations.
The corresponding source excerpts read:

Annex A: Implementation (Fairness)
• Where a decision has a legal or similarly significant effect on an individual, regulators will need to consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties.
• AI systems should comply with regulatory requirements relating to vulnerability of individuals within specific regulatory domains. Regulators will need to consider how use of AI systems may alter individuals' vulnerability, pursuant to their existing powers and remits.
• Consider the role of available technical standards addressing AI fairness, bias mitigation and ethical considerations (e.g. ISO/IEC TR 24027:2021, ISO/IEC 12791*, ISO/IEC TR 24368:2022) to clarify regulatory guidance and support the implementation of risk treatment measures.

Accountability and governance
We anticipate that regulators will need to:
• Determine who is accountable for compliance with existing regulation and the principles. In the initial stages of implementation, regulators might provide guidance on how to demonstrate accountability. In the medium to long term, government may issue additional guidance on how accountability applies to specific actors within the ecosystem.
• Provide guidance on governance mechanisms including, potentially, activities in scope of appropriate risk management and governance processes (including reporting duties).
• Consider how available technical standards addressing AI governance, risk management, transparency and …
1.42. Below, we provide some illustrative examples of AI systems to demonstrate their autonomous and adaptive characteristics. While many aspects of the technologies described in these case studies will be covered by existing law, they illustrate how AI-specific characteristics introduce novel risks and regulatory implications.

[Figure 1: Illustration of our strategy for regulating AI]

Case study 3.1: Natural language processing in customer service chatbots
Adaptivity: Provides responses to real-time customer messages, having been trained on huge datasets to identify statistical patterns in ordinary human speech, potentially increasing personalisation over time as the system learns from each new experience.
Autonomy: Generates a human-like output based on the customer's text input, to answer queries, help customers find products and services, or send targeted updates. Operates with little need for human oversight or intervention.
Illustrative AI-related regulatory implication: Unintentional inclusion of inaccurate or misleading information in training data, producing harmful instructions or convincingly spreading misinformation.

Case study 3.2: Automated healthcare triage systems
Adaptivity: Predicts patient conditions based on the pathology, treatment and risk factors associated with health conditions from the analysis of medical datasets, patient records and real-time health data.
Autonomy: Generates information about the likely causes of a patient's symptoms and recommends potential interventions and treatments, either to a medical professional or straight to a patient.
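As a purely illustrative aside (not part of the white paper), the adaptivity and autonomy described in Case study 3.1 can be sketched in miniature: a toy bigram model whose word-transition statistics are updated by every new training example, and which then generates replies with no human in the loop. All names here (ToyChatModel, learn, reply) are hypothetical; the sketch also shows how inaccurate training text flows directly into generated output.

```python
from collections import defaultdict
import random

class ToyChatModel:
    """Toy bigram language model: a miniature stand-in for the statistical
    pattern learning the case study describes."""

    def __init__(self):
        # Maps each word to the words observed to follow it, with counts.
        self.transitions = defaultdict(lambda: defaultdict(int))

    def learn(self, text: str) -> None:
        """Update word-transition statistics from one more example (adaptivity)."""
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            self.transitions[current][following] += 1

    def reply(self, prompt_word: str, max_words: int = 10) -> str:
        """Generate output without human oversight or intervention (autonomy)."""
        word = prompt_word.lower()
        output = [word]
        for _ in range(max_words):
            followers = self.transitions.get(word)
            if not followers:
                break
            # Sample the next word in proportion to observed frequency.
            word = random.choices(list(followers), weights=followers.values())[0]
            output.append(word)
        return " ".join(output)

model = ToyChatModel()
model.learn("our returns policy allows returns within thirty days")
model.learn("our returns policy is generous")  # each new example reshapes the statistics
print(model.reply("returns"))
```

Because the model's behaviour is determined entirely by whatever text it has seen, any inaccurate or misleading training example is reproduced in its replies, which is precisely the regulatory implication the case study flags.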
When AI systems are not sufficiently explainable, AI suppliers and users risk inadvertently breaking laws, infringing rights, causing harm and compromising the security of AI systems.

At a technical level, the explainability of AI systems remains an important research and development challenge. The logic and decision-making in AI systems cannot always be meaningfully explained in a way that is intelligible to humans, although in many settings this poses no substantial risk. It is also true that in some cases, a decision made by AI may perform no worse on explainability than a comparable decision made by a human.[98] Future developments of the technology may pose additional challenges to achieving explainability. AI systems should display levels of explainability that are appropriate to their context, including the level of risk and consideration of what is achievable given the state of the art.

Principle: Fairness
Definition and explanation: AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system's use, outcomes and the application of relevant law.

Fairness is a concept embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people. Regulators may need to develop and publish descriptions and illustrations of …