The following is a case study of explainable AI in real-world practice:
Case Study 3.4: Explainable AI in practice

The level of explainability needed from an AI system is highly specific to its context, including the extent to which an application is safety-critical. The level and type of explainability required will likely vary depending on whether the intended audience of the explanation is a regulator, a technical expert, or a lay person. For example, a technical expert designing self-driving vehicles would need to understand the system's decision-making capabilities in order to test, assess and refine them. In the same context, a lay person may need to understand the decision-making process only in order to use the vehicle safely. If the vehicle malfunctioned and caused a harmful outcome, a regulator may need information about how the system operates in order to allocate responsibility, similar to the level of explainability currently needed to hold human drivers accountable.

While AI explainability remains a technical challenge and an area of active research, regulators are already conducting work to address it. In 2021, the ICO and the Alan Turing Institute issued co-developed guidance on explaining decisions made with AI, giving organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them.
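To make the contrast between audiences concrete, below is a minimal Python sketch (assuming NumPy and scikit-learn are available) that trains a toy braking-decision classifier on synthetic data and produces two kinds of explanation: a global feature-importance summary of the sort a technical expert might inspect, and a one-sentence plain-language account of a single decision, closer to what a lay user needs. The feature names, the data-generating rule and the model are invented for illustration, and permutation importance is just one of many explanation techniques; none of this is drawn from the ICO / Alan Turing Institute guidance or any real self-driving system.

# A minimal, illustrative sketch of "explainability for different audiences".
# Everything here is hypothetical: the features, the synthetic data and the
# braking-decision model are invented for illustration and are not taken from
# the ICO / Alan Turing Institute guidance or any real self-driving system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical driving-scenario features: obstacle distance (m),
# vehicle speed (m/s) and road friction coefficient.
X = np.column_stack([
    rng.uniform(1, 100, 2000),    # obstacle_distance
    rng.uniform(0, 30, 2000),     # vehicle_speed
    rng.uniform(0.2, 1.0, 2000),  # road_friction
])
feature_names = ["obstacle_distance", "vehicle_speed", "road_friction"]

# Synthetic label: brake if the obstacle is closer than a friction-adjusted
# stopping distance (a toy rule, not real vehicle dynamics).
y = (X[:, 0] < X[:, 1] ** 2 / (2 * 9.81 * X[:, 2])).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global explanation (technical audience): which inputs drive decisions overall?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: importance {score:.3f}")

# Local explanation (lay audience): a plain-language statement for one decision.
scenario = np.array([[12.0, 25.0, 0.4]])  # close obstacle, high speed, wet road
decision = "brake" if model.predict(scenario)[0] else "continue"
print(f"Decision: {decision} -- the obstacle is {scenario[0, 0]:.0f} m away "
      f"at {scenario[0, 1]:.0f} m/s on a low-friction surface.")

Permutation importance is used here simply because it is model-agnostic; in a safety-critical or regulatory setting, a real system would likely need complementary, decision-level explanation techniques tailored to each of the audiences described above.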