The following are some AI tools that intersect with your industry, information security services, together with what you need to learn:
AI tools:
What you need to learn:
Before introducing the concept of AIGC, this report first explains another related and widely used term, "GenAI", short for Generative AI. GenAI is a class of AI applications built on deep learning that uses machine learning algorithms to learn from existing data and generate new data or content. It works by training deep neural network models on large-scale datasets so that they learn the patterns and features of many kinds of data, enabling the analysis, understanding and generation of content from a given input. GenAI offers novel and creative solutions for applications such as gaming, entertainment and product design, including automated writing, virtual reality and music composition, and it has even opened up new possibilities for scientific research. Typical GenAI systems today include OpenAI's language models ChatGPT and GPT-4 and its image model DALL-E, as well as Baidu's ERNIE Bot (文心一言) and Alibaba Cloud's Tongyi Qianwen (通义千问). Although generative AI is a powerful technology applicable to many professional domains, its data-processing activities carry multiple potential compliance risks, such as unauthorised collection of information, provision of false information and infringement of personal privacy.

AIGC (AI-Generated Content) refers to content created with GenAI, such as images, video, audio, text and 3D models. More specifically, AIGC tools use machine learning algorithms, typically based on natural language processing, to analyse large text datasets and learn how to generate new content in a similar style and tone. In China, the AIGC industry is currently regulated within the framework of the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law, supplemented by the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, the Provisions on the Administration of Deep Synthesis of Internet Information Services, the Interim Measures for the Administration of Generative Artificial Intelligence Services and the Measures for Science and Technology Ethics Review (Trial).
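To make the "train on a large corpus, then generate new content from a prompt" workflow described above concrete, the following is a minimal sketch of calling an off-the-shelf generative language model. It assumes the Hugging Face transformers library and the publicly available gpt2 checkpoint, which are illustrative choices rather than tools named in this report; the GenAI services listed above expose a similar prompt-in, text-out pattern through their own APIs.

```python
# Minimal sketch: prompt a pretrained generative language model for new text.
# Assumes the Hugging Face `transformers` library plus a backend such as PyTorch
# (pip install transformers torch) and the public `gpt2` checkpoint; both are
# illustrative stand-ins, not tools prescribed by this report.
from transformers import pipeline

# Load a text-generation pipeline backed by a neural network that has already
# learned statistical patterns and features from a large text corpus.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt in a similar style and tone, which is the
# core behaviour that AIGC tools build on.
prompt = "Generative AI can support information security teams by"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```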
22. The concept of AI is not new, but recent advances in data generation and processing have changed the field and the technology it produces. For example, while recent developments in the capabilities of generative AI models have created exciting opportunities, they have also sparked new debates about potential AI risks.[39] As AI research and development continues at pace and scale, we expect to see even greater impact and public awareness of AI risks.[40]

23. We know that not all AI risks arise from the deliberate action of bad actors. Some AI risks can emerge as an unintended consequence or from a lack of appropriate controls to ensure responsible AI use.[41]

24. We have made an initial assessment of AI-specific risks and their potential to cause harm, with reference in our analysis to the values that they threaten if left unaddressed. These values include safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity.

25. Our assessment of cross-cutting AI risk identified a range of high-level risks that our framework will seek to prioritise and mitigate with proportionate interventions. For example, safety risks include physical damage to humans and property, as well as damage to mental health.[42]

Disinformation generated and propagated by AI could undermine access to reliable information and trust in democratic institutions and processes.

Risks to security: AI tools can be used to automate, accelerate and magnify the impact of highly targeted cyber attacks, increasing the severity of the threat from malicious actors. The emergence of LLMs enables hackers[48] with little technical knowledge or skill to generate phishing campaigns with malware delivery capabilities.[49]

Footnotes:
38. Intelligent security tools, National Cyber Security Centre, 2019.
39. What is generative AI, and why is it suddenly everywhere?, Vox, 2023.
40. See, for example, The Benefits and Harms of Algorithms, The Digital Regulation Cooperation Forum, 2022; Harms of AI, Acemoglu, 2021.
41. AI Accidents: An Emerging Threat, Center for Security and Emerging Technology, 2021.
42. AI for radiographic COVID-19 detection selects shortcuts over signal, DeGrave, Janizek and Lee, 2021; Pathways: How digital design puts children at risk, 5Rights Foundation, 2021.
43. The Malicious Use of Artificial Intelligence, Malicious AI Report, 2018.
44. Constitutional Challenges in the Algorithmic Society, Micklitz et al., 2022.
45. Smart Speakers and Voice Assistants, CDEI, 2019; Deepfakes and Audiovisual disinformation, CDEI, 2019.
46. Artificial Intelligence, Human Rights, Democracy and the Rule of Law, Leslie et al., 2021.
47. Government has already committed to addressing some of these issues more broadly. See, for example, the Inclusive Britain report, Race Disparity Unit, 2022.