The following is a collection of material on AI-enabled fraud and deception techniques:
From the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (the EU Artificial Intelligence Act), Recital (29):

AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, the putting into service or the use of certain AI systems with the objective or the effect of materially distorting human behaviour, whereby significant harms, in particular having sufficiently important adverse impacts on physical or psychological health or on financial interests, are likely to occur, are particularly dangerous and should therefore be prohibited. Such AI systems deploy subliminal components, such as audio, image or video stimuli that persons cannot perceive because those stimuli are beyond human perception, or other manipulative or deceptive techniques that subvert or impair a person's autonomy, decision-making or free choice in ways of which people are not consciously aware, or, where they are aware of them, they can still be deceived or are unable to control or resist them. This could be facilitated, for example, by machine-brain interfaces or virtual reality, as they allow for a higher degree of control over what stimuli are presented to persons, insofar as those stimuli may materially distort their behaviour in a significantly harmful manner.

In addition, AI systems may also otherwise exploit the vulnerabilities of a person or a specific group of persons due to their age, a disability within the meaning of Directive (EU) 2019/882 of the European Parliament and of the Council, or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation, such as persons living in extreme poverty or ethnic or religious minorities. Such AI systems can be placed on the market, put into service or used with the objective or the effect of materially distorting the behaviour of a person in a manner that causes, or is reasonably likely to cause, significant harm to that person or to another person or groups of persons, including harms that may accumulate over time, and should therefore be prohibited.

It may not be possible to assume an intention to distort behaviour where the distortion results from factors external to the AI system that are outside the control of the provider or the deployer, namely factors that the provider or the deployer of the AI system may not reasonably be able to foresee and mitigate. In any event, it is not necessary for the provider or the deployer to have the intention to cause significant harm, provided that such harm results from the manipulative or exploitative AI-enabled practices.
Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.

Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and set an example for the private sector and governments around the world.
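To make the "authenticating official content" idea concrete, the sketch below signs a message with an agency-held private key and verifies it against the published public key. This is a minimal illustration under assumed details, not the mechanism the Order or the Department of Commerce prescribes: the Ed25519 scheme, the sample message, and the key handling are all illustrative choices, using the third-party Python `cryptography` package.

```python
# Minimal sketch of cryptographic content authentication (illustrative only;
# the Executive Order does not prescribe a specific mechanism).
# Requires the third-party package:  pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing agency generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: attach a signature to the official communication.
message = b"Official notice: this communication was issued by Agency X."  # hypothetical content
signature = private_key.sign(message)

# Verification: any recipient holding the published public key can check that
# the content is authentic and was not altered in transit.
try:
    public_key.verify(signature, message)
    print("Authentic: signature matches the published public key.")
except InvalidSignature:
    print("Not authentic: content was altered or not issued by the agency.")
```

The same sign-then-verify pattern underlies media provenance efforts such as C2PA, where a signed manifest travels with an image or video file rather than with a plain text message.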
Risks to societal wellbeing

Disinformation generated and propagated by AI could undermine access to reliable information and trust in democratic institutions and processes. (The Malicious Use of Artificial Intelligence, Malicious AI Report, 2018; Constitutional Challenges in the Algorithmic Society, Micklitz et al., 2022; Smart Speakers and Voice Assistants, CDEI, 2019; Deepfakes and Audiovisual Disinformation, CDEI, 2019; Artificial Intelligence, Human Rights, Democracy and the Rule of Law, Leslie et al., 2021.) Government has already committed to addressing some of these issues more broadly. See, for example, the Inclusive Britain report, Race Disparity Unit, 2022.

Risks to security

AI tools can be used to automate, accelerate and magnify the impact of highly targeted cyber attacks, increasing the severity of the threat from malicious actors. The emergence of LLMs enables hackers with little technical knowledge or skill to generate phishing campaigns with malware delivery.
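One published technique for the "detecting AI-generated content" goal mentioned above, relevant to AI-generated disinformation, is statistical watermark detection in the style of Kirchenbauer et al. (2023). The sketch below is a toy version: the hash-based green-list rule, the GAMMA parameter, and the word-level tokens are simplified assumptions for illustration, not a deployed standard.

```python
# Toy sketch of statistical watermark detection for AI-generated text,
# loosely following the "green list" scheme of Kirchenbauer et al. (2023).
# The hashing rule and parameters are illustrative assumptions, not a standard.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GAMMA

def detection_z_score(tokens: list[str]) -> float:
    """Z-score of the green-token count; large values suggest watermarked text."""
    n = len(tokens) - 1
    if n < 1:
        raise ValueError("need at least two tokens")
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

# A watermarking generator biases sampling toward green tokens, so watermarked
# text scores well above ~2-4, while unwatermarked human text hovers near 0.
```

Note the limitation: detection only works when the generator actually embedded the watermark with the same keyed rule, and it says nothing about text from unwatermarked models, which is one reason the fact sheet above pairs detection of AI-generated content with authentication of official content.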