The following is related information on AI-enabled fraud:
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
- Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and set an example for the private sector and governments around the world.
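To make the idea of "authenticating official content" concrete, the sketch below signs a message with an Ed25519 key and verifies it on the recipient's side. This is only a minimal illustration of the underlying cryptographic technique, not the Department of Commerce guidance itself; the helper names (`issue_official_notice`, `verify_official_notice`) and the use of the third-party `cryptography` package are assumptions made for this example.

```python
# Minimal sketch: authenticating official content with a digital signature.
# Requires the third-party "cryptography" package; helper names are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def issue_official_notice(private_key: Ed25519PrivateKey, text: str) -> tuple[bytes, bytes]:
    """Sign the message so recipients can check it came from the issuing agency."""
    message = text.encode("utf-8")
    signature = private_key.sign(message)
    return message, signature


def verify_official_notice(public_key: Ed25519PublicKey, message: bytes, signature: bytes) -> bool:
    """Return True only if the signature matches the agency's published key."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    agency_key = Ed25519PrivateKey.generate()      # kept secret by the agency
    agency_public_key = agency_key.public_key()    # published so anyone can verify

    msg, sig = issue_official_notice(agency_key, "Benefit payments are issued on the 15th.")
    print(verify_official_notice(agency_public_key, msg, sig))               # True
    print(verify_official_notice(agency_public_key, b"Tampered text", sig))  # False
```

Watermarking AI-generated content works the other way around: instead of attaching a signature to authentic material, the generator embeds a detectable signal in the content itself so that downstream tools can flag it as machine-made even after it is copied or re-shared.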
In 2014, a chatbot called "[Eugene Goostman](https://en.wikipedia.org/wiki/Eugene_Goostman)", developed in Saint Petersburg, passed the Turing test by using a few tricks. It claimed upfront to be a 13-year-old Ukrainian boy, which explained its gaps in knowledge and the oddities of its writing when answering the human judges' questions. After a 5-minute conversation, the bot convinced more than 30% of the human judges that it was a real person, a benchmark Turing had predicted machines would reach by the year 2000. We should recognize, however, that this event does not show that we have created a genuinely intelligent system, nor that a computer system fooled the human judges: it was not the system that fooled the humans, but the bot's creators! ✅ Have you ever been fooled by a chatbot into believing you were talking to a real person? How did it convince you?
The European Parliament and the Council of the European Union lay down harmonised rules on artificial intelligence and amend Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. (132) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, irrespective of whether they qualify as high-risk. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations, without prejudice to the requirements and obligations for high-risk AI systems, and subject to targeted exceptions that take into account the special needs of law enforcement. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. When implementing this obligation, the characteristics of individuals belonging to vulnerable groups due to their age or disability should be taken into account, to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be notified when a system, by processing their biometric data, can identify or infer their emotions or intentions, or assign them to specific categories. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests. Such information and notifications should be provided in formats that are accessible to persons with disabilities.
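As a rough illustration of the disclosure obligation described above, the sketch below wraps a chatbot reply in a small envelope that carries an explicit AI notice and flags any biometric-based emotion inference so a user interface can surface both. The data model and field names are assumptions made for this example, not a structure prescribed by the AI Act.

```python
# Illustrative only: an envelope for chatbot output that carries the AI-interaction
# disclosure and, when applicable, a notice about emotion inference from biometric data.
from dataclasses import dataclass, field


@dataclass
class AssistantReply:
    text: str
    ai_disclosure: str = "You are chatting with an AI system, not a human."
    notices: list[str] = field(default_factory=list)


def build_reply(text: str, uses_emotion_recognition: bool = False) -> AssistantReply:
    """Attach the notices that the interface must show to the user."""
    reply = AssistantReply(text=text)
    if uses_emotion_recognition:
        reply.notices.append(
            "This service infers emotional state from your biometric data (e.g. voice or face)."
        )
    return reply


if __name__ == "__main__":
    reply = build_reply("Your request has been logged.", uses_emotion_recognition=True)
    print(reply.ai_disclosure)
    for notice in reply.notices:
        print(notice)
    print(reply.text)
```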