The following are LLM-related reports and guides:
4.4 History Updates

September 2024

September 10: 《[大模型行业可信应用框架研究报告](https://waytoagi.feishu.cn/record/Z9evrpRl6ezkSwcdOyPcDiffn2d)》 (Research Report on a Trustworthy Application Framework for the Large Model Industry), jointly released by Ant Group and the China Academy of Information and Communications Technology (CAICT), examines the challenges of applying large models in industries such as finance, healthcare, and government, along with possible solutions. The report proposes a systematic trustworthy application framework built around four core elements: professionalism, controllability, authenticity, and safety, and offers concrete technical recommendations covering data quality, model capability, the reasoning process, and system security. Other reports are published in the [研究报告板块](https://waytoagi.feishu.cn/wiki/WvhZwk16WiEnSvk8AcpcdZetnMe) (Research Reports section) and on [知识星球](https://t.zsxq.com/18DnZxlrl): 《[InfoQ:中国AI Agent应用研究报告2024](https://waytoagi.feishu.cn/record/Y45LrXJiwe4SgYc5tMZcVVtqn6b)》, 《[新战略:2024人形机器人产业半年研究报告](https://waytoagi.feishu.cn/record/CMtPrA26ReWXCBcrc6HcHC1ynHo)》, 《[脉脉:2024大模型人才报告](https://waytoagi.feishu.cn/record/BaV7rrxQneDbSmcGAYCcsyKPnrd)》, and 《[2024人工智能术语研究阶段性成果报告](https://waytoagi.feishu.cn/record/UeYSrwRKsehI4acgKR5cqIfPnvb)》.
LLMs, and the potential creation of new or previously unforeseen risks. As such, LLMs will be a core focus of our monitoring and risk assessment functions, and we will work with the wider AI community to ensure our adaptive framework is capable of identifying and responding to developments relating to LLMs.

For example, one way of monitoring the potential impact of LLMs could be by monitoring the amount of compute used to train them, which is much easier to assess and govern than other inputs such as data or talent. This could involve statutory reporting requirements for models over a certain size. This metric could become less useful as a way of establishing who has access to powerful models if machine learning development becomes increasingly open-source.[138]

Life cycle accountability – including the allocation of responsibility and liability for risks arising from the use of foundation models, including LLMs – is a priority area for ongoing research and policy development. We will explore the ways in which technical standards and other tools for trustworthy AI can support good practices for responsible innovation across the life cycle and supply chain. We will also work with regulators to ensure they are appropriately equipped to engage with actors across the AI supply chain and allocate legal liability appropriately.

Consultation questions:

F1. What specific challenges will foundation models such as large language models (LLMs) or open-source models pose for regulators trying to determine legal responsibility for AI outcomes?

F2. Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models?

F3. Are there other approaches to governing foundation models that would be more effective?

3.3.4 Artificial intelligence sandboxes and testbeds
大模型入门指南 (A Beginner's Guide to Large Models)
Original article: https://mp.weixin.qq.com/s/9nJ7g2mo7nOv4iGXT_CPNg
Author: 写代码的西瓜

With the arrival of ChatGPT, large models ([1]) (Large Language Model, LLM) have become the buzzword of the new era, and GPT-style products have sprung up everywhere. Most people can simply use existing products, but for programmers who like to get to the bottom of things, running a model locally is more interesting. Lacking the relevant background, the author initially found many of the setup tutorials on GitHub bewildering, and the introductory articles on the topic tend to be either too obscure or too shallow. Hence this article, which mainly covers what the author learned while setting up large models and how to run one on macOS, as sketched below. The author's knowledge is limited; readers are welcome to point out any shortcomings.
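The original article goes on to walk through a local setup; as a minimal sketch of what running a model locally on macOS can look like, the snippet below uses the llama-cpp-python binding and a hypothetical quantized GGUF model file. Neither the library choice nor the filename comes from the excerpt above; they are assumptions for illustration only.

```python
# Minimal sketch: running a local LLM on macOS via llama-cpp-python
# (an assumed setup, not the specific steps from the original article).
# Install first:  pip install llama-cpp-python
from llama_cpp import Llama

# Path to a quantized GGUF model file downloaded beforehand (hypothetical filename).
llm = Llama(
    model_path="./models/qwen2-7b-instruct-q4_k_m.gguf",
    n_ctx=2048,  # context window size in tokens
)

# Run a single completion and print the generated text.
output = llm("Q: What is a large language model? A:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```

Quantized GGUF models keep memory use low enough that a 7B-class model is generally practical on an Apple Silicon laptop, which is why this style of local setup is a common starting point.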