Here are some recommended tools for finding relevant information sources:
- Relevant source finder: Enter your paper topic or question and Textero.ai will find relevant sources for you. With a database of over 214 million sources, you can explore the recommendations and even download PDFs to read the abstracts. You can also upload your own sources to the tool.
- AI research assistant: The "Ask AI" feature provides detailed insights about sources and smart recommendations, which can easily be copied into your draft.
- Idea generation and text editing with custom prompts: Use the idea-generation feature to create content based on Textero.ai's extensive source database, then integrate the generated content into your paper. You can edit your text with commands such as "Shorten", "Paraphrase", or "Expand", format it in bold or italics, and download the file to your computer.
- Text summarizer: This is an extra feature that not every AI essay generator offers. Textero.ai summarizes large research studies and a variety of academic papers into detailed summaries, saving time so you can focus on writing.
- Outline generator: Textero.ai automatically generates an outline for your paper. You can request specific sections, such as a conclusion, a rebuttal, or an introduction.
The model can make use of external information sources if they are provided as part of its input. This helps the model generate better-informed and more up-to-date responses. For example, if a user asks a question about a particular movie, it may be useful to add high-quality information about the movie (e.g., its actors and director) to the model's input. Embeddings can be used to implement efficient knowledge retrieval, so that relevant information can be added to the model's input dynamically at run time.

A text embedding is a vector that measures the relatedness of text strings: similar or related strings sit closer together than unrelated ones. This fact, combined with the existence of fast vector-search algorithms, means embeddings can power efficient knowledge retrieval. In particular, a corpus of text can be split into chunks, and each chunk can be embedded and stored. A given query can then be embedded, and a vector search performed to find the embedded chunks of corpus text that are most relevant to the query (i.e., closest in embedding space).
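The chunk-embed-search loop described above can be sketched as follows. This is a minimal illustration, not a production retriever: the `embed()` function here is a toy bag-of-words stand-in so the example runs without any dependencies; in a real system you would call an actual embedding model and store dense float vectors in a vector index.

```python
# Sketch of embeddings-based retrieval: split a corpus into chunks,
# embed each chunk, then answer a query by searching for the nearest chunk.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase word counts. A real embedding model
    # would return a dense float vector instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(corpus: str, chunk_size: int = 8) -> list:
    # Split the corpus into fixed-size word chunks and embed each one.
    words = corpus.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    return [(chunk, embed(chunk)) for chunk in chunks]

def search(index: list, query: str, k: int = 1) -> list:
    # Embed the query and return the k chunks closest to it.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

corpus = (
    "The movie was directed by Ridley Scott and stars Russell Crowe. "
    "It was released in the year 2000 and won five Academy Awards. "
    "The soundtrack was composed by Hans Zimmer and Lisa Gerrard."
)
index = build_index(corpus)
print(search(index, "who directed the movie?"))
```

The retrieved chunk (here, the one mentioning the director) would then be prepended to the model's input before answering the user's question. At scale, the linear scan in `search()` is replaced by an approximate nearest-neighbor index.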
When history majors encounter LLMs, then, they are already trained to recognize some of the by-now-familiar pitfalls of services like ChatGPT — such as factual inaccuracies — and to address them via skills like fact-checking, analyzing genre and audience, or reading "around" a topic by searching in related sources. Importantly, too, because so many sources are out of copyright and available in multilingual editions on Wikipedia and Wikisource, language models are abundantly trained on historical primary sources in hundreds of different languages.[(1)](https://resobscura.substack.com/p/simulating-history-with-chatgpt#footnote-1-136683347) For these reasons, I agree with Tyler Cowen that language models are [specifically a good thing for historians](https://marginalrevolution.com/marginalrevolution/2023/01/chatgpt-and-the-revenge-of-history.html) — but I would go further and say that they are also specifically a good thing for history majors.

On the other hand, I foresee major problems for history teachers and other educators in the short term. [Ted Underwood is right](https://tedunderwood.com/2023/07/31/we-can-save-what-matters-about-writing-at-a-price/): we professors are going to have to fundamentally rethink many of our assignments. I've seen many people dismiss ChatGPT as an essay-writing tool because simply plugging in a prompt from an assignment results in a weak piece of writing. But LLMs are all about iterative feedback, and experimenting with well-known prompting methods dramatically improves results. Here's an example from one of my own past classes. When given a question from my "Early Modern Europe" survey about how Benvenuto Cellini's Autobiography illustrates new ways of thinking about identity during the early modern period, GPT-4 can produce dramatically different results depending on the prompt.