Unifying Large Language Models and Knowledge Graphs: A Roadmap (Paper Reading Notes)

Key Words:

NLP, LLM, Generative Pre-training, KGs, Roadmap, Bidirectional Reasoning

Abstract:

LLMs are black-box models and often fail to capture and access factual knowledge. KGs are structured knowledge models that explicitly store rich factual knowledge. The combination of KGs and LLMs falls into three frameworks:

  1. KG-enhanced LLMs: KGs supply external knowledge during the pre-training and inference stages, and are used for analyzing LLMs and improving interpretability.

  2. LLM-augmented KGs: LLMs support KG embedding, KG completion, KG construction, KG-to-text generation, and KGQA.

  3. Synergized LLMs + KGs: LLMs and KGs work together to enhance performance on knowledge representation and reasoning.

Background

Introduction of LLMs

Encoder-only LLMs

Use only the encoder to encode the sentence and model the relationships between words.

Pre-trained by predicting the masked words in an input sentence. Typical tasks: text classification and named entity recognition.

Encoder-decoder LLMs

Adopt both encoder and decoder modules. The encoder encodes the input into a hidden space, and the decoder generates the target output text. Typical tasks: summarization, translation, and question answering.

Decoder-only LLMs

Adopt only the decoder module to generate the target output text token by token.
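A minimal sketch of the three architecture families above, using the Hugging Face transformers pipelines; the model names (bert-base-uncased, facebook/bart-large-cnn, gpt2) are illustrative choices for each family, not models discussed in the paper.

```python
from transformers import pipeline

# Encoder-only: predict the masked word in an input sentence.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Knowledge graphs store [MASK] knowledge.")[0]["token_str"])

# Encoder-decoder: encode the input, then decode the target text (summarization).
summarize = pipeline("summarization", model="facebook/bart-large-cnn")
article = ("Large language models are trained on massive text corpora, while "
           "knowledge graphs explicitly store structured facts about entities "
           "and their relations.")
print(summarize(article, max_length=25, min_length=5)[0]["summary_text"])

# Decoder-only: autoregressively generate the target output text.
generate = pipeline("text-generation", model="gpt2")
print(generate("Knowledge graphs are", max_new_tokens=20)[0]["generated_text"])
```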

Prompt Engineering

A prompt is a sequence of natural-language input to an LLM, specified for the task, typically including:

  1. Instruction: instructs the model to do a specific task.

  2. Context: provides the context for the input text or few-shot examples.

  3. Input text: the text that needs to be processed by the model.

Prompt engineering improves the capacity of LLMs across diverse complex tasks. Chain-of-Thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps, as in the sketch below.
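A minimal sketch of assembling a prompt from the three parts listed above (instruction, context with a few-shot example, input text) plus a CoT cue; the send_to_llm call is a hypothetical placeholder for any LLM backend.

```python
# Assemble a prompt from instruction, context (few-shot example), and input text,
# plus a Chain-of-Thought cue.
instruction = "Answer the question using the context and explain your reasoning."
context = (
    "Example: Q: Who wrote Hamlet? "
    "A: Let's think step by step. Hamlet is a play by William Shakespeare. "
    "Answer: William Shakespeare."
)
input_text = "Q: In which country was the author of Hamlet born?"
cot_cue = "A: Let's think step by step."

prompt = "\n".join([instruction, context, input_text, cot_cue])
# response = send_to_llm(prompt)  # hypothetical call to an LLM backend
print(prompt)
```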

Introduction of KGs

Roadmap

KG-enhanced LLMs

  • Pre-training stage

    • Integrating KGs into Training Objective

    • Integrating KGs into LLM Inputs

    • KGs Instruction-tuning

  • Inference stage

    • Retrieval-Augmented Knowledge Fusion

      • RAG (a retrieval-augmented prompting sketch follows this list)

    • KGs Prompting

  • Interpretability

    • KGs for LLM Probing

    • KGs for LLM Analysis
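A minimal sketch of retrieval-augmented knowledge fusion at inference time: triples relevant to the question are retrieved from a KG and injected into the prompt. The toy triple store and the answer_with_llm call are illustrative assumptions, not the paper's implementation.

```python
from typing import List, Tuple

# Toy triple store standing in for a real KG.
KG: List[Tuple[str, str, str]] = [
    ("Joe Biden", "born_in", "Scranton"),
    ("Scranton", "located_in", "Pennsylvania"),
    ("Joe Biden", "president_of", "United States"),
]

def retrieve_triples(question: str, kg: List[Tuple[str, str, str]]) -> List[Tuple[str, str, str]]:
    """Keep triples whose head or tail entity is mentioned in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

question = "Where was Joe Biden born?"
facts = "\n".join(f"{h} {r.replace('_', ' ')} {t}" for h, r, t in retrieve_triples(question, KG))
prompt = f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"
# answer = answer_with_llm(prompt)  # hypothetical LLM call
print(prompt)
```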

LLM-augmented KGs

Knowledge Graph embedding aims to map each entity and relation into a low-dimensional vector space.
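A minimal TransE-style sketch of this idea: entities and relations are mapped to low-dimensional vectors and a triple (h, r, t) is scored by how well h + r approximates t. This is a generic illustration of structure-based KG embedding, not a specific model from the paper; the vectors are random placeholders rather than learned embeddings.

```python
import numpy as np

dim = 50
rng = np.random.default_rng(0)
# In a real system these vectors are learned; here they are random placeholders.
entities = {e: rng.normal(size=dim) for e in ["Joe Biden", "Scranton", "Pennsylvania"]}
relations = {r: rng.normal(size=dim) for r in ["born_in", "located_in"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE score: distance between (head + relation) and tail; lower is better."""
    return float(np.linalg.norm(entities[head] + relations[relation] - entities[tail]))

print(transe_score("Joe Biden", "born_in", "Scranton"))
```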

  • Text encoders for KG-related tasks

  • LLM processes the original corpus and entities for KG construction.

    • End-to-End KG Construction

    • Distilling Knowledge Graphs from LLMs

  • KG prompting for KG completion and KG reasoning (a PaG-style completion sketch follows this list).

    • PaE (LLM as Encoders)

    • PaG (LLM as Generators)

  • LLM-augmented KG-to-text Generation

    • Leveraging Knowledge from LLMs

    • Constructing large, weakly aligned KG-text corpora

  • LLM-augmented KG Question Answering

    • LLMs as Entity/Relation Extractors

    • LLMs as Answer Reasoners
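A minimal PaG (LLM-as-Generator) sketch for KG completion: an incomplete triple (head, relation, ?) is verbalized into a text prompt and a generative LLM is asked to produce the missing tail entity. The verbalization templates and the complete_with_llm call are hypothetical.

```python
def verbalize(head: str, relation: str) -> str:
    """Turn an incomplete triple (head, relation, ?) into a natural-language prompt."""
    templates = {
        "born_in": f"{head} was born in",
        "located_in": f"{head} is located in",
    }
    return templates.get(relation, f"{head} {relation.replace('_', ' ')}")

prompt = verbalize("Joe Biden", "born_in")
# tail = complete_with_llm(prompt)  # hypothetical generative LLM call
print(prompt)  # -> "Joe Biden was born in"
```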

Synergized LLMs + KGs


Synergized Knowledge Representation

Aims to design a synergized model that can represent knowledge from both LLMs and KGs.

Synergized Reasoning

  • LLM-KG Fusion Reasoning
