
[oneAPI] SQuAD Question Answering with a Pre-trained BERT Model


Competition: https://marketing.csdn.net/p/f3e44fbfe46c465f4d9d6c23e38e0517
Intel® DevCloud for oneAPI: https://devcloud.intel.com/oneapi/get_started/aiAnalyticsToolkitSamples/

Intel® Optimization for PyTorch and Intel® DevCloud for oneAPI

We built our experimental environment on the Intel® DevCloud for oneAPI platform, making full use of its fully virtualized setup. More importantly, we integrated Intel® Optimization for PyTorch seamlessly into our PyTorch model. Applying this optimization not only improved our experimental results but also noticeably accelerated training and inference. By pairing the hardware with software tuned for it, we got more out of the platform and opened up new possibilities for our experiments.
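As a hedged sketch of what this integration typically looks like in code (the model and optimizer below are placeholders, not the QA model defined later in this article):

import torch
import intel_extension_for_pytorch as ipex  # Intel® Optimization for PyTorch

# Placeholder model/optimizer standing in for the BERT QA model and its optimizer.
model = torch.nn.Linear(768, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
# ipex.optimize returns a (model, optimizer) pair tuned for Intel hardware;
# dtype=torch.bfloat16 enables mixed precision where the hardware supports it.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)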

SQuAD Question Answering with a Pre-trained BERT Model

SQuAD (Stanford Question Answering Dataset) is a widely used English question-answering dataset released by Stanford University. It was created to advance machine reading comprehension research and is valuable for studying how models understand text and extract answers from it. Its defining feature is that every article comes with a series of questions, and each question is paired with an exact answer span extracted from the original article.

In the SQuAD task, a model must read a passage, understand its context, and extract the precise answer to a question from it. The task matters for building strong reading-comprehension models and question-answering systems.

Key characteristics of the SQuAD task:

  • Authenticity: the articles and questions in SQuAD come from real text, which keeps the task close to practical applications.
  • Machine reading comprehension: the model must read an article, understand its content, and then locate and extract the exact answer — a canonical reading-comprehension setting.

For SQuAD, BERT (Bidirectional Encoder Representations from Transformers) is a key model: by pre-training language representations, it has achieved remarkable results in question answering and information extraction.

Design choices that make BERT effective:

  • Bidirectional context: BERT attends to both the left and right context of each token, so it captures relationships between words better than unidirectional models.
  • Pre-training and fine-tuning: BERT is pre-trained on large corpora to learn rich language representations, then fine-tuned on the target task, where it adapts well to the task's needs.

Dataset overview

In extractive question answering, the model is given both a question and a passage, and it must predict the location of the answer (a text span) within the passage. For example:

Passage: Su Shi was a famous writer and statesman of the Northern Song dynasty, a native of Meishan, Meizhou.
Question: Where was Su Shi from?
Label: a native of Meishan, Meizhou

How should we build a model for this kind of question-answering task?

Before starting, two things must be understood: (1) the final answer is always contained in the given passage; (2) the answer is always a contiguous span of text. For the passage above, if the question were "In what era did Su Shi live, and where was he from?", the model would not produce the two separate answers "Northern Song" and "a native of Meishan, Meizhou"; at best it would return the single span "a famous writer and statesman of the Northern Song dynasty, a native of Meishan, Meizhou".

With these two constraints, the task reduces to predicting the start position and end position of the answer within the passage. The problem then becomes how to place a classifier on top of BERT that scores every token of BERT's final-layer output, judging whether it is the start position or the end position.
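To make this concrete, here is a minimal sketch (not the training code) of turning per-token start/end scores into an answer span; the shapes and values are made up for illustration:

import torch

# One score per token for "is this the start?" and "is this the end?"
start_logits = torch.randn(16)
end_logits = torch.randn(16)

# Score every (start, end) pair and forbid pairs with end < start,
# which enforces the "answer is one contiguous span" constraint.
scores = start_logits[:, None] + end_logits[None, :]              # [seq_len, seq_len]
invalid = torch.tril(torch.ones_like(scores, dtype=torch.bool), diagonal=-1)
scores = scores.masked_fill(invalid, float('-inf'))
best = scores.argmax()
start, end = divmod(best.item(), scores.size(1))
print(start, end)   # predicted answer span: tokens start..end inclusive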

Data download

Since no comparable high-quality Chinese dataset was available, we use the SQuAD (The Stanford Question Answering Dataset 1.1) dataset mentioned in the original paper: given a question and a passage, the model must find the start and end positions of the answer within the passage.
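Each SQuAD v1.1 file is a single JSON object, and the preprocessing code below walks exactly its nesting of data -> paragraphs -> qas -> answers. As a rough sketch of the structure (field values shortened for illustration; answer_start is a character offset into context):

raw_squad = {
    "version": "1.1",
    "data": [{
        "title": "University_of_Notre_Dame",
        "paragraphs": [{
            "context": "Architecturally, the school has a Catholic character. ...",
            "qas": [{
                "id": "5733be284776f41900661182",
                "question": "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?",
                "answers": [{"text": "Saint Bernadette Soubirous", "answer_start": 515}],
            }],
        }],
    }],
}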

Building the dataset class

For preprocessing we can inherit from the text-classification loader LoadSingleSentenceClassificationDataset used previously and override a few of its methods.

import torch
from torch.utils.data import DataLoader
from tqdm import tqdm
import pandas as pd
import json
import logging
import os
from sklearn.model_selection import train_test_split
import collections
import six


class Vocab:
    """
    根據(jù)本地的vocab文件,構(gòu)造一個(gè)詞表
    vocab = Vocab()
    print(vocab.itos)  # 得到一個(gè)列表,返回詞表中的每一個(gè)詞;
    print(vocab.itos[2])  # 通過(guò)索引返回得到詞表中對(duì)應(yīng)的詞;
    print(vocab.stoi)  # 得到一個(gè)字典,返回詞表中每個(gè)詞的索引;
    print(vocab.stoi['我'])  # 通過(guò)單詞返回得到詞表中對(duì)應(yīng)的索引
    print(len(vocab))  # 返回詞表長(zhǎng)度
    """
    UNK = '[UNK]'

    def __init__(self, vocab_path):
        self.stoi = {}
        self.itos = []
        with open(vocab_path, 'r', encoding='utf-8') as f:
            for i, word in enumerate(f):
                w = word.strip('\n')
                self.stoi[w] = i
                self.itos.append(w)

    def __getitem__(self, token):
        return self.stoi.get(token, self.stoi.get(Vocab.UNK))

    def __len__(self):
        return len(self.itos)


def build_vocab(vocab_path):
    """
    vocab = Vocab()
    print(vocab.itos)  # 得到一個(gè)列表,返回詞表中的每一個(gè)詞;
    print(vocab.itos[2])  # 通過(guò)索引返回得到詞表中對(duì)應(yīng)的詞;
    print(vocab.stoi)  # 得到一個(gè)字典,返回詞表中每個(gè)詞的索引;
    print(vocab.stoi['我'])  # 通過(guò)單詞返回得到詞表中對(duì)應(yīng)的索引
    """
    return Vocab(vocab_path)


def pad_sequence(sequences, batch_first=False, max_len=None, padding_value=0):
    """
    對(duì)一個(gè)List中的元素進(jìn)行padding
    Pad a list of variable length Tensors with ``padding_value``
    a = torch.ones(25)
    b = torch.ones(22)
    c = torch.ones(15)
    pad_sequence([a, b, c],max_len=None).size()
    torch.Size([25, 3])
        sequences:
        batch_first: 是否把batch_size放到第一個(gè)維度
        padding_value:
        max_len :
                當(dāng)max_len = 50時(shí),表示以某個(gè)固定長(zhǎng)度對(duì)樣本進(jìn)行padding,多余的截掉;
                當(dāng)max_len=None是,表示以當(dāng)前batch中最長(zhǎng)樣本的長(zhǎng)度對(duì)其它進(jìn)行padding;
    Returns:
    """
    if max_len is None:
        max_len = max([s.size(0) for s in sequences])
    out_tensors = []
    for tensor in sequences:
        if tensor.size(0) < max_len:
            tensor = torch.cat([tensor, torch.tensor([padding_value] * (max_len - tensor.size(0)))], dim=0)
        else:
            tensor = tensor[:max_len]
        out_tensors.append(tensor)
    out_tensors = torch.stack(out_tensors, dim=1)
    if batch_first:
        return out_tensors.transpose(0, 1)
    return out_tensors


def cache(func):
    """
    本修飾器的作用是將SQuAD數(shù)據(jù)集中data_process()方法處理后的結(jié)果進(jìn)行緩存,下次使用時(shí)可直接載入!
    :param func:
    :return:
    """

    def wrapper(*args, **kwargs):
        filepath = kwargs['filepath']
        postfix = kwargs['postfix']
        data_path = filepath.split('.')[0] + '_' + postfix + '.pt'
        if not os.path.exists(data_path):
            logging.info(f"緩存文件 {data_path} 不存在,重新處理并緩存!")
            data = func(*args, **kwargs)
            with open(data_path, 'wb') as f:
                torch.save(data, f)
        else:
            logging.info(f"緩存文件 {data_path} 存在,直接載入緩存文件!")
            with open(data_path, 'rb') as f:
                data = torch.load(f)
        return data

    return wrapper
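
# Usage note: because wrapper() reads kwargs['filepath'] and kwargs['postfix'],
# any function decorated with @cache must be called with keyword arguments, e.g.
#     data = self.data_process(filepath='./train.txt', postfix='cache')
# A positional call such as self.data_process('./train.txt') raises a KeyError.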


class LoadSingleSentenceClassificationDataset:
    def __init__(self,
                 vocab_path='./vocab.txt',  #
                 tokenizer=None,
                 batch_size=32,
                 max_sen_len=None,
                 split_sep='\n',
                 max_position_embeddings=512,
                 pad_index=0,
                 is_sample_shuffle=True
                 ):

        """

        :param vocab_path: 本地詞表vocab.txt的路徑
        :param tokenizer:
        :param batch_size:
        :param max_sen_len: 在對(duì)每個(gè)batch進(jìn)行處理時(shí)的配置;
                            當(dāng)max_sen_len = None時(shí),即以每個(gè)batch中最長(zhǎng)樣本長(zhǎng)度為標(biāo)準(zhǔn),對(duì)其它進(jìn)行padding
                            當(dāng)max_sen_len = 'same'時(shí),以整個(gè)數(shù)據(jù)集中最長(zhǎng)樣本為標(biāo)準(zhǔn),對(duì)其它進(jìn)行padding
                            當(dāng)max_sen_len = 50, 表示以某個(gè)固定長(zhǎng)度符樣本進(jìn)行padding,多余的截掉;
        :param split_sep: 文本和標(biāo)簽之前的分隔符,默認(rèn)為'\t'
        :param max_position_embeddings: 指定最大樣本長(zhǎng)度,超過(guò)這個(gè)長(zhǎng)度的部分將本截取掉
        :param is_sample_shuffle: 是否打亂訓(xùn)練集樣本(只針對(duì)訓(xùn)練集)
                在后續(xù)構(gòu)造DataLoader時(shí),驗(yàn)證集和測(cè)試集均指定為了固定順序(即不進(jìn)行打亂),修改程序時(shí)請(qǐng)勿進(jìn)行打亂
                因?yàn)楫?dāng)shuffle為True時(shí),每次通過(guò)for循環(huán)遍歷data_iter時(shí)樣本的順序都不一樣,這會(huì)導(dǎo)致在模型預(yù)測(cè)時(shí)
                返回的標(biāo)簽順序與原始的順序不一樣,不方便處理。

        """
        self.tokenizer = tokenizer
        self.vocab = build_vocab(vocab_path)
        self.PAD_IDX = pad_index
        self.SEP_IDX = self.vocab['[SEP]']
        self.CLS_IDX = self.vocab['[CLS]']
        # self.UNK_IDX = '[UNK]'

        self.batch_size = batch_size
        self.split_sep = split_sep
        self.max_position_embeddings = max_position_embeddings
        if isinstance(max_sen_len, int) and max_sen_len > max_position_embeddings:
            max_sen_len = max_position_embeddings
        self.max_sen_len = max_sen_len
        self.is_sample_shuffle = is_sample_shuffle

    @cache
    def data_process(self, filepath, postfix='cache'):
        """
        將每一句話中的每一個(gè)詞根據(jù)字典轉(zhuǎn)換成索引的形式,同時(shí)返回所有樣本中最長(zhǎng)樣本的長(zhǎng)度
        :param filepath: 數(shù)據(jù)集路徑
        :return:
        """
        raw_iter = open(filepath, encoding="utf8").readlines()
        data = []
        max_len = 0
        for raw in tqdm(raw_iter, ncols=80):
            line = raw.rstrip("\n").split(self.split_sep)
            s, l = line[0], line[1]
            tmp = [self.CLS_IDX] + [self.vocab[token] for token in self.tokenizer(s)]
            if len(tmp) > self.max_position_embeddings - 1:
                tmp = tmp[:self.max_position_embeddings - 1]  # BERT only takes the first 512 positions
            tmp += [self.SEP_IDX]
            tensor_ = torch.tensor(tmp, dtype=torch.long)
            l = torch.tensor(int(l), dtype=torch.long)
            max_len = max(max_len, tensor_.size(0))
            data.append((tensor_, l))
        return data, max_len

    def load_train_val_test_data(self, train_file_path=None,
                                 val_file_path=None,
                                 test_file_path=None,
                                 only_test=False):
        postfix = str(self.max_sen_len)
        test_data, _ = self.data_process(filepath=test_file_path, postfix=postfix)
        test_iter = DataLoader(test_data, batch_size=self.batch_size,
                               shuffle=False, collate_fn=self.generate_batch)
        if only_test:
            return test_iter
        train_data, max_sen_len = self.data_process(filepath=train_file_path,
                                                    postfix=postfix)  # all processed samples
        if self.max_sen_len == 'same':
            self.max_sen_len = max_sen_len
        val_data, _ = self.data_process(filepath=val_file_path,
                                        postfix=postfix)
        train_iter = DataLoader(train_data, batch_size=self.batch_size,  # build the DataLoader
                                shuffle=self.is_sample_shuffle, collate_fn=self.generate_batch)
        val_iter = DataLoader(val_data, batch_size=self.batch_size,
                              shuffle=False, collate_fn=self.generate_batch)
        return train_iter, test_iter, val_iter

    def generate_batch(self, data_batch):
        batch_sentence, batch_label = [], []
        for (sen, label) in data_batch:  # process each sample in the batch
            batch_sentence.append(sen)
            batch_label.append(label)
        batch_sentence = pad_sequence(batch_sentence,  # [batch_size,max_len]
                                      padding_value=self.PAD_IDX,
                                      batch_first=False,
                                      max_len=self.max_sen_len)
        batch_label = torch.tensor(batch_label, dtype=torch.long)
        return batch_sentence, batch_label


class LoadSQuADQuestionAnsweringDataset(LoadSingleSentenceClassificationDataset):
    """
    Args:
        doc_stride: When splitting up a long document into chunks, how much stride to
                    take between chunks.
                    當(dāng)上下文過(guò)長(zhǎng)時(shí),按滑動(dòng)窗口進(jìn)行移動(dòng),doc_stride表示每次移動(dòng)的距離
        max_query_length: The maximum number of tokens for the question. Questions longer than
                    this will be truncated to this length.
                    限定問(wèn)題的最大長(zhǎng)度,過(guò)長(zhǎng)時(shí)截?cái)?        n_best_size: 對(duì)預(yù)測(cè)出的答案近后處理時(shí),選取的候選答案數(shù)量
        max_answer_length: 在對(duì)候選進(jìn)行篩選時(shí),對(duì)答案最大長(zhǎng)度的限制

    """

    def __init__(self, doc_stride=64,
                 max_query_length=64,
                 n_best_size=20,
                 max_answer_length=30,
                 **kwargs):
        super(LoadSQuADQuestionAnsweringDataset, self).__init__(**kwargs)
        self.doc_stride = doc_stride
        self.max_query_length = max_query_length
        self.n_best_size = n_best_size
        self.max_answer_length = max_answer_length

    @staticmethod
    def get_format_text_and_word_offset(text):
        """
        格式化原始輸入的文本(去除多個(gè)空格),同時(shí)得到每個(gè)字符所屬的元素(單詞)的位置
        這樣,根據(jù)原始數(shù)據(jù)集中所給出的起始index(answer_start)就能立馬判定它在列表中的位置。
        :param text:
        :return:
        e.g.
            text = "Architecturally, the school has a Catholic character. "
            return:['Architecturally,', 'the', 'school', 'has', 'a', 'Catholic', 'character.'],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3,
             3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
        """

        def is_whitespace(c):
            if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
                return True
            return False

        doc_tokens = []
        char_to_word_offset = []
        prev_is_whitespace = True
        # The loop below normalizes the raw context
        for c in text:  # iterate over every character in the paragraph
            if is_whitespace(c):  # is the current character whitespace (of any kind)?
                prev_is_whitespace = True
            else:
                if prev_is_whitespace:  # previous character was whitespace: start a new word
                    doc_tokens.append(c)
                else:
                    doc_tokens[-1] += c  # otherwise append to the last word in the list
                prev_is_whitespace = False
            char_to_word_offset.append(len(doc_tokens) - 1)
        return doc_tokens, char_to_word_offset
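    # Quick check against the docstring example above:
    #   tokens, offsets = get_format_text_and_word_offset(
    #       "Architecturally, the school has a Catholic character. ")
    #   tokens[offsets[21]] -> 'school' (character 21 falls inside the word 'school')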

    def preprocessing(self, filepath, is_training=True):
        """
        將原始數(shù)據(jù)進(jìn)行預(yù)處理,同時(shí)返回得到答案在原始context中的具體開(kāi)始和結(jié)束位置(以單詞為單位)
        :param filepath:
        :param is_training:
        :return:
        返回形式為一個(gè)二維列表,內(nèi)層列表中的各個(gè)元素分別為 ['問(wèn)題ID','原始問(wèn)題文本','答案文本','context文本',
        '答案在context中的開(kāi)始位置','答案在context中的結(jié)束位置'],并且二維列表中的一個(gè)元素稱之為一個(gè)example,即一個(gè)example由六部分組成
        如下示例所示:
        [['5733be284776f41900661182', 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
        'Saint Bernadette Soubirous', 'Architecturally, the school has a Catholic character......',
        90, 92],
         ['5733be284776f4190066117f', ....]]
        """
        with open(filepath, 'r') as f:
            raw_data = json.loads(f.read())
            data = raw_data['data']
        examples = []
        for i in tqdm(range(len(data)), ncols=80, desc="iterating over paragraphs"):  # each article
            paragraphs = data[i]['paragraphs']  # the i-th article's paragraphs
            for j in range(len(paragraphs)):  # each context of the i-th article
                context = paragraphs[j]['context']  # the j-th context
                context_tokens, word_offset = self.get_format_text_and_word_offset(context)
                qas = paragraphs[j]['qas']  # all question-answer pairs under the j-th context
                for k in range(len(qas)):  # each question-answer pair of the j-th context
                    question_text = qas[k]['question']
                    qas_id = qas[k]['id']
                    if is_training:
                        answer_offset = qas[k]['answers'][0]['answer_start']
                        orig_answer_text = qas[k]['answers'][0]['text']
                        answer_length = len(orig_answer_text)
                        start_position = word_offset[answer_offset]
                        end_position = word_offset[answer_offset + answer_length - 1]
                        actual_text = " ".join(
                            context_tokens[start_position:(end_position + 1)])
                        cleaned_answer_text = " ".join(orig_answer_text.strip().split())
                        if actual_text.find(cleaned_answer_text) == -1:
                            logging.warning("Could not find answer: '%s' vs. '%s'",
                                            actual_text, cleaned_answer_text)
                            continue
                    else:
                        start_position = None
                        end_position = None
                        orig_answer_text = None
                    examples.append([qas_id, question_text, orig_answer_text,
                                     " ".join(context_tokens), start_position, end_position])
        return examples

    @staticmethod
    def improve_answer_span(context_tokens,
                            answer_tokens,
                            start_position,
                            end_position):
        """
        本方法的作用有兩個(gè):
            1. 如https://github.com/google-research/bert中run_squad.py里的_improve_answer_span()函數(shù)一樣,
               用于提取得到更加匹配答案的起止位置;
            2. 根據(jù)原始起止位置,提取得到token id中答案的起止位置
        # The SQuAD annotations are character based. We first project them to
        # whitespace-tokenized words. But then after WordPiece tokenization, we can
        # often find a "better match". For example:
        #
        #   Question: What year was John Smith born?
        #   Context: The leader was John Smith (1895-1943).
        #   Answer: 1895
        #
        # The original whitespace-tokenized answer will be "(1895-1943).". However
        # after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
        # the exact answer, 1895.

        context = "The leader was John Smith (1895-1943).
        answer_text = "1985"
        :param context_tokens: ['the', 'leader', 'was', 'john', 'smith', '(', '1895', '-', '1943', ')', '.']
        :param answer_tokens: ['1895']
        :param start_position: 5
        :param end_position: 5
        :return: [6,6]
        再例如:
        context = "Virgin mary reputedly appeared to Saint Bernadette Soubirous in 1858"
        answer_text = "Saint Bernadette Soubirous"
        :param context_tokens: ['virgin', 'mary', 'reputed', '##ly', 'appeared', 'to', 'saint', 'bern', '##ade',
                                '##tte', 'so', '##ub', '##iro', '##us', 'in', '1858']
        :param answer_tokens: ['saint', 'bern', '##ade', '##tte', 'so', '##ub', '##iro', '##us'
        :param start_position = 5
        :param end_position = 7
        return (6,13)

        """
        new_end = None
        for i in range(start_position, len(context_tokens)):
            if context_tokens[i] != answer_tokens[0]:
                continue
            for j in range(len(answer_tokens)):
                if i + j >= len(context_tokens) or answer_tokens[j] != context_tokens[i + j]:
                    break  # guard against indexing past the end of context_tokens
                new_end = i + j
            if new_end - i + 1 == len(answer_tokens):
                return i, new_end
        return start_position, end_position
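    # Quick check against the second docstring example above:
    #   ctx = ['virgin', 'mary', 'reputed', '##ly', 'appeared', 'to', 'saint', 'bern',
    #          '##ade', '##tte', 'so', '##ub', '##iro', '##us', 'in', '1858']
    #   ans = ['saint', 'bern', '##ade', '##tte', 'so', '##ub', '##iro', '##us']
    #   improve_answer_span(ctx, ans, 5, 7) -> (6, 13)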

    @staticmethod
    def get_token_to_orig_map(input_tokens, origin_context, tokenizer):
        """
           本函數(shù)的作用是根據(jù)input_tokens和原始的上下文,返回得input_tokens中每個(gè)單詞在原始單詞中所對(duì)應(yīng)的位置索引
           :param input_tokens:  ['[CLS]', 'to', 'whom', 'did', 'the', 'virgin', '[SEP]', 'architectural', '##ly',
                                   ',', 'the', 'school', 'has', 'a', 'catholic', 'character', '.', '[SEP']
           :param origin_context: "Architecturally, the Architecturally, test, Architecturally,
                                    the school has a Catholic character. Welcome moon hotel"
           :param tokenizer:
           :return: {7: 4, 8: 4, 9: 4, 10: 5, 11: 6, 12: 7, 13: 8, 14: 9, 15: 10, 16: 10}
                   含義是input_tokens[7]為origin_context中的第4個(gè)單詞 Architecturally,
                        input_tokens[8]為origin_context中的第4個(gè)單詞 Architecturally,
                        ...
                        input_tokens[10]為origin_context中的第5個(gè)單詞 the
           """
        origin_context_tokens = origin_context.split()
        token_id = []
        str_origin_context = ""
        for i in range(len(origin_context_tokens)):
            tokens = tokenizer(origin_context_tokens[i])
            str_token = "".join(tokens)
            str_origin_context += "" + str_token
            for _ in str_token:
                token_id.append(i)

        key_start = input_tokens.index('[SEP]') + 1
        tokenized_tokens = input_tokens[key_start:-1]
        str_tokenized_tokens = "".join(tokenized_tokens)
        index = str_origin_context.index(str_tokenized_tokens)
        value_start = token_id[index]
        token_to_orig_map = {}
        # handle boundary cases such as: Building's gold   <==>   's', 'gold', 'dome'
        token = tokenizer(origin_context_tokens[value_start])
        for i in range(len(token), -1, -1):
            s1 = "".join(token[-i:])
            s2 = "".join(tokenized_tokens[:i])
            if s1 == s2:
                token = token[-i:]
                break

        while True:
            for j in range(len(token)):
                token_to_orig_map[key_start] = value_start
                key_start += 1
                if len(token_to_orig_map) == len(tokenized_tokens):
                    return token_to_orig_map
            value_start += 1
            token = tokenizer(origin_context_tokens[value_start])

    @cache
    def data_process(self, filepath, is_training=False, postfix='cache'):
        """

        :param filepath:
        :param is_training:
        :return: [[example_id, feature_id, input_ids, seg, start_position,
                    end_position, answer_text, example[0]],input_tokens,token_to_orig_map [],[],[]...]
                  分別對(duì)應(yīng):[原始樣本Id,訓(xùn)練特征id,input_ids,seg,開(kāi)始,結(jié)束,答案文本,問(wèn)題id,input_tokens,token_to_orig_map]
        """
        logging.info(f"## 使用窗口滑動(dòng)滑動(dòng),doc_stride = {self.doc_stride}")
        examples = self.preprocessing(filepath, is_training)
        all_data = []
        example_id, feature_id = 0, 1000000000
        # Because of the sliding window, one example may yield several training samples
        # (called features here), so examples and features are numbered separately. The ids
        # are mainly used when post-processing predictions, not during training. Strictly,
        # only feature_id is needed: each example corresponds to one question, so the
        # question id and example_id are essentially the same thing.
        for example in tqdm(examples, ncols=80, desc="iterating over questions (examples)"):
            question_tokens = self.tokenizer(example[1])
            if len(question_tokens) > self.max_query_length:  # truncate over-long questions
                question_tokens = question_tokens[:self.max_query_length]
            question_ids = [self.vocab[token] for token in question_tokens]
            question_ids = [self.CLS_IDX] + question_ids + [self.SEP_IDX]
            context_tokens = self.tokenizer(example[3])
            context_ids = [self.vocab[token] for token in context_tokens]
            logging.debug(f"<<<<<<<<  進(jìn)入新的example  >>>>>>>>>")
            logging.debug(f"## 正在預(yù)處理數(shù)據(jù) {__name__} is_training = {is_training}")
            logging.debug(f"## 問(wèn)題 id: {example[0]}")
            logging.debug(f"## 原始問(wèn)題 text: {example[1]}")
            logging.debug(f"## 原始描述 text: {example[3]}")
            start_position, end_position, answer_text = -1, -1, None
            if is_training:
                start_position, end_position = example[4], example[5]
                answer_text = example[2]
                answer_tokens = self.tokenizer(answer_text)
                start_position, end_position = self.improve_answer_span(context_tokens,
                                                                        answer_tokens,
                                                                        start_position,
                                                                        end_position)
            rest_len = self.max_sen_len - len(question_ids) - 1
            context_ids_len = len(context_ids)
            logging.debug(f"## 上下文長(zhǎng)度為:{context_ids_len}, 剩余長(zhǎng)度 rest_len 為 : {rest_len}")
            if context_ids_len > rest_len:  # 長(zhǎng)度超過(guò)max_sen_len,需要進(jìn)行滑動(dòng)窗口
                logging.debug(f"## 進(jìn)入滑動(dòng)窗口 …… ")
                s_idx, e_idx = 0, rest_len
                while True:
                    # We can have documents that are longer than the maximum sequence length.
                    # To deal with this we do a sliding window approach, where we take chunks
                    # of the up to our max length with a stride of `doc_stride`.
                    tmp_context_ids = context_ids[s_idx:e_idx]
                    tmp_context_tokens = [self.vocab.itos[item] for item in tmp_context_ids]
                    logging.debug(f"## 滑動(dòng)窗口范圍:{s_idx, e_idx},example_id: {example_id}, feature_id: {feature_id}")
                    # logging.debug(f"## 滑動(dòng)窗口取值:{tmp_context_tokens}")
                    input_ids = torch.tensor(question_ids + tmp_context_ids + [self.SEP_IDX])
                    input_tokens = ['[CLS]'] + question_tokens + ['[SEP]'] + tmp_context_tokens + ['[SEP]']
                    seg = [0] * len(question_ids) + [1] * (len(input_ids) - len(question_ids))
                    seg = torch.tensor(seg)
                    if is_training:
                        new_start_position, new_end_position = 0, 0
                        if start_position >= s_idx and end_position <= e_idx:  # the window contains the answer
                            logging.debug(f"## the sliding window contains the answer -----> ")
                            new_start_position = start_position - s_idx
                            new_end_position = new_start_position + (end_position - start_position)

                            new_start_position += len(question_ids)
                            new_end_position += len(question_ids)
                            logging.debug(f"## 原始答案:{answer_text} <===>處理后的答案:"
                                          f"{' '.join(input_tokens[new_start_position:(new_end_position + 1)])}")
                        all_data.append([example_id, feature_id, input_ids, seg, new_start_position,
                                         new_end_position, answer_text, example[0], input_tokens])
                        logging.debug(f"## start pos:{new_start_position}")
                        logging.debug(f"## end pos:{new_end_position}")
                    else:
                        all_data.append([example_id, feature_id, input_ids, seg, start_position,
                                         end_position, answer_text, example[0], input_tokens])
                        logging.debug(f"## start pos:{start_position}")
                        logging.debug(f"## end pos:{end_position}")
                    token_to_orig_map = self.get_token_to_orig_map(input_tokens, example[3], self.tokenizer)
                    all_data[-1].append(token_to_orig_map)
                    logging.debug(f"## example id: {example_id}")
                    logging.debug(f"## feature id: {feature_id}")
                    logging.debug(f"## input_tokens: {input_tokens}")
                    logging.debug(f"## input_ids:{input_ids.tolist()}")
                    logging.debug(f"## segment ids:{seg.tolist()}")
                    logging.debug(f"## orig_map:{token_to_orig_map}")
                    logging.debug("======================\n")
                    feature_id += 1
                    if e_idx >= context_ids_len:
                        break
                    s_idx += self.doc_stride
                    e_idx += self.doc_stride

            else:
                input_ids = torch.tensor(question_ids + context_ids + [self.SEP_IDX])
                input_tokens = ['[CLS]'] + question_tokens + ['[SEP]'] + context_tokens + ['[SEP]']
                seg = [0] * len(question_ids) + [1] * (len(input_ids) - len(question_ids))
                seg = torch.tensor(seg)
                if is_training:
                    start_position += (len(question_ids))
                    end_position += (len(question_ids))
                token_to_orig_map = self.get_token_to_orig_map(input_tokens, example[3], self.tokenizer)
                all_data.append([example_id, feature_id, input_ids, seg, start_position,
                                 end_position, answer_text, example[0], input_tokens, token_to_orig_map])
                logging.debug(f"## input_tokens: {input_tokens}")
                logging.debug(f"## input_ids:{input_ids.tolist()}")
                logging.debug(f"## segment ids:{seg.tolist()}")
                logging.debug(f"## orig_map:{token_to_orig_map}")
                logging.debug("======================\n")
                feature_id += 1
            example_id += 1
        #  all_data[0]: [raw example id, training feature id, input_ids, seg, start, end,
        #                answer text, question id, input_tokens, ori_map]
        data = {'all_data': all_data, 'max_len': self.max_sen_len, 'examples': examples}
        return data

    def generate_batch(self, data_batch):
        batch_input, batch_seg, batch_label, batch_qid = [], [], [], []
        batch_example_id, batch_feature_id, batch_map = [], [], []
        for item in data_batch:
            # item: [raw example id, training feature id, input_ids, seg, start, end,
            #        answer text, question id, input_tokens, ori_map]
            batch_example_id.append(item[0])  # raw example id
            batch_feature_id.append(item[1])  # training feature id
            batch_input.append(item[2])  # input_ids
            batch_seg.append(item[3])  # seg
            batch_label.append([item[4], item[5]])  # start, end
            batch_qid.append(item[7])  # question id
            batch_map.append(item[9])  # ori_map

        batch_input = pad_sequence(batch_input,  # [batch_size,max_len]
                                   padding_value=self.PAD_IDX,
                                   batch_first=False,
                                   max_len=self.max_sen_len)  # [max_len,batch_size]
        batch_seg = pad_sequence(batch_seg,  # [batch_size,max_len]
                                 padding_value=self.PAD_IDX,
                                 batch_first=False,
                                 max_len=self.max_sen_len)  # [max_len, batch_size]
        batch_label = torch.tensor(batch_label, dtype=torch.long)
        # [max_len,batch_size] , [max_len, batch_size] , [batch_size,2], [batch_size,], [batch_size,]
        return batch_input, batch_seg, batch_label, batch_qid, batch_example_id, batch_feature_id, batch_map

    def load_train_val_test_data(self, train_file_path=None,
                                 val_file_path=None,
                                 test_file_path=None,
                                 only_test=True):
        doc_stride = str(self.doc_stride)
        max_sen_len = str(self.max_sen_len)
        max_query_length = str(self.max_query_length)
        postfix = doc_stride + '_' + max_sen_len + '_' + max_query_length
        data = self.data_process(filepath=test_file_path,
                                 is_training=False,
                                 postfix=postfix)
        test_data, examples = data['all_data'], data['examples']
        test_iter = DataLoader(test_data, batch_size=self.batch_size,
                               shuffle=False,
                               collate_fn=self.generate_batch)
        if only_test:
            logging.info(f"## 成功返回測(cè)試集,一共包含樣本{len(test_iter.dataset)}個(gè)")
            return test_iter, examples

        data = self.data_process(filepath=train_file_path,
                                 is_training=True,
                                 postfix=postfix)  # all processed samples
        train_data, max_sen_len = data['all_data'], data['max_len']
        # note: the validation split is carved out of (and overlaps) the training data
        _, val_data = train_test_split(train_data, test_size=0.3, random_state=2021)
        if self.max_sen_len == 'same':
            self.max_sen_len = max_sen_len
        train_iter = DataLoader(train_data, batch_size=self.batch_size,  # build the DataLoaders
                                shuffle=self.is_sample_shuffle, collate_fn=self.generate_batch)
        val_iter = DataLoader(val_data, batch_size=self.batch_size,
                              shuffle=False, collate_fn=self.generate_batch)
        logging.info(f"## Returned {len(train_iter.dataset)} training samples, {len(val_iter.dataset)} "
                     f"validation samples and {len(test_iter.dataset)} test samples.")
        return train_iter, test_iter, val_iter

    @staticmethod
    def get_best_indexes(logits, n_best_size):
        """Get the n-best logits from a list."""
        # logits = [0.37203778 0.48594432 0.81051651 0.07998148 0.93529721 0.0476721
        #  0.15275263 0.98202781 0.07813079 0.85410559]
        # n_best_size = 4
        # return [7, 4, 9, 2]
        index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)

        best_indexes = []
        for i in range(len(index_and_score)):
            if i >= n_best_size:
                break
            best_indexes.append(index_and_score[i][0])
        return best_indexes

    def get_final_text(self, pred_text, orig_text):
        """Project the tokenized prediction back to the original text."""

        # ref: https://github.com/google-research/bert/blob/master/run_squad.py
        # When we created the data, we kept track of the alignment between original
        # (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
        # now `orig_text` contains the span of our original text corresponding to the
        # span that we predicted.
        #
        # However, `orig_text` may contain extra characters that we don't want in
        # our prediction.
        #
        # For example, let's say:
        #   pred_text = steve smith
        #   orig_text = Steve Smith's
        #
        # We don't want to return `orig_text` because it contains the extra "'s".
        #
        # We don't want to return `pred_text` because it's already been normalized
        # (the SQuAD eval script also does punctuation stripping/lower casing but
        # our tokenizer does additional normalization like stripping accent
        # characters).
        #
        # What we really want to return is "Steve Smith".
        #
        # Therefore, we have to apply a semi-complicated alignment heuristic between
        # `pred_text` and `orig_text` to get a character-to-character alignment. This
        # can fail in certain cases in which case we just return `orig_text`.

        def _strip_spaces(text):
            ns_chars = []
            ns_to_s_map = collections.OrderedDict()
            for (i, c) in enumerate(text):
                if c == " ":
                    continue
                ns_to_s_map[len(ns_chars)] = i
                ns_chars.append(c)
            ns_text = "".join(ns_chars)
            return (ns_text, ns_to_s_map)

        # We first tokenize `orig_text`, strip whitespace from the result
        # and `pred_text`, and check if they are the same length. If they are
        # NOT the same length, the heuristic has failed. If they are the same
        # length, we assume the characters are one-to-one aligned.

        tok_text = " ".join(self.tokenizer(orig_text))

        start_position = tok_text.find(pred_text)
        if start_position == -1:
            return orig_text
        end_position = start_position + len(pred_text) - 1

        (orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
        (tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)

        if len(orig_ns_text) != len(tok_ns_text):
            return orig_text

        # We then project the characters in `pred_text` back to `orig_text` using
        # the character-to-character alignment.
        tok_s_to_ns_map = {}
        for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
            tok_s_to_ns_map[tok_index] = i

        orig_start_position = None
        if start_position in tok_s_to_ns_map:
            ns_start_position = tok_s_to_ns_map[start_position]
            if ns_start_position in orig_ns_to_s_map:
                orig_start_position = orig_ns_to_s_map[ns_start_position]

        if orig_start_position is None:
            return orig_text

        orig_end_position = None
        if end_position in tok_s_to_ns_map:
            ns_end_position = tok_s_to_ns_map[end_position]
            if ns_end_position in orig_ns_to_s_map:
                orig_end_position = orig_ns_to_s_map[ns_end_position]

        if orig_end_position is None:
            return orig_text

        output_text = orig_text[orig_start_position:(orig_end_position + 1)]
        return output_text

    def write_prediction(self, test_iter, all_examples, logits_data, output_dir):
        """
        根據(jù)預(yù)測(cè)得到的logits將預(yù)測(cè)結(jié)果寫入到本地文件中
        :param test_iter:
        :param all_examples:
        :param logits_data:
        :return:
        """
        qid_to_example_context = {}  # 根據(jù)qid取到其對(duì)應(yīng)的context token
        for example in all_examples:
            context = example[3]
            context_list = context.split()
            qid_to_example_context[example[0]] = context_list
        _PrelimPrediction = collections.namedtuple(  # pylint: disable=invalid-name
            "PrelimPrediction",
            ["text", "start_index", "end_index", "start_logit", "end_logit"])
        prelim_predictions = collections.defaultdict(list)
        for b_input, _, _, b_qid, _, b_feature_id, b_map in tqdm(test_iter, ncols=80, desc="iterating over candidate answers"):
            # all predicted logits for the feature samples of one question (the sliding
            # window can turn one original context into several samples)
            all_logits = logits_data[b_qid[0]]
            for logits in all_logits:
                if logits[0] != b_feature_id[0]:
                    continue  # skip logits that belong to a different feature sample
                # go through the predictions of this feature sample's logits
                start_indexes = self.get_best_indexes(logits[1], self.n_best_size)
                # indexes of the highest-probability start positions, e.g. [4, 6, 3, 1]
                end_indexes = self.get_best_indexes(logits[2], self.n_best_size)
                # indexes of the highest-probability end positions, e.g. [5, 8, 10, 9]
                for start_index in start_indexes:
                    for end_index in end_indexes:  # every possible combination
                        if start_index >= b_input.size(0):
                            continue  # start index beyond the token length: skip
                        if end_index >= b_input.size(0):
                            continue  # end index beyond the token length: skip
                        if start_index not in b_map[0]:
                            continue  # the index must map into the context, since answers only appear after [SEP]
                        if end_index not in b_map[0]:
                            continue
                        if end_index < start_index:
                            continue
                        length = end_index - start_index + 1
                        if length > self.max_answer_length:
                            continue
                        token_ids = b_input.transpose(0, 1)[0]
                        strs = [self.vocab.itos[s] for s in token_ids]
                        tok_text = " ".join(strs[start_index:(end_index + 1)])
                        tok_text = tok_text.replace(" ##", "").replace("##", "")
                        tok_text = tok_text.strip()
                        tok_text = " ".join(tok_text.split())

                        orig_doc_start = b_map[0][start_index]
                        orig_doc_end = b_map[0][end_index]
                        orig_tokens = qid_to_example_context[b_qid[0]][orig_doc_start:(orig_doc_end + 1)]
                        orig_text = " ".join(orig_tokens)
                        final_text = self.get_final_text(tok_text, orig_text)

                        prelim_predictions[b_qid[0]].append(_PrelimPrediction(
                            text=final_text,
                            start_index=int(start_index),
                            end_index=int(end_index),
                            start_logit=float(logits[1][start_index]),
                            end_logit=float(logits[2][end_index])))
                        # Collect all predictions for each qid in one place: because of the
                        # sliding window, one qid's context can produce several feature samples,
                        # each with its own predicted logits, and n_best indexes are taken from
                        # each, so one question ends up with multiple candidate answers.

        for k, v in prelim_predictions.items():
            # sort each qid's candidate answers by start_logit + end_logit, descending
            prelim_predictions[k] = sorted(prelim_predictions[k],
                                           key=lambda x: (x.start_logit + x.end_logit),
                                           reverse=True)
        best_results, all_n_best_results = {}, {}
        for k, v in prelim_predictions.items():
            best_results[k] = v[0].text  # keep only the best prediction
            all_n_best_results[k] = v  # keep all candidate predictions
        with open(os.path.join(output_dir, "best_result.json"), 'w') as f:
            f.write(json.dumps(best_results, indent=4) + '\n')
        with open(os.path.join(output_dir, "best_n_result.json"), 'w') as f:
            f.write(json.dumps(all_n_best_results, indent=4) + '\n')
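To tie the data pipeline together, here is a minimal usage sketch. It assumes a WordPiece tokenizer; BertTokenizer.from_pretrained(...).tokenize from the Hugging Face transformers package is one option, and all paths below are placeholders:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased').tokenize
data_loader = LoadSQuADQuestionAnsweringDataset(vocab_path='./bert_base_uncased/vocab.txt',
                                                tokenizer=tokenizer,
                                                batch_size=12,
                                                max_sen_len=384,
                                                doc_stride=128,
                                                max_query_length=64)
train_iter, test_iter, val_iter = data_loader.load_train_val_test_data(
    train_file_path='./data/train-v1.1.json',
    test_file_path='./data/dev-v1.1.json',
    only_test=False)

Note that val_file_path is not needed here: as seen above, this subclass carves the validation split out of the training data with train_test_split.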

Model

We only need to take the output of the last layer of the original BERT model and add one classification layer on top, so this part of the code is fairly easy to follow.

from Bert import BertModel
import torch.nn as nn


class BertForQuestionAnswering(nn.Module):
    """
    用于建模類似SQuAD這樣的問(wèn)答數(shù)據(jù)集
    """

    def __init__(self, config, bert_pretrained_model_dir=None):
        super(BertForQuestionAnswering, self).__init__()
        if bert_pretrained_model_dir is not None:
            self.bert = BertModel.from_pretrained(config, bert_pretrained_model_dir)
        else:
            self.bert = BertModel(config)
        self.qa_outputs = nn.Linear(config.hidden_size, 2)

    def forward(self, input_ids,
                attention_mask=None,
                token_type_ids=None,
                position_ids=None,
                start_positions=None,
                end_positions=None):
        """
        :param input_ids: [src_len,batch_size]
        :param attention_mask: [batch_size,src_len]
        :param token_type_ids: [src_len,batch_size]
        :param position_ids:
        :param start_positions: [batch_size]
        :param end_positions:  [batch_size]
        :return:
        """
        _, all_encoder_outputs = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids)
        sequence_output = all_encoder_outputs[-1]  # output of BERT's last layer
        # sequence_output: [src_len, batch_size, hidden_size]
        logits = self.qa_outputs(sequence_output)  # [src_len, batch_size, 2]
        start_logits, end_logits = logits.split(1, dim=-1)
        # [src_len,batch_size,1]  [src_len,batch_size,1]
        start_logits = start_logits.squeeze(-1).transpose(0, 1)  # [batch_size,src_len]
        end_logits = end_logits.squeeze(-1).transpose(0, 1)  # [batch_size,src_len]
        if start_positions is not None and end_positions is not None:
            # The start/end positions can sometimes exceed the input length
            # (e.g. the input sequence is longer than 512 and the true start or
            # end token lies beyond position 512), which needs special handling.
            ignored_index = start_logits.size(1)  # length of the input sequence
            start_positions.clamp_(0, ignored_index)
            # If a true start position in start_positions exceeds the input length,
            # it is clamped to the input length itself
            end_positions.clamp_(0, ignored_index)

            loss_fct = nn.CrossEntropyLoss(ignore_index=ignored_index)
            # ignore_index makes the loss skip (start/end) positions beyond the input
            # length: those cannot be counted as prediction errors (the model simply
            # never saw them), and without ignore_index they could hurt the model's
            # behavior in the normal cases.
            start_loss = loss_fct(start_logits, start_positions)
            end_loss = loss_fct(end_logits, end_positions)
            return (start_loss + end_loss) / 2, start_logits, end_logits
        else:
            return start_logits, end_logits  # [batch_size,src_len]
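
As a quick sanity check of the shapes, here is a hedged usage sketch; it assumes the local BertModel can be constructed from a config alone and that BertConfig (defined below) supplies hidden_size, and all sizes are illustrative:

import torch

config = BertConfig(vocab_size=30522)  # illustrative size
model = BertForQuestionAnswering(config)

src_len, batch_size = 384, 2
input_ids = torch.randint(0, config.vocab_size, (src_len, batch_size))
token_type_ids = torch.zeros(src_len, batch_size, dtype=torch.long)
start_positions = torch.tensor([10, 25])
end_positions = torch.tensor([12, 30])

loss, start_logits, end_logits = model(input_ids,
                                       token_type_ids=token_type_ids,
                                       start_positions=start_positions,
                                       end_positions=end_positions)
print(loss.item(), start_logits.shape)  # start_logits: [batch_size, src_len]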

We define a BertConfig class to manage the model's hyper-parameters and other variables, as shown below:

import copy
import json
import logging

import six


class BertConfig(object):
    """Configuration for `BertModel`."""

    def __init__(self,
                 vocab_size=21128,
                 hidden_size=768,
                 num_hidden_layers=12,
                 num_attention_heads=12,
                 intermediate_size=3072,
                 pad_token_id=0,
                 hidden_act="gelu",
                 hidden_dropout_prob=0.1,
                 attention_probs_dropout_prob=0.1,
                 max_position_embeddings=512,
                 type_vocab_size=2,
                 initializer_range=0.02):
        """Constructs BertConfig.
        Args:
          vocab_size: Vocabulary size of `inputs_ids` in `BertModel`.
          hidden_size: Size of the encoder layers and the pooler layer.
          num_hidden_layers: Number of hidden layers in the Transformer encoder.
          num_attention_heads: Number of attention heads for each attention layer in
            the Transformer encoder.
          intermediate_size: The size of the "intermediate" (i.e., feed-forward)
            layer in the Transformer encoder.
          hidden_act: The non-linear activation function (function or string) in the
            encoder and pooler.
          hidden_dropout_prob: The dropout probability for all fully connected
            layers in the embeddings, encoder, and pooler.
          attention_probs_dropout_prob: The dropout ratio for the attention
            probabilities.
          max_position_embeddings: The maximum sequence length that this model might
            ever be used with. Typically set this to something large just in case
            (e.g., 512 or 1024 or 2048).
          type_vocab_size: The vocabulary size of the `token_type_ids` passed into
            `BertModel`.
          initializer_range: The stdev of the truncated_normal_initializer for
            initializing all weight matrices.
        """
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.hidden_act = hidden_act
        self.intermediate_size = intermediate_size
        self.pad_token_id = pad_token_id
        self.hidden_dropout_prob = hidden_dropout_prob
        self.attention_probs_dropout_prob = attention_probs_dropout_prob
        self.max_position_embeddings = max_position_embeddings
        self.type_vocab_size = type_vocab_size
        self.initializer_range = initializer_range

    @classmethod
    def from_dict(cls, json_object):
        """Constructs a `BertConfig` from a Python dictionary of parameters."""
        config = BertConfig(vocab_size=None)
        for (key, value) in six.iteritems(json_object):
            config.__dict__[key] = value
        return config

    @classmethod
    def from_json_file(cls, json_file):
        """Constructs a `BertConfig` from a json file of parameters."""
        """從json配置文件讀取配置信息"""
        with open(json_file, 'r') as reader:
            text = reader.read()
        logging.info(f"成功導(dǎo)入BERT配置文件 {json_file}")
        return cls.from_dict(json.loads(text))

    def to_dict(self):
        """Serializes this instance to a Python dictionary."""
        output = copy.deepcopy(self.__dict__)
        return output

    def to_json_string(self):
        """Serializes this instance to a JSON string."""
        return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
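
Typical usage, with a placeholder path to a checkpoint's config.json:

# Hypothetical path to the config.json shipped alongside a BERT checkpoint.
config = BertConfig.from_json_file('./bert_base_uncased/config.json')
print(config.to_json_string())  # round-trips the parsed configuration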

Results

[Figure: training and evaluation output of the run]

References

SQuAD question answering with a pre-trained BERT model: https://www.ylkz.life/deeplearning/p10265968/
