Measuring Text Similarity with transformers BERT Pretrained Models



After talking with practitioners, I learned that there are currently two common industry approaches to measuring text similarity, and they perform about equally well:

  1. Concatenate text A and text B and run a binary classification task, with the labels "similar" / "not similar". This requires fine-tuning the BERT model, following the same idea as our earlier fine-tuning of BERT for spam-email binary classification: a network output above 0.5 means similar, below 0.5 means not similar;
  2. Feed text A and text B through a pretrained model directly to get two tensors, then compute their cosine similarity. This yields a single number that quantifies how similar the two texts are.

The first approach works the same way as before, so I won't repeat the full fine-tuning walkthrough; this article demonstrates the second approach.
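For completeness, here is a rough inference-time sketch of approach 1. It is only a sketch under assumptions: 'your-finetuned-pair-classifier' is a placeholder for a BERT checkpoint you would first fine-tune on sentence pairs labeled similar / not similar, and the score is read off a two-class softmax rather than a single sigmoid output.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder checkpoint: assume a BERT classifier fine-tuned on sentence pairs
# labeled similar / not similar (same setup as the earlier spam-classification post)
checkpoint = 'your-finetuned-pair-classifier'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# The tokenizer concatenates the pair as [CLS] text A [SEP] text B [SEP]
inputs = tokenizer('我爱你', '我喜欢你', return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Probability of the "similar" class; above 0.5 means the pair is judged similar
prob_similar = torch.softmax(logits, dim=-1)[0, 1].item()
print(prob_similar)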

Sentence Similarity

Searching Hugging Face Models for the task Sentence Similarity and the language zh turned up this model. The sample code it provides is short and clear, so let's give it a run:

from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


sentences = ['我爱你', '我喜欢你']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli')
model = AutoModel.from_pretrained('symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli')

# Tokenize the sentences (pad/truncate, return PyTorch tensors)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)

The output is:

tensor([[ 0.2630,  1.8734, -0.6537,  ..., -0.4792, -0.0700, -0.0297],
        [ 0.0908,  1.9412, -0.8271,  ..., -0.5505, -0.2465, -0.1056]])

This is a 2-D tensor with one embedding per sentence. Let's compute the cosine similarity between the two rows:

similarity = torch.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], 0).item()
print(similarity)

The result is:

0.8886765241622925

This indicates that the two sentences are highly similar.
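As an aside, when there are more than two sentences it is handier to L2-normalize the embeddings and get all pairwise cosine similarities in one matrix multiplication. This is just a sketch continuing from the sentence_embeddings tensor above:

import torch.nn.functional as F

# After L2-normalizing each row, the dot product of two rows equals their cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarity_matrix = normalized @ normalized.T

# similarity_matrix[0, 1] matches the value from torch.cosine_similarity above
print(similarity_matrix)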

Switching the Model

Now let's swap in the familiar HIT model hfl/chinese-roberta-wwm-ext and run the code again:

from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


sentences = ['我爱你', '我喜欢你']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hfl/chinese-roberta-wwm-ext')
model = AutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext')

# Tokenize the sentences (pad/truncate, return PyTorch tensors)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)

The output is:

tensor([[ 0.4178, -0.1403,  0.1729,  ..., -0.5353,  0.1616, -0.5226],
        [ 0.3651,  0.0084, -0.1581,  ..., -0.5166,  0.2051, -0.5374]])

Compute the cosine similarity:

similarity = torch.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], 0).item()
print(similarity)

The result is:

0.9244731664657593

This indicates that the HIT model considers these two sentences even more similar.
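Since the only thing that changes between the two runs is the checkpoint name, the whole pipeline can be wrapped in a small helper to make comparisons easier. This is a sketch rather than part of the model card code; sentence_similarity is a hypothetical helper name:

from transformers import AutoTokenizer, AutoModel
import torch


def sentence_similarity(model_name, sentence_a, sentence_b):
    # Load the tokenizer and model for the given checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    encoded = tokenizer([sentence_a, sentence_b], padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        token_embeddings = model(**encoded)[0]

    # Mean pooling over valid tokens, same as mean_pooling above
    mask = encoded['attention_mask'].unsqueeze(-1).expand(token_embeddings.size()).float()
    embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    return torch.cosine_similarity(embeddings[0], embeddings[1], 0).item()


print(sentence_similarity('hfl/chinese-roberta-wwm-ext', '我爱你', '我喜欢你'))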

With that, we've measured text similarity using the industry's most common, benchmark approaches. According to professional appraisal, I'm now very close to the top NLP conference ACL 😆😆😆
