This is a vocabulary-pruned version of intfloat/multilingual-e5-base. It keeps only Russian and English tokens.
| | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
|---|---|---|
| Model size (MB) | 1060.65 | 504.89 |
| Params (count) | 278,043,648 | 132,354,048 |
| Word-embedding params (count) | 192,001,536 | 46,311,936 |
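These counts can be reproduced by loading both checkpoints and summing tensor sizes; a minimal sketch (downloads both models on first run):

```python
from transformers import AutoModel

# Reproduce the parameter counts in the table above
for name in ('intfloat/multilingual-e5-base', 'd0rj/e5-base-en-ru'):
    model = AutoModel.from_pretrained(name)
    total = sum(p.numel() for p in model.parameters())
    emb = model.get_input_embeddings().weight.numel()
    print(f'{name}: {total:,} total params, {emb:,} word-embedding params')
```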
Performance on the SberQuAD dev benchmark:

| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
|---|---|---|
| recall@3 | | |
| map@3 | | |
| mrr@3 | | |
| recall@5 | | |
| map@5 | | |
| mrr@5 | | |
| recall@10 | | |
| map@10 | | |
| mrr@10 | | |
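For reference, these retrieval metrics are computed per question and averaged. A minimal illustration of our own (not the card's evaluation script), assuming a single gold passage per question as in SberQuAD, in which case map@k reduces to mrr@k:

```python
def recall_at_k(ranked_ids, relevant_id, k):
    """1.0 if the gold passage appears in the top-k results, else 0.0."""
    return float(relevant_id in ranked_ids[:k])

def mrr_at_k(ranked_ids, relevant_id, k):
    """Reciprocal rank of the gold passage if it is in the top-k, else 0.0."""
    for rank, pid in enumerate(ranked_ids[:k], start=1):
        if pid == relevant_id:
            return 1.0 / rank
    return 0.0

# Toy example: one question, ranked passage ids and the gold id
results = [(['p3', 'p1', 'p7'], 'p1')]
print(sum(recall_at_k(r, g, 3) for r, g in results) / len(results))  # 1.0
print(sum(mrr_at_k(r, g, 3) for r, g in results) / len(results))     # 0.5
```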
- Use dot-product distance for retrieval.
- Use the "query: " and "passage: " prefixes, respectively, for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
- Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval.
- Use the "query: " prefix if you want to use embeddings as features, e.g. for linear-probing classification or clustering.
Usage with transformers:

```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel

def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Mean-pool token states, zeroing out padding positions via the attention mask
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

input_texts = [
    'query: How does a corporate website differ from a business card website?',
    'query: Где был создан первый троллейбус?',
    'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
    'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]
tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-base-en-ru')
model = XLMRobertaModel.from_pretrained('d0rj/e5-base-en-ru')

# Tokenize, encode, mean-pool, and L2-normalize
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
# Score each query against each passage: normalized dot product
# (equivalent to cosine similarity), scaled by 100
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```
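Note how each query's best match is the passage in the other language (81.8 and 80.4 above), which is exactly the cross-lingual behavior the model targets. A small follow-up reusing `scores` and `input_texts` from the block above:

```python
# Pick the highest-scoring passage for each query
best = scores.argmax(dim=1)
for qi, pi in enumerate(best.tolist()):
    print(f'query {qi} -> passage {pi}: {input_texts[2 + pi][:60]}...')
```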
Or use the feature-extraction pipeline:

```python
from transformers import pipeline

pipe = pipeline('feature-extraction', model='d0rj/e5-base-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 768])
```
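Note that the pipeline returns token-level hidden states (one `(1, seq_len, 768)` tensor per input), so sentence vectors still need pooling and normalization. A sketch reusing the pipeline output above:

```python
import torch
import torch.nn.functional as F

# Mean-pool each text's token states (each text is encoded alone,
# so there is no padding to mask out), then L2-normalize
sentence_embeddings = F.normalize(
    torch.cat([t.mean(dim=1) for t in embeddings]), p=2, dim=1
)
print(sentence_embeddings.size())  # torch.Size([4, 768])
```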
Or use it with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

sentences = [
    'query: Что такое круглые тензоры?',
    'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]
model = SentenceTransformer('d0rj/e5-base-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 768])
```
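As a quick follow-up reusing `embeddings` from the block above, the query-passage score can be computed with `sentence_transformers.util`:

```python
from sentence_transformers import util

# Cosine similarity between the query and passage vectors
# (matches dot product once embeddings are L2-normalized)
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```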