SynCABEL: Synthetic Contextualized Augmentation for Biomedical Entity Linking

SynCABEL

SynCABEL is a novel framework that addresses data scarcity in biomedical entity linking through synthetic data generation. The method is introduced in our [paper].

SynCABEL (QUAERO-MEDLINE Edition)

This is a fine-tuned version of Llama-3-8B trained on QUAERO-MEDLINE together with SynthQUAERO, our synthetic dataset generated with the SynCABEL framework.

Base Model    : meta-llama/Meta-Llama-3-8B-Instruct
Training Data : QUAERO-MEDLINE (real) + SynthQUAERO (synthetic)
Fine-tuning   : Supervised Fine-Tuning (SFT)

Training Data Composition

The model is trained on a mix of human-annotated and synthetic data:

QUAERO-MEDLINE (human)   : 9,074 mentions
SynthQUAERO (synthetic)  : 396,914 mentions

To ensure balanced learning, human data is upsampled during training so that each batch contains:

50% human-annotated data
50% synthetic data

In other words, although SynthQUAERO is roughly 40 times larger, the model always sees a 1:1 ratio of human to synthetic examples, preventing the synthetic data from overwhelming human supervision.
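One way to realize this 1:1 mixing is to upsample the smaller human pool with replacement while iterating once over the synthetic pool. The sketch below is a simplified illustration of that scheme; the `mixed_batches` helper is hypothetical and not part of the released training code.

```python
import random

def mixed_batches(human, synthetic, batch_size, seed=0):
    """Yield batches that are 50% human and 50% synthetic examples.

    The smaller human pool is sampled with replacement (upsampling),
    so every batch keeps the 1:1 ratio regardless of pool sizes.
    """
    rng = random.Random(seed)
    half = batch_size // 2
    for start in range(0, len(synthetic) - half + 1, half):
        syn_part = synthetic[start:start + half]
        hum_part = [rng.choice(human) for _ in range(half)]  # upsample with replacement
        batch = hum_part + syn_part
        rng.shuffle(batch)  # avoid a fixed human/synthetic ordering within the batch
        yield batch
```

With 9,074 human and 396,914 synthetic mentions, each human example is revisited many times per pass over the synthetic data, which is exactly the intended upsampling effect.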

Usage

Loading

import torch
from transformers import AutoModelForCausalLM

# Load the model (requires trust_remote_code for custom architecture)
model = AutoModelForCausalLM.from_pretrained(
    "AnonymousARR42/SynCABEL_QUAERO_MEDLINE",
    trust_remote_code=True,
    device_map="auto"
)

Unconstrained Generation

# Let the model freely generate concept names
sentences = [
    "Le patient atteint de [Embolie pulmonaire massive]{Disorders} a présenté des signes de détresse respiratoire.",
    "Le patient reçoit régulièrement des [corticoïdes]{Chemicals & Drugs} pour soulager les symptômes cutanés."
]

results = model.sample(
    sentences=sentences,
    constrained=False,
    num_beams=2,
)

for i, beam_results in enumerate(results):
    print(f"Input: {sentences[i]}")

    mention = beam_results[0]["mention"]
    print(f"Mention: {mention}")

    for j, result in enumerate(beam_results):
        print(
            f"Beam {j+1}:\n"
            f"Predicted concept name: {result['pred_concept_name']}\n"
            f"Predicted code: {result['pred_concept_code']}\n"
            f"Beam score: {result['beam_score']:.3f}\n"
        )

Output:

Input: Le patient atteint de [Embolie pulmonaire massive]{Disorders} a présenté des signes de détresse respiratoire.
Mention: Embolie pulmonaire massive
Beam 1:
Predicted concept name: Embolie pulmonaire massive
Predicted code: NO_CODE
Beam score: 0.979

Beam 2:
Predicted concept name: Massive pulmonary embolus
Predicted code: NO_CODE
Beam score: 0.678

Input: Le patient reçoit régulièrement des [corticoïdes]{Chemicals & Drugs} pour soulager les symptômes cutanés.
Mention: corticoïdes
Beam 1:
Predicted concept name: Corticosteroid
Predicted code: NO_CODE
Beam score: 0.937

Beam 2:
Predicted concept name: Corticosteroids
Predicted code: NO_CODE
Beam score: 0.654
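The inputs mark the target mention with inline `[mention]{semantic group}` markup, as in the sentences above. A small regex helper can extract both fields from this format; `parse_mention` below is a hypothetical convenience function, not part of the released API.

```python
import re

# Matches the inline annotation format: [mention text]{Semantic Group}
MENTION_RE = re.compile(r"\[(?P<mention>[^\]]+)\]\{(?P<group>[^}]+)\}")

def parse_mention(sentence):
    """Return (mention, semantic_group) for the first annotated span, or None."""
    m = MENTION_RE.search(sentence)
    if m is None:
        return None
    return m.group("mention"), m.group("group")
```

For example, `parse_mention("… des [corticoïdes]{Chemicals & Drugs} …")` yields the pair `("corticoïdes", "Chemicals & Drugs")`.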

Constrained Decoding (Recommended for Entity Linking)

# Constrained to valid biomedical concepts
sentences = [
    "Le patient atteint de [Embolie pulmonaire massive]{Disorders} a présenté des signes de détresse respiratoire.",
    "Le patient reçoit régulièrement des [corticoïdes]{Chemicals & Drugs} pour soulager les symptômes cutanés."
]

results = model.sample(
    sentences=sentences,
    constrained=True,
    num_beams=2,
)

for i, beam_results in enumerate(results):
    print(f"Input: {sentences[i]}")

    mention = beam_results[0]["mention"]
    print(f"Mention: {mention}")

    for j, result in enumerate(beam_results):
        print(
            f"Beam {j+1}:\n"
            f"Predicted concept name: {result['pred_concept_name']}\n"
            f"Predicted code: {result['pred_concept_code']}\n"
            f"Beam score: {result['beam_score']:.3f}\n"
        )

Output:

Input: Le patient atteint de [Embolie pulmonaire massive]{Disorders} a présenté des signes de détresse respiratoire.
Mention: Embolie pulmonaire massive
Beam 1:
Predicted concept name: Embolie pulmonaire massive aiguë
Predicted code: C0340535
Beam score: 0.130

Beam 2:
Predicted concept name: Embolie pulmonaire
Predicted code: C0034065
Beam score: 0.087

Input: Le patient reçoit régulièrement des [corticoïdes]{Chemicals & Drugs} pour soulager les symptômes cutanés.
Mention: corticoïdes
Beam 1:
Predicted concept name: Corticosteroid topical
Predicted code: C0304604
Beam score: 0.627

Beam 2:
Predicted concept name: Corticosteroid ophthalmologic and otologic preparations
Predicted code: C3540726
Beam score: 0.020
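Constrained decoding restricts beam search so that every generated concept name is a valid entry of the target terminology, which is why each beam above resolves to a real UMLS code. A standard way to do this is to store the tokenized concept names in a prefix trie and, at each step, allow only token ids that extend some entry (e.g. via the `prefix_allowed_tokens_fn` hook of `generate` in transformers). The trie below is a minimal sketch of that data structure; the exact mechanism inside `model.sample` may differ.

```python
class Trie:
    """Minimal prefix trie over token-id sequences of valid concept names."""

    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                # Each node maps a token id to its child node.
                node = node.setdefault(tok, {})

    def allowed(self, prefix):
        """Token ids that can legally follow `prefix`; [] if prefix is invalid or complete."""
        node = self.root
        for tok in prefix:
            node = node.get(tok)
            if node is None:
                return []
        return list(node)
```

During beam search, passing `Trie.allowed` the tokens generated so far yields the permitted continuations, so the decoder can never leave the set of valid concept names.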

Scores

Entity linking performance (Recall@1) on four biomedical benchmarks. The "Avg." column reports the mean score across the four benchmarks.

Model                   MM-ST21PV  QUAERO-MEDLINE  QUAERO-EMEA  SPACCC     Avg.
                        (English)  (French)        (French)     (Spanish)
SciSpacy                53.8       40.5            37.1         13.2       36.2
SapBERT                 51.1       50.6            49.8         33.9       46.4
CODER-all               56.6       58.7            58.1         43.7       54.3
SapBERT-all             64.6       74.7            67.9         47.9       63.8
ArboEL                  74.5       70.9            62.8         49.0       64.2
mBART-large             65.5       61.5            58.6         57.7       60.8
 + Guided inference     70.0       72.8            71.1         61.8       68.9
 + SynCABEL (ours)      71.5       77.1            75.3         64.0       72.0
Llama-3-8B              69.0       66.4            65.5         59.9       65.2
 + Guided inference     74.4       77.5            72.9         64.2       72.3
 + SynCABEL (ours)      75.4       79.7            79.0         67.0       75.3
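Recall@1 here is the standard entity-linking metric: the fraction of mentions whose top-ranked predicted concept code matches the gold code. A minimal reference implementation (the `recall_at_1` helper is illustrative, not the evaluation script used for the table):

```python
def recall_at_1(predictions, gold):
    """Fraction of mentions whose top-1 predicted CUI equals the gold CUI.

    predictions: list of top-ranked predicted codes, one per mention
    gold: list of gold-standard codes, aligned with predictions
    """
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must be aligned")
    hits = sum(1 for p, g in zip(predictions, gold) if p == g)
    return hits / len(gold)
```

For instance, if the model's top beam is correct for one of two mentions, Recall@1 is 0.5 (reported above as percentages).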

Here, we provide the source repositories for the baselines:

Speed and Memory

Model        Model memory (GB)  Candidates (GB)  Speed (/s)
SapBERT      2.1                20.1             575.5
ArboEL       1.2                7.1              38.9
mBART        2.3                5.4              51.0
Llama-3-8B   28.6               5.4              19.1

Measured on a single H100 GPU with constrained decoding.
