---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- membership-inference
- privacy
- machine-learning-security
size_categories:
- n<1K
---

# WIKITEXT Calibration Set for Membership Inference Attacks

This dataset contains a calibration set for training membership inference attack (MIA) classifiers.

## Dataset Information

- **Total Samples**: 500
- **Members**: 250 (sampled from training data)
- **Non-members**: 250 (sampled from the unseen test split)
- **Balance**: 50.0% members / 50.0% non-members

## Data Sources

### Members (label=1)

- **Source**: 10K training data (excluding the evaluation set)
- **Dataset**: mia-llm/wikitext-raw-MIA (10K training samples)
- **Filtering**: Excluded samples that appear in the evaluation set

### Non-members (label=0)

- **Source**: Unseen test split
- **Original Dataset**:
  - AG News: `fancyzhx/ag_news` (test split)
  - XSum: `EdinburghNLP/xsum` (test split)
  - WikiText: `Salesforce/wikitext` wikitext-2-raw-v1 (test split)

## Usage

### Load Dataset

```python
from datasets import load_dataset

# Load the calibration set
dataset = load_dataset("h0ssn/wikitext-calibration-mia")

# Access the data
calibration_data = dataset["train"]
print(f"Total samples: {len(calibration_data)}")
print(f"First sample: {calibration_data[0]}")
```

### Train MIA Classifier

```python
import numpy as np
from datasets import load_dataset
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Load the calibration set
calibration_data = load_dataset("h0ssn/wikitext-calibration-mia")["train"]

# Extract features (example: using loss and gradient statistics).
# You need to compute these features with your target and reference models.
features = extract_features(calibration_data["text"])  # your feature extraction
labels = calibration_data["label"]

# Normalize features and train the classifier
scaler = StandardScaler()
features_normalized = scaler.fit_transform(features)

classifier = LogisticRegression(max_iter=1000, random_state=42)
classifier.fit(features_normalized, labels)

# Now use this classifier to predict membership on unknown samples
```

## Data Fields

- `text` (string): The text content
- `label` (int): 1 for members, 0 for non-members
- `source` (string): Origin of the sample (`'10k_train'` or `'unseen'`)

## Verification

✅ **No overlap between members and non-members**: All samples have been verified to ensure no duplicates exist between the member and non-member sets.

## Citation

If you use this calibration set in your research, please cite:

```bibtex
@dataset{wikitext_calibration_mia,
  title     = {WIKITEXT Calibration Set for Membership Inference Attacks},
  author    = {h0ssn},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/h0ssn/wikitext-calibration-mia}
}
```

## Related Datasets

- Evaluation set: `mia-llm/wikitext_benchmark_roya` (700 samples)
- Training set: `mia-llm/wikitext-raw-MIA` (10K samples)

## License

MIT License - free to use for research and commercial purposes.

## Notes

- This calibration set is designed to be used SEPARATELY from the evaluation set
- No samples from this calibration set appear in the evaluation set
- Recommended split: use this set for training MIA classifiers and the evaluation set for testing
- Random seed: 42 (for reproducibility)

## Contact

For questions or issues, please open an issue on the dataset repository.
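
## Example: End-to-End Training Sketch

The feature-extraction step in the usage example above is deliberately left abstract, since it depends on your target and reference models. As a minimal, self-contained sketch of the downstream training loop, the snippet below substitutes synthetic one-dimensional loss-style features (purely illustrative values, not derived from any real model or from this dataset) so that the normalize-train-evaluate steps can be run end to end:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for real model-derived features: members (label=1)
# tend to incur lower loss under the target model than non-members.
n = 250
member_feats = rng.normal(loc=2.0, scale=0.5, size=(n, 1))     # lower loss
nonmember_feats = rng.normal(loc=3.0, scale=0.5, size=(n, 1))  # higher loss
X = np.vstack([member_feats, nonmember_feats])
y = np.array([1] * n + [0] * n)

# Hold out part of the calibration data to sanity-check the classifier
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Normalize features, then fit the logistic-regression attack classifier
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

classifier = LogisticRegression(max_iter=1000, random_state=42)
classifier.fit(X_train_s, y_train)

acc = classifier.score(X_test_s, y_test)
print(f"Held-out accuracy: {acc:.3f}")
```

In practice, replace the synthetic arrays with features computed from your own models (e.g. per-sample loss under the target model, or a loss ratio against a reference model), keeping the same normalize-then-fit pipeline.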