---
dataset_info:
  features:
  - name: full_sequence
    dtype: string
  - name: enhancer_sequence
    dtype: string
  - name: promoter
    dtype: string
  - name: discrete_label
    dtype:
      class_label:
        names:
          '0': 0
          '1': 1
          '2': 2
          '3': 3
          '4': 4
  - name: activity
    dtype: float32
  splits:
  - name: train
    num_bytes: 3518883112
    num_examples: 804592
  - name: validation
    num_bytes: 354865790
    num_examples: 81140
  - name: test
    num_bytes: 360253942
    num_examples: 82372
  download_size: 611028266
  dataset_size: 4234002844
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

## Enhancer generation dataset for the NTv3-generative model

This dataset contains the processed STARR-seq data from the [DeepSTARR](https://www.nature.com/articles/s41588-022-01048-5) study, augmented with promoter context for conditional sequence generation training of NTv3-generative. Each enhancer is inserted into two promoter contexts (RpS12 and DSCP), allowing the study of promoter-specific enhancer activity.
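Because every enhancer appears once per promoter context, the two measurements of an enhancer can be joined to study promoter specificity. The sketch below groups records by `enhancer_sequence` and computes the DSCP-minus-RpS12 activity difference; the field names follow the dataset schema, but the sequences and activity values are toy placeholders standing in for real samples from `load_dataset(...)["train"]`.

```python
from collections import defaultdict

# Toy records mimicking the dataset schema; real records would come
# from load_dataset("InstaDeepAI/NTv3_enhancer_generation")["train"].
records = [
    {"enhancer_sequence": "ACGT" * 10, "promoter": "RpS12", "activity": 1.2},
    {"enhancer_sequence": "ACGT" * 10, "promoter": "DSCP",  "activity": 3.4},
    {"enhancer_sequence": "TTAA" * 10, "promoter": "RpS12", "activity": -0.5},
    {"enhancer_sequence": "TTAA" * 10, "promoter": "DSCP",  "activity": 0.1},
]

# Group the two promoter-context measurements of each enhancer.
by_enhancer = defaultdict(dict)
for rec in records:
    by_enhancer[rec["enhancer_sequence"]][rec["promoter"]] = rec["activity"]

# Promoter specificity: activity in the developmental (DSCP) context
# minus activity in the housekeeping (RpS12) context.
specificity = {
    enh: acts["DSCP"] - acts["RpS12"]
    for enh, acts in by_enhancer.items()
    if {"RpS12", "DSCP"} <= acts.keys()
}
```

A positive difference indicates an enhancer that is more active with the developmental core promoter than with the housekeeping one.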
### Source Data

- **Original Study**: de Almeida et al., "DeepSTARR predicts enhancer activity from DNA sequence and enables the de novo design of synthetic enhancers" (Nature Genetics, 2022)
- **Organism**: *Drosophila melanogaster* (fruit fly)
- **Assay**: STARR-seq (Self-Transcribing Active Regulatory Region sequencing)

## Dataset Schema

| Field | Type | Description |
|-------|------|-------------|
| `full_sequence` | `string` | 4096 bp sequence with the enhancer inserted into a promoter backbone |
| `enhancer_sequence` | `string` | Raw 249 bp enhancer sequence |
| `promoter` | `string` | Promoter type: `"RpS12"` (housekeeping) or `"DSCP"` (developmental) |
| `discrete_label` | `int` | Discretized activity bin (0-4) |
| `activity` | `float` | Original log2 enrichment value from STARR-seq |

### Discrete Label Bins

Activity values are discretized using bin edges `[-2.5, 0, 2.5, 5]`:

| Label | Activity Range | Interpretation |
|-------|----------------|----------------|
| 0 | activity < -2.5 | Very low / silencer |
| 1 | -2.5 <= activity < 0 | Low / inactive |
| 2 | 0 <= activity < 2.5 | Moderate |
| 3 | 2.5 <= activity < 5 | High |
| 4 | activity >= 5 | Very high |

### Promoter Contexts

- **RpS12**: Housekeeping promoter (ribosomal protein S12); enhancer inserted at position 968
- **DSCP**: Developmental core promoter (Drosophila Synthetic Core Promoter); enhancer inserted at position 1018

## Dataset Statistics

| Split | Samples | Description |
|-------|---------|-------------|
| train | 804,592 | Training set (402,296 enhancers x 2 promoters) |
| validation | 81,140 | Validation set (40,570 enhancers x 2 promoters) |
| test | 82,372 | Test set (41,186 enhancers x 2 promoters) |

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load all splits
dataset = load_dataset("InstaDeepAI/NTv3_enhancer_generation")

# Access specific splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]
```

### Accessing Samples

```python
# Get a single sample
sample = dataset["train"][0]

print(f"Promoter: {sample['promoter']}")
print(f"Activity: {sample['activity']:.2f}")
print(f"Discrete label: {sample['discrete_label']}")
print(f"Enhancer length: {len(sample['enhancer_sequence'])}")
print(f"Full sequence length: {len(sample['full_sequence'])}")
```

### Filtering by Promoter

```python
# Get only RpS12 (housekeeping) samples
rps12_data = dataset["train"].filter(lambda x: x["promoter"] == "RpS12")

# Get only DSCP (developmental) samples
dscp_data = dataset["train"].filter(lambda x: x["promoter"] == "DSCP")
```

### Filtering by Activity Level

```python
# Get high-activity enhancers (discrete_label >= 3)
high_activity = dataset["train"].filter(lambda x: x["discrete_label"] >= 3)

# Get enhancers in a specific activity range
moderate_to_high = dataset["train"].filter(lambda x: 0 <= x["activity"] < 5)
```

### Streaming (for large-scale processing)

```python
from datasets import load_dataset

# Stream without downloading the entire dataset
dataset = load_dataset("InstaDeepAI/NTv3_enhancer_generation", streaming=True)

for sample in dataset["train"]:
    # Process the sample
    pass
```
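The `discrete_label` field can be reproduced directly from `activity` using the bin edges `[-2.5, 0, 2.5, 5]` documented above. A minimal sketch of that mapping (the helper name `activity_to_label` is introduced here for illustration and is not part of the dataset or the `datasets` library):

```python
from bisect import bisect_right

# Bin edges from the Discrete Label Bins table.
EDGES = [-2.5, 0.0, 2.5, 5.0]

def activity_to_label(activity: float) -> int:
    """Map a log2 enrichment value to its discrete bin (0-4).

    bisect_right implements the documented half-open intervals,
    e.g. -2.5 <= activity < 0 maps to label 1.
    """
    return bisect_right(EDGES, activity)
```

For example, `activity_to_label(3.1)` returns `3` ("High"), which can be handy for sanity-checking samples or re-binning activity with custom edges.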