SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5

This is a sentence-transformers model finetuned from nomic-ai/nomic-embed-text-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/nomic-embed-text-v1.5
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~0.1B parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'NomicBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
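
The Pooling module above averages the transformer's token embeddings while ignoring padding positions (pooling_mode_mean_tokens). A minimal NumPy sketch of that operation on made-up toy tensors:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence axis, skipping padding."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid divide-by-zero
    return summed / counts

# Toy batch: 2 sequences of 3 tokens with 4-dim embeddings.
tokens = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
mask = np.array([[1, 1, 0], [1, 1, 1]])  # last token of sequence 0 is padding
pooled = mean_pool(tokens, mask)
print(pooled.shape)  # (2, 4)
```

In the real model the same averaging runs over 768-dimensional NomicBERT token embeddings, producing the 768-dimensional sentence vector.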

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference. Note that the inputs below carry the task prefixes "search_query: " and "search_document: " that nomic-embed models expect.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Farrukhceo/litfx-nomic-embed")
# Run inference
sentences = [
    'search_query: whats collected by litfx??',
    'search_document: Privacy policy: LITFX collects account information (name, email), usage data (course progress, journal entries, chat messages), and device information for security. Card details are NOT stored on LITFX servers — payments are handled by a secure payment processor. AI Mentor and journal analysis use your data only for generating responses — your data is NOT used to train AI models. The platform uses industry-standard encryption and security protections.',
    'search_document: A shooting star has a small body at the bottom with a long upper wick (at least 2x the body). When it appears after an uptrend near resistance, it suggests sellers rejected higher prices. The same shape at the bottom of a downtrend is called an inverted hammer and can signal potential buying interest. Always confirm with the next candle close and surrounding structure.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.4414, -0.1091],
#         [ 0.4414,  1.0000, -0.0413],
#         [-0.1091, -0.0413,  1.0000]])
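
Because this model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to 512, 256, or 128 dimensions with modest quality loss; Sentence Transformers exposes this through the truncate_dim argument of SentenceTransformer. The sketch below shows the underlying operation on synthetic vectors standing in for model.encode output: keep a prefix of each embedding, then re-normalize so cosine similarity remains a plain dot product.

```python
import numpy as np

def truncate_and_normalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each row, then re-normalize to unit length."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Synthetic stand-ins for model.encode(...) output: 3 texts, 768 dims.
rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))

small = truncate_and_normalize(full, 256)
print(small.shape)               # (3, 256)
similarities = small @ small.T   # cosine similarity matrix of the truncated vectors
```

Equivalently, SentenceTransformer("Farrukhceo/litfx-nomic-embed", truncate_dim=256) returns 256-dimensional embeddings directly.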

Training Details

Training Dataset

Unnamed Dataset

  • Size: 2,030 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples (all columns are strings):
    • anchor: min 8 tokens, mean 17.14 tokens, max 36 tokens
    • positive: min 29 tokens, mean 88.72 tokens, max 128 tokens
    • negative: min 29 tokens, mean 88.69 tokens, max 128 tokens
  • Samples:
    • Sample 1
      • anchor: search_query: Why is the previous close important when calculating True Range?
      • positive: search_document: Average True Range (ATR) is a volatility indicator that measures the average range of price movement over a specified period, typically 14 periods. Unlike most indicators, ATR does not indicate direction. It measures how much an instrument moves on average per period, providing a volatility baseline. True Range is the greatest of: current high minus current low, absolute value of current high minus previous close, or absolute value of current low minus previous close. The inclusion of the previous close accounts for gaps between periods. ATR is then the average of True Range values over the lookback period. ATR is practical for position sizing and stop loss placement. If an instrument has an ATR of 80 pips on the daily chart, a stop loss of 15 pips is likely too tight relative to normal daily movement and has a high probability of being hit by normal price fluctuation. A stop loss of 1 to 1.5 times ATR provides room for normal volatility while still defining risk. ATR-...
      • negative: search_document: NFP releases can trigger rapid repricing and volatility spikes. Traders should plan for uncertainty and avoid impulsive execution around high-impact data.
    • Sample 2
      • anchor: search_query: Can I post multiple images in a single message?
      • positive: search_document: Community has 9 or more channels organized by topic. Trading channels: entries, analysis, front-testing, crypto, stock, questions, stop-losses, fundamental-analysis, trade-ideas. General channels: announcement, general-chat, homework. Results channel and sunday-talk (weekly mindset). Premium-gated channels: homework, front-testing, entries, stop-losses, questions. Features include text plus multiple images, trade result cards, emoji reactions, reply threads, pinned messages, message editing, and admin moderation tools. Exclusive premium community channels are English-language.
      • negative: search_document: Community moderation features: Google Cloud Vision SafeSearch auto-rejects adult, violent, or racy uploaded images. Messages with 3 or more reports are auto-hidden pending review. Community bot with engagement scheduler, filters, and slash commands. Admin audit trails for all moderation actions. User presence shows online, idle, and DND states. Community profiles display role (CEO, admin, moderator, student), subscription tier, TQS score, and badges.
    • Sample 3
      • anchor: search_query: What qualities or considerations should I keep in mind when choosing a trading style?
      • positive: search_document: Scalping uses very short timeframes (M1-M5) for quick small profits, requiring fast execution. Day trading opens and closes positions within the same day, avoiding overnight risk. Swing trading holds positions for days to weeks, capturing larger price moves. Position trading holds for weeks to months based on higher timeframe trends. Each style suits different schedules, risk tolerances, and personalities.
      • negative: search_document: The forex market operates 24 hours a day, five days a week, through four major trading sessions that overlap across global time zones. The Sydney session opens first (22:00 UTC) and is typically the quietest, with lower volatility and tighter ranges on most pairs. The Tokyo session (00:00 UTC) brings more activity to JPY pairs and Asia-Pacific currencies, with moderate volatility. The London session (08:00 UTC) is the most liquid and volatile session, accounting for the largest share of daily forex volume. Major pairs like EURUSD and GBPUSD see their strongest moves during London. The New York session (13:00 UTC) overlaps with London for several hours, creating the highest-activity window of the day. This London-New York overlap (13:00-17:00 UTC) typically produces the most significant price movements, widest ranges, and highest trading volume. Understanding sessions matters because spread costs, volatility, and liquidity all change throughout the day. Trading during l...
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
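
As a rough illustration (not the library's actual implementation), MultipleNegativesRankingLoss is an in-batch cross-entropy over scaled cosine similarities, and MatryoshkaLoss applies that inner loss to truncated prefixes of the embeddings and sums the results with the weights above. A self-contained NumPy sketch on synthetic embeddings:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch MultipleNegativesRankingLoss: anchor i's positive is row i;
    every other positive in the batch serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                                       # (batch, batch)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                              # cross-entropy on matched pairs

def matryoshka_loss(anchors, positives, dims=(768, 512, 256, 128), weights=(1, 1, 1, 1)):
    """Weighted sum of the inner loss over truncated embedding prefixes."""
    return sum(w * mnr_loss(anchors[:, :d], positives[:, :d])
               for d, w in zip(dims, weights))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 768))
positives = anchors + 0.1 * rng.normal(size=(4, 768))  # stand-ins for paraphrase pairs
print(matryoshka_loss(anchors, positives))
```

Training against every prefix length is what makes the truncated embeddings described under Usage remain useful for retrieval.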
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 4
  • gradient_accumulation_steps: 4
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates
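
For reference, the effective batch size and optimizer steps per epoch implied by these settings (assuming a single device) work out as follows:

```python
import math

# Effective batch size = per-device batch size x gradient accumulation steps
# (x number of devices, assumed to be 1 here).
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 16

# With 2,030 training samples and dataloader_drop_last: False,
# the final partial batch still counts, so steps per epoch round up.
steps_per_epoch = math.ceil(2030 / effective_batch_size)
print(steps_per_epoch)  # 127
```

This is consistent with the Training Logs below, where step 630 falls at epoch 4.9606 (630 / 127 ≈ 4.96).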

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0787 10 1.7278
0.1575 20 1.36
0.2362 30 1.3364
0.3150 40 1.0988
0.3937 50 0.8147
0.4724 60 0.7897
0.5512 70 1.0812
0.6299 80 0.6983
0.7087 90 1.0156
0.7874 100 0.7213
0.8661 110 1.0232
0.9449 120 0.9498
1.0236 130 0.9438
1.1024 140 0.3809
1.1811 150 0.305
1.2598 160 0.2767
1.3386 170 0.2916
1.4173 180 0.4454
1.4961 190 0.1269
1.5748 200 0.5194
1.6535 210 0.2117
1.7323 220 0.3201
1.8110 230 0.2843
1.8898 240 0.2896
1.9685 250 0.3519
2.0472 260 0.306
2.1260 270 0.0523
2.2047 280 0.0743
2.2835 290 0.1769
2.3622 300 0.0635
2.4409 310 0.1496
2.5197 320 0.1572
2.5984 330 0.2366
2.6772 340 0.0982
2.7559 350 0.201
2.8346 360 0.0738
2.9134 370 0.1147
2.9921 380 0.1934
3.0709 390 0.0377
3.1496 400 0.0868
3.2283 410 0.1168
3.3071 420 0.0919
3.3858 430 0.0835
3.4646 440 0.0841
3.5433 450 0.1205
3.6220 460 0.0294
3.7008 470 0.0525
3.7795 480 0.0754
3.8583 490 0.3049
3.9370 500 0.3836
4.0157 510 0.029
4.0945 520 0.1829
4.1732 530 0.1849
4.2520 540 0.113
4.3307 550 0.0212
4.4094 560 0.0111
4.4882 570 0.1175
4.5669 580 0.0695
4.6457 590 0.222
4.7244 600 0.0206
4.8031 610 0.0396
4.8819 620 0.0366
4.9606 630 0.2206

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.3
  • Transformers: 4.57.6
  • PyTorch: 2.9.0+cu126
  • Accelerate: 1.12.0
  • Datasets: 4.6.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}