
Marin Token Counts

Token counts for all datasets used in Marin pretraining runs.

Schema

Column        Type    Description
dataset       string  Dataset identifier
marin_tokens  int     Number of tokens after tokenization
category      string  Content domain (web, code, math, academic, books, etc.)
synthetic     bool    Whether the data is LLM-generated or LLM-translated
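To illustrate how the schema can be used, here is a minimal sketch of summing token counts per category in plain Python. The rows below are hypothetical examples following the schema, not actual values from the dataset.

```python
from collections import defaultdict

# Hypothetical rows following the schema above (not real dataset values).
rows = [
    {"dataset": "nemotron-cc-hq", "marin_tokens": 1200, "category": "web", "synthetic": False},
    {"dataset": "starcoder-sample", "marin_tokens": 800, "category": "code", "synthetic": False},
    {"dataset": "translated-fineweb", "marin_tokens": 500, "category": "web", "synthetic": True},
]

# Sum marin_tokens per category.
totals = defaultdict(int)
for row in rows:
    totals[row["category"]] += row["marin_tokens"]

print(dict(totals))
```

The same aggregation works unchanged on the real rows once the dataset is loaded, e.g. via the `datasets` library.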

Categories

  • web — Quality-classified Common Crawl text (Nemotron-CC)
  • code — Source code and code-related documents
  • math — Math-focused extractions and competition problems
  • academic — Peer-reviewed papers and abstracts
  • reasoning — Cross-domain reasoning and formal logic
  • books — Digitized public domain and open access books
  • legal — Court decisions, regulations, patents
  • government — Parliamentary proceedings and publications
  • education — Open educational resources and textbooks
  • encyclopedic — Wiki-style reference content
  • forum — Q&A sites and chat logs
  • documents — PDF-extracted document text
  • translation — Parallel translation corpora
  • news — CC-licensed news articles
  • media — Transcribed audio/video
  • supervised — Curated task datasets
  • reference — Niche reference sites
  • general — General-domain content
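The `synthetic` flag separates LLM-generated or LLM-translated data from natural text across all of these categories. A minimal sketch of computing the synthetic share of tokens, again over hypothetical rows rather than actual dataset values:

```python
# Hypothetical rows following the card's schema (not real dataset values).
rows = [
    {"dataset": "nemotron-cc-hq", "marin_tokens": 1500, "category": "web", "synthetic": False},
    {"dataset": "translated-wiki", "marin_tokens": 500, "category": "encyclopedic", "synthetic": True},
]

total = sum(r["marin_tokens"] for r in rows)
synthetic_tokens = sum(r["marin_tokens"] for r in rows if r["synthetic"])
share = synthetic_tokens / total

print(f"synthetic share: {share:.0%}")
```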

Updates

This dataset is updated by running experiments/count_tokens.py from the Marin repo, which reads tokenized dataset stats from GCS and pushes the results here.
