LEX_AMBIENTAL_PAN

Machine-readable corpus of 119 Panamanian environmental legal instruments parsed into 2,955 structured articles, with multilingual vector embeddings and a semantic relationship graph.

Quick start

from huggingface_hub import hf_hub_download
import json
import numpy as np

# Instrument inventory
inv_path = hf_hub_download("megabiodiversity/lex-ambiental-pan",
                           "data/inventario_leyes_pan.json",
                           repo_type="dataset")
with open(inv_path, encoding="utf-8") as f:
    inventario = json.load(f)
print(f"{len(inventario['instrumentos'])} instruments")

# Embeddings (2955 × 768, float32, L2-normalized)
npz_path = hf_hub_download("megabiodiversity/lex-ambiental-pan",
                           "data/articulos_embeddings.npz",
                           repo_type="dataset")
data = np.load(npz_path, allow_pickle=True)
vecs, ids = data["embeddings"], data["art_ids"]
print(f"Shape: {vecs.shape}")

# Article index
idx_path = hf_hub_download("megabiodiversity/lex-ambiental-pan",
                           "data/articulos_index.jsonl",
                           repo_type="dataset")
with open(idx_path, encoding="utf-8") as f:
    index = [json.loads(line) for line in f]

# Cosine-similarity search (vectors are unit-length, so the dot product is the cosine)
query_vec = vecs[0]
sims = vecs @ query_vec
top5 = sims.argsort()[-6:][::-1][1:]  # take top 6, drop the query itself
for i in top5:
    print(ids[i], round(float(sims[i]), 4))

Dataset structure

lex-ambiental-pan/
├── data/
│   ├── inventario_leyes_pan.json     # 131 instruments, v4.0
│   ├── articulos_embeddings.npz      # 2955 × 768 float32
│   ├── articulos_index.jsonl         # row metadata
│   └── embeddings_meta.json          # model info
└── TEXTOS/
    └── <INSTRUMENT_ID>/
        └── articulos.json            # parsed articles
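A minimal sketch of how the per-instrument articulos.json files might be consumed. The record fields (`art_id`, `texto`) are an assumption for illustration, not a documented schema; check a real file under TEXTOS/ for the actual field names.

```python
# Hypothetical shape of one TEXTOS/<INSTRUMENT_ID>/articulos.json payload;
# the real field names may differ.
sample = [
    {"art_id": "ART-0001", "texto": "Texto del articulo 1..."},
    {"art_id": "ART-0002", "texto": "Texto del articulo 2..."},
]

def build_lookup(records):
    """Map article id -> article text, e.g. for joining embedding hits back to text."""
    return {r["art_id"]: r["texto"] for r in records}

lookup = build_lookup(sample)
print(len(lookup))  # 2
```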

Coverage

Category                           Count
Instruments (inventory)            131
Instruments with articles          119
Total articles                     2,955
International instruments          ~40
National Panamanian instruments    ~79

The instruments span constitutional provisions, framework environmental laws, sector laws (fisheries, forests, mining, water, protected areas), executive decrees, international conventions (CBD, CITES, Ramsar, ILO C169, UNCLOS, Escazú, BBNJ), EU instruments, and indigenous peoples' rights instruments.

Embeddings

Model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2 (768 dimensions, multilingual). Vectors are L2-normalized, so cosine similarity equals the dot product.
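A small self-contained check (with synthetic vectors, not the dataset's embeddings) of why the dot product suffices once rows are L2-normalized:

```python
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 768)).astype(np.float32)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # L2-normalize each row

# For unit-length vectors, the dot product equals the full cosine formula.
dot = float(vecs[0] @ vecs[1])
cosine = dot / (float(np.linalg.norm(vecs[0])) * float(np.linalg.norm(vecs[1])))
print(np.isclose(dot, cosine))  # True
```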

Citation

@dataset{lex_ambiental_pan_2026,
  author    = {Rodríguez Neira, Juan Carlos},
  title     = {{LEX\_AMBIENTAL\_PAN: A Structured Corpus of Panamanian Environmental Law}},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.19637411},
  url       = {https://doi.org/10.5281/zenodo.19637411}
}

License

CC BY 4.0 — see LICENSE file.
