# NeuroVLA-CL-LoRA (LIBERO-Goal)

LoRA continual-learning checkpoint for the brain-inspired NeuroVLA model, released with the AlphaBrain framework. Provided for direct download and evaluation — no retraining needed.

NeuroVLA is a Vision-Language-Action (VLA) model in which a Qwen2.5-VL backbone feeds a layer-wise Q-Former and a Spiking Neural Network (SNN) action head. This checkpoint was fine-tuned sequentially over the 10 LIBERO-Goal tasks using Low-Rank Adaptation (LoRA, r=32) on the VLM, full-parameter training of the Q-Former and SNN head, and Experience Replay (ER, buffer of 1000 samples per task) to mitigate catastrophic forgetting. Only the final task checkpoint is shipped.
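The ER mechanism is implemented inside the AlphaBrain training loop; as a rough illustration only, here is a minimal sketch of a capped replay buffer plus a 0.5-replay-ratio batch mix. All names are hypothetical, and "replay ratio 0.5" is taken here to mean the fraction of each batch drawn from past-task data:

```python
import random


class ReplayBuffer:
    """Fixed-capacity buffer; reservoir sampling keeps a uniform subsample."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample  # replace uniformly at random

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))


def mixed_batch(current, buffer, replay_ratio=0.5):
    """Mix a current-task batch with replayed past-task samples."""
    n_replay = int(len(current) * replay_ratio)
    return current[: len(current) - n_replay] + buffer.sample(n_replay)
```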

## Overview

| | |
|---|---|
| Architecture | NeuroVLA (Qwen2.5-VL-3B + Q-Former + SNN head) |
| Base VLM | Qwen/Qwen2.5-VL-3B-Instruct |
| Parameters | ~3.0 B total · ~90 M trainable |
| LoRA | r = 32, α = 16, dropout 0.05, target_modules: all-linear |
| Continual learning | Experience Replay, buffer 1000/task, replay ratio 0.5 |
| Task stream | LIBERO-Goal · 10 tasks · 5,000 steps/task |
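With r = 32 and α = 16, the effective LoRA scaling is α/r = 0.5. A small numerical sketch of the low-rank update (illustrative only; in practice the adapter is applied by PEFT inside the framework, and layer dimensions here are made up):

```python
import numpy as np

d_out, d_in, r, alpha = 64, 48, 32, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init -> adapter starts as a no-op

scaling = alpha / r                     # 16 / 32 = 0.5
W_eff = W + scaling * (B @ A)           # merged weight used at inference

# extra trainable parameters this adapter adds to one linear layer
n_lora = r * (d_in + d_out)
```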

## Results

| Metric | Value |
|---|---|
| Average Success Rate (Avg SR) | ~28 % |
| Negative Backward Transfer (NBT, ↑ better) | +0.25 |
| Naive sequential fine-tuning baseline (no ER) | < 10 % |

Numbers are conservative estimates over our internal runs; per-run variance is a few percentage points, so reproductions somewhat above or below the reported values are expected. Please file an issue or PR with details.
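Continual-learning metrics are derived from the matrix R[i][j] of success rates on task j measured after training on task i. Exact NBT definitions vary between papers, so the sketch below uses the common Avg SR / backward-transfer formulation and is only illustrative of how such numbers are computed:

```python
def avg_sr(R):
    """Mean success rate over all tasks, evaluated after the final task."""
    final = R[-1]
    return sum(final) / len(final)


def backward_transfer(R):
    """Mean change in earlier tasks' SR between just-after-training and the end."""
    T = len(R)
    return sum(R[-1][j] - R[j][j] for j in range(T - 1)) / (T - 1)


# toy 3-task example: rows = after training task i, columns = evaluated task j
R = [
    [0.8, 0.0, 0.0],
    [0.6, 0.7, 0.0],
    [0.5, 0.6, 0.9],
]
```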

## Files

```
├── README.md                                       model card
├── config.yaml                                     training config (OmegaConf)
├── dataset_statistics.json                         action normalisation (required for inference)
├── task_9_id9_steps_50000_lora_adapter/            LoRA adapter weights + config
└── task_9_id9_steps_50000_action_model.pt          non-VLM weights (Q-Former, SNN head)
```
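dataset_statistics.json holds the per-dimension action statistics used to map the model's normalised outputs back to robot action ranges. The schema below (q01/q99 percentile keys, outputs normalised to [-1, 1]) is an assumption borrowed from common VLA pipelines, not confirmed by this repo; check the shipped file for the actual keys:

```python
def unnormalize_action(action, stats):
    # ASSUMED schema: per-dimension 1st/99th percentile bounds
    lo, hi = stats["q01"], stats["q99"]
    # map each dimension from [-1, 1] back to [lo, hi]
    return [(a + 1.0) / 2.0 * (h - l) + l for a, l, h in zip(action, lo, hi)]


# toy two-dimensional example
stats = {"q01": [-1.0, 0.0], "q99": [1.0, 2.0]}
```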

## Usage

```shell
# Install the AlphaBrain framework
git clone https://github.com/AlphaBrainGroup/AlphaBrain.git
cd AlphaBrain
pip install -e .

# Point the framework at the base VLM weights
export PRETRAINED_MODELS_DIR=/path/to/models   # must contain Qwen2.5-VL-3B-Instruct/

# Download this checkpoint
huggingface-cli download AlphaBrainGroup/neurovla-cl-lora-libero-goal \
    --local-dir ./neurovla_cl_lora

# Merge the LoRA adapter and the action-model weights into a single checkpoint
python -m AlphaBrain.training.trainer_utils.peft.merge_lora_checkpoint \
    --base_config      configs/continual_learning/neurovla_cl_lora_libero.yaml \
    --lora_adapter_dir ./neurovla_cl_lora/task_9_id9_steps_50000_lora_adapter \
    --action_model_pt  ./neurovla_cl_lora/task_9_id9_steps_50000_action_model.pt \
    --output_path      ./neurovla_cl_lora_final.pt

# Serve the merged policy
python deployment/model_server/server_policy.py \
    --ckpt_path ./neurovla_cl_lora_final.pt --port 10093 --use_bf16
```
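Once the server is up on port 10093, the policy can be queried over HTTP. The payload layout sketched here is purely hypothetical (field names and the endpoint are not confirmed by the repo; consult server_policy.py for the real API); it only shows the shape of a client call:

```python
import json


def build_request(instruction, image_b64):
    # HYPOTHETICAL payload layout; check server_policy.py for actual field names
    return {"instruction": instruction, "observation": {"image": image_b64}}


payload = build_request("put the bowl on the plate", "<base64-encoded frame>")
body = json.dumps(payload)
# then POST it, e.g. with requests (not executed here):
# requests.post("http://localhost:10093/act", data=body)
```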

## Reproduction

```shell
bash scripts/run_continual_learning_scripts/run_cl_train.sh \
    --yaml configs/continual_learning/neurovla_cl_lora_libero.yaml
```

## License

MIT — see the parent repository.

## Citation

```bibtex
@misc{alphabrain2026,
  title  = {AlphaBrain: A Modular Open-Source Framework for Embodied Intelligence Research},
  author = {AlphaBrain Team},
  year   = {2026},
  url    = {https://github.com/AlphaBrainGroup/AlphaBrain}
}
```