Nemotron-Math: Efficient Long-Context Distillation of Mathematical Reasoning from Multi-Mode Supervision Paper • 2512.15489 • Published Dec 17, 2025 • 8
Article • Alpie-Core: A 4-Bit Reasoning Model Setting New Global Standards • Sep 24, 2025 • 2
Prithvi-Complimentary Adaptive Fusion Encoder (CAFE): unlocking full-potential for flood inundation mapping Paper • 2601.02315 • Published 13 days ago • 1
Article • Understanding Low-Rank Adaptation (LoRA): A Revolution in Fine-Tuning Large Language Models • 16 days ago • 6
Article • Understanding NPUs with OpenVINO: Real Capabilities, Limitations & ML Use Cases • 19 days ago • 1
LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention Paper • 2502.08213 • Published Feb 12, 2025 • 5
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models Paper • 2403.13269 • Published Mar 20, 2024 • 1
How to Train a Leader: Hierarchical Reasoning in Multi-Agent LLMs Paper • 2507.08960 • Published Jul 11, 2025 • 1
LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report Paper • 2405.00732 • Published Apr 29, 2024 • 122
Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning Paper • 2511.16043 • Published Nov 20, 2025 • 108
BhashaBench V1: A Comprehensive Benchmark for the Quadrant of Indic Domains Paper • 2510.25409 • Published Oct 29, 2025 • 3
Collection • BhashaBench-V1: a domain-specific, multi-task, multilingual benchmark designed to evaluate large language models in India-centric contexts • 6 items • Updated Oct 30, 2025 • 2