---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- world-model
- reinforcement-learning
- agent
- alfworld
---

# *From Word to World*: Can Large Language Models be Implicit Text-based World Models?

[![arXiv](https://img.shields.io/badge/arXiv-2512.18832-b31b1b?logo=arXiv)](https://arxiv.org/abs/2512.18832)
[![Blog](https://img.shields.io/badge/Blog-Post-blue?logo=rss&logoColor=white)](https://macaron.im/mindlab/research/how-world-models-unlock-scalable-agentic-rl)
[![HF Paper](https://img.shields.io/badge/Paper-HuggingFace-yellow?logo=huggingface&logoColor=white)](https://huggingface.co/papers/2512.18832)
[![Models](https://img.shields.io/badge/Models-HuggingFace-yellow?logo=huggingface&logoColor=white)](https://huggingface.co/collections/X1AOX1A/llm-as-world-models)
[![Dataset](https://img.shields.io/badge/Dataset-HuggingFace-yellow?logo=huggingface&logoColor=white)](https://huggingface.co/datasets/X1AOX1A/LLMasWorldModels)

This repository contains the **WorldModel-Alfworld-Qwen2.5-7B** checkpoint, a large language model fine-tuned to serve as an implicit world model for the **ALFWorld** text-based environment. It is part of the research presented in the paper "[From Word to World: Can Large Language Models be Implicit Text-based World Models?](https://huggingface.co/papers/2512.18832)".

## Introduction

World models offer a potential way to improve learning efficiency in agentic reinforcement learning through simulated experience. This work reinterprets language modeling as next-state prediction under interaction. Across several environments, the authors find that sufficiently trained world models:

- Maintain coherent latent states.
- Scale predictably with data and model size.
- Improve agent performance via action verification and synthetic trajectory generation.
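## Usage

Since the checkpoint is trained for next-state prediction, a natural way to query it is to feed the current observation and a candidate action and decode the predicted next observation. The sketch below uses the standard `transformers` text-generation pipeline; note that the prompt template and the exact model id are illustrative assumptions here, not the format used in training (see the GitHub repository for the authors' templates).

```python
def build_prompt(observation: str, action: str) -> str:
    """Format a (state, action) pair as a next-state prediction query.

    NOTE: this template is a hypothetical illustration; the actual
    training prompt format is documented in the project repository.
    """
    return (
        "You are a world model for a text-based environment.\n"
        f"Current observation: {observation}\n"
        f"Action: {action}\n"
        "Next observation:"
    )


if __name__ == "__main__":
    from transformers import pipeline

    # Model id assumed from the collection naming; adjust to the
    # actual repository id if it differs.
    generator = pipeline(
        "text-generation",
        model="X1AOX1A/WorldModel-Alfworld-Qwen2.5-7B",
    )
    prompt = build_prompt(
        "You are in the kitchen. The fridge is closed.", "open fridge"
    )
    result = generator(prompt, max_new_tokens=64)
    print(result[0]["generated_text"])
```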
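One of the agent-side benefits listed above, action verification, amounts to rolling each candidate action through the world model and keeping only the ones whose predicted next state passes a check. The loop below is a minimal sketch of that idea; the function names and interfaces (`world_model`, `is_valid`) are placeholders, not the paper's API.

```python
def verify_actions(observation, candidate_actions, world_model, is_valid):
    """Filter candidate actions by simulating each with the world model.

    `world_model(observation, action)` predicts the next observation
    (e.g. by prompting the checkpoint above); `is_valid` checks whether
    that predicted state is acceptable. Both are caller-supplied stubs.
    """
    kept = []
    for action in candidate_actions:
        predicted_next_state = world_model(observation, action)
        if is_valid(predicted_next_state):
            kept.append((action, predicted_next_state))
    return kept
```

In practice the surviving actions (or their simulated rollouts) can then be handed to the policy, which is also how synthetic trajectories for RL training could be collected.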
## Resources

- **GitHub Repository**: [X1AOX1A/Word2World](https://github.com/X1AOX1A/Word2World)
- **Paper**: [arXiv:2512.18832](https://arxiv.org/abs/2512.18832)
- **Dataset**: [LLMasWorldModels](https://huggingface.co/datasets/X1AOX1A/LLMasWorldModels)
- **Blog Post**: [How World Models Unlock Scalable Agentic RL](https://macaron.im/mindlab/research/how-world-models-unlock-scalable-agentic-rl)

## Citation

```bibtex
@misc{li2025wordworldlargelanguage,
  title={From Word to World: Can Large Language Models be Implicit Text-based World Models?},
  author={Yixia Li and Hongru Wang and Jiahao Qiu and Zhenfei Yin and Dongdong Zhang and Cheng Qian and Zeping Li and Pony Ma and Guanhua Chen and Heng Ji and Mengdi Wang},
  year={2025},
  eprint={2512.18832},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.18832},
}
```