liaojc committed (verified) · Commit 6354df0 · Parent(s): 4eb6949

update README.md

Files changed (1):
  1. README.md +49 -18
README.md CHANGED
@@ -1,21 +1,50 @@
- ---
- license: apache-2.0
- language:
- - en
- base_model:
- - baidu/ERNIE-4.5-300B-A47B-Base-PT
- ---
  # ERNIE-4.5-300B-A47B-Base

  ## ERNIE 4.5 Highlights

  The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:
- - **Multimodal MoE Pretraining:** Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a heterogeneous MoE structure, incorporated three-dimensional rotary embeddings, and employed router orthogonal loss and multimodal token-balanced loss. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.
- - **Scaling-Efficient Architecture and Infrastructure:** To train the large multimodal MoE models efficiently, we introduce a novel heterogeneous hybrid parallelism and multi-level load balancing strategy for efficient training of ERNIE 4.5 models. By using on-device expert parallelism, memory-efficient pipeline scheduling, and FP8 mixed precision, we achieve ideal pre-training performance. For inference, we propose a quantization method with collaborative parallelism among multiple experts to achieve lossless quantization. Built on PaddlePaddle, ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.
- - **Modality-Specific Post-training:** To meet the diverse requirements of real-world applications, we fine-tuned variants of the pretrained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation. The VLMs focuses on visual-language understanding and supports both thinking and no-thinking mode. Each model employed a combination of Supervised Fine-tuning (SFT), Direct Preference Optimization (DPO) or a modified reinforcement learning method named Unified Preference Optimization (UPO) for post-training, using targeted datasets aligned with its intended usage scenario.

- To ensure the stability of multimodal joint training, we adopt a staged training strategy. In the first and second stage, we train only the text-related parameters, enabling the model to develop strong fundamental language understanding as well as long-text processing capabilities. The final multimodal stage extends capabilities to images and videos by introducing additional parameters including a ViT for image feature extraction, an adapter for feature transformation, and visual experts for multimodal understanding. At this stage, text and visual modalities mutually enhance each other. After pretraining trillions tokens, we extracted the text-related parameters and finally obtained ERNIE-4.5-300B-A47B-Base。

  ## Model Overview

  ERNIE-4.5-300B-A47B-Base is a text MoE Base model, with 300B total parameters and 47B activated parameters for each token. The following are the model configuration details:

  | Key | Value |
@@ -29,19 +58,21 @@ ERNIE-4.5-300B-A47B-Base is a text MoE Base model, with 300B total parameters an
  | Vision Experts (Total / Activated) | 64 / 8 |
  | Context Length | 131072 |

-
  ## Quickstart

  ### Model Finetuning with ERNIEKit

  [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) is a training toolkit based on PaddlePaddle, specifically designed for the ERNIE series of open-source large models. It provides comprehensive support for scenarios such as instruction fine-tuning (SFT, LoRA) and alignment training (DPO), ensuring optimal performance.

  Usage Examples:

  ```bash
  # SFT
  erniekit train --stage SFT --model_name_or_path baidu/ERNIE-4.5-300B-A47B-Base-Paddle --train_dataset_path your_dataset_path
  # DPO
  erniekit train --stage DPO --model_name_or_path baidu/ERNIE-4.5-300B-A47B-Base-Paddle --train_dataset_path your_dataset_path
  ```

  For more detailed examples, including SFT with LoRA, multi-GPU configurations, and advanced scripts, please refer to the examples folder within the [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) repository.

  ### Using FastDeploy
@@ -65,12 +96,13 @@ python -m fastdeploy.entrypoints.openai.api_server \
  ### Using `transformers` library

  The following code snippet illustrates how to use the model to generate content based on given inputs.

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_name = "baidu/ERNIE-4.5-300B-A47B-Base-PT"
  tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
- model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", trust_remote_code=True)

  prompt = "Large language model is"
  model_inputs = tokenizer([prompt], add_special_tokens=False, return_tensors="pt").to(model.device)
@@ -83,21 +115,20 @@ result = tokenizer.decode(generated_ids[0].tolist(), skip_special_tokens=True)
  print("result:", result)
  ```

- ### Using vLLM
- vLLM is currently being adapted, priority can be given to using our fork repository [vllm](https://github.com/CSWYF3634076/vllm/tree/ernie)
  ```bash
  # 80G * 16 GPU
  vllm serve baidu/ERNIE-4.5-300B-A47B-Base-PT --trust-remote-code
  ```

  ```bash
  # FP8 online quantization, 80G * 8 GPU
  vllm serve baidu/ERNIE-4.5-300B-A47B-Base-PT --trust-remote-code --quantization fp8
  ```

-
-
  ## License

  The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
@@ -116,4 +147,4 @@ If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly c
  primaryClass={cs.CL},
  url={}
  }
- ```
 
+ ---
+ license: apache-2.0
+ language:
+ - en
+ - zh
+ pipeline_tag: text-generation
+ tags:
+ - ERNIE4.5
+ ---
+
+ <div align="center" style="line-height: 1;">
+ <a href="https://ernie.baidu.com/" target="_blank" style="margin: 2px;">
+ <img alt="Chat" src="https://img.shields.io/badge/🤖_Chat-ERNIE_Bot-blue" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/baidu" target="_blank" style="margin: 2px;">
+ <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Baidu-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://github.com/PaddlePaddle/ERNIE" target="_blank" style="margin: 2px;">
+ <img alt="Github" src="https://img.shields.io/badge/GitHub-ERNIE-000?logo=github&color=0000FF" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://ernie.baidu.com/blog/ernie4.5" target="_blank" style="margin: 2px;">
+ <img alt="Blog" src="https://img.shields.io/badge/🖖_Blog-ERNIE4.5-A020A0" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ <div align="center" style="line-height: 1;">
+ <a href="LICENSE" style="margin: 2px;">
+ <img alt="License" src="https://img.shields.io/badge/License-Apache2.0-A5de54" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
  # ERNIE-4.5-300B-A47B-Base
+
  ## ERNIE 4.5 Highlights
+
  The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:
+
+ 1. **Multimodal Heterogeneous MoE Pre-Training**: Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a *heterogeneous MoE structure*, incorporated *modality-isolated routing*, and employed *router orthogonal loss* and *multimodal token-balanced loss*. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.
+
+ 2. **Scaling-Efficient Infrastructure**: We propose a novel heterogeneous hybrid parallelism and hierarchical load balancing strategy for efficient training of ERNIE 4.5 models. By using intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training, and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose the Multi-Expert Parallel Collaboration method and the Convolutional Code Quantization algorithm to achieve 4-bit/2-bit lossless quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.
+
+ 3. **Modality-Specific Post-training**: To meet the diverse requirements of real-world applications, we fine-tuned variants of the pretrained model for specific modalities. Our *LLMs* are optimized for general-purpose language understanding and generation. The *VLMs* focus on visual-language understanding and support both thinking and non-thinking modes. Each model employed a combination of *Supervised Fine-tuning (SFT)*, *Direct Preference Optimization (DPO)*, or a modified reinforcement learning method named *Unified Preference Optimization (UPO)* for post-training.
+
+ To ensure the stability of multimodal joint training, we adopt a staged training strategy. In the first and second stages, we train only the text-related parameters, enabling the model to develop strong fundamental language understanding as well as long-text processing capabilities. The final multimodal stage extends capabilities to images and videos by introducing additional parameters, including a ViT for image feature extraction, an adapter for feature transformation, and visual experts for multimodal understanding. At this stage, text and visual modalities mutually enhance each other. After pretraining on trillions of tokens, we extracted the text-related parameters to obtain ERNIE-4.5-300B-A47B-Base.

  ## Model Overview
+
  ERNIE-4.5-300B-A47B-Base is a text MoE Base model, with 300B total parameters and 47B activated parameters for each token. The following are the model configuration details:

  | Key | Value |
  ...
  | Vision Experts (Total / Activated) | 64 / 8 |
  | Context Length | 131072 |
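As a sanity check on the figures above, here is an illustrative sketch in plain Python. The numbers come only from the model name (300B total, 47B activated) and the expert counts shown in the table, not from any config file, so treat it as back-of-envelope arithmetic rather than the model's actual configuration.

```python
# Relate the total vs. activated parameter counts to the expert-routing
# ratio. Figures are taken from the model name and the table above; this
# is an illustrative estimate, not the real config.

total_params_b = 300   # total parameters, billions (from "300B" in the name)
active_params_b = 47   # activated per token, billions (from "A47B")

experts_total = 64     # vision experts listed in the table
experts_active = 8     # activated vision experts per token

# Fraction of all parameters touched per token.
activation_ratio = active_params_b / total_params_b
# Fraction of experts routed to per token.
expert_ratio = experts_active / experts_total

print(f"params activated per token:  {activation_ratio:.1%}")
print(f"experts activated per token: {expert_ratio:.1%}")
```

The activated-parameter share (about 15.7%) comes out larger than the expert share (12.5%), which is what you would expect if dense components such as attention layers and embeddings are active for every token on top of the routed experts.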

  ## Quickstart
+
  ### Model Finetuning with ERNIEKit

  [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) is a training toolkit based on PaddlePaddle, specifically designed for the ERNIE series of open-source large models. It provides comprehensive support for scenarios such as instruction fine-tuning (SFT, LoRA) and alignment training (DPO), ensuring optimal performance.

  Usage Examples:
+
  ```bash
  # SFT
  erniekit train --stage SFT --model_name_or_path baidu/ERNIE-4.5-300B-A47B-Base-Paddle --train_dataset_path your_dataset_path
  # DPO
  erniekit train --stage DPO --model_name_or_path baidu/ERNIE-4.5-300B-A47B-Base-Paddle --train_dataset_path your_dataset_path
  ```
+
  For more detailed examples, including SFT with LoRA, multi-GPU configurations, and advanced scripts, please refer to the examples folder within the [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) repository.

  ### Using FastDeploy
  ...

  ### Using `transformers` library

  The following code snippet illustrates how to use the model to generate content based on given inputs.
+
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_name = "baidu/ERNIE-4.5-300B-A47B-Base-PT"
  tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

  prompt = "Large language model is"
  model_inputs = tokenizer([prompt], add_special_tokens=False, return_tensors="pt").to(model.device)
  ...
  print("result:", result)
  ```

+ ### Using vLLM
+
+ vLLM is currently being adapted; for now, we recommend using our forked repository [vllm](https://github.com/CSWYF3634076/vllm/tree/ernie). We are working with the community to fully support ERNIE 4.5 models, so stay tuned.

  ```bash
  # 80G * 16 GPU
  vllm serve baidu/ERNIE-4.5-300B-A47B-Base-PT --trust-remote-code
  ```
+
  ```bash
  # FP8 online quantization, 80G * 8 GPU
  vllm serve baidu/ERNIE-4.5-300B-A47B-Base-PT --trust-remote-code --quantization fp8
  ```
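Once `vllm serve` is up, it exposes an OpenAI-compatible HTTP API. The sketch below only builds and prints a request payload rather than sending it, since it needs a running server; the `localhost:8000` endpoint is an assumption about a default local deployment, not something stated in this README.

```python
import json

# Assumed default endpoint of a local `vllm serve` deployment; adjust
# host/port to match your setup.
url = "http://localhost:8000/v1/completions"

payload = {
    # Must match the model name passed to `vllm serve`.
    "model": "baidu/ERNIE-4.5-300B-A47B-Base-PT",
    "prompt": "Large language model is",
    "max_tokens": 64,
    "temperature": 0.8,
}

body = json.dumps(payload)
print(url)
print(body)
# To actually send it, POST `body` with Content-Type: application/json,
# e.g. via curl or any HTTP client.
```

Because this is a base (non-chat) model, the completions-style endpoint with a raw prompt is the natural fit, rather than the chat endpoint.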

  ## License

  The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
  ...
  primaryClass={cs.CL},
  url={}
  }
+ ```