baileyk committed 201b764 (verified) · Parent(s): 1d31d11

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -28,7 +28,7 @@ These checkpoints use the same architecture and starting checkpoint as the offic
 ## Inference
 You can access these checkpoints using the standard Hugging Face Transformers library:
 
-```
+```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 olmo_early_training = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-early-training")
 tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B-early-training")
@@ -40,7 +40,7 @@ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
 ```
 
 To access a specific checkpoint, you can specify the revision:
-```
+```python
 olmo_early_training = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-early-training", revision="stage1-step20000-tokens42B")
 ```
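The `revision` argument in the second hunk suggests each early-training checkpoint lives on its own branch of the model repo. A minimal sketch for listing those revision names with `huggingface_hub`, assuming the checkpoints are indeed published as branches (requires network access):

```python
# Sketch: enumerate checkpoint revisions of the repo shown in the diff.
# Assumption: per-checkpoint revisions such as "stage1-step20000-tokens42B"
# are branches of the model repo; requires the huggingface_hub package
# and network access.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("allenai/OLMo-2-0425-1B-early-training")
for branch in refs.branches:
    print(branch.name)  # e.g. "main" plus per-checkpoint revision names
```

Any name printed here can be passed as `revision=` to `from_pretrained`, as in the diff above.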