You checked the version of the transformers package and it is the latest (5.1.0). I suspect the major update from Transformers v4 to v5 is the primary cause.
If you don't want to change your code, you can pin to v4 (pip install "transformers<5"). If one of the v5-era fixes below works, though, that is preferable:
What the error actually means
When you call:
from transformers import pipeline
summarizer = pipeline("summarization")
pipeline() looks up the string "summarization" in an internal registry of supported tasks. If it can’t find it, it raises “task not available” and prints the registry keys it did load.
So this specific failure is not “the model can’t summarize” — it’s “your runtime didn’t load a Transformers build/config where the summarization task alias is registered”, even though the Hugging Face course explicitly expects it to exist. (Hugging Face)
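The failing lookup can be pictured as a plain dictionary access. This is a simplified sketch of the mechanism, not the actual Transformers internals (the real registry is larger and built at import time; the names below are illustrative):

```python
# Simplified sketch of how pipeline() resolves a task string.
# If the alias isn't in the registry, the error lists the keys
# that DID load -- exactly the symptom described above.
SUPPORTED_TASKS = {
    "text-classification": "TextClassificationPipeline",
    "text2text-generation": "Text2TextGenerationPipeline",
    # "summarization" missing -> the lookup below fails
}

def resolve_task(task: str) -> str:
    if task not in SUPPORTED_TASKS:
        raise KeyError(
            f"task {task!r} not available, choose from {sorted(SUPPORTED_TASKS)}"
        )
    return SUPPORTED_TASKS[task]

print(resolve_task("text2text-generation"))
```

The point of the sketch: the error is raised before any model is touched, so fixing it is about what got registered at import time, not about model capability.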
Why "summarization" can be missing (most likely causes)
1) You’re not running the environment you think you are (Colab / kernel state)
In notebooks, it’s easy to get into a state where:
- pip shows one version, but Python already imported another copy that is still in memory
- you upgraded/downgraded packages after importing transformers and didn't restart the runtime
How to detect
Run this in the same cell right before you call pipeline():
import transformers
from transformers import pipeline
print("transformers version:", transformers.__version__)
print("transformers file:", transformers.__file__)
print("pipeline module:", pipeline.__module__)
If transformers.__file__ points somewhere unexpected (or you see anything like /content/transformers.py), you’re not using the library you think you are.
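To check for an accidental local file shadowing a package without importing it, you can ask importlib where a name would be loaded from. A stdlib-only sketch (the demo uses "json" as a stand-in so it runs anywhere; in your session you would pass "transformers"):

```python
import importlib.util

def module_origin(name: str) -> str:
    """Return the file path Python would import `name` from."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "<not installed>"
    return spec.origin or "<namespace package>"

# In Colab you'd call module_origin("transformers"); a stray
# /content/transformers.py would show up here instead of site-packages.
print(module_origin("json"))
```

If the printed path is a single .py file in your working directory rather than a site-packages install, that file is shadowing the real library.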
Fix
In Colab, after installing/upgrading, always Restart runtime (Runtime → Restart runtime), then re-run cells from the top.
2) Version/dependency mismatch in Transformers v5 (PyTorch too old / conflicts)
Transformers v5 tightened requirements. The PyPI installation guidance states Transformers works with Python 3.9+ and PyTorch 2.4+ and recommends installing via pip install "transformers[torch]". (PyPI)
In managed environments (Colab, Databricks, etc.), you can end up with:
- Transformers upgraded to v5.x
- but PyTorch kept at an older pinned version (because other preinstalled packages constrain it)
This kind of mismatch can produce partial imports or missing components, so the pipeline task registry you see may differ from what the course assumes.
How to check
import torch, transformers
print(torch.__version__)
print(transformers.__version__)
Fix options
- Option A (stay on v5): upgrade torch to meet the requirement (may require uninstalling conflicting torch/torchvision/torchaudio first in Colab).
- Option B (most stable for courses): pin Transformers to a compatible v4 release that matches the course material better.
3) You’re using a changed/removed API (common in the v5 transition)
A related (but different) breakage in the v5 era: some classes like SummarizationPipeline were removed/relocated, which broke third-party libraries that imported them directly. (PyPI)
That’s not your exact symptom, but it’s a sign that v5 introduced pipeline-related breaking changes and some course notebooks / integrations may lag behind.
Quick workarounds that bypass "summarization" entirely
Even if "summarization" isn’t recognized as a task string, you can still create a working summarizer in two reliable ways.
Workaround A: Let the model define the task (recommended)
Transformers docs note you can omit the task if the model on the Hub already defines it. (Hugging Face)
from transformers import pipeline
# Choose a summarization checkpoint explicitly
summarizer = pipeline(model="facebook/bart-large-cnn")
print(summarizer("Long text here...", max_new_tokens=60))
This often works even when the task alias lookup is broken, because it skips the "summarization" registry key.
Workaround B: Use "text2text-generation" with a summarization model
Summarization is fundamentally a seq2seq text-to-text generation task. (Hugging Face)
from transformers import pipeline
summarizer = pipeline("text2text-generation", model="facebook/bart-large-cnn")
print(summarizer("Long text here...", max_new_tokens=60))
Output field names may differ (typically generated_text instead of summary_text), but the result is still a summary.
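A small helper can smooth over the differing field names. This is a sketch: the two keys below are the ones commonly seen from summarization and text2text-generation pipelines, not something guaranteed by the API:

```python
def extract_summary(result) -> str:
    """Pull the text out of a pipeline result regardless of task alias.

    Summarization pipelines typically return [{"summary_text": ...}];
    text2text-generation typically returns [{"generated_text": ...}].
    """
    item = result[0] if isinstance(result, list) else result
    for key in ("summary_text", "generated_text"):
        if key in item:
            return item[key]
    raise KeyError(f"no known text field in {sorted(item)}")

# Works on either output shape:
print(extract_summary([{"generated_text": "A short summary."}]))
```

With this in place, switching between Workaround A and Workaround B doesn't require touching the rest of your notebook.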
A practical “fix-it” sequence for your exact Colab case
Run these steps in order.
Step 1 — Confirm what you imported
import transformers
print(transformers.__version__)
print(transformers.__file__)
If this doesn’t show what you expect, restart runtime and/or reinstall cleanly.
Step 2 — Check torch compatibility (important for Transformers v5)
import torch
print(torch.__version__)
If < 2.4, expect issues with v5 per the installation guidance. (PyPI)
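You can make that check programmatic. A sketch using simple tuple comparison (real-world torch version strings can carry local build tags like "+cu121", which this strips; for anything fancier you'd reach for packaging.version):

```python
def meets_minimum(version: str, minimum: tuple) -> bool:
    """Compare a version string like '2.1.0+cu121' against (2, 4)."""
    core = version.split("+")[0]                      # drop local build tags
    parts = tuple(int(p) for p in core.split(".")[:2])  # (major, minor)
    return parts >= minimum

# In a real session you'd pass torch.__version__ here.
print(meets_minimum("2.1.0+cu121", (2, 4)))  # too old for Transformers v5
print(meets_minimum("2.5.1", (2, 4)))
```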
Step 3 — Clean reinstall (then restart runtime)
In a fresh cell:
!pip uninstall -y transformers
!pip install -U "transformers[torch]"
Then restart runtime and test again.
Step 4 — If the notebook still fails, use Workaround A
This will usually get you unblocked immediately:
from transformers import pipeline
summarizer = pipeline(model="facebook/bart-large-cnn")
summarizer("...", max_new_tokens=60)
Similar cases / issues reported online (context)
“Unknown task summarization” (usually older Transformers or mismatched environment)
A classic StackOverflow thread shows pipeline("summarization") failing with a task-not-found style error; the resolution was effectively “you’re on a version that doesn’t include it / upgrade Transformers”. (Stack Overflow)
Your case is “v5 but missing alias”, but the debugging logic is the same: verify the imported package and kernel state.
v5 pipeline API breakage affecting downstream libraries
An MLflow issue documents that pipeline classes such as SummarizationPipeline were removed in Transformers 5.0, breaking code that imported them directly. (PyPI)
This supports the idea that the v5 transition is a realistic source of “things that used to work in notebooks suddenly don’t”.
Your exact report exists on the HF forums (Feb 2026)
The same symptom (Colab + HF course notebook + Transformers 5.1.0 + missing "summarization") has been posted to the HF forums. (Hugging Face Forums)
Good references (guides / docs / tutorials)
- HF course page that expects pipeline("summarization") to work (the notebook you're following) (Hugging Face)
- Transformers pipeline tutorial (includes a summarization example using a specific checkpoint like google/pegasus-billsum) (Hugging Face)
- Task guide: Summarization (background + practical training/inference patterns) (Hugging Face)
- AWS tutorial using pipeline("summarization") in practice (deployment-oriented context) (Amazon Web Services, Inc.)
- HF task page (Summarization) with inputs/outputs and model discovery (Hugging Face)
One more common pitfall after you fix the task: long inputs
Even once the pipeline runs, many summarization models have input length limits. For long documents you typically need:
- chunking (split text, summarize chunks, then summarize summaries), or
- a long-context summarization model (LED, LongT5, Pegasus-X, etc.)
HF forum threads discuss this “summarization on long text” issue and model suggestions. (Hugging Face Forums)
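The chunk-then-summarize pattern above can be sketched as follows. Word-based chunking is used for simplicity (token-based chunking with the model's tokenizer is more precise), and summarize is a placeholder for the actual pipeline call:

```python
def chunk_words(text: str, max_words: int = 400):
    """Split text into word-count-bounded chunks."""
    words = text.split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])

def summarize_long(text: str, summarize, max_words: int = 400) -> str:
    """Summarize each chunk, then summarize the joined chunk summaries."""
    partial = [summarize(chunk) for chunk in chunk_words(text, max_words)]
    if len(partial) == 1:
        return partial[0]
    return summarize(" ".join(partial))

# Demo with a toy "summarizer" (first five words) standing in for the pipeline:
toy = lambda t: " ".join(t.split()[:5])
print(summarize_long("word " * 1000, toy, max_words=400))
```

In practice you would pass something like lambda t: extract_first(summarizer(t)) as the summarize argument, and size max_words to stay under the model's input limit.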
Bottom line for your situation
Given that the HF course explicitly lists "summarization" as a pipeline task (Hugging Face) and the same failure is being reported with Transformers 5.1.0 in Colab (Hugging Face Forums), the most productive path is:
- verify import path + restart runtime,
- check torch >= 2.4 for v5, (PyPI)
- if still broken, bypass the alias with pipeline(model="facebook/bart-large-cnn") (Workaround A). (Hugging Face)