What’s going on (and why your conda recipe “installed” but deps didn’t)
1) conda-build does not behave like pip install optimum-intel[extras]
In a conda recipe, your build script:
```bash
python -m pip install . --no-deps
```
intentionally installs only optimum-intel (no dependency resolution). The dependency solver is conda/mamba, and it only installs what you list under requirements: run, and only if those packages exist in your configured channels.
So if you list PyPI-only deps (or misnamed conda packages) in requirements: run, you’ll get exactly what you saw:
- the package builds/installs,
- but the runtime deps are missing because conda can’t find them.
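To make that split concrete, here is a minimal sketch of the two layers in a recipe (the deps shown are illustrative, not the real requirement list):

```yaml
# sketch of the relevant meta.yaml layers; deps shown are illustrative
build:
  script: python -m pip install . --no-deps   # installs only optimum-intel itself

requirements:
  run:
    # conda/mamba resolves ONLY these, and only from channels it can see
    - python >=3.10
    - transformers
    - pytorch        # conda package name for torch
```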
2) You mixed “extras/dev/test” deps into runtime deps
Your recipe includes quality/test tooling (black, ruff, pytest, etc.) and many “extras” dependencies as if they were required at runtime. On PyPI, optimum-intel advertises many optional extras (openvino, nncf, ipex, neural-compressor, diffusers, tests, quality, …). (PyPI)
In conda packaging, you generally model these as separate outputs (metapackages) or keep them out of run: and document them as optional installs.
Important correction: several of your “missing” deps actually exist on conda-forge (names differ)
These three are the most common “it doesn’t exist” cases because the conda package names use dashes:
| You tried | conda(-forge) name | Evidence |
| --- | --- | --- |
| hf_xet | hf-xet | (Anaconda) |
| vector_quantize_pytorch | vector-quantize-pytorch | (Anaconda) |
| open_clip_torch | open-clip-torch (available in the conda ecosystem; conda-forge is referenced by maintainers) | (GitHub) |
So if your goal was “conda-only”, a chunk of your list becomes solvable just by renaming.
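You can confirm the renames before touching the recipe; a quick check (assuming conda-forge is the target channel):

```bash
# confirm the dash-named variants actually resolve from conda-forge
conda search -c conda-forge hf-xet
conda search -c conda-forge vector-quantize-pytorch
conda search -c conda-forge open-clip-torch
```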
What likely blocks a clean conda(-forge) packaging: optimum-onnx + a few PyPI-only packages
1) optimum-intel depends on optimum-onnx
A metadata view of optimum-intel shows required deps include optimum-onnx (plus torch, transformers). (pypistats.org)
And Hugging Face’s Transformers docs explicitly install Optimum ONNX via pip (uv pip install optimum-onnx) when discussing ONNX export tooling. (Hugging Face)
If optimum-onnx is not available in your conda channels, then a conda-native optimum-intel package is incomplete unless you:
- package optimum-onnx into conda first, or
- patch optimum-intel to make that dependency optional (usually not recommended unless upstream supports it).
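If you want to verify the required-deps claim yourself rather than trust a metadata mirror, one sketch (assumes pip and unzip are on PATH, and that a wheel is published):

```bash
# fetch just the optimum-intel wheel and read its declared dependencies
pip download optimum-intel --no-deps -d /tmp/oi
unzip -p /tmp/oi/optimum_intel-*.whl '*.dist-info/METADATA' | grep '^Requires-Dist'
```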
2) OpenVINO GenAI is not distributed via conda
OpenVINO Runtime is available via conda-forge. (docs.openvino.ai)
But OpenVINO GenAI is distributed via PyPI and archives, not conda. (docs.openvino.ai)
So if your dependency chain truly requires openvino-genai, you won’t get a “pure conda-forge solve” without creating and maintaining your own conda package for it.
3) Some of your other items are “PyPI-first”
For example, these are clearly PyPI packages:
- transformers-stream-generator (PyPI)
- invisible-watermark (PyPI)
- vocos (PyPI)
They might be packageable into conda (some are pure Python), but they’re not guaranteed to already exist in your channels.
Intel channel reality check (deps you may need from non-forge channels)
If you are willing to use “conda-only” but not strictly “conda-forge-only”:
- neural-compressor exists on Intel's channel.
- intel-extension-for-pytorch exists on Anaconda's main channel, but the listed version is very old (v1.12.1, from 2023), which is unlikely to match modern PyTorch (2.x). (Anaconda)
That mismatch is a common reason conda recipes “can’t solve” when you try to pin intel-extension-for-pytorch>=2.8.
What I would do in your situation (conda-only, home lab, practical)
Option 1 (recommended if you truly want conda-only): build a local conda channel for the missing pieces
You’re already halfway there.
Goal: keep your runtime installs 100% conda/mamba by packaging whatever is missing into your own channel (local directory channel), then install with -c file:///....
Priority order to package:
1. optimum-onnx (because it's a required dep). (pypistats.org)
2. Any PyPI-only libs you actually need for your workflows (transformers-stream-generator, invisible-watermark, vocos). (PyPI)
3. Only then, optimum-intel itself.
Why this works: conda can only solve against packages it can see. Once you provide conda artifacts (even locally), mamba can do a clean solve without pip.
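A sketch of that workflow (paths are placeholders; grayskull is one common way to bootstrap a recipe from PyPI metadata, not the only one):

```bash
# 1) generate a starter recipe from the PyPI metadata
grayskull pypi optimum-onnx

# 2) build it into a local directory channel
conda build optimum-onnx/ --output-folder ~/local-channel

# 3) (re)index the channel so solvers can see it
conda index ~/local-channel

# 4) solve everything with conda/mamba only
mamba install -c file://$HOME/local-channel -c conda-forge optimum-onnx
```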
Option 2: keep optimum-intel conda-packaged, but make extras separate and optional (how conda-forge typically wants it)
Model the PyPI extras as separate metapackages:
- optimum-intel (base)
- optimum-intel-openvino (depends on optimum-intel, openvino, openvino-tokenizers, nncf, …)
- optimum-intel-neural-compressor (depends on optimum-intel, neural-compressor)
- etc.
This matches the fact that upstream documents pip extras (e.g., optimum-intel[openvino]). (GitHub)
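A hedged sketch of what that multi-output meta.yaml could look like (package names and pins are illustrative, not a verified recipe):

```yaml
outputs:
  - name: optimum-intel
    requirements:
      run:
        - python >=3.10
        - optimum-onnx           # e.g. from your local channel
        - transformers
        - pytorch                # conda package name for torch

  - name: optimum-intel-openvino  # mirrors the pip extra optimum-intel[openvino]
    requirements:
      run:
        - {{ pin_subpackage('optimum-intel', exact=True) }}
        - openvino
        - openvino-tokenizers
        - nncf
```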
Also note: OpenVINO Runtime itself is well supported in conda-forge. (docs.openvino.ai)
Option 3 (if you relax the “no pip” rule slightly): use pip only for the packages that don’t exist on conda
Upstream’s own install guidance for Optimum Intel is pip-first. (GitHub)
If you ever change your mind: install the heavy compiled stack via conda (torch/openvino/nncf), then pip-install only the few missing pure-Python bits last. This minimizes conflicts.
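A sketch of that hybrid flow (package names are illustrative; trim to what your workflows actually import):

```bash
# compiled/heavy stack from conda-forge first...
mamba create -n optimum -c conda-forge python=3.11 pytorch openvino nncf transformers
conda activate optimum

# ...then pip only for the few pieces conda can't see
pip install optimum-intel transformers-stream-generator invisible-watermark vocos
```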
Concrete fixes to your meta.yaml (so your package behaves predictably)
1) Don’t list “extras/test/quality” under requirements: run
Move dev/test tools to test: requires (or remove entirely). Keep runtime minimal.
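For example (a sketch; the exact tooling depends on what you actually run in the recipe's test phase):

```yaml
requirements:
  run:
    # runtime-only: no black/ruff/pytest here
    - python >=3.10
    - transformers
    - pytorch

test:
  requires:
    - pytest          # test-time only; never shipped to users
  imports:
    - optimum.intel
```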
2) Fix conda package names (dash vs underscore)
Replace:
- hf_xet → hf-xet (Anaconda)
- vector_quantize_pytorch → vector-quantize-pytorch (Anaconda)
- open_clip_torch → open-clip-torch (GitHub)
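In the recipe this is just a rename in the run section:

```yaml
requirements:
  run:
    # conda-forge uses dashes where PyPI names use underscores
    - hf-xet
    - vector-quantize-pytorch
    - open-clip-torch
```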
3) Don’t “pip install extra deps” inside the recipe build
This is a key conda-forge norm: dependencies must be declared, not downloaded during build.
4) noarch: python is usually correct here
optimum-intel ships as a universal wheel (py3-none-any), so it’s typically noarch. (PyPI)
If you pin Python to exactly 3.10, noarch becomes awkward; prefer python >=3.10,<3.12 style constraints instead.
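A sketch of the corresponding sections (the version bounds are illustrative):

```yaml
build:
  noarch: python
  script: python -m pip install . --no-deps

requirements:
  host:
    - python >=3.10,<3.12
    - pip
    - setuptools
  run:
    - python >=3.10,<3.12
```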
5) Don’t hardcode entry_points if upstream already defines them
Let pip install . install the console scripts from the package metadata unless you have a specific reason. (Hardcoding can drift from upstream.)
“Is there going to be” an official conda(-forge)/Intel release?
I can’t point to a public commitment or an existing conda-forge feedstock/PR for optimum-intel right now. Your Hugging Face forum thread (Jan 26, 2026) is exactly the kind of place that usually gets an authoritative answer from maintainers if one exists. (Hugging Face Forums)
Practically, the packaging blockers are consistent with what you ran into: required deps like optimum-onnx and optional deps like openvino-genai that are not clearly available on conda channels. (pypistats.org)
Similar cases + good references (with context)
Similar “why isn’t this on conda-forge” / packaging pattern
- Hugging Face Optimum issue requesting conda-forge packaging (historical example of the process and why it takes time). (GitHub)
- conda-forge “Contributing packages” guide (the exact staged-recipes → feedstock workflow you’d use if you upstream this). (conda-forge.org)
- open-clip-torch conda-forge packaging discussion (example of a PyPI project gaining a conda(-forge) package via community maintenance). (GitHub)
Optimum Intel / OpenVINO learning resources
- Optimum Intel repo install instructions (pip + extras, shows how upstream expects deps to be selected). (GitHub)
- OpenVINO Runtime via conda-forge (what you can keep fully conda-native). (docs.openvino.ai)
- OpenVINO GenAI install guide (why your conda-only plan may hit a hard wall here). (docs.openvino.ai)
- OpenVINO notebook demonstrating optimum-intel for Stable Diffusion/OpenVINO flows (good for validating your environment end-to-end). (docs.openvino.ai)
- OpenVINO Toolkit article about Optimum Intel + OpenVINO (background on why this stack exists and how it’s used). (Medium)
If you want the fastest path forward without rewriting everything
- Start from your current recipe, but strip it down to base requirements only.
- Package optimum-onnx first (local channel). (pypistats.org)
- Fix the "actually exists on conda-forge" deps by renaming (hf-xet, vector-quantize-pytorch, open-clip-torch). (Anaconda)
- Treat openvino-genai as non-conda unless you commit to packaging it yourself. (docs.openvino.ai)
If you do those four steps, you’ll convert your current situation from “unsatisfiable” to “mostly solvable,” and the remaining gaps will be explicit and controllable.