Conda/mamba optimum-intel

Hello all

I am setting up an AI lab at home and I would like to know if there is going to be a release of optimum-intel via Intel or conda-forge. I do not see it on either channel, and the only option seems to be pip. Since I want to stick to all-conda/mamba envs, I do not want to mix pip into this. If you know how I can install this module, that would help with the learning.

thank you


seems complicated…


What’s available today (and why you don’t see it)

  • optimum (base Optimum) is on conda-forge. (anaconda.org)
  • optimum-intel is documented as a pip-installed, “fast-moving” project (including install-from-source examples). (Hugging Face)
  • OpenVINO’s own “Inference with Optimum Intel” guide also uses pip (pip install optimum[openvino,nncf]). (docs.openvino.ai)

Intel does have a conda channel, but it’s primarily for Intel/oneAPI component packages (not an Optimum Intel distribution channel). (Intel)

“Will there be a conda-forge / Intel-channel release?”

I did not find an official promise or timeline. Practically, conda-forge packages happen when someone contributes/maintains a recipe via staged-recipes → feedstock. (conda-forge)
Given the frequent dependency pinning/friction visible in real issues (example below), it’s also the kind of package that tends to need active maintenance. (GitHub)


How to install optimum-intel without “mixing pip into your env”

You have two realistic paths:

  1. Strict conda-only installation in the runtime env (recommended for your stated goal)
  2. Conda-first + pip-last (pragmatic fallback; still reproducible if you lock)

Below is the strict conda-only approach first.


Option A (matches your rule): build a local conda package for optimum-intel, then install via conda/mamba

This keeps your runtime env “conda-installed only.”
(You may use pip indirectly during the build step, but the resulting artifact is installed with conda.)

Background: why this works

conda-build runs pip only inside an isolated build environment, and the result is an ordinary conda artifact (a .conda/.tar.bz2 file placed in conda-build's local channel). Your runtime env then installs that artifact with conda/mamba like any other package, so the runtime env itself never touches pip.

Step-by-step

1) Create a build-tools env

mamba create -n build-optimum-intel -c conda-forge python=3.11 conda-build grayskull
mamba activate build-optimum-intel

2) Generate a recipe from PyPI

grayskull pypi optimum-intel

Grayskull’s purpose and scope are documented here. (conda.github.io)

3) Build the conda package

conda build optimum-intel

4) Install it into your runtime env using conda (no pip installs in runtime)

Create your runtime env (example includes OpenVINO from conda-forge):

mamba create -n ov-lab -c conda-forge python=3.11 openvino transformers accelerate
mamba activate ov-lab
conda install --use-local optimum-intel

conda install --use-local ... is the documented way to install your newly built local package. (Conda Documentation)

What you may need to adjust (common with ML stacks)

Expect to sometimes edit the generated recipe (dependency names/pins). A concrete example of why: users hit dependency conflicts involving numpy constraints and other pinned packages in the Optimum Intel dependency graph. (GitHub)


Option B (fallback): conda-first + pip-last, but done safely and reproducibly

If you later decide “pip in the env is acceptable as long as it’s controlled,” the safest pattern is:

  1. install as much as possible with conda/mamba
  2. then pip-install the pip-only package
  3. avoid running conda again in that env

This is explicitly recommended in conda’s environment management guidance. (Conda Documentation)

Minimal-risk pip install

Use conda-forge for OpenVINO + core deps first, then:

  • either follow the Optimum Intel docs (pip install ...) (Hugging Face)
  • or install with --no-deps after you’ve already installed the dependencies via conda, which prevents pip from reshuffling versions (see the sketch below)
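
A minimal sketch of that pattern (the package list is illustrative; adjust it to what your workflow actually needs):

mamba create -n ov-lab -c conda-forge python=3.11 openvino nncf transformers accelerate
mamba activate ov-lab
# pip goes last, and --no-deps stops it from touching the conda-installed stack
python -m pip install --no-deps optimum-intel
# surfaces anything --no-deps left unsatisfied
python -m pip check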

Reproducibility

If you ever allow pip packages, conda-lock can lock the dependencies.pip section of an environment.yml. (conda.github.io)
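
For example (a sketch; the pin on optimum-intel is hypothetical, use whatever version you actually tested):

cat > environment.yml <<'EOF'
name: ov-lab
channels: [conda-forge]
dependencies:
  - python=3.11
  - openvino
  - transformers
  - pip
  - pip:
      - optimum-intel==1.27.0  # hypothetical pin
EOF
conda-lock -f environment.yml -p linux-64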


Why this is happening (short, practical background)

  • optimum-intel is treated upstream as pip-first and fast-moving. (Hugging Face)
  • OpenVINO’s own Optimum Intel workflow guide uses pip installs as well. (docs.openvino.ai)
  • Packaging to conda-forge is a community contribution + maintenance process (staged-recipes → feedstock). (conda-forge)

Learning resources (good starting points)

  • Optimum Intel installation docs (what upstream expects) (Hugging Face)
  • OpenVINO: “Inference with Optimum Intel” (end-to-end LLM flow) (docs.openvino.ai)
  • Hugging Face blog: deploy with OpenVINO + Optimum Intel (practical walkthrough) (Hugging Face)

Practical recommendation for your home lab

If your rule is “no pip installs in the runtime env”, do Option A now:

  • build optimum-intel once as a local conda artifact,
  • keep 1–2 “golden” runtime envs (CPU-only, and GPU later if needed),
  • only rebuild when you intentionally upgrade.

That aligns with your constraint and avoids most “dependency resolver churn” problems that show up in real-world Optimum Intel installs. (GitHub)

Thank you for the information. I will look into Grayskull and see if that resolves my issue.


Hello, I took option A and created the meta.yaml file. I was able to install the module, but all the supporting deps were not installed, since they do not exist in any of the conda channels. I will attach my file, but note I had to comment out the pip install line so it would create the compressed file, and even that had issues.
I used Google AI to assist with the troubleshooting of this. Since I am not able to upload the file, I will post some of it here, if I am allowed.

{% set version = "1.27.0" %}

package:
  name: optimum-intel
  version: {{ version }}

source:
  url: https://pypi.org/packages/source/o/optimum-intel/optimum_intel-{{ version }}.tar.gz
  sha256: 06c2b38c90912d231677118888388b8e8b073f8bd240e9e1279f48708341b3de

build:
  entry_points:
    - optimum-cli = optimum.commands.optimum_cli:main
  #noarch: python  # disabled to prevent a conflict
  script:
    - {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation
    #- {{ PYTHON }} -m pip install vocos hf_xet vector_quantize_pytorch transformers_stream_generator invisible-watermark open_clip_torch neural-compressor>=3.4.1 intel-extension-for-pytorch>=2.8 openvino-genai optimum-onnx  # none of these will load at all
  number: 0

requirements:
  host:
    - python 3.10.*  # changed to 3.10.*
    - pip
    - setuptools
  run:
    - python 3.10.*  # changed to 3.10.*
    - pytorch >=2.1
    - openvino >=2025.4.0  # changed to 2025.0.0 from 2025.04
    - nncf >=2.19.0
    - openvino-tokenizers >=2025.4.0
    #- openvino-genai  # moved to pip script
    #- optimum-onnx >=0.1.0,<0.2.dev0  # moved to pip script
    - transformers >=4.45,<4.58  # at least 4.45 but less than 4.58; altered

    # Extra: diffusers
    - diffusers

    # Extra: ipex
    #- intel-extension-for-pytorch >=2.8  # moved to pip script
    #- transformers >=4.45,<4.58  # removed duplicate entry; need one only
    - accelerate

    # Extra: neural-compressor
    #- neural-compressor >=3.4.1  # moved to pip script
    #- transformers >=4.45,<4.58  # removed duplicate entry; need one only
    - datasets >=1.4.0,<4.0.0  # changed the pin to a greater-than-1.4.0 range

    # Extra: nncf
    #- nncf >=2.19.0  # removed duplicate entry; need one only

    # Extra: openvino
    #- openvino >=2025.4.0  # removed duplicate entry; need one only
    #- openvino-tokenizers >=2025.4.0  # removed duplicate entry; need one only

    # Extra: quality
    - black >=23.1,<24.dev0
    - ruff ==0.4.4

    # Extra: tests
    - pytest >=7.2.0,<8.0.0
    - parameterized
    - Pillow >=10.0  # prevent it from picking up older py27 versions
    - evaluate
    - py-cpuinfo
    - sacremoses
    - sentencepiece
    - torchaudio
    - rjieba
    - timm
    #- invisible-watermark >=0.2.0  # moved to pip script above
    #- transformers_stream_generator  # moved to pip script above
    - einops
    - tiktoken
    - sentence-transformers
    #- open_clip_torch >=2.26.1  # moved to pip script above
    - peft
    #- datasets >=1.4.0,<4.0.0  # removed duplicate entry; need one only
    - tbb
    - langchain-huggingface
    #- hf_xet  # moved to pip script above
    - num2words
    #- vocos  # moved to pip script above
    #- vector_quantize_pytorch  # moved to pip script above
    #- openvino-genai  # removed duplicate entry; need one only

test:
  imports:
    - optimum
  commands:
    - pip check
    - optimum-cli --help
  requires:
    - pip
    - python

about:
  home: https://www.intel.com
  summary: Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality.
  license: Apache-2.0
  license_file: LICENSE

extra:
  recipe-maintainers:
    - AddYourGitHubIdHere

Hmm…


What’s going on (and why your conda recipe “installed” but deps didn’t)

1) conda-build does not behave like pip install optimum-intel[extras]

In a conda recipe, your build script:

python -m pip install . --no-deps

intentionally installs only optimum-intel (no dependency resolution). The dependency solver is conda/mamba, and it only installs what you list under requirements: run, and only if those packages exist in your configured channels.

So if you list PyPI-only deps (or misnamed conda packages) in requirements: run, you’ll get exactly what you saw:

  • the package builds/installs,
  • but the runtime deps are missing because conda can’t find them.
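
A quick way to check whether a dependency is visible to conda under some name before you put it in requirements: run:

conda search -c conda-forge optimum-onnx   # no match -> conda has nothing to solve against
conda search -c conda-forge hf-xet         # a match -> note the dash, not the underscore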

2) You mixed “extras/dev/test” deps into runtime deps

Your recipe includes quality/test tooling (black, ruff, pytest, etc.) and many “extras” dependencies as if they were required at runtime. On PyPI, optimum-intel advertises many optional extras (openvino, nncf, ipex, neural-compressor, diffusers, tests, quality, …). (PyPI)
In conda packaging, you generally model these as separate outputs (metapackages) or keep them out of run: and document them as optional installs.


Important correction: several of your “missing” deps actually exist on conda-forge (names differ)

These three are the most common “it doesn’t exist” cases because the conda package names use dashes:

You tried → conda(-forge) name:

  • hf_xet → hf-xet (Anaconda)
  • vector_quantize_pytorch → vector-quantize-pytorch (Anaconda)
  • open_clip_torch → open-clip-torch (available in the conda ecosystem; conda-forge is referenced by maintainers) (GitHub)

So if your goal was “conda-only”, a chunk of your list becomes solvable just by renaming.


What likely blocks a clean conda(-forge) packaging: optimum-onnx + a few PyPI-only packages

1) optimum-intel depends on optimum-onnx

A metadata view of optimum-intel shows required deps include optimum-onnx (plus torch, transformers). (pypistats.org)
And Hugging Face’s Transformers docs explicitly install Optimum ONNX via pip (uv pip install optimum-onnx) when discussing ONNX export tooling. (Hugging Face)

If optimum-onnx is not available in your conda channels, then a conda-native optimum-intel package is incomplete unless you:

  • package optimum-onnx into conda first, or
  • patch optimum-intel to make that dependency optional (usually not recommended unless upstream supports it).

2) OpenVINO GenAI is not distributed via conda

OpenVINO Runtime is available via conda-forge. (docs.openvino.ai)
But OpenVINO GenAI is distributed via PyPI and archives, not conda. (docs.openvino.ai)
So if your dependency chain truly requires openvino-genai, you won’t get a “pure conda-forge solve” without creating and maintaining your own conda package for it.

3) Some of your other items are “PyPI-first”

For example these are clearly PyPI packages:

  • transformers-stream-generator (PyPI)
  • invisible-watermark (PyPI)
  • vocos (PyPI)

They might be packageable into conda (some are pure Python), but they’re not guaranteed to already exist in your channels.


Intel channel reality check (deps you may need from non-forge channels)

If you are willing to use “conda-only” but not strictly “conda-forge-only”:

  • neural-compressor exists on Intel’s channel.
  • intel-extension-for-pytorch exists on Anaconda’s main channel, but the listing shown is very old (v1.12.1 from 2023), which is unlikely to match modern PyTorch (2.x). (Anaconda)

That mismatch is a common reason conda recipes “can’t solve” when you try to pin intel-extension-for-pytorch>=2.8.


What I would do in your situation (conda-only, home lab, practical)

Option 1 (recommended if you truly want conda-only): build a local conda channel for the missing pieces

You’re already halfway there.

Goal: keep your runtime installs 100% conda/mamba by packaging whatever is missing into your own channel (local directory channel), then install with -c file:///....

Priority order to package:

  1. optimum-onnx (because it’s a required dep) (pypistats.org)
  2. Any PyPI-only libs you actually need for your workflows (transformers-stream-generator, invisible-watermark, vocos) (PyPI)
  3. Only then package optimum-intel

Why this works: conda can only solve against packages it can see. Once you provide conda artifacts (even locally), mamba can do a clean solve without pip.
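
A minimal sketch of the local-channel flow (paths are illustrative, and this assumes the packages build as noarch; requires conda-build for conda index):

# copy each built artifact into a local channel directory and index it
mkdir -p ~/local-channel/noarch
cp "$(conda build optimum-onnx --output)" ~/local-channel/noarch/
conda index ~/local-channel
# runtime installs stay conda-only
mamba install -n ov-lab -c file://$HOME/local-channel optimum-onnx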

Option 2: keep optimum-intel conda-packaged, but make extras separate and optional (how conda-forge typically wants it)

Model the PyPI extras as separate metapackages:

  • optimum-intel (base)
  • optimum-intel-openvino (depends on optimum-intel, openvino, openvino-tokenizers, nncf, …)
  • optimum-intel-neural-compressor (depends on optimum-intel, neural-compressor)
  • etc.

This matches the fact that upstream documents pip extras (e.g., optimum-intel[openvino]). (GitHub)
Also note: OpenVINO Runtime itself is well supported in conda-forge. (docs.openvino.ai)
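
A rough sketch of that layout, using conda-build's multi-output recipes (names and pins here are illustrative, not an existing feedstock):

package:
  name: optimum-intel-split  # hypothetical top-level name
  version: 1.27.0

outputs:
  - name: optimum-intel  # base package, minimal runtime deps
    requirements:
      run:
        - python >=3.10
        - pytorch >=2.1
        - transformers >=4.45,<4.58
  - name: optimum-intel-openvino  # mirrors the pip extra optimum-intel[openvino]
    requirements:
      run:
        - {{ pin_subpackage("optimum-intel", exact=True) }}
        - openvino
        - openvino-tokenizers
        - nncf
  - name: optimum-intel-neural-compressor
    requirements:
      run:
        - {{ pin_subpackage("optimum-intel", exact=True) }}
        - neural-compressor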

Option 3 (if you relax the “no pip” rule slightly): use pip only for the packages that don’t exist on conda

Upstream’s own install guidance for Optimum Intel is pip-first. (GitHub)
If you ever change your mind: install the heavy compiled stack via conda (torch/openvino/nncf), then pip-install only the few missing pure-Python bits last. This minimizes conflicts.


Concrete fixes to your meta.yaml (so your package behaves predictably)

1) Don’t list “extras/test/quality” under requirements: run

Move dev/test tools to test: requires (or remove entirely). Keep runtime minimal.

2) Fix conda package names (dash vs underscore)

Replace:

  • hf_xet → hf-xet (Anaconda)
  • vector_quantize_pytorch → vector-quantize-pytorch (Anaconda)
  • open_clip_torch → open-clip-torch (GitHub)

3) Don’t “pip install extra deps” inside the recipe build

This is a key conda-forge norm: dependencies must be declared, not downloaded during build.

4) noarch: python is usually correct here

optimum-intel ships as a universal wheel (py3-none-any), so it’s typically noarch. (PyPI)
If you pin Python to exactly 3.10, noarch becomes awkward—prefer python >=3.10,<3.12 style constraints instead.
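
For example (a sketch, assuming your dependency set allows a noarch build):

build:
  noarch: python
  script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation

requirements:
  host:
    - python >=3.10
    - pip
    - setuptools
  run:
    - python >=3.10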

5) Don’t hardcode entry_points if upstream already defines them

Let pip install . install the console scripts from the package metadata unless you have a specific reason. (Hardcoding can drift from upstream.)


“Is there going to be” an official conda(-forge)/Intel release?

I can’t point to a public commitment or an existing conda-forge feedstock/PR for optimum-intel right now. Your Hugging Face forum thread (Jan 26, 2026) is exactly the kind of place that usually gets an authoritative answer from maintainers if one exists. (Hugging Face Forums)

Practically, the packaging blockers are consistent with what you ran into: required deps like optimum-onnx and optional deps like openvino-genai that are not clearly available on conda channels. (pypistats.org)


Similar cases + good references (with context)

Similar “why isn’t this on conda-forge” / packaging pattern

  • Hugging Face Optimum issue requesting conda-forge packaging (historical example of the process and why it takes time). (GitHub)
  • conda-forge “Contributing packages” guide (the exact staged-recipes → feedstock workflow you’d use if you upstream this). (conda-forge.org)
  • open-clip-torch conda-forge packaging discussion (example of a PyPI project gaining a conda(-forge) package via community maintenance). (GitHub)

Optimum Intel / OpenVINO learning resources

  • Optimum Intel repo install instructions (pip + extras, shows how upstream expects deps to be selected). (GitHub)
  • OpenVINO Runtime via conda-forge (what you can keep fully conda-native). (docs.openvino.ai)
  • OpenVINO GenAI install guide (why your conda-only plan may hit a hard wall here). (docs.openvino.ai)
  • OpenVINO notebook demonstrating optimum-intel for Stable Diffusion/OpenVINO flows (good for validating your environment end-to-end). (docs.openvino.ai)
  • OpenVINO Toolkit article about Optimum Intel + OpenVINO (background on why this stack exists and how it’s used). (Medium)

If you want the fastest path forward without rewriting everything

  1. Start from your current recipe, but strip it down to base requirements only.
  2. Package optimum-onnx first (local channel). (pypistats.org)
  3. Fix the “actually exists on conda-forge” deps by renaming (hf-xet, vector-quantize-pytorch, open-clip-torch). (Anaconda)
  4. Treat openvino-genai as non-conda unless you commit to packaging it yourself. (docs.openvino.ai)

If you do those four steps, you’ll convert your current situation from “unsatisfiable” to “mostly solvable,” and the remaining gaps will be explicit and controllable.

Thank you for the detailed information on how I can overcome this issue. I would like to ask: if most of the items are pip installs, what are the pros and cons of just moving to an all-Python env and running my tests there? I posed the same question to an AI, and it said conda is C/C++ based and runs faster, but since this is a learning lab, what do you think about pip instead of a conda/mamba env?


This isn’t limited to virtual environments, but for AI-related backends requiring speed, the critical parts are often implemented in C/C++ or Rust rather than pure Python. The choice of virtual or physical environment likely has a limited impact on speed.

While it’s not particularly relevant here, if you’re using multiple processes extensively in Python, making the bottleneck process a local server might be the better approach…


Background: what changes when you switch from conda/mamba to “all Python” (venv + pip)

Conda/mamba vs pip is mostly about packaging and dependency management, not “runtime speed”

  • Conda is a cross-language package manager (it can install Python and non-Python libraries, shared libraries, etc.).
  • pip installs Python packages, usually as wheels (prebuilt binary distributions) when available.

A key point for your question about performance: the environment manager (conda vs pip) does not inherently make your code faster. Performance depends on which binary builds you install (e.g., whether NumPy/PyTorch/OpenVINO are built with specific optimizations). Wheels can contain compiled extensions too, so “pip = pure Python = slower” is not true in general. (Python Packaging)
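
One way to see this concretely: the same package name can ship different compiled backends depending on which build you installed. NumPy, for instance, reports which BLAS/LAPACK it was built against:

# prints the build configuration of whichever NumPy build your env resolved to
python -c "import numpy; numpy.show_config()"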


Why pip-only is attractive in your specific case (Optimum Intel / OpenVINO)

Upstream expects pip workflows

  • Hugging Face’s Optimum Intel installation docs recommend pip, including extras like OpenVINO: python -m pip install --upgrade-strategy eager "optimum-intel[openvino]". (Hugging Face)
  • OpenVINO’s “Inference with Optimum Intel” guide tells you to create a Python environment and install the needed deps via pip (e.g., pip install optimum[openvino,nncf]). (OpenVINO Document)
  • OpenVINO itself supports both pip and conda-forge installs (OpenVINO release notes/release pages routinely show both commands). (OpenVINO Document)

In other words: if most of your stack is pip-first (as you’ve already discovered by trying to conda-package optimum-intel), a pip-native workflow is often the “path of least resistance” for learning.


Pros of moving to a pip-only environment (venv + pip)

1) You stop fighting “missing packages in conda channels”

This is the biggest practical win in your situation: Optimum Intel and several extras are published and tested primarily via pip. (Hugging Face)

2) You get “what upstream means” when they say optimum-intel[openvino]

Extras are a pip ecosystem feature; conda does not have an identical built-in concept. When you stay in pip, the published dependency metadata is exactly what upstream expects. (Hugging Face)

3) You often still get compiled/optimized binaries (wheels)

Many performance-critical Python packages ship wheels with compiled code. Wheels are explicitly designed to distribute binaries and avoid recompiling during install. (Python Packaging)

4) OpenVINO’s pip distribution is a first-class supported install route

OpenVINO documents pip installation and explicitly walks through using a virtual environment (venv). (OpenVINO Document)


Cons / pitfalls of moving to pip-only (what can bite you)

1) When wheels don’t exist, pip builds from source (can fail)

If a package (or one of its dependencies) doesn’t provide a wheel for your OS/Python version, pip may attempt a source build, which can require compilers/system headers.

Conda can be easier in those edge cases because it can provide prebuilt binaries for many compiled dependencies.

2) pip does not manage “system-level” dependencies the same way conda can

Conda can install shared libraries and non-Python tools more directly. pip generally doesn’t, although many wheels bundle the needed shared libraries.

3) Mixing and matching GPU/accelerator stacks is not automatically easier

For example, PyTorch GPU installs via either pip or conda typically still depend on correct system drivers (e.g., NVIDIA drivers), and the binaries often ship the CUDA runtime dependencies themselves. (PyTorch Forums)
(For Intel GPU/NPU with OpenVINO, drivers are also outside the Python env.)

4) Reproducibility requires discipline

With pip-only, you’ll want to pin versions and use a lock strategy (requirements/lockfile tooling) so that “works today” keeps working later.
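
A minimal version of that discipline (filenames are illustrative):

# snapshot exact versions once the env is known-good
python -m pip freeze > requirements.lock.txt
# later, or on another machine, reproduce the same set
python -m pip install -r requirements.lock.txt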


“Conda is C/C++ base and faster” — how to think about speed correctly

What can be true:

  • Conda distributions sometimes choose builds of numeric libraries (NumPy/SciPy/BLAS) that are tuned for certain CPUs.
  • Some conda channels provide very consistent binary stacks for scientific computing.

What is not generally true:

  • “Using conda makes Python run faster than pip.”

Why:

  • pip wheels can include compiled code and optimized binaries too. (Python Packaging)
  • If you install the same upstream binaries (or equivalent builds), performance is typically similar.

So, for a learning lab, speed differences from “conda vs pip” are usually not the deciding factor. The deciding factor is: how quickly can you get a working, stable stack that matches upstream docs?


Recommendation for your learning lab

If most of your required packages are pip-first: switch to pip-only for this stack

Given your experience (conda recipe installs but can’t satisfy many deps from conda channels), a pip-only venv is the most practical way to:

  • learn Optimum Intel workflows quickly,
  • run your tests,
  • stay aligned with official install docs. (Hugging Face)

Keep conda/mamba only where it clearly helps

If later you find that:

  • you need a complex non-Python dependency stack, or
  • you hit frequent “no wheel available” build failures,

then conda/mamba can be the better base for that specific environment.

This “use the right tool per environment” approach is common, and conda even documents pip interoperability (though it notes it’s experimental). (docs.conda.io)


A clean “pip-only” setup you can use for Optimum Intel testing

This follows the same pattern OpenVINO recommends (venv + pip). (OpenVINO Document)

python -m venv .venv
# Linux/macOS:
source .venv/bin/activate
# Windows (PowerShell):
# .\.venv\Scripts\Activate.ps1

python -m pip install --upgrade pip
python -m pip install --upgrade-strategy eager "optimum-intel[openvino]"

That last command is directly from Optimum Intel’s install docs. (Hugging Face)
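
A quick smoke test afterwards (OVModelForCausalLM is the class the Optimum Intel docs use for OpenVINO LLM inference):

python -c "from optimum.intel import OVModelForCausalLM; print('optimum-intel OK')"
optimum-cli export openvino --help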

For PyTorch, use the official selector and install command from pytorch.org (pip is fully supported). (PyTorch)


Practical decision rule

Use pip-only if:

  • your stack is mostly pip-first (Optimum Intel + extras),
  • you want to follow upstream docs exactly,
  • you’re okay pinning versions for reproducibility.

Use conda/mamba if:

  • you frequently hit source builds / missing wheels,
  • you need non-Python dependencies managed inside the env,
  • you want a curated binary stack from conda-forge for scientific libs.

In your situation (Optimum Intel + many pip-only deps), I would run pip-only for the Optimum Intel/OpenVINO test environment, and keep conda/mamba as a separate tool for environments that are conda-native.