🧠 TorchScript Models for the IMPACT Semantic Similarity Metric

This repository provides a collection of TorchScript-exported pretrained models designed for use with the IMPACT similarity metric, enabling semantic medical image registration through feature-level comparison.
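
For orientation, here is a minimal sketch of how one of these exported models can be loaded and queried in plain PyTorch. The filename and the 3D input shape are placeholders, and the output structure (single tensor vs. tuple of per-layer feature maps) depends on the specific export; real inputs must also follow the per-model preprocessing listed in the table below.

```python
import torch

# Placeholder filename: substitute the TorchScript file downloaded from this repository.
model = torch.jit.load("model.pt", map_location="cpu").eval()

# Dummy 3D patch (batch, channel, depth, height, width) standing in for a real image.
x = torch.rand(1, 1, 64, 64, 64)

with torch.no_grad():
    features = model(x)

# Depending on the export, the output may be a single tensor or a tuple of
# feature maps, one per extraction layer.
if isinstance(features, (tuple, list)):
    for i, f in enumerate(features):
        print(f"layer {i}: shape {tuple(f.shape)}")
else:
    print(f"single output: shape {tuple(features.shape)}")
```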

The IMPACT metric is introduced in the following preprint, currently under review:

IMPACT: A Generic Semantic Loss for Multimodal Medical Image Registration
V. Boussot, C. Hémon, J.-C. Nunes, J. Dowling, S. Rouzé, C. Lafond, A. Barateau, J.-L. Dillenseger
arXiv:2503.24121 [cs.CV]

🔧 The full implementation of IMPACT, along with its integration into the Elastix framework, is available in the repository:
➡️ github.com/vboussot/ImpactLoss

The ImpactLoss repository also includes example parameter maps, TorchScript model handling utilities, and a ready-to-use Docker environment for quick experimentation and reproducibility.


📚 Pretrained Models

The TorchScript models provided in this repository were exported from publicly available pretrained networks. These include:

  • TotalSegmentator (TS) – U-Net models trained for full-body anatomical segmentation
  • MRSegmentator (MRSeg) – U-Net models trained for full-body anatomical segmentation in MRI and CT
  • Segment Anything 2.1 (SAM2.1) – Foundation model for segmentation on natural images
  • DINOv2 – Self-supervised vision transformer trained on diverse datasets
  • Anatomix – Transformer-based model with anatomical priors for medical images

Each model exposes multiple feature extraction layers; the layers actually used by the metric are selected through the LayerMask parameter in the IMPACT configuration.
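
Conceptually, a layer mask is a binary selection over the feature maps a model exposes. The following sketch illustrates the idea in Python; the mask value, the helper function, and the dummy feature shapes are purely illustrative and are not part of the IMPACT API.

```python
import torch

# Suppose the TorchScript model returns a tuple of feature maps, one per layer.
# A mask such as [1, 0, 0, 1, 0] keeps only the layers flagged with 1.
layer_mask = [1, 0, 0, 1, 0]  # hypothetical value, one entry per exposed layer

def select_layers(features, mask):
    """Keep only the feature maps whose mask entry is 1."""
    return [f for f, keep in zip(features, mask) if keep]

# Dummy feature maps standing in for a model's multi-layer output.
features = tuple(torch.rand(1, 8 * (i + 1), 16, 16, 16) for i in range(5))
selected = select_layers(features, layer_mask)
print([tuple(f.shape) for f in selected])  # only the masked-in layers remain
```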

In addition, the repository includes:

  • MIND – A handcrafted descriptor, wrapped in TorchScript

| Model | Specialization | Paper / Reference | Field of View | License | Preprocessing |
|---|---|---|---|---|---|
| MIND | Handcrafted descriptor | Heinrich et al., 2012 | 2*r*d + 1 (r: radius, d: dilation) | Apache 2.0 | Normalize intensities to [0, 1] |
| SAM2.1 | General segmentation (natural images) | Ravi et al., 2024 | 29 | Apache 2.0 | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
| TS Models | CT/MRI segmentation | Wasserthal et al., 2022 | 2^l + 3 (l: layer number) | Apache 2.0 | Canonical orientation for all models. For MRI models (e.g., TS/M730–M733, TS/M850–M853), standardize intensities to zero mean and unit variance. For CT models (e.g., TS/M258, TS/M291), clip intensities and normalize (model dependent) |
| MRSegmentator | CT/MRI segmentation | Häntze et al., 2024 | 2^l + 3 (l: layer number) | Apache 2.0 | Standardize intensities to zero mean and unit variance |
| Anatomix | Anatomy-aware transformer encoder | Dey et al., 2024 | Global (static mode) | MIT | Normalize intensities to [0, 1] |
| DINOv2 | Self-supervised vision transformer | Oquab et al., 2023 | 14 | Apache 2.0 | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
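
As a rough illustration of the preprocessing column, the sketch below implements the three recurring recipes in plain PyTorch. The function names are placeholders, and the intensity clipping windows for the CT TotalSegmentator models are model dependent and therefore not reproduced here.

```python
import torch

def normalize_01(img: torch.Tensor) -> torch.Tensor:
    """Rescale intensities to [0, 1] (MIND, Anatomix, and the first step for SAM2.1/DINOv2)."""
    img = img.float()
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def standardize_mean_std(img: torch.Tensor) -> torch.Tensor:
    """Normalize to [0, 1], then standardize with mean 0.485 and std 0.229 (SAM2.1, DINOv2)."""
    return (normalize_01(img) - 0.485) / 0.229

def standardize_zscore(img: torch.Tensor) -> torch.Tensor:
    """Standardize to zero mean and unit variance (MRSegmentator, TS MRI models)."""
    img = img.float()
    return (img - img.mean()) / (img.std() + 1e-8)
```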
