
[TensorRT] InstanceNormalization_TRT Plugin

꾸준희 2021. 9. 1. 17:42



The following error occurred during engine deserialization: TensorRT could not find the InstanceNormalization_TRT plugin.



[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin InstanceNormalization_TRT version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (322) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.



My model contains an InstanceNormalization operator, which shows up during ONNX parsing.



ONNX operators

TensorRT Plugins

TensorRT Support Matrix

Support Matrix :: NVIDIA Deep Learning TensorRT Documentation

This section lists the supported TensorRT layers and each of their features.


According to the documents above, this operator has been supported since TensorRT 6, so I googled why the plugin could not be found. The answer: the plugins need to be initialized first.






1. Python API


Initializing the TensorRT plugins as in the example code below solved the problem.

trt.init_libnvinfer_plugins(TRT_LOGGER, '')


Example code 1.

import tensorrt as trt
import numpy as np

TRT_LOGGER = trt.Logger()

# register the built-in TensorRT plugins (InstanceNormalization_TRT, LReLU_TRT, ...)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')
PLUGIN_CREATORS = trt.get_plugin_registry().plugin_creator_list

def get_trt_plugin(plugin_name):
    plugin = None
    for plugin_creator in PLUGIN_CREATORS:
        if plugin_creator.name == plugin_name:
            lrelu_slope_field = trt.PluginField("neg_slope", np.array([0.1], dtype=np.float32), trt.PluginFieldType.FLOAT32)
            field_collection = trt.PluginFieldCollection([lrelu_slope_field])
            plugin = plugin_creator.create_plugin(name=plugin_name, field_collection=field_collection)
    return plugin

def main():
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    config = builder.create_builder_config()
    config.max_workspace_size = 2**20
    input_layer = network.add_input(name="input_layer", dtype=trt.float32, shape=(1, 1))
    lrelu = network.add_plugin_v2(inputs=[input_layer], plugin=get_trt_plugin("LReLU_TRT"))
    lrelu.get_output(0).name = "outputs"










2. C++ API



Usage example 1.

 initLibNvInferPlugins(&gLogger.getTRTLogger(), "");


Usage example 2.

	// load NV inference plugins
	static bool loadedPlugins = false;

	if( !loadedPlugins )
	{
		LogVerbose(LOG_TRT "loading NVIDIA plugins...\n");

		loadedPlugins = initLibNvInferPlugins(&gLogger, "");

		if( !loadedPlugins )
			LogError(LOG_TRT "failed to load NVIDIA plugins\n");
		else
			LogVerbose(LOG_TRT "completed loading NVIDIA plugins.\n");
	}












Reference 1 :


Unable load Tensor RT SavedModel after conversion in Tensorflow 2.1

I have been attempting to convert a YOLOv3 model implemented in Tensorflow 2 to Tensor RT by following the tutorial on the NVIDIA website.


Reference 2 :


TensorRT: NvInferPlugin.h File Reference

This is the API for the NVIDIA provided TensorRT plugins. The BatchedNMS plugin performs non_max_suppression on the input boxes, per batch, across all classes.


Reference 3 :


[TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin InstanceNormalization_TRT version 1 [12/22/2020-12:46:22] [E] [TRT] safeDeserializationUtils.cpp (322) - Serialization Error in load: ...


Reference 4 :

[TensorRT.ERROR]: getPluginCreator could not find plugin Normalize_TRT version 1 namespace

The giexec generate ssd.engine. It’s OK! But get error when inferencing in ssd.engine. CUDA:10 TRT:5.0.2 Driver:410.48