
The following error

ImportError: No module named 'tensorrt.parsers'; 'tensorrt' is not a package

 

occurs in the TensorRT examples when you import uffparser from tensorrt.parsers to load a UFF file and create a builder, as in the code below.

 

 

import numpy as np
import tensorflow as tf
import uff
import tensorrt as trt
from tensorrt.parsers import uffparser
import pycuda.driver as cuda
import pycuda.autoinit

# tf_model is a frozen TensorFlow graph (see the linked NVIDIA example)
uff_model = uff.from_tensorflow(tf_model, ["fc2/Relu"])

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

parser = uffparser.create_uff_parser()
parser.register_input("Placeholder", (1, 28, 28), 0)
parser.register_output("fc2/Relu")

# max batch size 1, max workspace size 1 MB
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

parser.destroy()

# get image (img) and its label
# ...

runtime = trt.infer.create_infer_runtime(G_LOGGER)
context = engine.create_execution_context()

# 10 is the classification size
output = np.empty(10, dtype=np.float32)

# Allocate device memory
d_input = cuda.mem_alloc(1 * img.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)

bindings = [int(d_input), int(d_output)]

stream = cuda.Stream()

# Transfer input data to device
cuda.memcpy_htod_async(d_input, img, stream)

# Execute model
context.enqueue(1, bindings, stream.handle, None)

# Transfer predictions back
cuda.memcpy_dtoh_async(output, d_output, stream)

# Synchronize threads
stream.synchronize()

print("Test Case: " + str(label))
print("Prediction: " + str(np.argmax(output)))

trt.utils.write_engine_to_file("./tf_mnist.engine", engine.serialize())

context.destroy()
engine.destroy()
runtime.destroy()
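The tensorrt.parsers / trt.infer layout above belongs to the old Python API; it was removed when the API was reorganized (around TensorRT 5.0), which is why the import fails on newer packages. A quick way to check which version is installed:

import tensorrt as trt
print(trt.__version__)  # 5.x and later no longer ship tensorrt.parsers at the top level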

 

https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/python_api/workflows/tf_to_tensorrt.html

 


According to NVIDIA:

 

 

The format has changed with the new python API. The documentation has more details on how this can be done.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#import_model_python

You would do something like this with the new API
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.CaffeParser() as parser:

------------------------
To continue using the legacy API, you would need to `import tensorrt.legacy` instead of `import tensorrt`

 

 

Source: https://devtalk.nvidia.com/default/topic/1042377/importerror-no-module-named-tensorrt-parsers-/
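If you just want to keep the legacy example above running, NVIDIA's note amounts to an import swap. A minimal sketch, assuming the tensorrt.legacy package mirrors the old module layout:

import tensorrt.legacy as trt                  # assumption: legacy namespace keeps the old trt.infer / trt.utils API
from tensorrt.legacy.parsers import uffparser  # assumption: parsers submodule exists under tensorrt.legacy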

 


To load the UFF file, create a builder, and build an engine with the new API, change the code as follows.

 

 

1. Convert the .pb file to a .uff file from the console:

$ convert-to-uff frozen_inference_graph.pb
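If the converter cannot infer the output node, you can list the graph nodes and name the output explicitly. The flags below are a sketch of the UFF converter CLI; check convert-to-uff --help for the exact options in your version:

$ convert-to-uff frozen_inference_graph.pb -l                        # list graph nodes
$ convert-to-uff frozen_inference_graph.pb -o model.uff -O fc2/Relu  # name the output file and output node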

 

2. Update the imports and switch to the new builder pattern:

import tensorrt as trt


...


TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

model_file = '/data/mnist/mnist.uff'


with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("Placeholder", (1, 28, 28))
    parser.register_output("fc2/Relu")
    parser.parse(model_file, network)
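To go on and build and save the engine, the with block can continue as below; a minimal sketch against the TensorRT 5 Python API, where max_batch_size and max_workspace_size are arbitrary placeholder values:

    # continuing inside the with block above
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 20  # 1 MB workspace; tune for your model
    engine = builder.build_cuda_engine(network)

    # serialize the engine for later reuse
    with open("./tf_mnist.engine", "wb") as f:
        f.write(engine.serialize())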

 
