
Converting an ONNX model to TRT produces the following warning message:

Your ONNX model has been generated with INT64 weights,
while TensorRT does not natively support INT64. Attempting to cast down to INT32.

 

The cast appears to cause no loss in accuracy, but whether it really has no effect needs to be verified experimentally. Why does the warning appear at all? Exporters such as tf2onnx and the PyTorch ONNX exporter emit shape- and index-related tensors as INT64 by default, whereas TensorRT only supports INT32, so the parser casts them down. Since those values (tensor shapes, axis indices) are small, the cast is normally lossless.
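The cast is only lossy if a value falls outside the INT32 range, which can be checked directly. A minimal sketch with NumPy, using a made-up shape tensor as a stand-in for the INT64 initializers an exporter typically emits:

```python
import numpy as np

# Hypothetical INT64 tensor, of the kind ONNX exporters emit for shapes/indices.
weights = np.array([1, 3, 224, 224], dtype=np.int64)

# The cast TensorRT performs is lossy only if a value exceeds the INT32 range.
info = np.iinfo(np.int32)
safe = bool(np.all((weights >= info.min) & (weights <= info.max)))
print(safe)  # True: every value fits, so casting down loses nothing

casted = weights.astype(np.int32)
```

The same range check can be run over every INT64 initializer in a real model (via `onnx.load` and `model.graph.initializer`) to confirm the warning is harmless for that model.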

 

Related issue: https://github.com/onnx/tensorflow-onnx/issues/883#issuecomment-614561227

 
 

Running the ONNX model through onnx-simplifier first may also be worth trying.

https://github.com/NVIDIA/TensorRT/issues/284
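A minimal sketch of how onnx-simplifier is typically invoked (the `model.onnx` / `model_sim.onnx` paths are placeholders); it folds constant subgraphs and can eliminate redundant INT64 shape ops before the model reaches the TensorRT parser:

```shell
# Install the onnx-simplifier package
pip install onnxsim

# Simplify the model: constant-folds shape computations and
# writes the simplified graph to a new file.
python3 -m onnxsim model.onnx model_sim.onnx
```

The simplified `model_sim.onnx` can then be fed to the TensorRT ONNX parser as usual.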

 
https://github.com/NVIDIA/TensorRT/issues/284#issuecomment-572835659

 