[TensorRT] Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Converting an ONNX model to TRT produces the following warning message:
Your ONNX model has been generated with INT64 weights,
while TensorRT does not natively support INT64. Attempting to cast down to INT32.
There appears to be no loss in accuracy, but an experiment is still needed to confirm that it really has no impact. Why does the warning appear in the first place? ONNX exporters store shape- and index-related tensors (e.g. Gather indices and Slice parameters) as INT64 by default, while TensorRT only supports INT32, so the parser casts them down; as long as the actual values fit in the INT32 range, the cast is lossless.
Related issue: https://github.com/onnx/tensorflow-onnx/issues/883#issuecomment-614561227
Another option is to use onnx-simplifier to simplify the ONNX model before conversion.
https://github.com/NVIDIA/TensorRT/issues/284
https://github.com/NVIDIA/TensorRT/issues/284#issuecomment-572835659