[TensorRT] List of Supported Operators (as of 2020.04.29)
꾸준희
2020. 4. 29. 00:09
The operators supported by TensorRT are as follows (as of 2020.04.29).
Caffe
- BatchNormalization
- BNLL
- Clip
- Concatenation
- Convolution
- Crop
- Deconvolution
- Dropout
- ElementWise
- ELU
- InnerProduct
- Input
- LeakyReLU
- LRN
- Permute
- Pooling
- Power
- Reduction
- ReLU, TanH, and Sigmoid
- Reshape
- SoftMax
- Scale
Note on Clip: When using the Clip operation, Caffe users must serialize their layers using ditcaffe.pb.h instead of caffe.pb.h in order to import the layer into TensorRT.
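As a minimal sketch of how these Caffe layers get consumed (not from the NVIDIA list itself), the TensorRT Python API's Caffe parser imports a deploy prototxt plus a caffemodel; the file names and the "prob" output blob below are placeholders chosen for illustration.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# "deploy.prototxt", "model.caffemodel" and the "prob" blob are placeholder names.
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.CaffeParser() as parser:
    # parse() maps the Caffe layers above onto TensorRT layers and returns
    # a blob-name -> ITensor lookup table.
    model_tensors = parser.parse(deploy="deploy.prototxt",
                                 model="model.caffemodel",
                                 network=network,
                                 dtype=trt.float32)
    network.mark_output(model_tensors.find("prob"))
    engine = builder.build_cuda_engine(network)
```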
TensorFlow
- Add, Sub, Mul, Div, Minimum and Maximum
- ArgMax
- ArgMin
- AvgPool
- BiasAdd
- Clip
- ConcatV2
- Const
- Conv2D
- ConvTranspose2D
- DepthwiseConv2dNative
- Elu
- ExpandDims
- FusedBatchNorm
- Identity
- LeakyReLU
- MaxPool
- Mean
- Negative, Abs, Sqrt, Recip, Rsqrt, Pow, Exp and Log
- Pad is supported if followed by one of these TensorFlow layers: Conv2D, DepthwiseConv2dNative, MaxPool, and AvgPool.
- Placeholder
- ReLU, TanH, and Sigmoid
- Relu6
- Reshape
- Sin, Cos, Tan, Asin, Acos, Atan, Sinh, Cosh, Asinh, Acosh, Atanh, Ceil and Floor
- Selu
- Slice
- SoftMax
- Note: If the input to a TensorFlow SoftMax op is not NHWC, TensorFlow will automatically insert a transpose layer with a non-constant permutation, causing the UFF converter to fail. It is therefore advisable to manually transpose SoftMax inputs to NHWC using a constant permutation.
- Softplus
- Softsign
- Transpose
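As a rough sketch of the TensorFlow route (the frozen graph file, node names, and input shape below are assumptions, not values from this post): the frozen graph is first converted to UFF, which only succeeds when the graph sticks to the ops listed above, and the UFF file is then imported with the UFF parser.

```python
import tensorrt as trt
import uff

# Convert a frozen TensorFlow graph to UFF. File and node names are placeholders.
uff.from_tensorflow_frozen_model("frozen_graph.pb",
                                 output_nodes=["logits/Softmax"],
                                 output_filename="model.uff")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    # Register the graph input (CHW shape) and output by name, then parse.
    parser.register_input("input", (3, 224, 224))
    parser.register_output("logits/Softmax")
    parser.parse("model.uff", network)
    engine = builder.build_cuda_engine(network)
```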
ONNX
- Abs
- Acos
- Acosh
- And
- Asin
- Asinh
- Atan
- Atanh
- Add
- ArgMax
- ArgMin
- AveragePool
- BatchNormalization
- Cast
- Ceil
- Clip
- Concat
- Constant
- ConstantOfShape
- Conv
- ConvTranspose
- Cos
- Cosh
- DepthToSpace
- DequantizeLinear
- Div
- Dropout
- Elu
- Equal
- Erf
- Exp
- Expand
- Flatten
- Floor
- Gather
- Gemm
- GlobalAveragePool
- GlobalMaxPool
- Greater
- GRU
- HardSigmoid
- Identity
- ImageScaler
- InstanceNormalization
- LRN
- LeakyRelu
- Less
- Log
- LogSoftmax
- Loop
- LSTM
- MatMul
- Max
- MaxPool
- Mean
- Min
- Mul
- Neg
- Not
- Or
- Pad
- ParametricSoftplus
- Pow
- PRelu
- QuantizeLinear
- RandomUniform
- RandomUniformLike
- Range
- Reciprocal
- ReduceL1
- ReduceL2
- ReduceLogSum
- ReduceLogSumExp
- ReduceMax
- ReduceMean
- ReduceMin
- ReduceProd
- ReduceSum
- ReduceSumSquare
- Relu
- Reshape
- Resize
- RNN
- ScaledTanh
- Scan
- Selu
- Shape
- Sigmoid
- Sin
- Sinh
- Size
- Slice
- Softmax
- Softplus
- Softsign
- SpaceToDepth
- Split
- Sqrt
- Squeeze
- Sub
- Sum
- Tan
- Tanh
- ThresholdedRelu
- Tile
- TopK
- Transpose
- Unsqueeze
- Upsample
- Where
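As a minimal sketch for the ONNX route (the "model.onnx" path is a placeholder), TensorRT 7 parses ONNX models with trt.OnnxParser on an explicit-batch network; parsing fails with error messages when the graph uses an op outside the list above.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# TensorRT 7's ONNX parser requires an explicit-batch network definition.
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    # "model.onnx" is a placeholder path.
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
```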
Reference: https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#fntarg_12