NVIDIA Jetson Benchmarks (reference material)
https://developer.nvidia.com/embedded/jetson-benchmarks
Jetson is used to deploy a wide range of popular DNN models and ML frameworks to the edge with high-performance inference, for tasks such as real-time classification and object detection, pose estimation, semantic segmentation, and natural language processing (NLP). The table on the linked benchmarks page shows inference benchmarks for popular vision DNNs across the Jetson family with the latest JetPack. These results can be reproduced by running the open jetson_benchmarks project from GitHub.
* Latency greater than 15 ms.
On Jetson Xavier NX and Jetson AGX Xavier, both NVIDIA Deep Learning Accelerator (NVDLA) engines and the GPU were run simultaneously with INT8 precision, while on Jetson Nano and Jetson TX2 the GPU was run with FP16 precision.
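This mapping of module to accelerator and precision is typically expressed through TensorRT builder settings. The sketch below is not the benchmark project's actual code; the ONNX input path, the workspace size, and the omitted INT8 calibrator are assumptions. It only illustrates how a DLA INT8 engine (Xavier-class modules) or a GPU FP16 engine (Nano/TX2) would be configured with the TensorRT Python API:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, use_dla=False, int8=False, fp16=True):
    """Build a TensorRT engine from an ONNX model.

    use_dla + int8 mirrors the Xavier NX / AGX Xavier setup (NVDLA, INT8);
    fp16 alone mirrors the Nano / TX2 setup (GPU, FP16).
    """
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB, an arbitrary choice

    if fp16:
        config.set_flag(trt.BuilderFlag.FP16)
    if int8:
        # A real INT8 build also needs a calibrator or explicit
        # per-tensor dynamic ranges; omitted here for brevity.
        config.set_flag(trt.BuilderFlag.INT8)
    if use_dla:
        config.default_device_type = trt.DeviceType.DLA
        config.DLA_core = 0  # 0 or 1 on Xavier-class modules
        # Fall back to the GPU for layers the DLA cannot run.
        config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

    return builder.build_engine(network, config)
```

On JetPack 4.4.1 (TensorRT 7.x) builder.build_engine is the relevant call; newer TensorRT releases move toward build_serialized_network, so adjust accordingly.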
Notes:
- Each Jetson module was run at maximum performance (see the power-mode sketch after the closing paragraph below):
  - MAX-N mode for Jetson AGX Xavier
  - 15W mode for Jetson Xavier NX and Jetson TX2
  - 10W mode for Jetson Nano
- Minimum latency results
  - The minimum-latency throughput results were obtained with the largest batch size that kept latency under 15 ms (50 ms for BERT); otherwise, a batch size of one was used (see the batch-size sketch after this list).
- Maximum performance results
  - The maximum-throughput results were obtained with no latency limit and show the maximum performance that can be achieved.
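To illustrate the minimum-latency rule above, the sketch below sweeps batch sizes and keeps the largest one that stays within the latency budget. Here measure_latency is a hypothetical stand-in for timing one inference of the built engine at a given batch size, and the candidate batch sizes are arbitrary; only the 15 ms / 50 ms budgets come from the notes above.

```python
def pick_min_latency_batch(measure_latency,
                           candidate_batches=(1, 2, 4, 8, 16, 32),
                           budget_ms=15.0):
    """Return (batch_size, latency_ms, throughput_fps) for the largest batch
    whose measured latency stays within budget_ms; fall back to batch 1.

    measure_latency(batch) -> latency in milliseconds (hypothetical helper).
    """
    best = None
    for batch in sorted(candidate_batches):
        latency_ms = measure_latency(batch)
        if latency_ms <= budget_ms:
            # Larger batches are tried later, so this keeps the largest
            # batch size that still meets the latency budget.
            best = (batch, latency_ms, batch * 1000.0 / latency_ms)
    if best is None:
        # No batch met the budget: report batch size one, per the note above.
        latency_ms = measure_latency(1)
        best = (1, latency_ms, 1000.0 / latency_ms)
    return best
```

For BERT the same sweep would simply use budget_ms=50.0, per the note above.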
This methodology balances deterministic low-latency requirements for real-time applications against maximum performance for multi-stream use cases. All results were obtained with JetPack 4.4.1.
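The maximum-performance configuration in the notes corresponds to pinning the board's power mode and clocks with the standard Jetson tools, nvpmodel and jetson_clocks. A minimal sketch follows; the mode ID is device specific (for example, MAX-N is mode 0 on AGX Xavier), so treat the value below as an assumption and check the active mode with `sudo nvpmodel -q` on the target board.

```python
import subprocess

def set_max_performance(nvpmodel_mode=0):
    """Pin a Jetson board to a fixed power mode and max out the clocks.

    nvpmodel_mode is device specific (e.g. 0 is MAX-N on AGX Xavier);
    verify the modes available on your board before relying on it.
    Both commands require root privileges.
    """
    # Select the power mode (sets CPU/GPU/EMC frequency caps and core count).
    subprocess.run(["sudo", "nvpmodel", "-m", str(nvpmodel_mode)], check=True)
    # Lock the clocks to the maximum allowed by the selected power mode.
    subprocess.run(["sudo", "jetson_clocks"], check=True)
    # Print the active power mode for confirmation.
    subprocess.run(["sudo", "nvpmodel", "-q"], check=True)

if __name__ == "__main__":
    set_max_performance(nvpmodel_mode=0)  # assumption: MAX-N on Jetson AGX Xavier
```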