Tag: TensorRT

How Quantization Aware Training Enables Low-Precision Accuracy Recovery
Deploy High-Performance AI Models in Windows Applications on NVIDIA RTX AI PCs
NVIDIA AI Inference Backends
BOXER-8741AI: NVIDIA Jetson T5000
Introducing NVIDIA Jetson Thor, the Ultimate Platform for Physical AI
Ultralytics YOLO11
Access to NVIDIA NIM Now Available Free to Developer Program Members
Generate Traffic Insights Using YOLOv8 and NVIDIA JetPack 6.0
Sparsity in INT8: Training Workflow and Best Practices for NVIDIA TensorRT Acceleration
End-to-End AI for NVIDIA-Based PCs: CUDA and TensorRT Execution Providers in ONNX Runtime
Get Started With the NVIDIA DeepStream SDK
YOLOv7: YOLO with Transformers and Instance Segmentation, with TensorRT acceleration!
Real-Time Object Detection with DeepStream on NVIDIA Jetson AGX Orin
The practical guide for Object Detection with YOLOv5 algorithm
NVIDIA Announces TensorRT 8.2 and Integrations with PyTorch and TensorFlow
NVIDIA Announces TensorRT 8 Slashing BERT-Large Inference Down to 1 Millisecond
Using MATLAB and TensorRT on NVIDIA GPUs
NVIDIA Open Sources Parsers and Plugins in TensorRT