proxy.golang.org : github.com/nvidia/tensorrt-inference-server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
purl: pkg:golang/github.com/nvidia/tensorrt-inference-server
Keywords: cloud, datacenter, deep-learning, edge, gpu, inference, machine-learning
License: BSD-3-Clause
Latest release: about 1 month ago
First release: over 6 years ago
Namespace: github.com/nvidia
Stars: 4,958 on GitHub
Forks: 1,125 on GitHub
Last synced: 13 days ago