An open API service providing package, version, and dependency metadata for many open source software ecosystems and registries.
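As a rough sketch of how a package record like the one below might be fetched from such a service (the base URL, endpoint path, and response field names are assumptions about the ecosyste.ms packages API, not confirmed by this page):

    # Sketch: fetch vllm's metadata from the ecosyste.ms packages API.
    # The endpoint path and the response field names are assumptions;
    # check the service's own API documentation for authoritative routes.
    import requests

    BASE = "https://packages.ecosyste.ms/api/v1"  # assumed base URL

    resp = requests.get(f"{BASE}/registries/pypi.org/packages/vllm", timeout=10)
    resp.raise_for_status()
    pkg = resp.json()

    print(pkg.get("name"), pkg.get("latest_release_number"))
    print("dependent packages:", pkg.get("dependent_packages_count"))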

Top 3.4% on pypi.org
Top 1.5% downloads on pypi.org
Top 7.4% dependent packages on pypi.org
Top 6.8% dependent repos on pypi.org
Top 1.6% forks on pypi.org
Top 3.2% docker downloads on pypi.org

pypi.org: vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

purl: pkg:pypi/vllm (see the parsing sketch below)
Keywords: amd, cuda, gpt, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, rocm, tpu, trainium, transformer, xpu
License: Apache-2.0
Latest release: 19 days ago
First release: about 2 years ago
Dependent packages: 46
Dependent repositories: 5
Downloads: 4,672,527 last month
Stars: 25,904 on GitHub
Forks: 3,780 on GitHub
Docker dependents: 27
Docker downloads: 16,137
Total commits: 2,550
Committers: 523
Average commits per author: 4.876
Development Distribution Score (DDS): 0.828 (see the computation sketch below)
More commit stats: commits.ecosyste.ms
See more repository details: repos.ecosyste.ms
Funding links: https://github.com/sponsors/vllm-project, https://opencollective.com/vllm
Last synced: about 5 hours ago
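The purl field above is a Package URL, a standard cross-ecosystem identifier that can be parsed programmatically; a minimal sketch using the packageurl-python library (pip install packageurl-python):

    # Sketch: parse the purl shown above with packageurl-python.
    from packageurl import PackageURL

    purl = PackageURL.from_string("pkg:pypi/vllm")
    print(purl.type)  # -> "pypi"
    print(purl.name)  # -> "vllm"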
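The Development Distribution Score is commonly defined as one minus the top committer's share of all commits; the sketch below back-solves a top-committer count consistent with the stats above (both the formula and that count are assumptions, not stated on this page):

    # Sketch of the DDS under the common definition:
    # DDS = 1 - (commits by the most active committer / total commits).
    # The top-committer count is back-solved from the stats above,
    # not taken from the page.
    total_commits = 2550
    top_committer_commits = 439  # assumed; yields ~0.828

    dds = 1 - top_committer_commits / total_commits
    print(f"DDS: {dds:.3f}")  # DDS: 0.828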
