Ecosyste.ms: Packages

An open API service providing package, version, and dependency metadata for many open source software ecosystems and registries.
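The same metadata is available programmatically through the service's JSON API. A minimal sketch in Python, assuming the registry/package endpoint pattern and the response field names noted in the comments (the "JSON" link on this page points at the authoritative URL; verify field names against the API documentation):

import requests

# Endpoint pattern is an assumption based on the service's REST layout.
url = "https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/vllm"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
pkg = resp.json()

# Field names below are assumptions; inspect pkg.keys() to confirm.
print(pkg.get("name"), pkg.get("latest_release_number"))
print("downloads last month:", pkg.get("downloads"))
print("dependent packages:", pkg.get("dependent_packages_count"))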

Top 3.4% overall on pypi.org
Top 1.5% downloads on pypi.org
Top 7.4% dependent packages on pypi.org
Top 6.8% dependent repos on pypi.org
Top 1.6% forks on pypi.org
Top 3.2% docker downloads on pypi.org

pypi.org: vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Registry - Source - Documentation - JSON
purl: pkg:pypi/vllm
Keywords: amd, cuda, gpt, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, rocm, trainium, transformer
License: Apache-2.0
Latest release: about 1 month ago
First release: 12 months ago
Dependent packages: 46
Dependent repositories: 5
Downloads: 409,025 last month
Stars: 17,633 on GitHub
Forks: 2,286 on GitHub
Docker dependents: 27
Docker downloads: 16,137
See more repository details: repos.ecosyste.ms
Last synced: about 8 hours ago
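For the package itself: vllm installs from PyPI (pip install vllm) and exposes an offline batch-inference API. A minimal sketch using the library's LLM and SamplingParams entry points; the model name is only an illustrative choice, and a CUDA-capable environment is assumed:

from vllm import LLM, SamplingParams

# Load any Hugging Face causal LM that vLLM supports; opt-125m is small
# enough for a quick smoke test.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)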

byzerllm 0.1.89
ByzerLLM: Byzer LLM
96 versions - Latest release: 22 days ago - 1 dependent package - 2 dependent repositories - 3,050 downloads last month - 17,633 stars on GitHub - 1 maintainer
vllm-xft 0.3.3.1
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: 30 days ago - 436 downloads last month - 17,633 stars on GitHub - 2 maintainers
vllm-acc 0.4.1
A high-throughput and memory-efficient inference and serving engine for LLMs
8 versions - Latest release: about 1 month ago - 306 downloads last month - 17,633 stars on GitHub - 1 maintainer
vllm-online 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: about 1 month ago - 42 downloads last month - 17,633 stars on GitHub - 1 maintainer
nextai-vllm 0.0.7
A high-throughput and memory-efficient inference and serving engine for LLMs
6 versions - Latest release: about 1 month ago - 33 downloads last month - 17,633 stars on GitHub - 1 maintainer
tilearn-infer 0.3.3
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: about 2 months ago - 29 downloads last month - 17,633 stars on GitHub - 1 maintainer
tilearn-test01 0.1
A high-throughput and memory-efficient inference and serving engine for LLMs
1 version - Latest release: 2 months ago - 9 downloads last month - 17,633 stars on GitHub - 1 maintainer
llm-swarm 0.1.1
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 3 months ago - 48 downloads last month - 17,633 stars on GitHub - 1 maintainer
superlaser 0.0.6
An MLOps library for LLM deployment w/ the vLLM engine on RunPod's infra.
6 versions - Latest release: 3 months ago - 76 downloads last month - 17,633 stars on GitHub - 1 maintainer
hive-vllm 0.0.1
a
1 version - Latest release: 3 months ago - 10 downloads last month - 17,633 stars on GitHub - 1 maintainer
llm_atc 0.1.7
Tools for fine tuning and serving LLMs
6 versions - Latest release: 6 months ago - 47 downloads last month - 17,633 stars on GitHub - 1 maintainer
vllm-consul 0.2.1
A high-throughput and memory-efficient inference and serving engine for LLMs
5 versions - Latest release: 8 months ago - 16 downloads last month - 12,665 stars on GitHub - 1 maintainer