An open API service providing package, version, and dependency metadata for many open source software ecosystems and registries.
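For illustration, here is a minimal Python sketch of fetching one package record from this API. The base URL and endpoint path follow the documented REST layout, but treat the exact path and the response field names as assumptions to verify against the API docs:

```python
# Minimal sketch: fetch one package record from the ecosyste.ms
# packages API. The endpoint path and response field names are
# assumptions to check against the API documentation.
import json
import urllib.request

API_BASE = "https://packages.ecosyste.ms/api/v1"  # assumed base URL

def fetch_package(registry: str, name: str) -> dict:
    """Return the JSON metadata record for one package."""
    url = f"{API_BASE}/registries/{registry}/packages/{name}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

pkg = fetch_package("pypi.org", "vllm-emissary")
print(pkg.get("description"))  # field name assumed
```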

pypi.org: vllm-emissary

A high-throughput and memory-efficient inference and serving engine for LLMs

Registry - Source - Documentation - JSON
purl: pkg:pypi/vllm-emissary (see the parsing sketch below)
Keywords: amd, cuda, deepseek, gpt, hpu, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, qwen, rocm, tpu, trainium, transformer, xpu
License: Other
Latest release: 16 days ago
First release: 16 days ago
Downloads: 211 last month
Stars: 44,312 on GitHub
Forks: 6,781 on GitHub
Total Commits: 4106
Committers: 701
Average commits per author: 5.857
Development Distribution Score (DDS): 0.882 (see the worked check below)
More commit stats: commits.ecosyste.ms
See more repository details: repos.ecosyste.ms
Funding links: https://github.com/sponsors/vllm-project, https://opencollective.com/vllm
Last synced: 1 day ago
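The purl field above is a Package URL, a registry-neutral package identifier. As one hedged illustration, it can be parsed with the packageurl-python library (pip install packageurl-python):

```python
# Parse the Package URL shown above with packageurl-python.
from packageurl import PackageURL

purl = PackageURL.from_string("pkg:pypi/vllm-emissary")
print(purl.type)         # "pypi" -> the registry ecosystem
print(purl.name)         # "vllm-emissary"
print(purl.to_string())  # round-trips to "pkg:pypi/vllm-emissary"
```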
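The Development Distribution Score is commonly defined as one minus the share of commits made by the single busiest committer; that formula is a reading of the metric rather than something stated on this page. A back-of-the-envelope check against the commit stats above:

```python
# Sanity check of DDS = 1 - (top committer's commits / total commits).
# The formula is an assumption about how the score is defined; the
# counts come from the stats listed above.
total_commits = 4106
top_committer_commits = 485  # implied by DDS = 0.882: (1 - 0.882) * 4106 ~= 485

dds = 1 - top_committer_commits / total_commits
print(round(dds, 3))  # 0.882, matching the figure above
```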
