pypi.org : airllm
AirLLM allows a single 4GB GPU card to run 70B large language models without quantization, distillation, or pruning, and 8GB of VRAM to run the 405B Llama 3.1.
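AirLLM fits large models into small GPUs by executing the model layer by layer, so only one transformer layer's weights need to sit in GPU memory at a time. A minimal usage sketch, based on the pattern in the project's README; the AutoModel class, the example model id, and the generate arguments are assumptions and may differ across airllm versions:

from airllm import AutoModel  # assumes a recent airllm release that exposes AutoModel

MAX_LENGTH = 128

# Any Hugging Face repo id for a supported model; the id below is illustrative.
model = AutoModel.from_pretrained("garage-bAInd/Platypus2-70B-instruct")

input_text = ["What is the capital of the United States?"]

input_tokens = model.tokenizer(
    input_text,
    return_tensors="pt",
    return_attention_mask=False,
    truncation=True,
    max_length=MAX_LENGTH,
    padding=False,
)

# Generation streams weights layer by layer, keeping GPU memory use low.
generation_output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)

print(model.tokenizer.decode(generation_output.sequences[0]))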
purl: pkg:pypi/airllm
Keywords: chinese-llm, chinese-nlp, finetune, generative-ai, instruct-gpt, instruction-set, llama, llm, lora, open-models, open-source, open-source-models, qlora
License: MIT
Latest release: 7 months ago
First release: over 1 year ago
Downloads: 1,981 last month
Stars: 5,756 on GitHub
Forks: 455 on GitHub
Total Commits: 236
Committers: 10
Average commits per author: 23.6
Development Distribution Score (DDS): 0.068 (see the note after these stats)
More commit stats: commits.ecosyste.ms
See more repository details: repos.ecosyste.ms
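Note on DDS: as commonly defined (e.g. by commits.ecosyste.ms), DDS = 1 − (commits by the top committer) / (total commits). Assuming that definition, a DDS of 0.068 over 236 commits implies the most active committer authored roughly 220 of them, so development is heavily concentrated in one author.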
Funding links: https://github.com/sponsors/lyogavin, https://buymeacoffee.com/lyogavinq
Last synced: 11 days ago