npmjs.org : node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp, and enforce a JSON schema on the model output at the generation level.
purl: pkg:npm/node-llama-cpp
Keywords: llama, llama-cpp, llama.cpp, bindings, ai, cmake, cmake-js, prebuilt-binaries, llm, gguf, metal, cuda, vulkan, grammar, embedding, rerank, reranking, json-grammar, json-schema-grammar, functions, function-calling, token-prediction, speculative-decoding, temperature, minP, topK, topP, seed, json-schema, raspberry-pi, self-hosted, local, catai, mistral, deepseek, qwen, qwq, typescript, lora, batching, gpu, nodejs
License: MIT
Latest release: 5 days ago
First release: almost 2 years ago
Dependent packages: 3
Dependent repositories: 1
Downloads: 29,583 last month
Stars: 1,500 on GitHub
Forks: 127 on GitHub
Total Commits: 174
Committers: 6
Average commits per author: 29.0
Development Distribution Score (DDS): 0.063
More commit stats: commits.ecosyste.ms
See more repository details: repos.ecosyste.ms
Funding link: https://github.com/sponsors/giladgd
Last synced: 3 days ago
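The Development Distribution Score above is commonly defined as 1 minus the share of commits made by the top committer, so a DDS of 0.063 with 174 total commits implies the top committer authored roughly 163 of them. A minimal sketch of that calculation (the six-way commit split below is hypothetical, chosen to match the listed totals):

```typescript
// Development Distribution Score: 1 - (top committer's commits / total commits).
// A score near 0 means one author dominates; near 1 means work is spread out.
function dds(commitsPerAuthor: number[]): number {
  const total = commitsPerAuthor.reduce((a, b) => a + b, 0);
  return 1 - Math.max(...commitsPerAuthor) / total;
}

// Hypothetical split across the 6 committers, summing to the 174 total commits.
const example = [163, 5, 3, 1, 1, 1];
console.log(dds(example).toFixed(3)); // → "0.063"
```

This is why a project can have healthy download numbers yet a low DDS: the metric measures contributor concentration, not activity.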