@realtimex/node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
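The schema-enforcement feature described above can be sketched as follows. This is a minimal illustration, assuming this scoped package mirrors the upstream node-llama-cpp v3 API (`getLlama`, `loadModel`, `createGrammarForJsonSchema`, `LlamaChatSession`); the model path and the schema are placeholders.

```typescript
// Sketch: JSON-schema-constrained generation, assuming the upstream
// node-llama-cpp v3 API surface. The model path is a placeholder.
import {getLlama, LlamaChatSession} from "@realtimex/node-llama-cpp";

const llama = await getLlama(); // selects the best available backend (Metal/CUDA/Vulkan/CPU)
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// The grammar constrains sampling itself: only tokens that keep the
// output valid against the schema can be emitted, so the result always parses.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        sentiment: {type: "string"},
        positive: {type: "boolean"}
    }
} as const);

const res = await session.prompt("Classify the sentiment of: 'great library!'", {grammar});
const parsed = grammar.parse(res); // object typed from the schema
console.log(parsed.sentiment, parsed.positive);
```

Enforcing the schema during sampling, rather than validating afterwards, is what "on the generation level" means here: malformed JSON cannot be produced in the first place.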
Ecosystem: npmjs.org
Latest Release: 0.3.1 (3 days ago)
Versions: 5
Downloads: 173 last month
@realtimex/node-llama-cpp-win-x64-cuda-ext 0.3.1
Extension of @realtimex/win-x64-cuda - prebuilt binary for node-llama-cpp for Windows x64 with CUDA support. 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext 0.3.1
Extension of @realtimex/linux-x64-cuda - prebuilt binary for node-llama-cpp for Linux x64 with CUDA support. 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-06 0.3.1
Chunk package for the Windows x64 CUDA fallback backend used by node-llama-cpp (6/6). 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-05 0.3.1
Chunk package for the Windows x64 CUDA fallback backend used by node-llama-cpp (5/6). 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-04 0.3.1
Chunk package for the Windows x64 CUDA fallback backend used by node-llama-cpp (4/6). 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-03 0.3.1
Chunk package for the Windows x64 CUDA fallback backend used by node-llama-cpp (3/6). 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-02 0.3.1
Chunk package for the Windows x64 CUDA fallback backend used by node-llama-cpp (2/6). 4 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-01 0.3.1
Chunk package for the Windows x64 CUDA fallback backend used by node-llama-cpp (1/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-06 0.3.1
Chunk package for the Linux x64 CUDA fallback backend used by node-llama-cpp (6/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-05 0.3.1
Chunk package for the Linux x64 CUDA fallback backend used by node-llama-cpp (5/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-04 0.3.1
Chunk package for the Linux x64 CUDA fallback backend used by node-llama-cpp (4/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-03 0.3.1
Chunk package for the Linux x64 CUDA fallback backend used by node-llama-cpp (3/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-02 0.3.1
Chunk package for the Linux x64 CUDA fallback backend used by node-llama-cpp (2/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-01 0.3.1
Chunk package for the Linux x64 CUDA fallback backend used by node-llama-cpp (1/6). 4 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-08 0.0.0-bootstrap.0
Bootstrap package reserved for chunked CUDA ext publishing. 1 version - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda-ext-chunk-07 0.0.0-bootstrap.0
Bootstrap package reserved for chunked CUDA ext publishing. 1 version - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-08 0.0.0-bootstrap.0
Bootstrap package reserved for chunked CUDA ext publishing. 1 version - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda-ext-chunk-07 0.0.0-bootstrap.0
Bootstrap package reserved for chunked CUDA ext publishing. 1 version - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-win-x64-vulkan 0.2.4
Prebuilt binary for node-llama-cpp for Windows x64 with Vulkan support. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64-cuda 0.2.4
Prebuilt binary for node-llama-cpp for Windows x64 with CUDA support. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-x64 0.2.4
Prebuilt binary for node-llama-cpp for Windows x64. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-win-arm64 0.2.4
Prebuilt binary for node-llama-cpp for Windows arm64. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-mac-x64 0.2.4
Prebuilt binary for node-llama-cpp for macOS x64. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-mac-arm64-metal 0.2.4
Prebuilt binary for node-llama-cpp for macOS arm64 with Metal support. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-vulkan 0.2.4
Prebuilt binary for node-llama-cpp for Linux x64 with Vulkan support. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-linux-x64-cuda 0.2.4
Prebuilt binary for node-llama-cpp for Linux x64 with CUDA support. 2 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-x64 0.2.4
Prebuilt binary for node-llama-cpp for Linux x64. 2 versions - Latest release: 3 days ago - 0 stars on GitHub - 1 maintainer
@realtimex/node-llama-cpp-linux-armv7l 0.2.4
Prebuilt binary for node-llama-cpp for Linux armv7l. 2 versions - Latest release: 3 days ago - 1 maintainer
@realtimex/node-llama-cpp-linux-arm64 0.2.4
Prebuilt binary for node-llama-cpp for Linux arm64. 2 versions - Latest release: 3 days ago - 1 maintainer
Links
| Registry | npmjs.org |
| Source | Repository |
| Homepage | Homepage |
| JSON API | View JSON |
| CodeMeta | codemeta.json |
Package Details
| PURL | pkg:npm/%40realtimex/node-llama-cpp |
| License | MIT |
| Namespace | realtimex |
| First Release | 6 days ago |
| Last Synced | 3 days ago |
Keywords
llama llama-cpp llama.cpp bindings ai cmake cmake-js prebuilt-binaries llm gguf metal cuda vulkan grammar embedding rerank reranking json-grammar json-schema-grammar functions function-calling token-prediction speculative-decoding temperature minP topK topP seed xtc json-schema raspberry-pi self-hosted local catai mistral deepseek qwen qwq gpt gpt-oss typescript lora batching gpu
Repository
| Stars | 0 on GitHub |
| Forks | 0 on GitHub |