Search results
9 packages found
Control what LLMs can, and can't, say
React Native binding of llama.cpp
llama.cpp GGUF file parser for JavaScript
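The GGUF container format is simple enough that its fixed header can be read directly. As a minimal sketch in plain Node.js (independent of this package's API, which is not shown on this page), here is what such a parser starts with: the magic bytes "GGUF", then a little-endian uint32 version, a uint64 tensor count, and a uint64 metadata key/value count.

```js
import {openSync, readSync, closeSync} from "node:fs";

// Read the fixed 24-byte GGUF header and sanity-check the magic.
function readGgufHeader(path) {
  const fd = openSync(path, "r");
  const buf = Buffer.alloc(24);
  readSync(fd, buf, 0, 24, 0);
  closeSync(fd);
  // "GGUF" in ASCII, read as a little-endian uint32.
  if (buf.readUInt32LE(0) !== 0x46554747) {
    throw new Error("not a GGUF file");
  }
  return {
    version: buf.readUInt32LE(4),
    tensorCount: buf.readBigUInt64LE(8),
    metadataKvCount: buf.readBigUInt64LE(16),
  };
}
```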
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level (a sketch follows the keyword list below)
- llama
- llama-cpp
- llama.cpp
- bindings
- ai
- cmake
- cmake-js
- prebuilt-binaries
- llm
- gguf
- metal
- cuda
- grammar
- json-grammar
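To illustrate what generation-level JSON-schema forcing looks like, here is a minimal sketch based on node-llama-cpp's documented v3 API (`getLlama`, `createGrammarForJsonSchema`); method names may differ across versions, and the model path is a placeholder.

```js
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// A grammar compiled from a JSON schema; sampling is constrained so the
// model cannot emit anything but a matching JSON object.
const grammar = await llama.createGrammarForJsonSchema({
  type: "object",
  properties: {
    sentiment: {enum: ["positive", "negative", "neutral"]}
  }
});

const answer = await session.prompt("Classify: 'I love this!'", {grammar});
console.log(grammar.parse(answer)); // e.g. {sentiment: "positive"}
```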
A library for generating syntactically valid code from an LLM.
A simple grammar builder compatible with GBNF (llama.cpp)
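For context, GBNF is llama.cpp's BNF-like grammar dialect. A builder like this one would emit text along these lines (hand-written here as an illustration, not produced with the package's API):

```js
// A GBNF grammar constraining output to "yes" or "no",
// optionally followed by a comma and a short reason.
const gbnf = `
root   ::= answer ("," " " reason)?
answer ::= "yes" | "no"
reason ::= [a-zA-Z ]+
`;
```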
contort (Contortionist) — https://www.npmjs.com/package/contort, https://github.com/thekevinscott/contortionist
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can run inference against the host of the model.
Serve GGML 4/5-bit quantized LLMs based on Meta's LLaMA model over WebSocket with llama.cpp
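The last two entries share the same shape: a socket server in front of a local llama.cpp model. A minimal sketch of that pattern, with hypothetical event names ("prompt", "token", "done") and a stub generator standing in for real inference (neither package's actual protocol is shown on this page):

```js
import {Server} from "socket.io";

// Stub standing in for real llama.cpp inference; yields tokens one at a time.
async function* streamGenerate(prompt) {
  for (const word of `echo: ${prompt}`.split(" ")) yield word + " ";
}

const io = new Server(3000);

io.on("connection", (socket) => {
  // "prompt"/"token"/"done" are illustrative; each package defines its own events.
  socket.on("prompt", async (text) => {
    for await (const token of streamGenerate(text)) {
      socket.emit("token", token);
    }
    socket.emit("done");
  });
});
```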