pypi.org : llm-elasticsearch-cache
[IMPORTANT: This library is now part of LangChain; follow its official documentation.] A caching layer for LLMs that uses Elasticsearch, fully compatible with LangChain caching for both chat and embeddings models.
Registry - Source - Documentation - JSON
purl: pkg:pypi/llm-elasticsearch-cache
Keywords: langchain, elasticsearch, openai, llm, chatgpt, chatgpt-api, embeddings, langchain-python, llmops, llms, openai-api, prompt-engineering
License: MIT
Latest release: 11 months ago
First release: about 1 year ago
Downloads: 174 last month
Stars: 7 on GitHub
Forks: 0 on GitHub
See more repository details: repos.ecosyste.ms
Last synced: 30 days ago
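
Since the package description says the cache plugs into LangChain's caching layer, here is a minimal usage sketch. The import path, the ElasticsearchCache class, and the es_url/index_name parameter names are assumptions based on the langchain-elasticsearch package (the official home of this code per the note above); check the LangChain documentation for the exact constructor signature.

```python
# Minimal sketch: wiring an Elasticsearch-backed LLM cache into LangChain.
# Class name, import path, and constructor parameters are assumptions based on
# the langchain-elasticsearch package; verify against the official docs.
from langchain.globals import set_llm_cache
from langchain_elasticsearch import ElasticsearchCache

# Point the cache at a local Elasticsearch instance and a dedicated index
# (es_url and index_name are assumed parameter names).
set_llm_cache(
    ElasticsearchCache(
        index_name="llm-chat-cache",
        es_url="http://localhost:9200",
    )
)

# From here on, LangChain chat/LLM calls are looked up in (and written to)
# the Elasticsearch index instead of hitting the model provider every time.
```

A similar store exists for embeddings caching; consult the official LangChain docs for that class and its parameters.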