Chen Kinnrot left a comment:
Hey Shiv,
A few questions (rough sketches of what I mean follow the list):
- Can I moderate hallucinations in real time and stop my LLM from sending a response to the user, for example? Or is it just alerting?
- Is it possible to evaluate RAG when the vector store changes? Do you have side-by-side retrieval evaluation?
- If my app has multiple chained LLM calls, can I evaluate the entire flow?
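
To make the first question concrete, here is a minimal sketch of the kind of real-time gate I have in mind; `detect_hallucination` is a hypothetical placeholder check, not an actual Athina API:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged: bool
    reason: str = ""

def detect_hallucination(draft: str, context: str) -> Verdict:
    # Placeholder detector: a real one would call an evaluation service.
    # Flags sentences in the draft that never appear in the retrieved context.
    unsupported = [s for s in draft.split(". ") if s and s not in context]
    return Verdict(flagged=bool(unsupported), reason="; ".join(unsupported))

def respond(draft: str, context: str) -> str:
    # Gate the reply: block it before it reaches the user,
    # instead of merely alerting after the fact.
    if detect_hallucination(draft, context).flagged:
        return "I'm not confident in that answer, so I won't send it."
    return draft
```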
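
For the second question, a rough sketch of side-by-side retrieval evaluation, assuming two hypothetical retriever callables standing in for the old and new vector store versions:

```python
def compare_retrieval(queries, retrieve_old, retrieve_new, k=5):
    # Run the same queries against both store versions and diff the top-k hits.
    rows = []
    for q in queries:
        old, new = retrieve_old(q)[:k], retrieve_new(q)[:k]
        overlap = len(set(old) & set(new)) / k
        rows.append({"query": q, "old": old, "new": new, "overlap@k": overlap})
    return rows

# Toy stand-ins for the two vector store versions.
docs_v1 = lambda q: ["doc1", "doc2", "doc3", "doc4", "doc5"]
docs_v2 = lambda q: ["doc1", "doc3", "doc6", "doc7", "doc8"]
print(compare_retrieval(["what changed?"], docs_v1, docs_v2))
```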
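
And for the third, a sketch of what evaluating a chained flow could look like if every hop is traced; the trace structure here is made up for illustration:

```python
def run_chain(query, steps):
    # Record each step's input and output so an evaluator can score
    # intermediate hops as well as the final answer.
    trace, text = [], query
    for name, step in steps:
        out = step(text)
        trace.append({"step": name, "input": text, "output": out})
        text = out
    return text, trace

# Toy two-step chain standing in for two chained LLM calls.
answer, trace = run_chain("What is RAG?", [
    ("rewrite", lambda q: f"Explain briefly: {q}"),
    ("answer", lambda p: f"(model output for: {p})"),
])
```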
Thx