Hi Makers!
I'm Elena, a co-founder of Evidently AI. I'm excited to share that our open-source Evidently library is stepping into the world of LLMs!
Three years ago, we started with testing and monitoring for what's now called "traditional" ML. Think classification, regression, ranking, and recommendation systems. With over 20 million downloads, we're now bringing our toolset to help evaluate and test LLM-powered products.
As you build an LLM-powered app or feature, figuring out if it's "good enough" can be tricky. Evaluating generative AI is different from traditional software and predictive ML. It lacks clear criteria and labeled answers, making quality more subjective and harder to measure. But there is no way around it: to deploy an AI app to production, you need a way to evaluate it.
For instance, you might ask:
- How does the quality compare if I switch from GPT to Claude?
- What will change if I tweak a prompt? Do my previous good answers hold?
- Where is it failing?
- What real-world quality are users experiencing?
It's not just about metrics: it's about the whole quality workflow. You need to define what "good" means for your app, set up offline tests, and monitor live quality.
With Evidently, we provide the complete open-source infrastructure to build and manage these evaluation workflows. Here's what you can do (a quick usage sketch follows the list):
- Pick from a library of metrics or configure custom LLM judges
- Get interactive summary reports or export raw evaluation scores
- Run test suites for regression testing
- Deploy a self-hosted monitoring dashboard
- Integrate it with any adjacent tools and frameworks
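To make this concrete, here is a minimal sketch of what an offline evaluation run can look like with the Python library. It assumes the 0.4.x-style API (Report plus text descriptors); the exact module and class names may differ between releases, so treat it as an illustration and check the docs for your version:

```python
# NOTE: module/class names follow the 0.4.x-style API and may differ in your version.
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import Sentiment, TextLength

# A small evaluation dataset: questions sent to the app and the answers it returned.
eval_data = pd.DataFrame(
    {
        "question": [
            "How do I reset my password?",
            "What is your refund policy?",
        ],
        "answer": [
            "Go to Settings -> Security and click 'Reset password'.",
            "We offer refunds within 30 days of purchase.",
        ],
    }
)

# Score each answer with built-in text descriptors and render an interactive report.
report = Report(metrics=[
    TextEvals(column_name="answer", descriptors=[Sentiment(), TextLength()]),
])
report.run(reference_data=None, current_data=eval_data)
report.save_html("llm_eval_report.html")  # or report.show() in a notebook
```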
It's open-source under an Apache 2.0 license.
We build it together with the community: I would love to learn how you address this problem and any feedback and feature requests.
Check it out on GitHub: https://github.com/evidentlyai/e..., get started in the docs: http://docs.evidentlyai.com or join our Discord to chat: https://discord.gg/xZjKRaNp8b.
@kjosephabraham Thanks for the support! We always appreciate any feedback and help in spreading the word. As an open-source tool, it is built together with the community!
Love that Evidently AI includes toolkits to cover some of the most important AIOps use cases, like monitoring model hallucinations. I'm new to building products with AI, and I'm curious whether there are learning resources for someone like me to learn more about topics like how to test AI-generated results. Or does the tool itself suggest which methods to use?
Also, big props for making the Evidently platform open source - you have my support for making this available to the world!
Congrats on the launch @elenasamuylova and team!
@tonyhanded Hi Tony! Thanks a lot for the support. We are huge believers in open source, too!
We have quite a lot of content on this topic on our blog: you may find it useful. For example, this recent post on regression testing for LLM outputs: https://www.evidentlyai.com/blog... More to come soon!
One popular approach is using an LLM as a judge, where you effectively use another LLM (with a different prompt) to label your outputs against certain criteria; we recommend using a binary True/False scale here. This is one of the approaches we shipped with this release!
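To illustrate the general idea, here is a generic sketch of a binary judge (not Evidently's internal implementation). It assumes the openai>=1.0 Python client; the model name and the criterion are arbitrary choices:

```python
# Generic LLM-as-a-judge sketch: a second LLM labels an output against one
# binary criterion. Model name and criterion are arbitrary/illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are evaluating a customer support answer.
Criterion: the answer is CONCISE (no unnecessary filler or repetition).
Reply with a single word: "True" if the answer meets the criterion, "False" otherwise.

ANSWER:
{answer}
"""

def judge_conciseness(answer: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice of judge model
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(answer=answer)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower().startswith("true")

print(judge_conciseness("Go to Settings -> Security and click 'Reset password'."))
```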
Hi everyone! I am Emeli, one of the co-founders of Evidently AI.
I'm thrilled to share what we've been working on lately with our open-source Python library. I want to highlight a specific new feature of this launch: LLM judge templates.
LLM as a judge is a popular evaluation method where you use an external LLM to review and score the outputs of LLMs.
However, one thing we learned is that no LLM app is alike. Your quality criteria are unique to your use case. Even something seemingly generic like "sentiment" will mean something different each time. While we do have templates (it's always great to have a place to start), our primary goal is to make it easy to create custom LLM-powered evaluations.
Here is how it works:
- Define your grading criteria in plain English. Specify what matters to you, whether it's conciseness, clarity, relevance, or creativity.
- Pick a template. Pass your criteria to an Evidently template, and we'll generate a complete evaluation prompt for you, including formatting it as JSON and asking the LLM to explain its scores.
- Run evals. Apply these evaluations to your datasets or recent traces from your app.
- Get results. Once you set a metric, you can use it across the Evidently framework. You can generate visual reports, run conditional test suites, and track metrics over time on a dashboard. (A rough code sketch of these steps follows.)
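Here is a rough sketch of those steps in code. The class and parameter names are assumptions based on the 0.4.x-style API (LLMEval descriptor plus a binary classification prompt template) and may differ from your installed version, so please check the docs before copying:

```python
# NOTE: class/parameter names are approximate and version-dependent; see the docs.
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import LLMEval
from evidently.features.llm_judge import BinaryClassificationPromptTemplate

# Steps 1-2: grading criteria in plain English, passed to a binary judge template.
conciseness = LLMEval(
    template=BinaryClassificationPromptTemplate(
        criteria="A CONCISE answer is brief but complete, with no filler or repetition.",
        target_category="concise",
        non_target_category="verbose",
        include_reasoning=True,  # ask the judge LLM to explain its label
    ),
    provider="openai",
    model="gpt-4o-mini",
    display_name="Conciseness",
)

# Steps 3-4: run the eval over a dataset of answers and get a visual report.
eval_data = pd.DataFrame({"answer": [
    "Go to Settings -> Security and click 'Reset password'.",
    "Well, there are many ways to think about passwords, and security matters a lot...",
]})

report = Report(metrics=[TextEvals(column_name="answer", descriptors=[conciseness])])
report.run(reference_data=None, current_data=eval_data)
report.save_html("conciseness_report.html")
```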
You can track any metric you like - from hallucinations to how well your chatbot follows the brand guidelines.
We plan to expand on this feature, making it easier to add examples to your prompt and adding more templates, such as pairwise comparisons.
Let us know what you think! To check it out, visit our GitHub: https://github.com/evidentlyai/e..., docs http://docs.evidentlyai.com or Discord to chat: https://discord.gg/xZjKRaNp8b.
This solves so many of my current pain points working with LLMs. I'm developing AI mentors and therapists and I need a better way to run evals for each update and prompt optimization. Upvoting, bookmarking, and going to try this out!
Thank you Elena!
@danielwchen Thank you! Let us know how it works for you. We see a lot of usage with healthcare-related apps; these are the use cases where quality is paramount - you can't just ship on vibes!
Congrats on the launch! Great milestone @elenasamuylova and @emeli_dral! Evidently is part of my MLOps stack and I recommend it to my friends and clients! I'm happy to contribute to Evidently and look forward to collaboration!
I have to give major respect to the team at Evidently AI for their outstanding open-source product.
The introduction of evaluation for LLM apps is a game-changer. It's incredibly easy to integrate into my product, and the monitoring capabilities are top-notch.
What I love most is that they provide ready-to-use tests, which are easily customizable.
Kudos to Evidently AI for making such a valuable tool available to the community!
Congrats on the launch! Have you found that Evidently's suggestions are pretty consistent across solutions, or does it really depend on the application at hand? For example, does it always recommend ChatGPT over Claude (or vice versa), or does it depend on the use case? (And if you can share two use cases where the answer is different, that'd be super cool!)
Fantastic launch! We've been searching for an effective solution like this for quite some time. How do you tailor your solution to meet the varying needs of your clients?
@datadriven Thanks for the support, Dina!
On the infrastructure side, our open-source tooling has a Lego-like design: it's easy to use specific Evidently modules and fit them into an existing architecture without having to bring all the parts. Use what you need.
On the "contents" side, we created an extensive library of evaluation presets and metrics to choose from, as well as templates to easily add custom metrics so that users can tailor the quality criteria to their needs.
So, some may use Evidently to evaluate, for example, the "conciseness" and "helpfulness" of their chatbots, while others can evaluate the quality and diversity of product recommendations, all in one tool!
I hope we managed to put all the right ingredients together to allow all users to start using Evidently regardless of the specific LLM/AI use case. I'm looking forward to more community feedback to improve it further!
@elenasamuylova That sounds great! I hope you'll consider hosting workshops or a hackathon to demonstrate how it works.
By the way, have you come across any interesting examples of LLM judges?
@datadriven Maybe even a course! :)
The most interesting examples of LLM judges I've seen are very custom to the use case. Typically, users working on an application want to catch or avoid specific failure modes. Once you identify them, you can create an evaluator to catch them. For example (rough criteria sketches below):
- Comments from an AI assistant that are positive but not actionable. ("Actionable" eval).
- Conversations on a sensitive topic where a chatbot does not show empathy. ("Empathy" eval).
etc.
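For illustration, here is how the grading criteria for those two checks might be phrased in plain English before passing them to a binary judge template. The wording is hypothetical, not a built-in preset:

```python
# Hypothetical plain-English criteria for the two failure modes above, written
# the way you would feed them to a binary LLM judge.
ACTIONABLE_CRITERIA = """
An ACTIONABLE comment suggests a concrete next step the user can take
(for example, "shorten the intro to two sentences").
A NON-ACTIONABLE comment is positive or encouraging but gives nothing to act on.
"""

EMPATHY_CRITERIA = """
An EMPATHETIC reply on a sensitive topic acknowledges the user's feelings
before moving on to facts or instructions.
A NON-EMPATHETIC reply jumps straight to process or policy with no acknowledgement.
"""
```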
It does take some work to get it right, but it is hugely impactful.
Hey Elena and Emeli,
How does Evidently AI handle potential biases in the LLM judges themselves?
Do you have any plans to incorporate human feedback loops into the evaluation process?
Congrats on the launch!
Hey @kyrylosilin ,
Thank you for bringing this up!
We design judge templates based on a binary classification model, where we thoroughly define the classification criteria and strictly structure the response format. Users also have the option to choose how to handle cases of uncertainty: the judge can refuse to respond, flag an issue, or decide not to flag one. This approach is already implemented and helps achieve more consistent outcomes.
In the next update, we plan to add the ability to further iterate on the judges using examples of classifications they made in previous iterations. This will help address potential biases. Users will be able to select complex cases (and even fix wrong labels if needed) and explicitly pass these examples to the judge, which will, over several iterations, improve accuracy and consistency for specific cases.
Hey @elenasamuylova! Exciting stuff with Evidently stepping into the LLM space! The challenges you've outlined around evaluating generative AI are real.
@elke_qin Great question!
We've tried to make it easy for users to add custom criteria that automatically convert into "LLM judges." The user only needs to add the criteria in plain English, and we automatically create a complete evaluation prompt (formatted as JSON, asking the LLM to provide its reasoning before outputting the score, specifying how to handle uncertainty, etc.).
I agree that LLM output quality is highly custom, so instead of simply providing hard-coded "universal" judge prompts, we believe it's better to help users create and iterate on their judges.
We generally recommend using binary criteria, as they make it easier to test alignment and interpret the results (compared to sparse scales). We also have a workflow for evaluating the quality of judge classification against your own labels to measure alignment.
If you have a reference output (for example, when you do regression testing to compare outputs from a new prompt against the old ones), there are different approaches to comparing old answers against new ones, from semantic similarity to another comparison judge that you can tune to detect the specific changes that matter to you.
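As a generic illustration of the semantic-similarity route (not Evidently's own implementation), here is a sketch that flags new answers that drift too far from previously approved ones. It assumes the sentence-transformers package; the embedding model and threshold are arbitrary choices:

```python
# Generic regression check via semantic similarity between old and new answers.
# Embedding model and threshold are arbitrary/illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

old_answers = ["We offer refunds within 30 days of purchase."]
new_answers = ["You can get your money back up to 30 days after buying."]

old_emb = model.encode(old_answers, convert_to_tensor=True)
new_emb = model.encode(new_answers, convert_to_tensor=True)

for i, new in enumerate(new_answers):
    score = float(util.cos_sim(old_emb[i], new_emb[i]))
    status = "OK" if score >= 0.8 else "REVIEW"  # arbitrary threshold
    print(f"{status}  similarity={score:.2f}  new='{new}'")
```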
We do not have a labeling interface inside the tool itself, but we are thinking of adding one. We also have tracing functionality that allows us to collect user feedback if it's available in the app (think upvotes/downvotes).