Evidently AI
Collaborative AI observability platform
Michael Seibel

Evidently AI - Open-source evaluations and observability for LLM apps

Evidently is an open-source framework to evaluate, test and monitor AI-powered apps.

📚 100+ built-in checks, from classification to RAG.
🚦 Both offline evals and live monitoring.
🛠 Easily add custom metrics and LLM judges.
Replies
Elena Samuylova
Hi Makers! I'm Elena, a co-founder of Evidently AI. I'm excited to share that our open-source Evidently library is stepping into the world of LLMs! 🚀

Three years ago, we started with testing and monitoring for what's now called "traditional" ML. Think classification, regression, ranking, and recommendation systems. With over 20 million downloads, we're now bringing our toolset to help evaluate and test LLM-powered products.

As you build an LLM-powered app or feature, figuring out if it's "good enough" can be tricky. Evaluating generative AI is different from traditional software and predictive ML: it lacks clear criteria and labeled answers, making quality more subjective and harder to measure. But there is no way around it: to deploy an AI app to production, you need a way to evaluate it. For instance, you might ask:
- How does the quality compare if I switch from GPT to Claude?
- What will change if I tweak a prompt? Do my previous good answers hold?
- Where is it failing?
- What real-world quality are users experiencing?

It's not just about metrics: it's about the whole quality workflow. You need to define what "good" means for your app, set up offline tests, and monitor live quality. With Evidently, we provide the complete open-source infrastructure to build and manage these evaluation workflows. Here's what you can do:
📚 Pick from a library of metrics or configure custom LLM judges
📊 Get interactive summary reports or export raw evaluation scores
🚦 Run test suites for regression testing
📈 Deploy a self-hosted monitoring dashboard
⚙️ Integrate it with any adjacent tools and frameworks

It's open-source under an Apache 2.0 license. We build it together with the community: I would love to learn how you address this problem and any feedback and feature requests. Check it out on GitHub: https://github.com/evidentlyai/e..., get started in the docs: http://docs.evidentlyai.com or join our Discord to chat: https://discord.gg/xZjKRaNp8b.
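For illustration, here is a minimal sketch of an offline eval run with the Report / TextEvals APIs described above. Module paths, descriptor names, and the sample data are assumptions based on the docs around the time of this launch and may differ between versions; check the linked docs for the current API.

```python
import pandas as pd

# Assumed import paths; verify against http://docs.evidentlyai.com for your version.
from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import Sentiment, TextLength

# A tiny illustrative dataset of app inputs and LLM outputs.
data = pd.DataFrame({
    "question": ["How do I reset my password?"],
    "response": ["Go to Settings and click 'Reset password'."],
})

# Run built-in text descriptors over the "response" column and save a summary report.
report = Report(metrics=[
    TextEvals(column_name="response", descriptors=[Sentiment(), TextLength()]),
])
report.run(reference_data=None, current_data=data)
report.save_html("evals_report.html")
```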
Joseph Abraham
@elenasamuylova Congrats on bringing your idea to life! Wishing you a smooth and prosperous journey. How can we best support you on this journey?
Elena Samuylova
@kjosephabraham Thanks for the support! We always appreciate any feedback and help in spreading the word. As an open-source tool, it is built together with the community! 🚀
Tony Han
Love that Evidently AI includes toolkits covering some of the most important AIOps use cases, like monitoring model hallucinations. I'm new to building products with AI, and I'm curious whether there are learning resources for someone like me to learn more about topics like how to test AI-generated results, or whether the tool itself suggests which methods to use. Also, big props for making the Evidently platform open source - you have my support for making this available to the world! Congrats on the launch @elenasamuylova and team!
Elena Samuylova
@tonyhanded Hi Tony! Thanks a lot for the support. We are huge believers in open source, too! We are working on quite a lot of content on this topic for our blog (with more on the way), which you may find useful. For example, this recent post on regression testing for LLM outputs: https://www.evidentlyai.com/blog... One popular approach is using an LLM as a judge, where you effectively use another LLM / a different prompt to label your outputs against certain criteria (we recommend using a binary True/False scale here). This is one of the approaches we implemented with this release!
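To make the "LLM as a judge with a binary True/False scale" idea concrete, here is a generic, self-contained sketch that calls an LLM directly. It does not use Evidently's built-in judge templates; the model name and the criteria text are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are evaluating the output of a customer support assistant.
Criteria: the answer must be concise and must not promise refunds.
Reply with a single word: TRUE if the answer meets the criteria, FALSE otherwise.

Answer to evaluate:
{answer}
"""

def judge(answer: str) -> bool:
    """Label one output with a binary True/False verdict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(answer=answer)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("TRUE")

# The answer below promises a refund, so the judge should return False.
print(judge("You will definitely get a full refund, no questions asked!"))
```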
Hamza Tahir
Amazing team + product. Been using Evidently for years now and can confidently say it's one of the best in the market!
Elena Samuylova
@hamza_tahir Thanks for the support ❤️
Emeli Dral
Hi everyone! I am Emeli, one of the co-founders of Evidently AI. I'm thrilled to share what we've been working on lately with our open-source Python library. I want to highlight a specific new feature of this launch: LLM judge templates.

LLM as a judge is a popular evaluation method where you use an external LLM to review and score the outputs of LLMs. However, one thing we learned is that no LLM app is alike. Your quality criteria are unique to your use case. Even something seemingly generic like "sentiment" will mean something different each time. While we do have templates (it's always great to have a place to start), our primary goal is to make it easy to create custom LLM-powered evaluations. Here is how it works:
🏆 Define your grading criteria in plain English. Specify what matters to you, whether it's conciseness, clarity, relevance, or creativity.
💬 Pick a template. Pass your criteria to an Evidently template, and we'll generate a complete evaluation prompt for you, including formatting it as JSON and asking the LLM to explain its scores.
▶️ Run evals. Apply these evaluations to your datasets or recent traces from your app.
📊 Get results. Once you set a metric, you can use it across the Evidently framework. You can generate visual reports, run conditional test suites, and track metrics over time on a dashboard.

You can track any metric you like - from hallucinations to how well your chatbot follows the brand guidelines. We plan to expand on this feature, making it easier to add examples to your prompt and adding more templates, such as pairwise comparisons. Let us know what you think! To check it out, visit our GitHub: https://github.com/evidentlyai/e..., docs http://docs.evidentlyai.com or Discord to chat: https://discord.gg/xZjKRaNp8b.
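As a rough illustration of the "plain-English criteria expanded into a complete judge prompt" idea, here is a simplified, hypothetical prompt builder. This is not Evidently's actual template code; the category names and JSON keys are assumptions.

```python
def build_judge_prompt(criteria: str,
                       target_category: str = "FAIL",
                       other_category: str = "PASS") -> str:
    """Turn plain-English criteria into a judge prompt with JSON output and reasoning."""
    return f"""You are an evaluator. Classify the text below.

Criteria: {criteria}

Return a JSON object with exactly these keys:
  "reasoning": a short explanation of your decision,
  "category": either "{target_category}" or "{other_category}",
  "uncertain": true if you cannot decide confidently, otherwise false.

Text to classify:
{{text}}
"""

# Example: a custom brand-voice check written as one sentence of criteria.
print(build_judge_prompt("The answer must follow our brand voice: friendly, no jargon."))
```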
Emeli Dral
@hamza_afzal_butt Thank you so much!
Mihail Eric
Cracked team solving one of the hardest problems in LLMs today. If anyone is going to solve it, it's them.
Elena Samuylova
@mihail_eric Thanks for the support!
Daniel W. Chen
This solves so many of my current pain points working with LLMs. I'm developing AI mentors and therapists and I need a better way to run evals for each update and prompt optimization. Upvoting, bookmarking, and going to try this out! Thank you Elena!
Elena Samuylova
@danielwchen Thank you! Let us know how it works for you. We see a lot of usage with healthcare-related apps; these are the use cases where quality is paramount - you can't just ship on vibes!
Giuseppe Della Corte
@elenasamuylova and @emeli_dral great that you are building an open-source tool in the LLM evaluation space. Congrats!
Elena Samuylova
@gdc Thank you! Let us know if you have the chance to try it. We appreciate all feedback!
Mikhail Rozhkov
Congrats on the launch! Great milestone, @elenasamuylova and @emeli_dral! Evidently is part of my MLOps stack and I recommend it to my friends and clients! I'm happy to contribute to Evidently and look forward to collaboration!
Elena Samuylova
@emeli_dral @mikhail_rozhkov Thanks for your support! I hope Evidently will fit in the updated LLMOps stack as well 🚀🚀
Ivan Shcheklein
Congrats @elenasamuylova and @emeli_dral. It's an amazing product. I've been recommending your course to our users and customers (https://www.evidentlyai.com/ml-o...) - it's one of the best, I think. The progress on the LLM side is exciting.
Elena Samuylova
@emeli_dral @shcheklein Thanks for your support! 🚀
Dima Demirkylych
I have to give major respect to the team at Evidently AI for their outstanding open-source product. The introduction of evaluation for LLM apps is a game-changer. It's incredibly easy to integrate into my product, and the monitoring capabilities are top-notch. What I love most is that they provide ready-to-use tests, which are easily customizable. Kudos to Evidently AI for making such a valuable tool available to the community!
Elena Samuylova
@dima_dem Thanks for the support! 🙏🏻
Mark Abramovitz
Congrats on the launch! Have you found that Evidently's suggestions are pretty consistent across solutions, or does it really depend on the application at hand? For example, does it always recommend ChatGPT over Claude (or vice versa), or does it depend on the use case? (And if you can share two use cases where the answer is different, that'd be super cool!)
Dina Karakash
Fantastic launch! We've been searching for an effective solution like this for quite some time. How do you tailor your solution to meet the varying needs of your clients?
Elena Samuylova
@datadriven Thanks for the support, Dina! On the infrastructure side, our open-source tooling has a Lego-like design: it's easy to use specific Evidently modules and fit them into existing architecture without having to bring all the parts - use what you need. On the "contents" side, we created an extensive library of evaluation presets and metrics to choose from, as well as templates to easily add custom metrics, so that users can tailor the quality criteria to their needs. So, some may use Evidently to evaluate, for example, the "conciseness" and "helpfulness" of their chatbots, while others can evaluate the quality and diversity of product recommendations - all in one tool! I hope we managed to put all the right ingredients together to allow all users to start using Evidently regardless of the specific LLM/AI use case. I'm looking forward to more community feedback to improve it further!
Dina Karakash
@elenasamuylova That sounds great! I hope you'll consider hosting workshops or a hackathon to demonstrate how it works. By the way, have you come across any interesting examples of LLM judges?
Elena Samuylova
@datadriven Maybe even a course! :) The most interesting examples of LLM judges I've seen are very custom to the use case. Typically, users who are working on an application want to catch or avoid specific failure modes. Once you identify them, you can create an evaluator to catch them. For example:
- Comments from an AI assistant that are positive but not actionable ("Actionable" eval).
- Conversations on a sensitive topic where a chatbot does not show empathy ("Empathy" eval), etc.
It does take some work to get this right, but it is hugely impactful.
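For instance, the two custom judges mentioned above could be expressed as plain-English, binary criteria along these lines (the wording is hypothetical, in the "criteria in plain English" style described earlier in the thread):

```python
# Hypothetical criteria strings for the "Actionable" and "Empathy" judges.
ACTIONABLE_CRITERIA = (
    "The comment is ACTIONABLE if it tells the user a concrete next step they can take. "
    "Praise without a suggested action is NOT ACTIONABLE."
)
EMPATHY_CRITERIA = (
    "The reply shows EMPATHY if it acknowledges the user's feelings or situation "
    "before giving information. A purely factual reply on a sensitive topic shows NO EMPATHY."
)
```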
Ilia Semenov
@elenasamuylova, @emeli_dral Way to go! The space needs more OSS!
Elena Samuylova
@emeli_dral @iliasemenov Thanks for your support! 🎉
Alexey Dral
+1 amazing team, +1 amazing product. In addition: friendly open-source support (easy to add suggestions and see them in the next release).
Elena Samuylova
@aadral Thanks for your support! Looking forward to LLM-related feature requests 🚀
Arseny Kravchenko
Wish you luck! AI reliability is still a massive problem
Elena Samuylova
@arseny_info Thanks for the support! Hopefully we can contribute to solving it. Looking forward to feedback from the community 🚀
Kyrylo Silin
Hey Elena and Emeli, How does Evidently AI handle potential biases in the LLM judges themselves? Do you have any plans to incorporate human feedback loops into the evaluation process? Congrats on the launch!
Emeli Dral
Hey @kyrylosilin, thank you for bringing this up! We design judge templates based on a binary classification model, where we thoroughly define the classification criteria and strictly structure the response format. Users also have the option to choose how to handle cases with uncertainty, whether it's by refusing to respond, detecting an issue, or deciding not to detect an issue. This approach is already implemented and helps achieve more consistent outcomes. In the next update, we plan to add the ability to further iterate on the judges using examples of classifications they have made in previous iterations. This will help address potential biases. Users will be able to select complex cases (and even fix wrong labels if needed) and explicitly pass these examples to the judge, which will, over several iterations, improve accuracy and consistency for specific cases.
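The "how to handle cases with uncertainty" choice can be pictured as a small post-processing policy on the judge's structured output. The sketch below is illustrative only; the policy names and JSON keys are assumptions, not Evidently's actual configuration options.

```python
import json

def parse_judgement(raw_json: str, uncertainty_policy: str = "raise") -> bool:
    """Return True if an issue is detected, False otherwise.

    uncertainty_policy (hypothetical names):
      "raise"  - refuse to respond on uncertain cases,
      "detect" - treat uncertain cases as issues,
      "ignore" - treat uncertain cases as clean.
    """
    result = json.loads(raw_json)
    if result.get("uncertain"):
        if uncertainty_policy == "raise":
            raise ValueError("Judge could not classify this case confidently.")
        return uncertainty_policy == "detect"
    return result["category"] == "FAIL"

print(parse_judgement('{"reasoning": "...", "category": "FAIL", "uncertain": false}'))
```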
Anna Veronika Dorogush
Congrats on the launch! That's a huge milestone!
Elena Samuylova
@annaveronika Thanks for your support! ❤️
Ema Elisi
Hey @elenasamuylova! Exciting stuff with Evidently stepping into the LLM space! The challenges you've outlined around evaluating generative AI are real. Love the
Elena Samuylova
@ema_elisi Thanks for the support!
Vasili Shynkarenka
Congrats on the launch! Love the video :)
Elena Samuylova
@flreln Open-source production! :)
Elke
This sounds promising, Elena! How does Evidently handle the subjective nature of evaluating LLMs? Are there specific criteria you recommend using?
Elena Samuylova
@elke_qin Great question! We tried to make it easy for users to add custom criteria that automatically convert into "LLM judges." The user only needs to add the criteria in plain English, and we automatically create a complete evaluation prompt (formatted as JSON, asking the LLM to provide its reasoning before outputting the score, specifying how to handle uncertainty, etc.). I agree that LLM output quality is highly custom, so instead of simply providing hard-coded "universal" judge prompts, we believe it's better to help users create and iterate on their own judges. We generally recommend using binary criteria, as they make it easier to test alignment and interpret the results (compared to sparse scales). We also have a workflow for evaluating the quality of judge classification against your own labels to measure alignment. If you have a reference output (for example, when you do regression testing to compare outputs with a new prompt against the old ones), there are also different approaches to compare old answers against new ones: from semantic similarity to another comparison judge that you can tune to detect specific changes that are important to you. We do not have a labeling interface inside the tool itself, but we are thinking of adding one. We also have tracing functionality that allows us to collect user feedback if it's available in the app (think upvotes/downvotes).
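One way to picture the semantic-similarity approach to comparing old and new answers is the sketch below. It uses the sentence-transformers library as a stand-in; the model choice and threshold are assumptions to tune per use case, and it is not Evidently's built-in implementation.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

old_answer = "To reset your password, open Settings and choose 'Reset password'."
new_answer = "Go to Settings, then click 'Reset password' to set a new one."

# Embed both answers and compute cosine similarity between them.
emb_old, emb_new = model.encode([old_answer, new_answer], convert_to_tensor=True)
score = util.cos_sim(emb_old, emb_new).item()

# Flag new responses that drift too far from the previous "good" answer.
THRESHOLD = 0.8  # assumption: tune per use case
print(f"similarity={score:.2f}", "OK" if score >= THRESHOLD else "possible regression")
```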