Dioptra is a data-centric model validation platform. It monitors models in production, finds error patterns, helps you build a dataset to fix them, and validates the impact of the new model iteration.
Hey Product Hunters,
Pierre from Dioptra here. Thanks, Michael, for hunting our product.
With @farah_dioptra and @jacques_arnoux, we’re super excited to share what we’ve been working on for the past year.
As an ML engineer, I found that once the first iteration of a model was in production, the measure-learn-improve loop was way longer than I’d have liked!
It was hard to understand how my model was performing in production. I had to move a ton of data around to do error analysis. Building a new dataset to fix issues required a bunch of glue code. And I had to rally several teams just to run a simple A/B test to validate the impact of each iteration. As a result, most model updates were based on gut feeling: I selected my training data with very simple rules instead of looking for the best data for the job.
Dioptra is a continuous ML improvement platform that leverages data from production to measure model performance, identify error patterns, and help build the next training dataset. And because the ML practitioner’s job doesn’t end there, we also help validate that the new model is actually better by running benchmarks and A/B tests.
With Dioptra, we strive to automate the last part of the ML lifecycle, so data scientists can focus more on the science and less on the pipes.
We would LOVE to hear your feedback. What can we improve? 🙏