Hello Hunters! 🐱
We are Oscar and @paul_hetherington, founders of Neuro - The API for instant ML infrastructure!
ML engineers, we hear you! You've spent a lot of time dealing with your infrastructure, and if you haven't yet, we are sorry for the pain you might end up going through!
After working on ML in both research and production environments, we realised how annoying servers can be!
From forgotten running instances silently draining your team's budget, to making sure all your drivers work, to building robust, scalable infra for production: ML engineers and data scientists spend too much time wrangling their servers. ❌
This is why we built Neuro - a serverless Python API that connects you to the fastest GPU for your model, and you only pay for what you use. Want to deploy a model that scales with your needs? You instantly get an API endpoint.
How does it work?
1️⃣ Upload your model
2️⃣ Train your model or 2️⃣ Run a prediction
When you hit train or predict, your model and data are sent to our pool of GPUs, and if you want extra speed and security, you can allocate dedicated GPUs to your models (deployment).
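For a rough idea of what calling a deployed model could look like from Python, here's a minimal sketch - the endpoint URL, auth header, and request schema are placeholders for illustration, not the actual Neuro API:

```python
import requests

# Purely illustrative: the URL, header, and payload schema below are
# placeholders, not the real Neuro API. Once a model is deployed, you call
# its endpoint like any other HTTP API.
ENDPOINT = "https://api.example.com/v1/models/my-model/predict"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder credential

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"inputs": [[5.1, 3.5, 1.4, 0.2]]},  # one example feature vector
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. {"predictions": [...]}
```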
Think of us as your MLOps colleague: we take care of provisioning GPUs to train and deploy your models, while you focus on building an amazing model that also scales in production.
We would love to hear any feedback you might have, and if you have any problems getting set up on the API, just let us know - we'll help get you up and running!
Thanks @mwseibel for the hunt 🟧
PS: What features would you like us to work on next? Vote below and feel free to comment! 🙌