Because it's the one-stop solution for local inference. At OrangeCat, we aim to give users the highest level of customizability, and there's no better way to do that than to use local LLM inference.
Ollama is the best way to run LLMs locally and the easiest way to test, build, and deploy new models. It has opened my eyes to the world of LLMs; it's a fantastic product.