Where's the moat around LLMs? 🏰

Justin Cranshaw
1 reply
With LLMs, you don't need a PhD in machine learning to do some pretty amazing things. But doesn't that mean it will be easy for someone to copy your product? Where's the moat around LLMs?

In my experience, it's not so straightforward. There's a big gap between building a prototype and deploying LLMs in a business context. There are still hard engineering problems to solve when you apply LLMs to specific scenarios, and each of those problems is an opportunity to build something that's difficult to copy. My co-founder recently wrote up some thoughts on this: https://maestroai.substack.com/p/building-technical-moats-with-llms

I'm curious what other LLM hackers think. Are you thinking about defensibility? What are you doing to build a moat?

Replies

Justin Cranshaw
In our case, a lot of work goes into pre- and post-processing data. For example, many of the scenarios we're building for require structured data, but LLMs return unstructured text, so we've built custom workflows for extracting structured data from LLM output. Another example is how we're building human-in-the-loop feedback so the system gets better over time. These things require a ton of domain expertise that's difficult to copy.
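To make the first example concrete, here's a minimal sketch of the kind of post-processing I mean: ask the model to answer in JSON, then defensively pull a JSON object out of whatever text comes back and validate it before it enters the pipeline. The field names and helper functions are hypothetical, not our actual code.

```python
import json
from typing import Optional

# Hypothetical schema for illustration; the real fields depend on the scenario.
REQUIRED_FIELDS = {"customer_name", "issue_summary", "priority"}

def extract_json_block(raw_output: str) -> Optional[dict]:
    """Pull the first JSON object out of free-form LLM text.

    Models often wrap the JSON in prose or markdown fences, so we look for
    the outermost braces instead of trusting the whole string to be valid JSON.
    """
    start, end = raw_output.find("{"), raw_output.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        return json.loads(raw_output[start:end + 1])
    except json.JSONDecodeError:
        return None  # caller can retry the prompt or route to a fallback

def to_structured_record(raw_output: str) -> Optional[dict]:
    """Validate the extracted data before it enters downstream workflows."""
    data = extract_json_block(raw_output)
    if data is None or not REQUIRED_FIELDS.issubset(data):
        return None  # flag for human review rather than passing bad data along
    return data
```

None of this is rocket science, but the retry logic, the validation rules, and the decisions about when to pull a human in are all scenario-specific, and that's where the hard-to-copy knowledge accumulates.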