The LLM Explorer: Navigate the World of Large Language Models. Perfect for ML researchers, developers, and AI enthusiasts. Discover the latest in NLP, integrate it into your projects, and stay at the forefront of AI advancements.
I'm Greg, the brains behind the LLM Explorer. Today is an exciting day for me as I finally get to launch my latest project right here on PH!
Let me take you back to when I first delved into the world of NLP and LLMs. I was completely overwhelmed by the sheer variety of models out there. Which one should I choose? How do I even navigate through them? I couldn't understand why the HuggingFace search engine was falling short and giving me irrelevant results. That's when I decided to invest development time to create a comprehensive directory of Large Language Models that would address all these gaps.
With the LLM Explorer, you can effortlessly discover, compare, and rank LLMs, saving you valuable time. Say goodbye to manually digging into complex config files or getting lost in the maze of model properties. We've meticulously compiled and presented all the key internals and specifications of each LLM, giving you unprecedented insight into these powerful tools.
But it doesn't stop there. The user interface is dynamic, highly interactive, and designed with tech-savvy folks like you in mind. You can easily sort and filter LLMs based on a wide range of parameters, including architecture, language, context length, and more. Thanks to our global search feature, finding a model tailored to your exact needs is just a few keystrokes away.
Looking for the best model to understand English text with a context length of 8192 tokens? Simply adjust the filters, and presto - the LLM Explorer instantly presents you with a ranked list of suitable models. Curious about the reigning architecture this month? A quick glance at our interactive table will provide you with the answer.
The TABLUM LLM Explorer empowers you to navigate the LLM world with confidence and efficiency. Whether you're a machine learning researcher, a developer looking to integrate NLP into your next project, or simply an enthusiast eager to explore the cutting edge of AI, the LLM Explorer will be your trusted companion, offering indispensable support every step of the way.
Congratulations! LLM Explorer seems like a game-changer in simplifying the complex world of NLP and LLMs. How frequently will you update the directory to keep it up-to-date with the latest models?
@sentry_co Thank you for the feedback. They can all be run offline; it's just a matter of hardware. E.g. you can run "ggml" or GPTQ-quantized models on fairly weak hardware. I'm running most of the 7B LLMs on my MacBook Pro M1 with Metal support. I know people who run them on a MacBook Air.
@gregz Awesome! Maybe add sorting by hardware requirements. I know llama.cpp requires 64GB of RAM and a maxed-out MacBook Pro, unless they've improved it since I last checked?
@sentry_co llama.cpp handles quantized ggml models quite well on limited resources. I'm running it on 16GB with Metal support. 7B ggml-quantized models infer well enough on 8GB or 16GB with M1/M2 (Metal) or CUDA support. They can also run on a regular CPU, just slowly. 13B models' inference runs well on 24GB.
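The memory figures in this thread can be sanity-checked with back-of-envelope arithmetic: quantized weight size is roughly parameter count × bits per weight / 8, plus runtime overhead (KV cache, activations) on top. A minimal sketch, assuming 4-bit quantization; the function name and the overhead caveat are illustrative, not part of llama.cpp:

```python
def approx_weight_ram_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough RAM needed just to hold quantized weights, in GiB.

    Weights only -- KV cache and runtime buffers add more on top,
    so treat this as a lower bound.
    """
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

# A 7B model at 4 bits per weight: ~3.3 GiB of weights,
# which is why it fits comfortably on an 8GB machine.
print(round(approx_weight_ram_gb(7, 4), 1))   # → 3.3

# A 13B model at 4 bits: ~6.1 GiB, easy on a 24GB machine.
print(round(approx_weight_ram_gb(13, 4), 1))  # → 6.1
```

This also explains why a full-precision (16-bit) 13B model would need ~24 GiB for weights alone, matching the older "maxed-out MacBook" expectation before quantized formats became common.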