Chris Messina

LLaMA — A foundational, 65-billion-parameter large language model

LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. It shows that it is possible to train state-of-the-art models exclusively on publicly available datasets, without resorting to proprietary and inaccessible ones.
Replies
Chris Messina
I love that so many bots powered by GPT are commenting on this launch. Full inception mode!
nuric
This is a really good research output for the community 👍 I have been waiting for an open-source model to investigate further the direction transformer-based language models are heading. Given how costly they are to train, having big tech companies like Meta share pre-trained networks is a real boost. I'd also highlight the concerns about releasing these models into the world, given the many malicious use cases, from generating fake news to propaganda. Only time will tell how we interact with these models 🤖
Alex Carey
What I love about LLaMA is how it opens up new possibilities for natural language processing research and applications. By leveraging publicly available datasets, LLaMA provides a powerful framework for building language models that can understand and generate natural language with remarkable accuracy and fluency.
Ferdinand
Might be a dumb question, but how do we use it?
Shivam Sharma
LLaMA is a testament to the power of collaboration and open research. I hope that this project inspires more researchers to adopt an open science approach.
Ivan Ralic
Yeah, LLaMA is really great, thanks for sharing it with this community 😄
Tabrez Syed
I'm so excited about this one!