Seunghwan

Lora — Integrate a local LLM with one line of code

Featured
91
Lora is a local LLM designed for Flutter. It delivers GPT-4o-mini-level performance and is built for seamless integration—call it with just one line of code.

Replies
Seunghwan
Maker
📌

👋 Hello, Everyone!
We're excited to introduce our new product, “Lora for Flutter”.

🔥 What is Lora?
Lora is an on-device LLM that comes with an SDK for integrating it into your Flutter-based app.

🔎 Key Features of Lora
- LLM: an on-device LLM with GPT-4o-mini-level performance
- SDK: integrates seamlessly with just one line of code in Flutter (see the sketch below)
- Price: $99/month, with unlimited tokens
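
For a sense of what that looks like in practice, here is a minimal Dart sketch of the one-line call. The `lora` package name and the `Lora.generate` method are placeholders for illustration, not the SDK's actual identifiers:

```dart
// Minimal sketch of the advertised one-line integration.
// NOTE: the `lora` package and `Lora.generate` names are hypothetical;
// the real SDK may expose different identifiers. Everything runs on-device.
import 'package:lora/lora.dart';

Future<void> main() async {
  // The "one line": send a prompt to the local model and await the reply.
  final reply = await Lora.generate('Write a haiku about Flutter.');
  print(reply);
}
```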

We’d love your feedback to make Lora a “Wow!” product.
Questions or suggestions? DM me anytime. Thank you!

Seunghwan

@artem_stenko Thanks a lot, Artem! Please try and leave your feedback. It'll be really helpful for growing our product. Have a good one!

Manu Hortet
Launching soon!

@seungwhan it's really cool we are getting more local tools like this. Are you guys planning to add a feature to select other models? I'd love to try it with my local R1s.
Congrats on the launch! 🚀

Sonu Goswami

@seungwhan Wonderful tool for Flutter developers... just curious, does the $99/month include updates and support?

Seunghwan

@sonu_goswami2 Sure, it does! And it's incredibly cheap, isn't it?

Seunghwan

@manuhortet Thanks a lot, Manu. Do you mean you'd like to use your local R1 through our SDK? We hadn't thought of that, but we'll certainly consider the feature. Thanks for the request.

Aleks S
Launching soon!

It looks very promising, but it would be great to have more technical information. Is it really local, or does it need internet access to make requests? How much space does it need? Is performance good on cheap devices?

Woobeen Back
@aleksedtech Hello! Thank you for stopping by. And yes, Lora DOES NOT NEED INTERNET ACCESS to make a request & get a response. About 1.5 GB is needed for the model, and it shows impressive performance if the device has over 8 GB of RAM :)
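
Since the model lives entirely on the device, an app may also want to check whether the roughly 1.5 GB model file is already present before the first call (for example, to prompt the user to fetch it over Wi-Fi). A rough Dart sketch, assuming a hypothetical storage location rather than the SDK's real layout:

```dart
// Hypothetical check that the ~1.5 GB model file is already on-device.
// The 'lora/model.bin' path is an assumption, not the SDK's actual layout;
// the SDK may well provide its own helper for this.
import 'dart:io';

import 'package:path_provider/path_provider.dart';

Future<bool> isModelAvailable() async {
  final dir = await getApplicationSupportDirectory();
  return File('${dir.path}/lora/model.bin').exists();
}
```
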
Hansol Nam

@justonedev Thank you so much for your valuable feedback! 😊

In addition to the model benchmarks, we've documented detailed usage separately, and we'll work on incorporating it into our website in the future.

Lora is a fully local LLM that works even in airplane mode! ✈️

It looks like my teammate has already shared more details—feel free to check it out! 🚀

Sergei Vorniches

@aleksedtech @woobeen_back But if it's local and doesn't require internet to work, how do you charge monthly? Is it a monthly license/key renewal kind of thing?

Junghwan Seo

@aleksedtech @woobeen_back @vorniches Great question! 🤔 We've been putting a lot of thought into our revenue model. The fact that it runs locally is a major security advantage, but we're still exploring how much monitoring, and how deep, is appropriate while maintaining that strength. Balancing security and sustainability is definitely a challenge! 🔐💡

Sergei Vorniches

@aleksedtech @woobeen_back @peekabooooo That doesn't really make it clear :|

Mikita Aliaksandrovich
Launching soon!

Congrats on another launch!

Seunghwan

@mikita_aliaksandrovich Thanks a lot, Mikita. I'd love to see your product, so I've turned on notifications for your launch. Wish you all the best!

Woobeen Back

@mikita_aliaksandrovich Thank you! Hope you like Lora :) And I can't wait to see your product!

Mikita Aliaksandrovich
Launching soon!

@woobeen_back Thanks!


Hansol Nam

@mikita_aliaksandrovich Thank you so much for congratulating me :)

Junghwan Seo

@mikita_aliaksandrovich Super thanks :) Excited to see what you're building too! 🚀 Looking forward to it! 😊

Muhammad Waseem Panhwar

@hansol_nam You guys have never failed even once—always bringing something unique that meets the current market needs. Congratulations on the launch of your new product!

Seunghwan

@hansol_nam @waseem_panhwer Thanks for the kind comment, Muhammad! I'd love to deliver a WOW product to the world. Please stay tuned!

Woobeen Back

@hansol_nam @waseem_panhwer Thank you for your sweet words :) Hope you like it!

Hansol Nam

@waseem_panhwer Wow, that truly means a lot! 😊 Thank you for your kind words and constant support. We always strive to build something meaningful, and hearing this from you makes it all the more rewarding. Excited to keep pushing forward—really appreciate you being part of the journey! 🚀✨

Junghwan Seo

 @waseem_panhwer Wow, that means a lot! 🚀 We always strive to bring something fresh and valuable to the market, so hearing this really motivates us. Thanks for the support! 🙌😊

Michael Vavilov

$99 sounds reasonable! @seungwhan congrats!

Woobeen Back

@seungwhan @michael_vavilov Thank you! Lora will cut development and operation costs a lot!

Seunghwan

Thanks a lot, Michael! Please try and leave your feedback. It'll be really helpful for growing our product. Have a good one!

Hansol Nam

@seungwhan @michael_vavilov Thank you for your kind words about our product that we put so much thought and effort into :) I will definitely work on solving the excessive AI cost issue using Lora!

Evak Chan

Congratulations on your launch again! It's so cool that Lora's performance is significantly better than average. Good luck with this launch too!

Seunghwan

@evakk Thanks a lot Evak! Please try and leave your feedback. It'd be really helpful for growing our product. Have a nice day!

Junghwan Seo

@evakk Thank you so much! 🎉 Your support means a lot! We're really excited about Lora’s performance and how it’s pushing the boundaries of on-device AI. Appreciate the encouragement—let’s keep innovating! 💡

程

Lora is an efficient and flexible fine-tuning technique, particularly suitable for environments with limited resources and a need to adapt quickly to new tasks. Although it may not perform as well as full-parameter fine-tuning on certain tasks, its efficiency and ease of use make it a powerful tool for fine-tuning large pre-trained models.

Seunghwan

@lle_crh Sure. Please try and leave your feedback. It'd be really helpful for growing our product.

Promise Uzoechi

Wow, amazing product. Indeed, the integration is one-touch.

Seunghwan

@promise Thanks a lot, Promise! Please try and leave your feedback. It'd be really helpful for growing our product. Have a nice day!

Junghwan Seo

@promise Really appreciate you recognizing the things I'm particular about (Just One Line)! lol Thank you so much! 😊

乐 李

LoRA is an efficient and flexible fine-tuning technique that is particularly suitable for resource-limited environments and scenarios where rapid adaptation to new tasks is required. Although it may not perform as well as full-parameter fine-tuning on some tasks, its high efficiency and ease of use make it a powerful tool for fine-tuning large pretrained models.

Seunghwan

@lle_lile Sure. Please try and leave your feedback. It'd be really helpful for growing our product.

Michael Talreja

@seungwhan @peekabooooo @hansol_nam @woobeen_back Congratulations on the launch. This really takes LLM experiences and use cases to another level.

Seunghwan
@michael_talreja Thanks a lot, Michael! Please try and leave feedback for us. It'd be really helpful. Have a good one!
Junghwan Seo

@seungwhan @hansol_nam @woobeen_back @michael_talreja Thank you! 🚀 We're already working on taking it to an even higher level! Stay tuned and keep cheering us on! 🔥

Shivam Singh

This sounds like a fantastic tool for Flutter developers! An on-device LLM with GPT-4o-mini-level performance is impressive, especially with the added benefits of privacy and faster response times. Seamless integration with just one line of code is really a big thing for devs looking to enhance their apps without the usual complexity.

Congrats on the launch!

Best wishes and sending wins to the team :) @seungwhan

Seunghwan

@whatshivamdo Thanks a lot, Shivam! Please try and leave your feedback. It'd be really helpful for growing our product. I've turned on notifications for your product and am looking forward to seeing it. Have a good day!

Junghwan Seo

@seungwhan @whatshivamdo Thanks so much! 🙌 We're planning to support even more models in the future, so stay tuned and keep cheering us on! 🚀

Jun Shen

As a Flutter developer, I'm amazed by how simple it is to integrate 👍

Seunghwan
@shenjun Please try and leave some feedback for us. It'll be really helpful for growing our product!
Junghwan Seo

@shenjun Just a "Single line of code", and it's ready to go! 🚀

Marianna Tymchuk

This looks great! Love the easy integration with just one line of code.

Junghwan Seo

@marianna_tymchuk Exactly! My biggest focus is making integration "SUPER EASY". 🔥 Really appreciate you noticing that! 🙌

Hanna Kuznietsova

Great launch! On-device AI makes everything faster and better.

Junghwan Seo

@hanna_kuznietsova On-device AI makes everything faster and better WITH YOU 😘

Anton Diduh

Congrats! 🙌 AI-powered Flutter apps just got easier.

Junghwan Seo

@anton_diduh Just add one line of code and build your own LLM-powered AI service—seamless, fast, and private! 🚀

Viktoriia Vasylchenko

Love it! Simple, fast, and perfect for Flutter apps.

Junghwan Seo

@viktoriia_vasylchenko Just add one line of code and build your own LLM-powered AI service—seamless, fast, and private! ✨

Kay Kwak
Launching soon!

Integration is a big deal in Flutter! Thanks for making this process so much easier! Wish you good luck with the launch! 🎉

Seunghwan

@kay_arkain Thanks a lot, Kay! Please try and leave your feedback. It'd be helpful for growing our product.

mahyar hsh

WOW! Lora DOES NOT NEED INTERNET ACCESS to make requests!! It's very useful!!

Seunghwan

@mahyar_hsh Sure. Please try and leave your feedback. It'd be helpful for growing our product.

Xi.Z

Love what you've built here! As a Flutter dev, I've been looking for a way to add LLM capabilities without the complexity of cloud services. That one-line integration is exactly what we need - nobody wants to spend days just setting up AI features.

Quick question though - how's the performance on lower-end devices? I'm working on an app targeting markets where users might not have the latest phones.

Really impressed by what you've achieved with local processing. The privacy angle is huge for my clients too. Keep crushing it! 🚀

Junghwan Seo

@xi_z Really appreciate your thoughtful feedback! 🙌 We’ve put a lot of effort into making integration as seamless as possible while ensuring solid performance across various devices. 🌍⚡ We’re continuously optimizing for lower-end hardware, so stay tuned for even more improvements! Thanks for the support—let’s keep pushing the boundaries of local AI together! 😎

Randy Levitch
Have a successful launch and continue building out your roadmap for 2025 and beyond. Use all the sales and marketing strategies to build out your user base.
Seunghwan

@onelocalfamily Thanks a lot! We are going to launch whenever we build something new. Please stay tuned!