Use LLMs like ChatGPT without fear of your data being tracked or harvested to train AI, your private thoughts and business secrets falling into the wrong hands, or injection attacks.
I am extremely excited and happy about ZeroTrusted's launch today on Product Hunt! 🚀 Being part of this journey and witnessing our idea transform into a great product for data privacy is beyond rewarding. A huge shoutout to everyone who made this dream a reality. ZeroTrusted is here to revolutionize how we protect our digital conversations and data with intelligence and ease. Can't wait to dive into the discussions and see how ZeroTrusted empowers each of you. Let's make the digital world a safer place, together!
@sidraref congrats on the launch! Seems like something set up as a compliance tool for companies that interact with LLMs, are you thinking of it that way?
@frank_denbow, first of all, thank you for your review. I believe we need to do a better job of highlighting all our value propositions in the hero section.
You are right; our product assists companies with their compliance needs.
To address some of the questions you raised:
1) ZeroTrusted.ai acts as middleware between users and Large Language Models (LLMs) through our secure chat or API.
2) There's no need for you to have a separate account or key for each LLM. Instead, we provide our own keys, allowing access without revealing your identity to the LLMs.
3) Your point about the potential exposure of scrubbed data in the event of a breach at a third-party LLM's network is partially correct. However, your identity will not be linked to this data. This protection is particularly critical for both individual users and businesses.
4) We offer features that maintain context when sanitizing sensitive data.
Example #1
Suppose a medical expert needs to process the following input:
"Create a summary of this patient's diagnosis: Patient Name: Paul Smith Date of Birth: 12/08/1987 SSN: 666-555-5555 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 434444444"
Before we pass it on to any LLM, we alter the sensitive details (which would otherwise be PHI compliance violations) and transform the prompt to:
"Create a summary of this patient's diagnosis: Patient Name: John Doe Date of Birth: 01/01/1970 SSN: 123-45-6789 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 987654321"
This way, we preserve the context while ensuring that your PHI compliance is met and sensitive data isn't exposed to the LLM.
After getting a response from the LLM, we revert the swapped sensitive values to your originals while preserving the result context.
We will be adding features that will allow RLHF to learn and adjust to customer preferences.
Example #2:
If a customer (individual or corporate) submits legal content containing sensitive information, our "Fictionalize" feature replaces this data with fictional values. This ensures the preservation of context while protecting the actual sensitive information. The original data is restored once a response is received; a minimal sketch of this swap-and-restore flow is shown below.
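To make the swap-and-restore flow concrete, here is a minimal Python sketch. The regex detectors, fixed placeholder values, and hard-coded name map are illustrative assumptions only; ZeroTrusted's actual detection models and surrogate generation are not public and are certainly more sophisticated.

```python
import re
from typing import Dict, Tuple

# Toy detectors for this example only. A real implementation would generate
# unique surrogates per value; fixed placeholders are just for illustration.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2,3}-\d{4}\b"),
    "DOB": re.compile(r"(?<=Date of Birth: )\d{2}/\d{2}/\d{4}"),
    "POLICY": re.compile(r"\b\d{9}\b"),
}
PLACEHOLDERS = {"SSN": "123-45-6789", "DOB": "01/01/1970", "POLICY": "987654321"}


def sanitize(prompt: str, names: Dict[str, str]) -> Tuple[str, Dict[str, str]]:
    """Replace sensitive values with fictional ones and remember the mapping."""
    mapping: Dict[str, str] = {}
    sanitized = prompt
    for real, fake in names.items():          # e.g. {"Paul Smith": "John Doe"}
        if real in sanitized:
            sanitized = sanitized.replace(real, fake)
            mapping[fake] = real
    for kind, pattern in PATTERNS.items():
        for match in pattern.findall(sanitized):
            fake = PLACEHOLDERS[kind]
            sanitized = sanitized.replace(match, fake)
            mapping[fake] = match
    return sanitized, mapping


def restore(response: str, mapping: Dict[str, str]) -> str:
    """Swap the fictional values in the LLM response back to the originals."""
    for fake, real in mapping.items():
        response = response.replace(fake, real)
    return response


prompt = ("Create a summary of this patient's diagnosis: Patient Name: Paul Smith "
          "Date of Birth: 12/08/1987 SSN: 666-555-5555 Insurance Policy Number 434444444")
safe_prompt, mapping = sanitize(prompt, names={"Paul Smith": "John Doe"})
# safe_prompt is what actually goes to the LLM;
# restore(llm_response, mapping) maps the answer back to the original identifiers.
```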
Hope this helps. Please let us know if there are any more questions you would like us to address.
Once again, we appreciate your feedback and hope you can benefit from the privacy and security solution that we provide.
Absolutely agree! @sidraref In today's digital age, privacy and security are paramount concerns. It's reassuring to see products like ZeroTrusted.ai emerge, dedicated to safeguarding our data privacy. Excited to explore how it ensures our information remains secure and protected!
@sidraref Congratulations on the launch. As someone who frequently uses AI technologies like ChatGPT, the idea of engaging with such tools without worrying about data tracking or privacy breaches is incredibly comforting. I can see ZeroTrusted becoming an essential tool for protecting both personal conversations and sensitive business information. Well done!
Congratulations on achieving such a remarkable milestone with your Product Hunt launch! Your product's success is a testament to your hard work, innovation, and commitment to excellence. Wishing you continued growth and prosperity as you continue to make waves in the industry. Well done!
Wow, very cool! Are there any limitations, or is everything the same as using the native apps, just through the privacy firewall?
Great idea! Congrats on the launch :) !
@anthony_latona
Currently, it's hard for us to show history the way ChatGPT does, since we don't store it anywhere. The history is only visible per session: it's stored securely in the session on your browser, so it's cleared after logout. It's a limitation we've accepted for security reasons.
That's the main limitation as of now.
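Purely as an illustration of the idea (the real product keeps history client-side in your browser session), here is a tiny Python sketch of session-scoped, never-persisted chat history:

```python
from typing import Dict, List

_session_history: Dict[str, List[str]] = {}  # in-memory only, never written to disk


def add_message(session_id: str, message: str) -> None:
    """Record a chat message for the current session only."""
    _session_history.setdefault(session_id, []).append(message)


def get_history(session_id: str) -> List[str]:
    """History is visible only while the session lives."""
    return _session_history.get(session_id, [])


def logout(session_id: str) -> None:
    """On logout the session's history is discarded entirely."""
    _session_history.pop(session_id, None)


add_message("session-42", "Hello!")
print(get_history("session-42"))   # ['Hello!']
logout("session-42")
print(get_history("session-42"))   # [] - nothing survives the session
```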
Thanks! Glad you liked it.
Hello, Product Hunt community! I am excited to share my first launch, ZeroTrusted.ai's LLM Firewall, with you all. In a world where data privacy and security are paramount, our solution offers a robust shield for your Large Language Model interactions.
Features like prompt anonymization, advanced encryption, and our unique ztPolicyServer ensure your AI experiences are not just secure but seamlessly integrated with your daily workflows.
We're here to support you every step of the way, with detailed documentation, responsive email support, and live training sessions for higher-tier plans. Your feedback is invaluable to us—it drives us to continuously improve and adapt our offerings to meet your needs.
Explore our plans, try out our features, and let us know how we can make your AI interactions not just safer, but smarter.
Thank you for your support and curiosity❤️
Congratulations on the launch !
It's great to see new products that start addressing privacy for LLMs!
Can you elaborate on which LLMs you support right now?
And what do you think of automations with ZeroTrusted? Can you give some examples of how the Zapier integration is useful?
Best of luck !
Hey @abdellah_abbous ,
Here are your answers:
- We currently support ChatGPT 3.5, ChatGPT 4, Claude, Llama 2, Cohere & PaLM LLMs.
- We have Zapier integrations to automatically send chat reports.
For example, if you have an automation that creates blog posts with ChatGPT and publishes them on your site, our Zapier integration can sit in between to check that the generated post is plagiarism- and copyright-free; we also give an AI text detection score to make your blog posts stronger.
- Additionally, we have our adapter library built on LangChain, which allows you to quickly integrate our APIs (a rough sketch of the adapter pattern is shown below).
I would love to show you a demo of the Zapier and LangChain integrations; please reach out to contact@zerotrusted.ai to initiate a discussion.
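For anyone curious what such an adapter could look like, here is a minimal sketch using LangChain's custom-LLM pattern (requires langchain-core). The SanitizingLLM class, its toy sanitize helper, and the echoing stub are hypothetical assumptions, not ZeroTrusted's actual library; it only illustrates wrapping sanitization around an upstream model.

```python
import re
from typing import Any, List, Optional

from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


def sanitize(prompt: str) -> str:
    """Toy scrubber: mask anything that looks like an SSN or a card number."""
    prompt = re.sub(r"\b\d{3}-\d{2,3}-\d{4}\b", "123-45-6789", prompt)
    return re.sub(r"\b\d{4}[- ]\d{4}[- ]\d{4}[- ]\d{4}\b", "0000-0000-0000-0000", prompt)


class SanitizingLLM(LLM):
    """Hypothetical adapter: scrub the prompt, then hand it to an upstream model."""

    upstream_model: str = "gpt-3.5-turbo"  # which LLM the firewall should route to

    @property
    def _llm_type(self) -> str:
        return "sanitizing-firewall"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        safe_prompt = sanitize(prompt)
        # A real adapter would forward safe_prompt to the chosen LLM here;
        # this stub just echoes it so the example runs without network access.
        return f"[{self.upstream_model}] would receive: {safe_prompt}"


llm = SanitizingLLM()
print(llm.invoke("My SSN is 666-55-5555, draft a letter to my insurer."))
```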
ZeroTrusted addresses two major pain points of LLMs:
1. Privacy
2. Ensembling of multiple LLMs
Amazing, looking forward to diving deep into ZeroTrusted.
I think privacy and security are among the most important issues nowadays, and it's great to see that we have products that actually take care of our data privacy!
Congratulations, guys!💪🏼
ZeroTrusted perfectly addresses my concerns. I've seen many AI products on Product Hunt, and data privacy always comes up.
I'm curious how the ZeroTrusted technology works to protect data. Can you explain it simply?
@bonvisions We detect sensitive information in your prompt and sanitize it before passing it on to the LLM.
For example, your prompt: Hey ChatGPT, my name is Bon Visions, My credit card information is 123-4567-8910. Give me payment integration code for my site.
Our modified prompt: Hey ChatGPT, my name is John Doe, My credit card information is 111-4444-9999. Give me payment integration code for my site.
Now we pass the modified prompt to ChatGPT, so your sensitive information goes out sanitized and is therefore protected.
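As a rough illustration of the detection step, here is a toy Python detector for payment-card numbers: it collects digit runs that look like card numbers and keeps only those that pass the Luhn checksum. The real detection pipeline covers far more categories (names, SSNs, keys, and so on) and is not shown here; the test card number below is a well-known dummy value.

```python
import re


def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to validate payment-card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def find_card_numbers(text: str) -> list:
    """Return digit runs that look like card numbers and pass the Luhn check."""
    candidates = re.findall(r"\b(?:\d[ -]?){12,18}\d\b", text)
    return [c for c in candidates if luhn_valid(re.sub(r"[ -]", "", c))]


prompt = "My card is 4111 1111 1111 1111, set up the payment integration."
print(find_card_numbers(prompt))  # ['4111 1111 1111 1111']
```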
@sentry_co we support OpenAI and many other LLMs. In our premium plan, we enhance accuracy by running your query on multiple LLMs and then using our scoring model to determine the most reliable result - think Kayak. Users can also select a single LLM.
Our main role is to be the privacy layer for our customers.
@sentry_co this is partly true, however:
1) Any sensitive information is already masked by our system.
2) Most importantly, your identity isn't tied to the query you sent, and it can't be traced back to you, since we don't keep any data and won't even know who sent what. Your queries are just anonymous text embedded within a large corpus of data.
BTW - this is only one aspect of the value we provide.
@femitfash Right. But human data is extremely identifiable. And if you obfuscate it too much, then OpenAI doesn't understand it, right? Or? I'm just trying to understand here. Some kind of illustration or example would make this clearer, I think. A segment of text goes into your service, then goes to OpenAI, etc. What does the data look like at each juncture? You know what I mean? Btw, love the problem you're trying to solve here!
@sentry_co
Here is an example - suppose a medical expert needs to process the following input:
"Create a summary of this patient's diagnosis: Patient Name: Paul Smith Date of Birth: 12/08/1987 SSN: 666-555-5555 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 434444444"
Before we pass it on to any LLM, we alter the sensitive details (which would otherwise be PHI compliance violations) and transform the prompt to:
"Create a summary of this patient's diagnosis: Patient Name: John Doe Date of Birth: 01/01/1970 SSN: 123-45-6789 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 987654321"
This way, we preserve the context while ensuring that your PHI compliance is met and sensitive data isn't exposed to the LLM.
After getting a response from the LLM, we revert the swapped sensitive values to your originals while preserving the result context.
We will be adding features that will allow RLHF to learn and adjust to customer preferences.
Hope this helps.
@johny_burg If you use the Zapier integration for your business needs, it's a low-code/no-code solution, so it's easy to integrate quickly and maintain.
@sophia_watt we utilize an LLM ensemble - we essentially run your query through multiple LLMs and use our scoring model to rank the results. This is quite challenging, but our results are great and they reduce the likelihood of data poisoning compared to the direct output from any single LLM. We are constantly working to make this process as accurate as possible.
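To show the general shape of such an ensemble, here is a small Python sketch. The agreement-based scoring below is a naive stand-in for ZeroTrusted's proprietary scoring model, and the lambda "models" are placeholders for real API calls to GPT-4, Claude, Llama 2, etc.; it only illustrates fanning a query out and ranking the answers.

```python
from typing import Callable, Dict, List, Tuple


def token_overlap(a: str, b: str) -> float:
    """Naive similarity: Jaccard overlap of lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def ensemble(prompt: str, models: Dict[str, Callable[[str], str]]) -> List[Tuple[str, str, float]]:
    """Query every model, then score each answer by its agreement with the others."""
    answers = {name: call(prompt) for name, call in models.items()}
    ranked = []
    for name, answer in answers.items():
        others = [a for n, a in answers.items() if n != name]
        score = sum(token_overlap(answer, o) for o in others) / max(len(others), 1)
        ranked.append((name, answer, score))
    # An outlier (e.g. a poisoned or hallucinated answer) gets a low consensus score.
    return sorted(ranked, key=lambda item: item[2], reverse=True)


models = {
    "model_a": lambda p: "Paris is the capital of France.",
    "model_b": lambda p: "The capital of France is Paris.",
    "model_c": lambda p: "I cannot answer that question.",
}
best_name, best_answer, _ = ensemble("What is the capital of France?", models)[0]
print(best_name, "->", best_answer)
```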
Huge congratulations on the launch, ZeroTrusted team! The platform's commitment to safeguarding data with features like data poisoning prevention and integrity checks is exactly what the industry needs.