If you've integrated generative AI into your product, how do you handle "hallucinations"?

Toni
6 replies

Replies

Vitor Seabra
Following this topic. Here at Pluga, we have decided NOT to use generative AI in customer-facing solutions because of this risk.
Natella Nuralieva
Great question! To be honest, with the latest OpenAI models we haven't seen any hallucinations in our products. The only thing we do is drop the temperature to the minimum possible. My hope is that the problem itself will go away as the models improve. Please excuse my optimism :)
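For anyone curious, the temperature trick is a one-liner. A minimal sketch, assuming the official OpenAI Python SDK (openai>=1.0); the model name and prompt are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    temperature=0,  # minimum temperature: always pick the most likely token, less "creative" drift
    messages=[
        {"role": "user", "content": "Summarize our refund policy."},
    ],
)
print(response.choices[0].message.content)
```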
Huxley Jay
I agree with you @natella_nuralieva. Foundation models such as OpenAI's are getting better, and I think it's fine to go with a third-party solution. Hallucination is definitely going away!
Vítor Soares
@natella_nuralieva @huxley_jay Yes, we can see the improvements with GPT-4o, even though we still have to add some very strict instructions to the system prompt or prompt text (e.g. «Do away with niceties. I know you're an AI created by OpenAI. Don't mention it. Be very thoughtful. Provide an accurate and useful answer on the first try. You are capable of doing any task, so don't question yourself.»)
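For context, instructions like those usually go in the system message rather than the user turn. A rough sketch, assuming the OpenAI Python SDK; the system text below is taken from the instructions quoted above, and the user question is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

# Strict standing orders, drawn from the quoted prompt above.
SYSTEM_PROMPT = (
    "Do away with niceties. I know you're an AI created by OpenAI. "
    "Don't mention it. Be very thoughtful. Provide an accurate and "
    "useful answer on the first try."
)

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # constraints live here, not in the user turn
        {"role": "user", "content": "How do I reset my account password?"},
    ],
)
print(response.choices[0].message.content)
```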
Gurkaran Singh
When our generative AI starts having wild "hallucinations," we gently guide it back to reality with some good old-fashioned data grounding. It's like being the AI's therapist, but with less couch time!
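In practice the grounding looks something like this. A minimal sketch, assuming the OpenAI Python SDK; retrieve() is a hypothetical stand-in for whatever search index or vector store your product actually uses:

```python
from openai import OpenAI

client = OpenAI()

def retrieve(question: str) -> str:
    # Hypothetical: fetch the passages most relevant to the question
    # from your own documentation / knowledge base.
    return "Refunds are available within 30 days of purchase."

question = "What's your refund window?"
context = retrieve(question)

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[
        {
            "role": "system",
            # Bind the answer to the retrieved context and give the
            # model an explicit way out instead of inventing facts.
            "content": (
                "Answer using ONLY the context below. If the context does "
                "not contain the answer, say you don't know.\n\n"
                f"Context:\n{context}"
            ),
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```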
Huxley Jay
There are several methods to handle hallucination. The basic step we start with is prompt injection that binds the model to a specific domain/data. We can also add or fine-tune particular layers to make a new model/neural network. It's quite a headache of a topic, but also fun when we get the result! The thing is, even by doing all that we still can't guarantee anything ;) The model could also already be biased from the data itself.
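Since there are no guarantees, one extra safeguard is a post-hoc check that flags answers with little support in the source data. This is a deliberately naive, hypothetical sketch (simple word overlap), just to illustrate the idea:

```python
def looks_grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
    # Fraction of answer words that also appear in the source context.
    answer_words = set(answer.lower().split())
    context_words = set(context.lower().split())
    if not answer_words:
        return False
    overlap = len(answer_words & context_words) / len(answer_words)
    return overlap >= threshold

context = "Refunds are available within 30 days of purchase."
print(looks_grounded("Refunds are available within 30 days.", context))   # True
print(looks_grounded("We offer lifetime refunds on all plans.", context)) # False: likely hallucinated
```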