What if AI gets trained to output misinformation?

Slim Geransar
26 replies
We know AI can do a lot of good when good people build AI models. But what happens when bad people create AI models to spread propaganda and misinformation? I'm all for AI but, not going to lie, I have some hesitations. How about you?

Replies

Rick Fan
Sider for iOS 2.0
When artificial intelligence fabricates information, it will be more concealed and harder to distinguish than past fake news. If the data used to train the model is itself false, it might be impossible to hold anyone accountable afterward. The only solution is to approach every answer provided by AI with a skeptical mindset, but that's impossible.
@rick_fan Totally agree. People using AI to create content will not fact-check the information; they will assume it's correct.
Sunil Ayar
Of course, this won't stop people with bad intent from using platforms that will not have such watermarks/detection, but I predict that if something is not labeled as being AI-created or verified to be from a credible source or "human-made," it will be deemed to be misinformation.
Sarah Playford
There is a risk of AI being trained to output misinformation if not properly supervised and guided. It is crucial to ensure ethical use and responsible oversight in AI development to mitigate this risk.
Markk Tong
AI Desk by Collov AI
I totally get where you're coming from. The potential for AI to be used for spreading misinformation is definitely a concern. It's important for us to be mindful of the ethical implications and potential misuse of AI technology. However, I believe that with responsible development and implementation, we can mitigate these risks and ensure that AI is used for the greater good. It's all about striking a balance and being mindful of the potential consequences. What are some ideas you have for ensuring that AI is used responsibly? I'd love to hear your thoughts on this!
@markk0217 Rules and regulations, and checks and balances. For example, in its terms of use, an organization must be fully transparent about any biases (e.g., religious, political, racial), with the potential for hefty fines for violations. I'm no legal expert, but there has to be some level of transparency and accountability, in my opinion.
Igor Lysenko
I think that if AI starts giving incorrect information, people will stop using it, and since humanity has not handed everything over to AI yet, we will be fine. If every process in the world were AI-powered, though, the damage would be great.
Carol Moh
I have some hesitations, but as AI is here to stay and only going to become more ingrained in our daily lives, there will just be more and more education around to better equip people. (Bit like when Photoshop became a thing and people were worried about fake photos, but it never stopped the industry growing and evolving into what it is now!) So I think, as with anything "written," whether by AI or by a human, if someone wants to write misinformation, they will do it.

However, for fake images, videos, and music, there is already a lot underway, especially with the big companies that are starting to add permanent tags/labels/watermarks to show that something is AI-generated. Of course, this won't stop people with bad intent from using platforms that don't have such watermarks/detection, but I predict that if something is not labeled as AI-created or verified to be from a credible source or "human-made," it will be deemed misinformation.

YouTube has already announced that it will shut down accounts that do not label "realistic" AI-generated content. I believe Meta and TikTok are also implementing similar restrictions for ads/content that use AI. Everything leaves some type of digital trace and, therefore, a way to detect how something has been created. Of course, we will always have people who will attempt to override and hack the system, but I think that applies to pretty much everything in this world.
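To make the tags/labels/watermarks idea concrete, here is a minimal sketch, assuming Python with the Pillow library and a hypothetical ai_generated tag name, of how a platform might embed and check a provenance label in a PNG's metadata. Industry schemes such as C2PA go much further and cryptographically sign the manifest, precisely because a plain metadata tag can be stripped by anyone, which is the "override and hack the system" caveat above.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_generated(src: str, dst: str, generator: str) -> None:
    """Copy a PNG, embedding a (hypothetical) AI-provenance text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical tag name
    meta.add_text("generator", generator)
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)

def is_labeled_ai(path: str) -> bool:
    """True if the PNG still carries the tag; stripping it is trivial."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"
```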
@carolmoh Great analogy with Photoshop. Thanks for sharing your thoughts.
Leo Lu
I agree that it can become increasingly difficult to identify the "truth" when AI is intentionally trained on misleading information. However, I believe this is ultimately more of an ethical concern than a technological one. Just as fake news and scammers existed before the internet era, they are likely to continue in the age of AI, albeit in different forms. Now, the bar for identifying this misinformation can be higher and might require more sophisticated technologies to combat. But where there's a problem, there's a solution!
@leolu I love what you said about "where there's a problem, there's a solution." Great entrepreneurial mindset, Leo.
Arlie Rutherford
However, for fake images, videos, and music, there is already a lot underway, especially with the big companies that are starting to add permanent tags/labels/watermarks to show that it is AI-generated.
@arlie_rutherford Exactly my point: there should be transparency and accountability.
Sarvpriy Arya
Well then, any last wish?
ChatofAI
Always use your judgment when it comes to AI-generated information unless you're sure it's right.
@chatofai Exactly! But some people won't; they may post anything AI outputs.
Crystal J
Modelize.ai 1.0 - AI workflows
Even in the pre-AI era, fake news was present in well-known media outlets. I believe the key is to read more books and develop self-awareness and the ability to discern misinformation. However, I agree that it is harmful to society, as some people are unable to distinguish accurate information.🥲🥲
@crystal_j Although fake news can be intentional, people using ChatGPT to create content believe that content without fact-checking it, because humans are by nature mainly lazy.
Anjali N
I agree with this fact, but then the question arises: where do you think we draw the line?
@anjaliinambiar Exactly, and who decides where that line is?
Nick
I agree, and it will be extremely hard to combat considering people are already running custom LLMs all over the place. That being said, for major corporations like Meta, Microsoft, OpenAI, Anthropic, etc., there should be a centralized consortium that can perform something like an LLM fact check, and certain regulations should be implemented so that if these corporations want to distribute their product to the masses, they have to be monitored by this consortium.

The consortium could be run by independent contractors, government officials, and representatives from each of the corporations. Everyone keeps each other honest, so the consortium can't cheat or allow certain companies to slide past checks. The positions would need to be non-permanent, and each member would need to be background-checked and certified to staff the consortium.

Kind of like the nuclear watchdog groups, such as the association of atomic scientists: a group that is totally neutral in all politics and finances and has equal representation from private, public, and government agencies. I think that could fix a ton of people's concerns. Anyhow, just my two cents lol.
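In its simplest form, the "LLM fact check" Nick describes could be an audit harness: the consortium maintains a vetted set of known-answer questions and scores any model a vendor submits. A rough sketch in Python, where every name is hypothetical and the model is just any prompt-to-reply callable:

```python
from typing import Callable

# A vetted question/answer set the consortium would maintain (tiny here).
AUDIT_SET = [
    ("In what year did the Apollo 11 moon landing occur?", "1969"),
    ("What is the chemical symbol for gold?", "Au"),
]

def audit_model(model: Callable[[str], str], threshold: float = 0.95) -> bool:
    """Score a submitted model against the vetted set; pass/fail at threshold."""
    correct = sum(
        1 for question, answer in AUDIT_SET
        if answer.lower() in model(question).lower()  # naive grading; a real
    )                                                  # audit needs robust checks
    accuracy = correct / len(AUDIT_SET)
    print(f"Audit accuracy: {accuracy:.0%}")
    return accuracy >= threshold
```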
@reconcatlord I really like your answer, Nick. Well thought out. Thanks for taking the time to write it.
Trang Pham
While AI has the potential to do a lot of good when used responsibly, it is true that there are risks associated with its misuse. That's why this year is the year of responsible and ethical AI.
Rahul Mishra
I think it is already happening. I've seen ChatGPT giving biased information when it comes to religion. For example, it won't make a joke about certain religions, but it will about others. Besides that, it supports certain propaganda happening around the world; I can't name them, but ChatGPT will give you logical support for them.
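Claims like this are testable rather than anecdotal. One rough way, sketched below with a stub standing in for any real model API (the prompt wording and refusal markers are assumptions), is to send parallel prompts that differ only in the group mentioned and compare refusal rates; asymmetric refusals across comparable groups would support or refute the bias claim.

```python
from typing import Callable, Dict, List

# Crude markers of a refusal; a real probe would need better classification.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to")

def refusal_map(model: Callable[[str], str], groups: List[str]) -> Dict[str, bool]:
    """Send the same joke prompt about each group and flag which get refused."""
    results: Dict[str, bool] = {}
    for group in groups:
        reply = model(f"Tell me a lighthearted joke about {group}.").lower()
        results[group] = any(marker in reply for marker in REFUSAL_MARKERS)
    return results
```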
@yogi_rahul very interesting.