
How do you deal with fake news, violent content, offensive gestures, or languages?

Minh Dang
9 replies
Image classification and moderation is a process in which moderators review images and videos in real time to detect offensive and illegal imagery, such as pornography, gore, guns, abuse, and extremist content. The work involves filtering and labelling custom categories such as dating profiles, ad compliance, and image categories, as well as managing illegal-content escalation.
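
A rough sketch of that labelling-and-escalation flow, assuming a hypothetical classify_image model, made-up category names, and arbitrary thresholds (none of this is a real moderation API):

```python
# Hypothetical sketch only: classify_image, the category names and the
# thresholds are assumptions for illustration, not a real moderation API.
from dataclasses import dataclass

ILLEGAL = {"extremist", "gore", "abuse"}          # always escalate these to humans
AUTO_LABEL_CONFIDENCE = 0.95                      # assumed cut-off for auto-labelling

@dataclass
class Label:
    category: str        # e.g. "dating_profile", "ad_compliance", "gore"
    confidence: float

def classify_image(image_bytes: bytes) -> list[Label]:
    """Stand-in for whatever model or vendor does the actual scoring."""
    return [Label("ad_compliance", 0.97)]         # dummy result for the demo

def moderate(image_bytes: bytes) -> str:
    top = max(classify_image(image_bytes), key=lambda l: l.confidence)
    if top.category in ILLEGAL:
        return "escalate_to_human"                # illegal-content escalation path
    if top.confidence >= AUTO_LABEL_CONFIDENCE:
        return f"auto_label:{top.category}"       # confident custom label
    return "manual_review"                        # unsure -> human labeller

print(moderate(b"fake image bytes"))              # -> auto_label:ad_compliance
```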

Replies

Jim Zhou
Use AI-based solutions to do a first-pass deletion/hide, and if the decision is appealed, let a human review it. You're exercising your freedom of speech; they're violating your constitutional rights, provided you are in the US, if they force compelled speech on you.
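
A minimal sketch of that "AI first pass, human on appeal" idea, assuming a hypothetical score_post model, an arbitrary threshold, and an in-memory review queue:

```python
# Hypothetical sketch: score_post and the 0.9 threshold are assumptions.
from collections import deque

HIDE_THRESHOLD = 0.9
human_review_queue: deque = deque()

def score_post(post: dict) -> float:
    """Stand-in for the AI first pass; returns probability of a violation."""
    return 0.95 if "banned-word" in post["text"] else 0.1

def first_pass(post: dict) -> None:
    if score_post(post) >= HIDE_THRESHOLD:
        post["hidden"] = True                     # AI hides it automatically

def appeal(post: dict) -> None:
    if post.get("hidden"):
        human_review_queue.append(post)           # a person makes the final call

post = {"text": "this contains a banned-word", "hidden": False}
first_pass(post)
appeal(post)
print(post["hidden"], len(human_review_queue))    # -> True 1
```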
Minh Dang
@jim_zhou Exactly what we are doing here at Pure Moderation. Thank you for your response. I wish you a good week ahead.
Paul Woodthorpe
They use AI for level 1 and humans for level 2. What gets moderated depends on the social network concerned. For Facebook, Twitter, etc., AI can detect nude or graphic images or keywords/phrases. These are flagged and removed from view, and if the poster feels an innocent post has been removed, it gets sent to a human for manual review.

Level 2 is human moderation, but it is usually limited to AI-flagged posts, reported posts, or posts the AI removed and the poster has asked to be reviewed. Some networks throw in random posts to be checked as well, but usually there are far too many AI-flagged posts to get through as it is, and simply too much posted content for humans to handle. I believe I read somewhere that human-moderating everything posted to Facebook would require more than half of the world's population working for Facebook.

On top of that, there is usually public reporting: people using the "Report this post" option that many social networks offer. This is where the community self-moderates, but again, depending on the network, a report often goes to an AI first, which determines whether the post should be removed or passed on to a human to review.

To say that AI is not yet capable of understanding this content is simply wrong. It can detect people in photos, blood, sex acts, nudity, phrases, words, objects and so on, and it constantly learns from every item it moderates to improve accuracy; it is far more efficient than any human, with a high success rate. Don't quote me, but I believe one social network claims that over 98% of harmful content is caught by its algorithms, whereas humans were only catching around 80%.

That is not to say it is perfect. TikTok, for example, has AI that is prone to incorrectly flagging items in live streams or behaviour in videos. Many people have been blocked or had content removed because the AI wrongly detected nudity when, say, nipple outlines could be seen through a certain colour of t-shirt, while skimpy bikinis were permitted. The problem with TikTok is that it is incredibly difficult to get a human review of a removed post.

Of course, all of this comes at a massive cost. For the likes of Facebook, Twitter, Instagram, TikTok and YouTube, which have entire governments breathing down their necks to moderate content swiftly, spending on these systems runs close to a billion dollars. But much smaller networks or platforms with only thousands of users, and much smaller budgets and responsibilities, can often get away with public reporting and small teams of moderators.
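
A sketch of the routing Paul describes: level 1 is the AI, level 2 is a human queue fed by AI flags, user reports, appeals, and a small random QA sample. The function name, scores, and sampling rate are assumptions, not any platform's actual pipeline:

```python
# Hypothetical routing logic for the level 1 / level 2 split described above.
import random

QA_SAMPLE_RATE = 0.001          # assumed: tiny random sample for quality checks

def route(post: dict, ai_score: float) -> str:
    """Decide what happens to a post: AI is level 1, humans are level 2."""
    if ai_score >= 0.9:
        return "removed_by_ai"                    # level 1: auto-remove, appealable
    if post.get("user_reported") or post.get("appealed"):
        return "human_review"                     # level 2: reports and appeals
    if random.random() < QA_SAMPLE_RATE:
        return "human_review"                     # random spot check
    return "published"                            # the vast majority of posts

print(route({"user_reported": True}, ai_score=0.2))   # -> human_review
```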
Minh Dang
@exopaul Wow. Well said. Would love to connect with like-minded people doing the same good things. What is the best way for me to connect with you? Do you use LinkedIn?
Paul Woodthorpe
@exopaul @minh_dang_ngoc I do, but I never use it. I accidentally cancelled your friend request on Facebook, so I have re-sent you one. :)
Anyfactor
The way everyone does this: VAs and moderators. Even big tech can't solve these issues with AI. They are using living, breathing human beings to moderate and censor.
Minh Dang
@anyfactor There's not an AI system in the world that can understand context the way human moderators can. It won't understand the back story or the complexities of online relationships. It can pick out some hate speech, but hate speech doesn't always appear as specific words; it can be subtle and it can be complex.
Norman GDrum
Use AI-based solutions to perform the first-pass deletion, and then have a human evaluate the judgment if it is disputed. Celebrities have also been victims of such violence and have begun to speak out about it. Read this blog https://www.femestella.com/9-celebrities-affected-by-domestic-violence/ about 9 celebrities affected by domestic violence. If you know someone who is going through something similar and wants to talk about it, now is the moment.