TUNiBridge
Natural Language APIs for Every Business
Allie Sung

Safety Check (KOR v0.5.1) — Introducing your one & only safety check model

Safety Check (KOR) is a safety engine that classifies unethical expressions into 11 different categories. Using a deep-learning classification model, Safety Check assigns the given text to its precise category and reports the likelihood of its prediction.
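As a sketch of how a client might consume a result like the one described above, the snippet below picks the top category from a per-category likelihood map. The response shape and field names here are assumptions for illustration, not TUNiB's documented API:

```python
# Hypothetical sketch of handling a Safety Check-style result.
# The response shape and field names below are assumptions for
# illustration only, not TUNiB's documented API.

def top_label(response: dict) -> tuple[str, float]:
    """Return the most likely category and its likelihood."""
    scores = response["scores"]          # category -> likelihood
    label = max(scores, key=scores.get)  # highest-scoring category
    return label, scores[label]

# Example of what a classification endpoint might return:
sample = {
    "text": "example comment",
    "scores": {"insult": 0.91, "violence": 0.05, "obscenity": 0.04},
}
print(top_label(sample))  # ('insult', 0.91)
```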
Replies
Gladys Atienza
That sounds like an incredible service! It'll definitely save businesses time and money in the long run by making their data easier to work with.
Allie Sung
Hi, Product Hunt! I'm Allie, the project manager at TUNiB. Comments containing hate speech and passive-aggressive cyberbullying have become a serious problem online. Manually reviewing user-reported comments or chats has been the conventional way to filter these unethical texts, but this approach has clear limitations in both speed and ethics: too many hateful words are exchanged online, and the people who filter them are too few. TUNiB's vision is to keep harmful comments from spoiling our Internet experience, and we would like to introduce Safety Check as a safeguard that protects users from all kinds of unethical text. The classification categories currently supported are: insult, swear words, obscenity, violence, and aversion based on gender, age, race, disability, religion, politics, and occupation. We are offering a one-month free trial (limited to 10K API calls). Leave an inquiry at https://tunibridge.ai/#talkToSal..., and we will contact you shortly! Please let me know if you have any questions. 🙌
Farisa Ottaviano
Great work! Congratulations on the launch!
Rhymer Espinosa
Congrats on the launch! Wishing you all the best. <3
Rhymer Espinosa
@allie2022 You're welcome. An avid fan here. I'll be on the lookout for more amazing things to happen with your product.