Safety Check (KOR) is a safety engine that classifies unethical expressions into 11 categories. Given an input text, its deep-learning classification model predicts the most likely category and reports the likelihood (confidence score) of that prediction.
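
As a rough illustration of how such a categorize-and-score step could look, here is a minimal sketch using a generic Hugging Face sequence-classification model. The checkpoint path, the `classify` helper, and the example sentence are hypothetical placeholders for illustration only, not the engine's actual API or weights.

```python
# Minimal sketch: predict a category and its likelihood for a single text.
# MODEL_PATH is a hypothetical placeholder, not a real published checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_PATH = "path/to/safety-check-kor-checkpoint"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
model.eval()

def classify(text: str) -> tuple[str, float]:
    """Return (predicted category, likelihood) for one input text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    idx = int(torch.argmax(probs))
    # id2label in the model config would map indices to the 11 category names
    return model.config.id2label[idx], float(probs[idx])

category, likelihood = classify("예시 문장입니다.")  # "This is an example sentence."
print(f"{category}: {likelihood:.3f}")
```

The sketch assumes the 11 category names are stored in the checkpoint's `id2label` mapping, so the same code works regardless of which category the model predicts.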