TikTok, a short-video sharing app, has announced that it will use more automation to remove videos that violate its community guidelines from its platform.
Currently, videos uploaded to the platform are first analyzed by automated tools that flag potential violations; flagged videos are then reviewed by a member of the safety team. If a violation is confirmed, the video is removed and the user is notified, TikTok said.
Over the next few weeks, the ByteDance-owned company will begin automatically removing certain categories of content that violate its rules on minor safety, adult nudity and sexual activities, violent and graphic content, illegal activities, and regulated goods.
According to the company, automating these removals will free its safety team to focus on more contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior.
TikTok added that it will issue an in-app warning on a user's first violation. For repeated violations, the user will be notified, and the account may be permanently removed.
The changes come as social media giants, including Facebook and TikTok itself, face criticism around the world for allowing hate speech and misinformation to spread on their platforms.
Recently, TikTok resumed operating in Pakistan after a provincial court lifted a suspension of the popular social media service but ordered it to address complaints that it hosted objectionable content. The Pakistan Telecommunication Authority (PTA) had blocked access to the service for the third time following a court ruling on a private citizen's petition claiming the app promotes vulgarity and LGBTQ content.