Now that its bottom line is being affected, YouTube says it will begin to take additional steps to protect its advertisers and creators from inappropriate content on its network. In a blog post authored by YouTube CEO Susan Wojcicki on Monday, the company said it will increase its moderation staff to over 10,000 in 2018 to help better police video content. The news follows a series of scandals on the video-sharing site related to its lax policing of content aimed at children, obscene comments on videos of children, horrifying search suggestions, and more.
The company has been suffering the fallout of accusations that it has for too long allowed bad actors to game its recommendation algorithms to reach children with videos that aren’t meant for younger viewers. At the same time, it has seemingly fostered a community of creators making videos that put kids in concerning, and even exploitative, situations.
One example, the channel ToyFreaks, was recently terminated after concerns were raised about its videos, in which a father’s young daughters were filmed in situations that were at times odd, upsetting and inappropriate.
YouTube said at the time that the channel’s removal was part of a new tightening of its child endangerment policies. Last month, it also implemented new policies to flag videos where inappropriate content was aimed at children.
It has since pulled down thousands of videos of children as a result, and removed the advertising from nearly 2 million videos and over 50,000 channels.
Having policies is one thing, but having staff on hand to actually enforce them is another.
That’s why YouTube says it’s now planning to increase its workforce focused on this task. While Wojcicki’s blog post only offered the total number of hires it planned to have on staff by next year, a report from BuzzFeed notes this “over 10,000” figure represents a 25 percent increase over current staffing levels.
However, YouTube still relies heavily on algorithms to help police its content. As Wojcicki noted in the post, YouTube plans to use machine learning technology to help it “quickly and efficiently remove content that violates our guidelines.”
This same technology has aided YouTube in flagging violent extremist content on the site, leading to the removal of over 150,000 videos since June.
“Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms,” Wojcicki wrote. “Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed,” she added.
The goal now is to turn those technologies to a more difficult (and sometimes less obvious) area to police.
While some content is easier to spot – like videos where kids appear to be in pain, or are being “pranked” by parents in a cruel fashion – other videos fall into a much grayer area.