YouTube said machine learning was helping its human moderators remove almost five times as many videos as they had previously, and that 98% of videos removed for violent extremism are now flagged by algorithms. For more aggressive action on comments, new comment moderation tools are in the works, and in some cases, comments will be shut down entirely.
Wojcicki said: "We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018."
"Human reviewers remain essential to both removing content and training machine learning systems because human judgement is critical to making contextualised decisions on content", she said.
Adverts for major brands were found to be appearing alongside some of the videos, which led several big brands, including Mars and Adidas, to pull advertising from the site.
The Mountain View tech giant has been facing a revolt by advertisers over ads paired with disturbing videos, such as those made by hate groups and religious extremists.
Google is going on a hiring spree to try to stamp out offensive videos and comments on YouTube. "Equally, we want to give creators confidence that their revenue won't be hurt by the actions of bad actors. We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should," Wojcicki concluded.
"Our advances in machine learning let us now take down almost 70 percent of violent extremist content within eight hours of upload and almost half of it in two hours, and we continue to accelerate that speed," said Wojcicki.
The technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess, according to Wojcicki.
Wojcicki says that YouTube is taking lessons learned from the first wave of the Adpocalypse - which was ultimately triggered by violent and extremist content - and applying them to tackle other forms of problematic content (as well as video comments), including content that poses child-safety concerns or features hate speech.