YouTube and its partners have recently taken a more rigorous stance toward videos on the service that include elements of extremism, white supremacy, or terrorist recruitment. A feature rolled out at the end of July automatically redirects searches for this kind of content to videos that debunk hate speech.

YouTube has not released details of how it flags relevant videos, but it has said the feature is part of a larger push to prevent the service from being used for terrorist recruitment or as an acceptable venue for pushing hateful agendas.

“Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all,” wrote Google’s general counsel Kent Walker. “Google and YouTube are committed to being part of the solution. We are working with government, law enforcement, and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services.”

YouTube and its parent company Google have been working for years to reduce and remove content on their services that violates their policies. The work can be tricky, though, given the nebulous boundaries of free speech. And when it comes to videos of terrorist attacks or rallies, it can be difficult to distinguish a news report from a glorification of hate.

Still, with racial violence on the rise across the US, YouTube, Google, Twitter, and other big names in tech have decided it’s time to step up the pace of content review on their services.

Several new steps have been introduced to cut down on problematic content:

  • YouTube engineers have developed technology that will prevent the re-uploading of known terrorist content after it’s been taken off the site (a simplified sketch of this kind of fingerprint matching appears after this list).
  • New partnerships with NGOs, counter-extremism agencies, and other expert organizations are now in place to help review content.
  • YouTube is applying its most advanced machine learning to train new “content classifiers” that help remove offensive content more quickly (a toy classifier sketch also follows the list).
  • Not content to rely solely on technology, YouTube’s Trusted Flagger program—a collection of actual human beings who review content—will be expanded to include 63 more organizations supported by grant money.
  • Videos that don’t, strictly speaking, violate YouTube terms—for example, videos including inflammatory religious or supremacist content—will only appear behind warnings and will not be eligible for monetization or endorsements.
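YouTube hasn’t published how its re-upload blocking works, but a common approach in content moderation is to fingerprint files that have already been removed and compare new uploads against that blocklist. The sketch below is a minimal illustration of that general idea, not YouTube’s actual system; the `KNOWN_REMOVED_HASHES` set and `is_blocked_upload` function are hypothetical names used only for this example.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 fingerprints of files that were
# previously removed for violating policy (illustration only).
KNOWN_REMOVED_HASHES = {
    "9f2c5e...",  # placeholder fingerprint
}

def fingerprint(path: Path) -> str:
    """Return a SHA-256 hex digest of the file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MB chunks so large video files don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_blocked_upload(path: Path) -> bool:
    """Reject an upload if its fingerprint matches known removed content."""
    return fingerprint(path) in KNOWN_REMOVED_HASHES
```

In practice an exact hash is easily defeated by re-encoding or trimming a video, so production systems rely on perceptual audio and video fingerprints that tolerate small changes; the exact-hash version here just keeps the example short.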
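Likewise, YouTube hasn’t described its content classifiers, but the general machine-learning pattern, training a model on labeled examples and using its predictions to prioritize human review, can be sketched with a small, standard text-classification pipeline. Everything below (the toy training data, the `needs_review` helper, the confidence threshold) is illustrative and is not YouTube’s pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = flag for human review, 0 = leave alone.
# Real systems train on far richer signals (video, audio, metadata, user reports).
texts = [
    "join our violent cause today",
    "how to bake sourdough bread",
    "recruitment video for extremist group",
    "lego city speed build",
]
labels = [1, 0, 1, 0]

# TF-IDF features fed into a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def needs_review(description: str, threshold: float = 0.8) -> bool:
    """Route a video to human reviewers when the model is confident it's problematic."""
    prob_flag = classifier.predict_proba([description])[0][1]
    return prob_flag >= threshold
```

The point of the threshold is that the model doesn’t remove anything on its own: it only decides which uploads get pushed to the front of the human review queue, which mirrors the human-plus-machine division of labor described above.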

While YouTube has definitely had issues with content flagging in the past (remember the failed “restricted mode” that targeted content from LGBT creators?), this latest round of content review appears more in keeping with YouTube’s stated values: allowing freedom of expression while keeping hate speech off the platform.

About 

Martin Ackerman is a freelance writer and current editor originally from Staten Island, NY. His university schooling focused on English education and Japanese. He has a (not so secret) passion for art history and political science. When he isn't writing or editing you can find him at sci-tech conventions, building the latest LEGO city or pampering his cat, Tea. You can follow him on Twitter @MarMackerman.