Google AI Faces Challenges in Banishing Mosque Shooting Video from YouTube
YouTube has spent years trying to keep violent and hateful videos off its platform. The Google subsidiary has hired thousands of human moderators and put some of the company's best minds in artificial intelligence on the problem.
That commitment was put to a severe test last Thursday, when a gunman used social media to livestream his horrific attack on a mosque in New Zealand. Countless users then outmaneuvered YouTube's software to spread the shooter's video.
When law enforcement alerted Facebook about the live stream, the social network acted quickly to remove the footage. Unfortunately, it had already been captured and reposted by others on YouTube.
Google said it was "working vigilantly to remove any violent footage" and had taken the video down thousands of times by Friday afternoon. Yet many hours after the attack, copies could still be found, a stark reminder of how much work remains before the major internet companies can reliably police the content shared on their platforms.
“Once content is identified as illegal, extremist, or in violation of their terms of service, there is absolutely no reason that this content cannot be eliminated automatically within a relatively short timeframe at the point of upload,” remarked Hany Farid, a computer science professor at UC Berkeley’s School of Information and a senior advisor to the Counter Extremism Project. “We’ve had the technology to do this for years.”
YouTube has long worked to keep certain videos off its site. Its Content ID tool, in use for more than a decade, lets copyright holders such as film studios claim their content, collect payment, and have unauthorized copies removed. Similar matching technology is also used to blacklist other illegal or unwanted content, including child pornography and terrorist propaganda videos.
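Google does not disclose Content ID's internals, but the general pattern, comparing a fingerprint of each upload against a catalog of reference files supplied by rights holders and then applying the holder's chosen policy, can be sketched roughly as below. Everything here, from the SHA-256 stand-in fingerprint to the policy names, is an illustrative assumption rather than Google's actual implementation.

```python
# Rough sketch of reference-catalog matching in the spirit of Content ID.
# The fingerprint function and policy names are illustrative assumptions only.
import hashlib


def fingerprint(video_bytes: bytes) -> str:
    """Stand-in fingerprint; real systems use robust audio/video signatures."""
    return hashlib.sha256(video_bytes).hexdigest()


# Reference catalog supplied by rights holders: fingerprint -> (owner, policy).
REFERENCE_CATALOG = {
    fingerprint(b"studio-film-master"): ("Example Studio", "monetize"),
    fingerprint(b"known-propaganda-clip"): ("Trust & Safety", "block"),
}


def handle_upload(video_bytes: bytes) -> str:
    fp = fingerprint(video_bytes)
    if fp in REFERENCE_CATALOG:
        owner, policy = REFERENCE_CATALOG[fp]
        return f"matched reference from {owner}: apply policy '{policy}'"
    return "no match: publish normally"


print(handle_upload(b"studio-film-master"))   # matched reference: 'monetize'
print(handle_upload(b"original-home-video"))  # no match: publish normally
```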
Roughly five years ago, Google said it was using AI techniques such as machine learning and image recognition to improve many of its services, including YouTube. As of early 2017, only 8 percent of videos flagged and removed for violent extremism had been taken down before reaching 10 views. After a machine learning-powered flagging system was introduced in June 2017, more than half of the videos removed for violent extremism were taken down with fewer than 10 views, according to a company blog post.
Google executives have repeatedly appeared before the U.S. Congress to discuss violent and extremist videos spread through YouTube. The recurring message from their testimony: YouTube is getting better, refining its algorithms and adding staff to manage the problem. And Google, with its strength in AI, is widely seen as the company best equipped to do so.
So, why was Google unable to prevent a clearly extreme and violent video from being reposted on YouTube?
“There are countless methods to deceive computers,” said Rasty Turek, CEO of Pex, a startup that develops technology competing with YouTube’s Content ID. “It’s like playing whack-a-mole.”
Making minor alterations to a video, such as framing it with borders or rotating it, can confuse algorithms trained to detect problematic content, Turek explained.
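Turek's point about trivial edits can be illustrated with a toy comparison between an exact hash, which changes completely when even a small border is added, and a simple 8x8 average hash, a crude stand-in for the perceptual fingerprints such systems rely on. The hashing scheme and the idea of a bit-distance threshold are assumptions for illustration, not a description of YouTube's matcher; the example needs the Pillow library.

```python
# Why trivial edits defeat exact matching: an added border changes a byte-level
# hash completely, while a perceptual hash (here a simple 8x8 average hash)
# moves only a little. Determined uploaders try to stack enough edits to push
# a clip past the matching threshold. Requires Pillow; values are illustrative.
import hashlib
from PIL import Image, ImageOps


def exact_hash(img: Image.Image) -> str:
    return hashlib.sha256(img.tobytes()).hexdigest()


def average_hash(img: Image.Image) -> int:
    """64-bit average hash: grayscale, shrink to 8x8, threshold at the mean."""
    pixels = list(img.convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


original = Image.linear_gradient("L")          # stand-in for a video frame
altered = ImageOps.expand(original, border=8)  # "re-upload" with a black border

print(exact_hash(original) == exact_hash(altered))             # False
print(hamming(average_hash(original), average_hash(altered)))  # small bit distance
```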
Another significant issue is live streaming, which by its nature gives AI software no chance to analyze a full video before it goes out. Savvy users can take an existing video they know is banned on YouTube and rebroadcast it as a live stream to evade detection by Google's algorithms. By the time YouTube figures out what is happening, the video may already have been playing for 30 seconds or a minute, no matter how advanced the algorithm is, Turek noted.
“Live streaming slows the process to a human pace,” he continued, highlighting a challenge that YouTube, Facebook, Pex, and other companies are striving to overcome.
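The delay behind that rebroadcasting tactic has a simple structural cause: a live segment can only be scanned once it has finished airing, so even an instant, perfect classifier trails the broadcast by at least one segment. The toy model below makes that floor explicit; the segment length, the Segment type, and the keyword check standing in for a classifier are all assumptions, not a description of any real pipeline.

```python
# Toy model of live-stream moderation latency: a segment can only be checked
# after it has fully aired, so detection always lags the broadcast by at least
# one segment, however fast the classifier is. All values are illustrative.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class Segment:
    start_s: float      # seconds since the stream began
    duration_s: float
    data: bytes


def earliest_cutoff(segments: Iterable[Segment],
                    looks_violent: Callable[[bytes], bool]) -> Optional[float]:
    """Return the stream time at which the broadcast could first be stopped."""
    for seg in segments:
        available_at = seg.start_s + seg.duration_s  # only now can it be scanned
        if looks_violent(seg.data):
            return available_at
    return None


# Hypothetical 10-second segments; the second one contains flagged content.
stream = [Segment(0, 10, b"ok"), Segment(10, 10, b"violent"), Segment(20, 10, b"ok")]
print(earliest_cutoff(stream, lambda d: b"violent" in d))  # 20: viewers saw 20 seconds
```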
This rebroadcasting strategy complicates YouTube’s method of blacklisting videos that contravene its rules. Once a problematic video is pinpointed, the company adds it to a blacklist. Its AI-driven software is then trained to recognize and block the clip if another user attempts to upload it again.
However, there is still a time lag before the AI can effectively detect other copies. By definition, the video must first exist online before YouTube can initiate this machine-learning process. This issue is compounded when individuals begin slicing the offensive content into short live-stream clips.
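A crude version of that blacklist check, and the cold-start gap it implies, might look like the sketch below: nothing can match until the first copy has been identified and its fingerprint added, and near-duplicate matching then depends on a distance threshold that clever edits try to exceed. The 64-bit hashes, threshold, and function names are assumptions made up for this illustration.

```python
# Illustrative upload-time check against a blacklist of known-bad fingerprints.
# It shows the cold-start gap: the first copy must be seen, reviewed, and added
# before any re-upload can be matched. Hashes and the threshold are assumptions.
BLACKLIST: set = set()   # perceptual hashes of videos already taken down
MATCH_THRESHOLD = 10     # max differing bits still counted as a near-duplicate


def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def is_blacklisted(video_hash: int) -> bool:
    return any(hamming(video_hash, bad) <= MATCH_THRESHOLD for bad in BLACKLIST)


def review_upload(video_hash: int) -> str:
    if is_blacklisted(video_hash):
        return "blocked at upload"
    # Cold start: nothing matches until the first copy has been flagged,
    # reviewed, and its fingerprint added to the blacklist.
    return "published; awaiting flags or review"


first_copy = 0b1011_0010_1110_0001   # hypothetical hash of the original upload
slightly_edited = first_copy ^ 0b11  # re-upload with minor edits (2 bits differ)

print(review_upload(first_copy))        # published; awaiting flags or review
BLACKLIST.add(first_copy)               # identified and blacklisted after the fact
print(review_upload(slightly_edited))   # blocked at upload
```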
Further complicating matters is that reputable news organizations have also been posting edited clips of the shooting video as part of their coverage of the tragedy. If YouTube were to remove a news segment simply because it included a screenshot of the video, it could face objections from advocates of press freedom.
The shooter in New Zealand used social media adeptly to maximize his reach. He posted on forums frequented by right-wing and anti-Muslim groups, tweeted about his plans, and then started a Facebook live stream on his way to carry out the attack.
He also released a manifesto laden with references to internet and alt-right culture, likely intended to furnish journalists with additional material, thereby amplifying his notoriety, according to Jonas Kaiser, a researcher associated with Harvard’s Berkman Klein Center for Internet and Society.
“The patterns appear very similar to past incidents,” Kaiser reflected.