YouTube is trying to stamp out its second major brand revolt in as many years, putting executives on an emergency call with top marketers and agencies, and sending a memo to Madison Avenue to let advertisers know it’s working on a fix.
The problem: Pedophiles are viewing videos of children, leaving disturbing comments and then sharing links to even worse content in the comments sections. On February 21, AT&T became the latest brand to say it was taking a break from YouTube after learning of the child exploitation.
“Until Google can protect our brand from offensive content of any kind, we are removing all advertising from YouTube,” AT&T said in an emailed statement.
Hasbro also announced it would halt its ads on the site, according to CNBC. Earlier in the week, Disney, Epic Games and Nestlé said they would freeze spending on the platform, setting off the latest “adpocalypse.”
Brands and Madison Avenue agency leaders say YouTube held the conference call on February 20. “It came together very fast,” says one marketing executive who listened to the call.
The exec says YouTube, which is owned by Google, took responsibility and promised action.
The platform’s first ad debacle came in 2017, when ads were found running alongside extremist and racist content. Since that first adpocalypse, YouTube has developed ways to monitor where ads run, so brands can get an audit – but only after the fact.
However, ad executives on the call said that YouTube “dodged” questions about implementing a system that would vet all videos before ads run. They believe Google won’t allow that type of monitoring because it wants to keep its inner workings private. YouTube says it must weigh user privacy before opening its platform to third parties that could poke around.
The offending videos were exposed last week by a YouTube personality who had been criticizing the company for months for featuring videos of children that could be construed as sexual. In many of them, children are playing or trying on clothes. While the clips themselves could be considered harmless, the comments sections are filled with child predators sharing links to worse content or directing other pedophiles to specific moments in the videos.
YouTube’s algorithm was also suggesting videos featuring children even when a viewer wasn’t looking for them or had only watched a tangentially related video. Additionally, the search bar autofilled suggestions that appeared to be terms favored by pedophiles. YouTube has said it removed those search terms and is pulling offending videos out of its recommendations.
“Any content – including comments – that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube,” a company spokeswoman says in an emailed statement. “We took immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.”
Advertisers have been hammering the digital ad industry for years now, especially sites like YouTube and Facebook, over where their messages appear. Facebook, too, has faced issues with violent, extremist and racist content, and is working with advertisers on controlling where ads run.
The brand safety issues, however, have not dented the companies’ profits. Google posted $33 billion in ad sales in the fourth quarter and Facebook generated $17 billion. Together, the two digital powerhouses will capture 60 percent of US digital ad revenue in 2019, according to eMarketer.
Last week, YouTube said it disabled comments on tens of millions of videos and took down 400 channels, and it is working with authorities to investigate any illegal activity.