Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

  • Mailchimp, MailPoet and Vimeo all suspended Steve Bannon’s media outlet “War Room: Pandemic” after Nandini flagged it for them. This was after he was banned from Twitter for suggesting Dr. Fauci be beheaded.
  • Pubmatic removed disinformation outlet The National Pulse from their inventory after we flagged it for them.
  • Nandini spoke to NBC News about hate groups using payment processors.
  • Claire spoke to Sonoo Singh on Conscious Ad Network’s podcast Conscious Thinking.

Four years ago, Steve Bannon was headed to the White House as Chief Strategist to the President. As executive chairman of Breitbart, he had perfectly executed a strategy he refers to as “flooding the zone with bullshit.” That is, turning on a firehose of fake stories and propaganda with the express purpose of sowing chaos in the media ecosystem. Bannon successfully orchestrated the rise of President Donald Trump, and he did it with millions of your advertising dollars.

He has since moved on to other pastures. On Saturday, Bannon was on “War Room: Pandemic,” a show he co-hosts with Raheem Kassam, where he called for the beheading of Dr. Anthony Fauci. This time, Kassam, former editor of Breitbart London and Bannon protégé, has a platform of his own with which to flood the zone with bullshit: he is breathlessly updating a “Stop The Steal” liveblog on his new website, The National Pulse.

It’s incredible to watch. Four years after Sleeping Giants began, the same people behind Breitbart continue to operate on the same business model we first brought to advertisers’ attention. They’ve started over with new domains, new account IDs and a new storyline: election fraud. It’s Breitbart all over again, and once again, it’s monetized by us.

How is the ad tech ecosystem letting this happen again?

The National Pulse is the perfect encapsulation of what the collective efforts of the ad tech industry have achieved in brand safety: nothing. One could (and today, one will) argue that the range of solutions they’ve brought to market has actively made the industry worse:

  • Keyword blocking. We’ve previously reported on how this “boomer-era” technology doesn’t help you stay off bad faith publishers and has ended up blocking $3 billion globally from the news media.
  • Semantic intelligence. We took a demo for a spin recently and found that it labels everything from Black Lives Matter stories to lesbians as “negative” while giving white nationalists a pass. This technology is a scam. No thanks.
  • Ads.txt & Sellers.json. We’ve reported on how these industry-wide measures could be helpful for cross-checking ad placements if Google didn’t keep most of the information shrouded in secrecy and/or if IAB Tech Lab made the directory free and open to the public.
  • Content taxonomy. A dizzying list of categories and scores (“low risk,” “medium risk,” or “high risk”) that still do not differentiate between bad faith publishers who exploit and inflame socially sensitive topics and the responsible media outlets that cover them.

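For readers who haven’t peeked under the hood: ads.txt is just a plain-text file at a publisher’s root domain listing which exchanges are authorized to sell its inventory. Cross-checking a placement against it is not rocket science. Here’s a minimal sketch of that check (the domain and account IDs below are made up for illustration; a real crawler would also handle redirects, errors and sellers.json lookups):

```python
import urllib.request


def parse_ads_txt(text: str) -> list[tuple[str, str, str]]:
    """Parse ads.txt content into (exchange_domain, account_id, relationship) records."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments and whitespace
        if not line or "=" in line:            # skip blanks and variables (contact=, subdomain=)
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:                   # certification ID (field 4) is optional
            records.append((fields[0], fields[1], fields[2].upper()))
    return records


def is_authorized(records, exchange: str, account_id: str) -> bool:
    """Does this exchange/account pair appear in the publisher's ads.txt?"""
    return any(r[0].lower() == exchange.lower() and r[1] == account_id
               for r in records)


def fetch_ads_txt(domain: str) -> list[tuple[str, str, str]]:
    """Fetch and parse a publisher's ads.txt (illustrative; no retries or caching)."""
    with urllib.request.urlopen(f"https://{domain}/ads.txt") as resp:
        return parse_ads_txt(resp.read().decode("utf-8", errors="replace"))
```

That’s the whole cross-check. It only works, of course, if the seller directories you’re checking against are complete and public — which is exactly the part the industry keeps locked up.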
These solutions have one thing in common: they are designed to scale. Folks in ad tech like to think big and they love to automate. So do they work? You tell us.

This year alone, we have found AdRoll, Criteo, and Magnite (formerly Rubicon Project) monetizing The Gateway Pundit. We have found Pubmatic monetizing The National Pulse. We have found MediaMath monetizing The Epoch Times, The Federalist and RT.com.

Last summer, One Angry Gamer, a gaming site promoting Holocaust denial, was running ads served by Criteo, AdRoll, and OpenX, among others.

Don’t worry, they don’t serve ads on these websites anymore. We took screenshots, emailed them to the companies and asked them how these sites made it through their inventory vetting process. Then, they blocked them.

That’s why The National Pulse is not a fluke or a one-time oversight, as these companies like to say. It is the norm. It is the normal course of business to recklessly grow their inventory and hope no one notices.

The ad tech industry may claim they’re on the cutting edge of brand safety. But from where we stand, it sure looks like they’re living off 1950s technology tacked together with some duct tape and a couple of random people on Twitter.

Are ad exchanges ever going to *look* at the websites they monetize?

There is one foolproof brand safety method that would definitely work. It’s called “looking at the websites.” Literally put the URL in your browser and look. Read the articles. Check who owns them. Evaluate them against a clear set of brand safety criteria.

This appears to be completely off the table in the ad tech world. We’ve been laughed at for even suggesting it. We believe ad exchanges won’t hire qualified humans to vet their inventory for two reasons:

  • They don’t want to make difficult decisions. One inventory manager told us that the only research they did before approving conspiracy theory website The Epoch Times for their inventory was to check out the All Sides media bias chart. (It would have been helpful if they looked at this extensive NBC News investigation too.)
  • It’s more cost-efficient to keep you in the dark. It’s expensive to hire people and it’s cheap to build shitty AI and tell your customers you’ve got it covered.

If brand safety automation worked, we would pack up our bags and go home. But not only does it not work, four years into this situation… it’s getting worse. The only thing the ad tech industry seems to have effectively deployed is a PR strategy that tells marketers everything’s going to be OK.

This system isn’t working for the thousands of brands who rely on ad exchanges and brand safety tech vendors. So, it’s time for brand safety to become our job.

It’s time to take brand safety in-house.

Brands, it’s time to enter the ring

By now, marketers have tried in many ways to communicate that we don’t want to fund hate speech.

This summer, the Stop Hate for Profit campaign became the largest ad boycott in history. The Global Alliance for Responsible Media (GARM) released a new definition of brand safety in September. Last month, Pernod Ricard launched Engage Responsibly, a campaign that suggests you can offset your hate speech footprint the way you offset your carbon footprint.

These initiatives may yield tangible results for the industry one day, but they’re never going to protect your brand. To get what you want, you are going to have to be specific.

At Check My Ads, we equip your brand with an immediate plan to set specific, documented standards for your media decisions.

  • Check your ads. We look at site placements for disinformation and hate speech, and help you wipe them out of your media buy.
  • Draw your line. We work with leadership teams to develop your brand-specific criteria for what is appropriate and not appropriate use of your ad budget. We hold a series of workshops, and emerge with a clear picture of how your brand should show up online.
  • Develop a playbook. This is a standard operating procedure document that explains exactly how to operationalize your brand values when it comes to buying ad space, managing brand safety, and responding in the case of a brand safety crisis.

At the end of this, you are armed with a set of brand policy guidelines. Anyone spending your ad budget will be empowered to make media buying decisions that reflect your brand standards.

Yes, I need brand policy guidelines!

When it comes to brand safety, the ad industry has passed the buck onto you. Take it.

We’re booking for 2021.

Thanks for being here!

Dini and Claire


Did you like this issue? Then why not share it? Please send tips, compliments and complaints to @nandoodles and @catthekin.