The next fleet of Breitbarts is already raking in your ad dollars

The ad tech ecosystem is failing us again, so here’s what we’re gonna do.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

  • Mailchimp, MailPoet and Vimeo all suspended Steve Bannon’s media outlet War Room: Pandemic after Nandini flagged it for them. This came after Twitter banned Bannon for suggesting Dr. Fauci be beheaded.

  • Pubmatic removed disinformation outlet The National Pulse from its inventory after we flagged it for them.

  • Nandini spoke to NBC News about hate groups using payment processors.

  • Claire spoke to Sonoo Singh on Conscious Ad Network’s podcast Conscious Thinking.


Four years ago, Steve Bannon was headed to the White House as Chief Strategist to the President. As executive chairman of Breitbart, he had perfectly executed a strategy he refers to as “flooding the zone with bullshit”: turning on a firehose of fake stories and propaganda with the express purpose of sowing chaos in the media ecosystem. Bannon successfully orchestrated the rise of President Donald Trump, and he did it with millions of your advertising dollars.

He has since moved on to other pastures. On Saturday, Bannon appeared on “War Room: Pandemic,” a show he co-hosts with Raheem Kassam, and called for the beheading of Dr. Anthony Fauci. This time, Kassam, former editor of Breitbart London and Bannon protégé, has a platform of his own with which to flood the zone with bullshit: he is breathlessly updating a “Stop The Steal” liveblog on his new website, The National Pulse.

It’s incredible to watch. Four years after Sleeping Giants began, the same people behind Breitbart continue to operate on the same business model we first brought to advertisers’ attention. They’ve started over with new domains, new account IDs and a new storyline: election fraud. It’s Breitbart all over again, and once again, it’s monetized by us.

How is the adtech ecosystem letting this happen again?

The National Pulse is the perfect encapsulation of what the collective efforts of the ad tech industry have achieved in brand safety: nothing. One could (and today, one will) argue that the range of solutions they’ve brought to market has actively made the industry worse:

  • Keyword blocking. We’ve previously reported on how this “boomer-era” technology doesn’t help you stay off bad faith publishers and has ended up blocking $3 billion globally from the news media.

  • Semantic intelligence. We took a demo for a spin recently and found that it labels everything from Black Lives Matter stories to lesbians as “negative” while giving white nationalists a pass. This technology is a scam. No thanks.

  • Ads.txt & Sellers.json. We’ve reported on how these industry-wide measures could be helpful for cross-checking ad placements if Google didn’t keep most of the information shrouded in secrecy and/or if IAB Tech Lab made the directory free and open to the public. 🤷🏽‍♀️

  • Content taxonomy. A dizzying list of categories and scores (“low risk,” “medium risk,” or “high risk”) that still does not differentiate between bad faith publishers who exploit and inflame socially sensitive topics and the responsible media outlets that cover them.

These solutions have one thing in common: they are designed to scale. Folks in ad tech like to think big and they love to automate. So do they work? You tell us.

This year alone, we have found AdRoll, Criteo, and Magnite (formerly Rubicon Project) monetizing The Gateway Pundit. We have found Pubmatic monetizing The National Pulse. We have found MediaMath monetizing The Epoch Times, The Federalist and RT.com. 

Last summer, One Angry Gamer, a gaming site promoting Holocaust denial, was running ads served by Criteo, AdRoll, and OpenX, among others.

Don’t worry, they don’t serve ads on these websites anymore. We took screenshots, emailed them to the companies and asked them how these sites made it through their inventory vetting process. Then, they blocked them.

So no, The National Pulse is not a fluke or a one-time oversight, as these companies like to say. It is the norm. Recklessly growing inventory and hoping no one notices is the normal course of business.

The ad tech industry may claim they’re on the cutting edge of brand safety. But from where we stand, it sure looks like they’re living off 1950s technology tacked together with some duct tape and a couple of random people on Twitter.

Are ad exchanges ever going to *look* at the websites they monetize?

There is one foolproof brand safety method that would definitely work. It’s called “looking at the websites.” Literally put the URL in your browser and look. Read the articles. Check who owns them. Evaluate them against a set of brand safety criteria.

This appears to be completely off the table in the adtech world. We’ve been laughed at for even suggesting it. We believe ad exchanges won’t hire qualified humans to vet their inventory for two reasons: 

They don’t want to make difficult decisions. One inventory manager told us that the only research they did before approving conspiracy theory website The Epoch Times for their inventory was to check the AllSides media bias chart. (It would have been helpful if they had looked at this extensive NBC News investigation too.)

It’s more cost-efficient to keep you in the dark. It’s expensive to hire people and it’s cheap to build shitty AI and tell your customers you’ve got it covered.

If brand safety automation worked, we would pack up our bags and go home. But not only does it not work, four years into this situation… it’s getting worse. The only thing the ad tech industry seems to have effectively deployed is a PR strategy that tells marketers everything’s going to be OK.

This system isn’t working for the thousands of brands who rely on ad exchanges and brand safety tech vendors. So, it’s time for brand safety to become our job.

It’s time to take brand safety in-house.

Brands, it’s time to enter the ring

By now, marketers have tried in many ways to communicate that we don’t want to fund hate speech.

This summer, the Stop Hate for Profit campaign became the largest ad boycott in history. The Global Alliance for Responsible Media (GARM) released a new definition of brand safety in September. Last month, Pernod Ricard launched Engage Responsibly, a campaign that suggests you can offset your hate speech footprint the way you offset your carbon footprint.

These initiatives may yield tangible results for the industry one day, but they’re never going to protect your brand. To get what you want, you are going to have to be specific.

At Check My Ads, we equip brands with an immediate plan to set specific, documented standards for their media decisions.

  • Check your ads. We look at site placements for disinformation and hate speech, and help you wipe them out of your media buy.

  • Draw your line. We work with leadership teams to develop your brand-specific criteria for what is appropriate and not appropriate use of your ad budget. We hold a series of workshops, and emerge with a clear picture of how your brand should show up online. 

  • Develop a playbook. This is a standard operating procedure document that explains exactly how to operationalize your brand values when it comes to buying ad space, managing brand safety, and responding in the case of a brand safety crisis.

At the end of this, you are armed with a set of brand policy guidelines. Anyone spending your ad budget will be empowered to make media buying decisions that reflect your brand standards. 

Yes, I need brand policy guidelines!

When it comes to brand safety, the ad industry has passed the buck onto you. Take it. 

We’re booking for 2021.

Thanks for being here!

Dini and Claire

Joe Biden’s campaign has been unwittingly funding Breitbart all this time

How Google Search Network is funneling advertiser dollars to Breitbart without the campaign’s knowledge.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

  • Nandini did a podcast interview with Bridget Todd, host of There Are No Girls On The Internet.

  • Claire was quoted in the Press Gazette’s The future of programmatic advertising: What brands and publishers need to know.

  • Nandini spoke about the importance of developing acceptable use policies and a transparent review process at the Business of Software conference.


In previous newsletters, we’ve explored how bad faith publishers like Breitbart could still be collecting your ad dollars through sneaky backdoor methods. Today, we’re going to show you how Google itself has opened a tiny front door for Breitbart, allowing Breitbart to continue to earn ad revenues from advertisers who want nothing to do with them — including Joe Biden’s campaign.

This time, Zach Edwards, the data supply researcher who figured out the secret way Breitbart collects ad revenues, has uncovered how Google sends your ad money to shady media organizations without your knowledge, without your consent, and without any form of accountability.

We know that brand safety should always be a priority for brands, but the stakes are next level for the Biden campaign. Because of Google, the Biden campaign may have already spent tens of thousands of dollars with Breitbart during this election cycle. There is no way for them to know.

Here’s how it works.

Search ads go further than you think

For over a decade, Google has provided a useful free product for publishers and website owners, called Google Custom Search Engine (CSE). This widget allows publishers to embed the power of a Google search into their websites and give their visitors an easy search experience. It’s all hosted by Google and embeds onto a website like this:
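(A sketch of the standard embed, based on Google’s public CSE snippet; the cx value below is a placeholder, not a real engine ID.)

```html
<!-- Google CSE embed: loads the hosted search widget into the page -->
<script async src="https://cse.google.com/cse.js?cx=PLACEHOLDER_ENGINE_ID"></script>
<div class="gcse-search"></div>
```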

It’s a no-brainer for publishers. Not only is CSE free and hosted by Google — it’s also monetized. CSE integrates with AdSense, which gives publishers the ability to collect ad revenues from the paid ads at the top of the search results. This makes publishers — even disinformation outlets — an instant part of the Google Search Network.

It’s an easy yes for advertisers too. When you launch a paid search campaign, Google gives you the option to check the box that says “Include Google search partners.” Can you think of any reason not to check the box? We can’t.

There’s just one problem

Even if you blocked Breitbart from your ad buy, Google is still placing your search ads there through the Google Search Network. Google’s Search Network is different from its Display Network, and Google decided it didn’t need the same advertiser controls.

The Google Search Network is incomplete. AdSense (the part that marketers see) is missing two crucial accountability features. If you checked the box that says “Include Google search partners,” then...

  1. There is no way for you to check which websites your ads appeared on through Google CSE. Google does not give you a domain-by-domain breakdown of impressions or clicks, and you have no way to know how much of your ad budget went to, say, Breitbart.

  2. You cannot upload your block list to Google Search campaigns. Even if you know there are websites you don’t want to be on, there’s no way to block them.

This would be bad enough for advertisers trying to optimize their Search budgets. But if you are struggling to avoid funding fake news, disinformation and hate speech, this blindfold is a nightmare. How are you supposed to avoid those sites if you don’t have a breakdown of spend across websites? How are you supposed to know which of the over a million domains across Google CSE to block?

How bad is this?

You might be wondering, how big of a problem is this really? How many people actually click on those ads? The problem isn’t how many people. It’s what publishers like Breitbart can do when they have access to your wallet and you don’t have the data to know what’s happening.

What can they do, exactly? Well, one thing is they can copy and paste their Custom Search Engine widget code around the web and then run bots to click on the ads. Simple but genius, right? 

Zach actually found Breitbart’s Google search widget code on about four pages of search results. (If you want to see them, you can search for that code here [NSFW].) This would be used as a way to monetize the search ads that run through their CSE. They wouldn’t need real people to click if they can just have automated browsers refreshing and clicking the ads over and over again. Honestly, it’s impressive.

Want an example of a made-for-bots site that you can click on at work? The Breitbart ones are mostly NSFW, so here’s another one: Tiziran.com. This is what a fake site looks like with a CSE widget on top. This is obviously not a site built for humans. So why is there a CSE here? Because you can send bots there to search and click on ads.

So I’m still advertising with Breitbart?

Yeah, probably. If you’re running Search ads and you ticked the checkbox that says “Google search partners,” you’re still advertising on Breitbart. Even if you’ve blocked Breitbart from your display ads, Google is still directing your Search ads there. 

This means brands and organizations that have already publicly confirmed with Sleeping Giants that they’re no longer advertising with Breitbart are, in fact, still advertising with Breitbart. We found Etsy, Bob’s Red Mill, Cornell University, Amnesty International, Doctors Without Borders, AARP, and the ACLU, among others.

Who else is advertising there? ActBlue, Nancy Pelosi for Congress, the Democratic National Committee, and, until Zach flagged it for them, the Joe Biden campaign.

So what can marketers do?

If you are advertising on Google Search Network and have ticked the “Google Search Partners” box, you have options. Here are three steps you can take:

  1. Ask your AdSense sales rep to send you a site list. Request a list of the domains where you bought search queries via Google CSE AdSense. Ask for the list to include the amount you spent per website (even if they don’t have it available yet, your request tells them to prioritize building this feature).

  2. Send your AdSense sales rep your blocklist. Google says the only way to block your ads from websites like Breitbart is to ask your AdSense rep to block them for you.

  3. Turn off Google Search Network. If you don’t have any visibility into (let alone control over) your ad spend, it might be better to avoid it altogether.

Thanks for reading,

Nandini and Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin. And send all the research praise to @thezedwards.

The first tech company to ban the Trump campaign

Banning the GOP & Trump 2020 for promoting racism is uncharted territory. Here’s why Hotjar did it anyway.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:


Hotjar is a popular behavior analytics platform that you’ve probably only heard of if you work in product or marketing. It’s by no means a household name. But last week, Hotjar made software history. 

They became the first tech company to openly suspend the accounts of the Trump 2020 campaign and the GOP. In an industry where banning hate groups rarely happens, banning a political party is unthinkable. What were they thinking?

We had a front-row seat to this story in more ways than one: Nandini kicked off the Twitter firestorm that first alerted Hotjar to its ties with the Trump campaign.

This week, we spoke to Hotjar’s Mohannad Ali about what made this decision the right move for them, and what happened behind the scenes in their unprecedented move to terminate the account of a sitting president’s re-election campaign.

Hotjar had a problem

On August 17, Nandini called attention to Hotjar for claiming to be antiracist while providing services to the Trump campaign. Her tweet created a flurry of attention and caught the eye of Twitter user @CarterD, who sent an email to Hotjar’s customer support team.

This is the reply he received back:

Talk about making things worse. Mohannad says they initially decided to play it low-key and neutral, but neither the team nor the leadership was happy about it.  

“In the beginning, we tried to take a bit of a neutral approach to the situation and our response was a ‘business as usual’ thing. We didn’t really spend a lot of time with leadership to work on this. The result, unfortunately, was that the people involved, particularly the customer support rep who responded to the ticket, felt very uncomfortable with the message we had suggested. This isn’t how we wanted to show up and we certainly didn’t want to put [our rep] in that kind of position.”

But their response received universally negative feedback — and the situation was starting to slip out of control. 

After catching heat both in public and internally, they decided they needed to step back and try again. 

We thought, “Hey, let’s take a second to think about how we want to act as an organization, as a team, and as individuals.” This is when we started to have a more organized response and leadership discussion.

“We want to be able to act quickly on our values”

Mohannad admits they were clueless about where to start. He and Hotjar’s CEO David Darmanin had a vision, but weren’t yet sure how to translate it into a company-wide policy:

We all land on different parts of the ideological spectrum, but there is a consensus on core values at our company. So we started thinking about how we can word our Acceptable Use Policy in a way that really maps to our values.

We realized that adding a statement like this opens up room for abuse. People might end up bombarding us with all kinds of websites. But the way we thought about it is: What’s the worst that can happen? We just want to have enough ammunition to make a strong case for why we’re taking a website down. For example, a merchandise shop that benefits a white supremacist group. 

We want to have the right policy to be able to take accounts like this down quickly, even if the merchandise itself isn’t hateful.

There is no known way to be objective

The team needed a way to think about this issue. Two tech companies inspired them and helped them hone their thinking:

  1. Patreon’s approach to content moderation, called Manifest Observable Behavior

  2. Shopify’s Acceptable Use Policy, which Shopify updated in 2018 to ban products that were not explicitly illegal, but “intended to harm.” 

These companies had both initially tried to be neutral, but found the position to be untenable.

In their search for a new policy, Mohannad realized it would be fruitless to try and achieve objectivity or universal consensus. They would have to bring their own values to the table.

If you ask people whether they think certain things Trump said are racist or a certain executive order is racist, many of us might agree that it’s unequivocally racist, but there will never be universal consensus. 

At the end of the day, there’s a degree of human arbitration that will have to happen. There are certain individuals who will have to make a decision on whether they think this is hateful or not.

In the end, they decided to implement an Acceptable Use Policy that gives them the right to suspend accounts that promote hate, division and discrimination both directly and indirectly.

You can read their full statement here.

“The most important thing: the trust of our team”

Hotjar leadership opened up a discussion in Discourse for their employees, who work remotely from 24 countries. 

The biggest responsibility you have as a leader is to your team, even before your customers. Our risk was losing the confidence of our team, of admitting that we were falsely advertising who we are. Backlash is something you might get over, but breaking the trust of your team is not.

I think one of the good moves we made was to ask for everyone’s opinions. We didn’t intend to have a majority vote kind of thing. We just wanted to explore different opinions and surface all the arguments and bring the team with us on our journey. They need to be part of the exercise and discourse.

While the final decision was up to them, Mohannad and David still worked hard to present their decision to their team, because buy-in was important to them.

Our internal message was different than the one we shared with the public. We went a lot more in-depth, showed our methodology, and a long list of evidence. 

In the internal statement we made to the team, we explained our identity and values. We started talking about some of the behavior we saw Trump demonstrate. The point we were trying to make: It’s not about the person. It’s not about Trump. We’re not here to be a judge of character. But it’s clear this particular statement was racist. This particular action did target marginalized groups and so on. And we believe that these are credible threats to the community.

We presented this to the team. We said we expect it might lead to some backlash, we acknowledge this might happen on our team as well, but this is what we’re going to stand by.

“I was always more worried than I should have been”

We asked Mohannad if he had concerns before he took the message public. He said he was worried the whole time — about employees, about the GOP and the potential ways this could go wrong for them.

At every step of the way, I was always more worried than I should have been. Nothing happened. The team took it extremely well. A lot of people who initially suggested we not suspend anyone came around after we explained it: “You know what? You’re right. What you’re saying makes a lot of sense, and I stand by you.”

It crossed our minds that we could be sued. We worked with lawyers and confirmed that our terms of service are legal. Especially in the US, the law is liberal for denial of service.

But the point isn’t to hold yourself to an impossible standard of perfection. It’s to improve, be transparent and keep building trust over time.

We are now holding ourselves to a new standard of how we enforce our policy. This will continue to be the biggest challenge we have now as a self-serve business. The fact remains we still don’t know everyone who is using our tool and script.

We’re going to try to improve the ways we enforce our policy, but for the time being, we’re going to rely on our community and team to flag things.

We plan to create more transparency for the community on what happens after you submit a violation report. Soon, we’re going to publish a page that explains what our internal processes are, how we investigate and who will be making these judgment calls.

Mohannad’s advice for tech executives

What advice does Mohannad have for a tech executive who might find themselves in the same position as him? He says:

  1. Take it seriously, take the time to respond. It’s a good thing when someone is holding you accountable. Embrace it rather than ignore it.

  2. Ignore the temptation to try to be objective. People think standing by the sideline is an objective standpoint, but being neutral is still a subjective opinion of how you’re dealing with a certain situation. So don’t be afraid to bring yourself into the conversation. This is where we got paralyzed in the beginning: “What’s the ‘right’ objective way to do this?”

  3. Everything is intersectional. At the end of the day, companies don’t exist in a vacuum. Companies are the people within the company. It’s impossible to ask people to disconnect who they are outside of work. Ultimately you’ll make a decision based on personal convictions and that’s not a bad thing. 

That is definitely some hard-won advice. “We came THAT close to making the wrong decision,” says Mohannad. “Sometimes you need a slap in the face to remind you who you are.”

Thanks for reading,

Nandini and Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin.

Thank you for destroying the free press

Inside the brand safety tech that’s keeping ad revenues away from the news media

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

  • Nandini will be on Resource Alliance’s panel “The ethics of Facebook for nonprofits” on September 3rd (tomorrow) at 10am EST/3pm GMT. Sign up here!

  • Nandini will also be on Media Rumble’s panel on September 4th discussing hate speech in Indian media along with Stop Funding Hate’s Richard Wilson and News Laundry’s Manisha Pande. Sign up here!


For a brief and glorious window of time last week, we received a rare glimpse into the secret algorithm that determines who on the internet gets to receive advertising dollars and who gets blocked.

Every year, brand safety technology companies block about $3 billion from reaching news organizations based on a single myth: that placing your ads on negative news stories is unsafe for your brand.

There is not a shred of evidence that this is true, but that hasn’t stopped them from providing at least two half-baked ways to do it: 

  1. Keyword blocklists — a list of “bad words” brands don’t want their ads placed near (which we’ve previously covered here; we sketch how this misfires just after this list)

  2. Contextual intelligence — AI that “reads” a page, classifies its topic and category, and assigns an overall sentiment rating of positive, negative or neutral.
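To see why item 1 misfires, here is a minimal sketch of naive blocklist matching (our illustration, not any vendor’s actual code). The matcher sees strings, not meaning, so a health story and an essential pandemic update get blocked as readily as anything genuinely unsafe:

```python
# Naive keyword blocking: a minimal sketch (our illustration, not any
# vendor's actual code). It matches strings, not meaning.

BLOCKLIST = {"lesbian", "death", "coronavirus"}

def is_blocked(text: str) -> bool:
    words = (w.strip(".,!?") for w in text.lower().split())
    return any(w in BLOCKLIST for w in words)

headlines = [
    "Researchers revisit the myth of lesbian bed death",    # blocked
    "Hospital expands coronavirus testing for the county",  # blocked
    "Ten gardening tips for a late-summer harvest",         # allowed
]

for h in headlines:
    print("BLOCKED" if is_blocked(h) else "allowed", "|", h)
```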

Thanks to the war on “negative” news, the sentiment rating has become one of the most critical factors behind whether a news article gets monetized. It’s also one of the shadiest, because no one knows how the machine works.

But last week, Integral Ad Science launched its “Context Control” demo, inviting users to test its version of the tool, which allegedly reads “like a human” and spits out a real-time decision of whether it makes readers feel overall good (positive) or bad (negative). 

We took the tool for a spin, testing it on (what else?) hate outlets. The next day, the demo had vanished.

We can guess why. The demo reveals a machine that is alarmingly, dangerously mixed up. It does not know north from south or up from down. It is a college student’s computer science project, the kind she turns in to the professor at the end of the semester saying, “Good thing no one’s ever going to use this in real life, haha.”

Except IAS does use this machine in real life and at scale, to control billions of impressions and, ultimately, the fate of newsrooms around the world.

IAS’s ‘context control’ demo: The results

Here’s what we found: 

First, we looked up a handful of extremist and extremist-adjacent sites. The sentiment machine didn’t seem to catch on:

  • American Renaissance (amren[dot]org) is a white nationalist site run by Jared Taylor. It was rated neutral.

  • Liberty Hangout (libertyhangout[dot]org) is the website of Kaitlin Bennett, the Kent State grad who harasses students on college campuses about homophobia and guns. It was rated positive. 

  • Drudge Report (drudgereport[dot]com) is the aggregation and disinformation site run by Matt Drudge. It had no rating.

Then we tested an article about “lesbian bed death.” We’ve long hypothesized that this socially accepted topic would be unfairly dinged because it contains two “bad words” (guess which ones?). It turns out we were right: 

We also looked at Kenosha’s local newspaper covering Jacob Blake’s story: 

Then we looked at coverage of Jacob Blake’s story from The Root, which provides an unflinching analysis of issues in the Black community:

  • The Root’s coverage was categorized as “sensitive social issues.” It was rated negative. 

IAS’s demo was removed before we could investigate further. But we saw enough to confirm that if you’re taking IAS’s advice to avoid negative content and using their technology to do it, you’re actually keeping your brand dollars away from some of the most responsible media coverage out there today. And probably still funding white nationalism.

They’re not against the news, just the negative news

Why are we measuring negative sentiment to begin with? If you’re not in the adtech bubble, let’s catch you up. For years, the adtech industry has coalesced around the myth that placing ads on negative news could harm your brand.

It’s a fabrication that is as bold as it is bonkers. The fact is, no brand has faced a brand safety crisis for placing their ads on the news. In fact, any brand that takes this advice is forfeiting its spot on the most highly trafficked, highly reputable domains in the world.

It’s ridiculous and it’s coming from the top. At the start of the global pandemic, the CEO of IAS urged clients to use their technology to target “positive hero-related content.” What she didn’t mention was that there is precious little good news to go around in 2020, and that following this advice means withholding revenues from news organizations that are producing the essential coronavirus reporting we all depend on.

The anti-bad-news campaign worked, too. Buzzfeed reported in March that one major brand blocked 2.2 million ads from appearing next to “coronavirus-related keywords,” which resulted in up to 56% of impressions being blocked from the Washington Post, New York Times, and Buzzfeed News. 

jonathon springs (@JonathonSprings): “@nandoodles @integralads @TheRoot @FDRLST This is a contributing factor to why we don't have thinkprogress.org anymore :/ we were asking questions in 2016 and all they could say is the inputs and results of the model are proprietary 🙃”

“But Nandini and Claire, they’re still getting the money”

Do publishers still receive the revenues if the ad is blocked? No one can say for sure. Ad tech folks will tell you that blocking the news happens so quickly on a page-by-page basis that it often has to happen after the “bid” has taken place on ad exchanges — after the budget has already been spent. This is somehow meant to defend this type of blocking, as if to say “it’s harmless anyway, so why complain?” Here are some reasons:

  • Because no one outside brand safety companies can tell when the block is pre-bid or post-bid. 

  • Because publishers are definitely getting blocked from collecting revenues, but we don’t know by how much.

  • Because post-bid blocking would mean that marketers are spending money on ads that don’t even appear. 

If the block happens pre-bid, it’s bad for publishers. If it happens post-bid, it’s bad for marketers. Folks, this would all be a lot easier if we just didn’t block the news. 
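For the mechanics, here is a minimal sketch of the two blocking points (our reading of how vendors describe it, not anyone’s actual pipeline). The same block hurts different people depending on when it fires:

```python
# Pre-bid vs. post-bid blocking: a minimal sketch (our reading of how
# vendors describe it, not anyone's actual pipeline).

def is_blocked(page_text: str, blocklist: set[str]) -> bool:
    return any(word in page_text.lower() for word in blocklist)

def pre_bid(page_text: str, blocklist: set[str]) -> str:
    # Block decided before the auction: no bid is placed,
    # so the publisher never earns the revenue.
    if is_blocked(page_text, blocklist):
        return "no bid -> publisher loses the revenue"
    return "bid placed"

def post_bid(page_text: str, blocklist: set[str]) -> str:
    # Block decided after the auction: the budget is already spent and
    # the ad is suppressed, so the marketer pays for nothing.
    if is_blocked(page_text, blocklist):
        return "ad suppressed -> marketer paid for an unseen impression"
    return "ad rendered"

print(pre_bid("coronavirus cases rise in the county", {"coronavirus"}))
print(post_bid("coronavirus cases rise in the county", {"coronavirus"}))
```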

Who built this program anyway?

Our Twitter thread made its way to the adops subreddit, where one user had a message for Nandini (“leftist psycho”) and all the other idiots:

“If you think any ‘machine learning’ or other bullshit is going to fit into your subjective interpretation of all pages, think again.”

They nailed it. How is a machine supposed to be subjective?

Interpreting what we read is a human thing. We decide how we feel about an article based on our knowledge, values, cultural identity, and the sum of our experiences as sentient creatures. In brand safety, the only thing that matters is what humans think.

Machines have none of those things. They only know what the humans who built them taught them. And that’s where we hit a wall. The algorithms are built by a team of people that Integral Ad Science (and DoubleVerify and Oracle) prefer to keep under wraps.

We don’t know who made it. We don’t know their backgrounds, their cultural experiences, or whether they are a diverse group. We do not know whether the developers understand that racism and white nationalism are bad, or whether they have a baseline understanding of how to identify hate speech and disinformation.

And that means we are handing over one of our most critical brand safety decisions — where our brand appears — to a group of unknown people and the black box they built. Both publishers and brands are left in the dark. 

If you’re uncomfortable about censorship, how about these advertising decisions that we never even see? You couldn’t find a more dystopian way to kill the free press. 

A product looking for a problem to solve

We’ve covered this before, but let’s reiterate: Brand safety is not about a page-by-page analysis. No social media crisis will come from an awkward ad placement. What will create a brand safety crisis, though, is funding organizations that peddle dangerous rhetoric (hate speech, conspiracy theories, dangerous disinformation). 

Brand safety is about ensuring that your ad spend aligns with your brand values. When the technology doesn’t fit the goal, it does more harm than good. You don’t need to scan every page of The Boston Globe. It should simply be on your inclusion list.

What can you do?

If you use brand safety technology, here’s what you should do:

  • Review your keyword blocklist. Ask for a copy of it and send it back with heavy edits. This list should be short. 

  • Review your inclusion list. When Chase Bank reduced their inclusion list from 400,000 to 5,000 websites, their performance stayed the same. What websites would you include in your list of 5,000?

  • Finally, and you probably saw this coming… check your ads! If you’re curious about what’s happening in your ad spend, the very first place to start is your site list of placements. 

Thanks for reading!

Nandini and Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin.

The secret way they collect ad revenues

Google’s secrecy around sales houses is the gift that keeps on giving — to global propaganda rings.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us (we’ve been BUSY!):


Last BRANDED, we revealed an undercover(ish) scheme that enables unrelated groups of publishers to quietly share account IDs meant for only one of them, pool the collective ad revenues, and then split them up amongst themselves.

We’ve always understood Breitbart’s business model as a classic programmatic play: they publish hateful content and earn $$ through impressions and clicks. The logic behind the Sleeping Giants campaign was: block their website and they won’t get your money.

And yes, that shut them out of earning ad revenue the normal way. But we’ve now identified another lucrative way for them to continue collecting within the same ad tech ecosystem: by sharing DIRECT account IDs and cashing out through unknown entities called dark pool sales houses.

(This is roughly equivalent to one restaurant sharing its liquor license and card reader with a bunch of other bars in your city and then splitting the cash at the end of the night — it’s both a zany potential episode of It’s Always Sunny In Philadelphia and illegal.)

One company — the world’s biggest ad exchange — could help us end this practice overnight. But they’re holding out on us. 

Today, we’re going to explore how unknown entities around the world are extracting unlimited sums of money from unsuspecting advertisers because Google is holding onto key data like it’s a state secret, in order to maintain its own competitive advantage.

This goes beyond a standard brand safety issue. It’s a privacy issue, an antitrust issue, and because of the international money laundering implications, an issue of national security. 

A few updates about what happened after the last issue

The following companies issued responses to our last issue:

33Across (the SSP we highlighted in our last issue) issued a response saying: 

“We have reached out to our supply partners and the sites who list the 33Across Ads.txt line on either Breitbart.com and RT.com to have them removed.” 

Since publication, the 33Across records have been removed from Breitbart’s ads.txt file.

Saambaa (a self-described ‘event discovery platform’) reached out to us to point the finger at a ‘3rd party ad management company’:

“The Breitbart ad inventory is managed by a 3rd party ad management company. We work with them on other sites and they have grouped our ads.txt as part of their larger assembly of ad buyers on Breitbart.”

The company that Saambaa is talking about here is Granite Cubed, the exclusive advertising broker for Drudge Report, which has been extensively covered by Craig Silverman at Buzzfeed. Drudge Report and Breitbart have numerous overlapping ads.txt records, a potentially telltale sign of dark pool sales houses at work.

Microsoft updated nearly 100% of their ads.txt files on Bing and MSN using their own schema. All their sales houses (and any dark pool sales houses) are now labeled, and even include schema that identifies business names and countries. See for yourself!

The IAB (the ad industry association) defended ads.txt (which they created) in a blog post titled “Don’t blame the tools; Learn how to use them”. Here’s Zach’s response to that post. We thank the IAB for their attention, and look forward to seeing how they help marketers protect their ad budgets.

We’re also thrilled that our research inspired a handful of new products to help advertisers cross-check their ads.txt records. Like this one, which found “a handful of sellers misclassify nearly all their sources as a direct relationship (publisher), when the sources are, in fact, intermediaries.”

Could this be Breitbart’s “back door” business model?

When Breitbart was shut out of 90% of its ad revenues, it was because thousands of advertisers blocked their domain (www.breitbart[.]com). That closed up one revenue stream — we’ll call it “the front door.” 

But there would still be another way for Breitbart to cash out: they could enter into partnership agreements with other publishers, use those publishers’ DIRECT IDs, direct the ad dollars to dark pool sales houses — a name we came up with because they can operate as untraceable shell corps — and split the cash from there. 

We call this “the back door.”

When Breitbart became the subject of the Sleeping Giants campaign, they only lost “front door” access. But the back door is still open: there’s nothing stopping them from using other account IDs or partnering with other publishers who agree to mislabel DIRECT inventory with them. 

What does it mean to go through the back door? It means you can cash out from the other end: 

  1. Set up revenue-share agreements with your friends 

  2. Buy up tons of websites together (even random ones — it doesn’t matter)

  3. Place the same DIRECT account IDs on all of them 

  4. Link them to the same sales house

  5. Divvy up the cash 

Advertisers would never know who the money actually ends up with, because there is no publicly available directory of sales houses for them to check on.
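There is no directory to consult, but the check itself is not hard, which is part of what makes the secrecy so frustrating. Here is a minimal sketch (hypothetical domains; assumes the third-party requests library) that pulls ads.txt from a list of domains and flags account IDs labeled DIRECT on more than one of them:

```python
# Cross-domain ads.txt check: a minimal sketch (hypothetical domains;
# assumes the third-party `requests` library). Flags account IDs labeled
# DIRECT on more than one domain, the mislabeling pattern described above.
from collections import defaultdict
import requests

DOMAINS = ["example-news.com", "example-recipes.com", "example-sports.com"]

direct_ids = defaultdict(set)  # (ad system, account ID) -> domains using it

for domain in DOMAINS:
    try:
        resp = requests.get(f"https://{domain}/ads.txt", timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        continue  # no ads.txt or unreachable; skip
    for line in resp.text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3 and fields[2].upper() == "DIRECT":
            direct_ids[(fields[0], fields[1])].add(domain)

for (system, account), sites in direct_ids.items():
    if len(sites) > 1:
        print(f"{account} ({system}) is DIRECT on {len(sites)} sites: {sorted(sites)}")
```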

If you were, say, part of a global fascist propaganda effort, you might partner up with alt-right organizations around the world, set up a common shell corp and use this back door to fund your activities. The added bonus? You can use all that user data you’re collecting together to target citizens more efficiently ahead of important elections. ¯\_(ツ)_/¯

Let’s work backwards together to look at how the adtech ecosystem makes high-stakes fraud possible.

Breitbart as a data broker

After our last issue, a lot of folks asked us why it matters if publishers are sharing DIRECT IDs. You can technically make up anything you want on an ads.txt file. Maybe they just copy/pasted more records in so it looks like they’re popular with vendors? 

That surface-level understanding of mislabeling is why it has not generally been policed by DSPs and why marketers aren’t aware of what’s going on.

There are actually two financially sound reasons the bad guys would share DIRECT IDs:

  1. They can spin up new sites with higher CPMs quickly — DIRECT IDs generate more revenue than RESELLER labels, because buyers block RESELLER IDs far more often. That makes every newly minted website instantly monetizable at a premium.

  2. More targeted ads (which are worth more $$) — pooling user data across all their partner sites lets them offer more targeted audience profiles, and advertisers will pay more for better targeting. A rising tide of audience profiles lifts all publisher boats.

Whether a publisher like Breitbart partners with their friends around the world or just buys up a bunch of websites on their own, sharing DIRECT IDs is a fast track to racking up high value impressions and clicks.

That would give Breitbart a new lease on life as a data broker. 

Breitbart as a sales house

Now, once you have the money, you need to get it out, right? So where do those ad revenues actually go? To the sales house. Lucky for Breitbart, anyone can form their own dark pool sales house — easy to do because you can legally link it to an anonymous LLC — and serve themselves and their friends.

This is essentially how most SSPs operate, just without the sketchiness and anonymity. That’s why you can think of a dark pool sales house as a “pseudo-SSP”: it works like any other SSP, except you don’t know who owns it.

Breitbart as a vertical integration scheme???

Sure, that’s one way to put it. You could also call it “rampant money laundering.” If you’re able to be a data broker (which you can) and control a sales house (also very simple), you can effectively control your own secret supply chain.

Where the hell are the ad tech cops?

The IAB (the Interactive Advertising Bureau) says they’ve built the tools and architecture to check on these things. Together, the following two standards are meant to bring a level of transparency to ad buying: 

  • Ads.txt = a directory of account IDs

  • Sellers.json = a directory of sellers (incl. company name, address, location, etc.)
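For reference, here is roughly what a record in each file looks like (illustrative entries with placeholder IDs, following the IAB formats; the last ads.txt field is an optional certification authority ID):

```
# ads.txt: one line per authorized seller account
# <ad system>, <account ID>, <DIRECT|RESELLER>, <optional cert authority ID>
google.com, pub-1234567890123456, DIRECT, f08c47fec0942fa0

# sellers.json: one record per seller the ad system pays out to
{
  "seller_id": "pub-1234567890123456",
  "name": "Example Media LLC",
  "domain": "examplemedia.com",
  "seller_type": "PUBLISHER"
}
```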

But there are some glaring holes: 

  • There’s no way to know when bad actors are removed from sellers.json. There is no “sellers removed” schema or standard to alert us to trouble.

  • Organizations have permission in sellers.json to use both DIRECT and RESELLER labels in ads.txt. They don’t get in trouble when they use DIRECT across more than one website.

  • Sellers.json is applied inconsistently. Sellers.json has been largely ignored by Google and haphazardly applied by other ad tech vendors. 

  • Google only just released a “beta” version of sellers.json. In June 2020, 2+ years after pushing the ads.txt standard, they released an extremely limited sellers.json directory. It’s still not enough to do a basic fraud check.

And then there’s this dealbreaker:

  • There’s no official global ads.txt directory. You can’t cross check for mislabeled DIRECT inventory across multiple domains because the IAB hasn’t made it publicly available!

In other words, there is no process that documents bad actors for advertisers. There’s really no one in the ecosystem holding anyone accountable for anything.

This is a major antitrust issue for Google

Google has God’s own dashboard for ads. But even their records contain hundreds of mislabeled domains. (And we only looked at a tiny fraction of Google’s total inventory.)

So how is anyone in the industry supposed to properly conduct their ads.txt-sellers.json checks when not buying through Google?

We tried to find out for you. Before we released the last issue, we reached out to Google with a list of account IDs from the Breitbart dark pool sales houses, which were from Google’s own advertising system. We still haven’t heard back. Most of those records are still active. 

As of now, Google’s policy still allows for shell corporations to create seller accounts, get approved as both publisher and intermediary (aka DIRECT vs. RESELLER), and then label those account IDs as DIRECT across hundreds of unique websites. 

Google does offer more information if you use their ad exchange instead of their competitors’. In other words, you can reduce fraud only if you advertise through Google, because they haven’t made those tools available to anyone else.

If Google is the only safe place to buy ads because they own all the data, that seems like an antitrust issue to us. 

The sellers.json directory they released a couple of months ago has two problems:

  1. It covers just 5% of the records. Google has released only about 5% of its sellers.json records and kept the other 95% to itself. You can see the released records here.

  2. You can completely hide the ownership of an account ID. Google has an optional “make my business confidential” toggle for the Google sellers.json listing, which, if you squint, does help maintain privacy. But mostly it just helps dark pool sales houses proliferate.
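Here is what that toggle produces (an illustrative record, per the IAB sellers.json spec’s confidentiality flag): the name and domain simply vanish, leaving buyers nothing to vet.

```
{
  "seller_id": "pub-9876543210987654",
  "is_confidential": 1,
  "seller_type": "INTERMEDIARY"
}
```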

Bottom line: We should be able to check our ads

What can we do with this information? Two things. First, check out Nandini’s Twitter thread about how to ask your ad exchanges to fix ads.txt labels.

Then, send Google an email — here’s a handy template: 

Dear Google, 

I’m writing today to understand how you plan to address ad fraud taking place across the ecosystem. 

  • Why have you allowed clients to hide their ownership through a confidentiality flag? What efforts are you undertaking to stop money laundering through this loophole?

  • Why are you giving organizations permission to append both RESELLER and DIRECT labels without any apparent punishment for organizations that label DIRECT across hundreds of unrelated sites?

  • Does Google have any process to help the industry identify bad actors? 

A suggestion: Seller accounts that are removed from the sellers.json for malicious activity or TOS violations should be listed under a “sellers-removed.json” file instead of just being quietly removed from sellers.json files without notice to other ad tech vendors or buyers.

Google and all DSPs should also ban the use of shared accounts by shell corporations (SSPs/pseudo-SSPs) who are not registered as data brokers in both Vermont and California.

If a seller is found to be cloning DIRECT labels across publisher websites, the seller should get one warning that pauses all their bids and access to the bid stream, and a complete suspension upon a second violation.

I look forward to hearing from you. 

Best,

[Your name]

That’s it for us. Thanks for reading, we’ll be back next time with more!

Claire & Nandini


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin. Big thanks to Zach Edwards (@thezedwards) for making this story possible.
