This kid is not flying a plane and we're not in control of our ads

We just think we are.

A few days after the January 6th insurrection, shareholders at two major companies filed resolutions asking that Home Depot and Omnicom undertake audits to prove that their ad budgets haven’t been funding disinformation and extremism online.

This, as far as we know, is the first time the link between programmatic advertising budgets and disinformation has been flagged at the shareholder level. The lowly and mundane ad audit is suddenly in the spotlight!

Investor scrutiny is probably not a welcome development for the $300 billion advertising industry — particularly the brand safety wing — which has for years coalesced around the claim that ad-funded disinformation is a problem they mostly have under control. 

This isn’t the first time ad tech has been scrutinized, of course. For years, the adtech industry has been responding to complaints about questionable placements with the same answer: “Oh THAT? That was just a little oversight. Hang on a second, we’re going to give you more control over your ad placements. We’re going to help you get even more granular.”

More control over our ad placements? Who can argue with that? Well, we’re about to. 

Sure little man, you can fly the plane!

The adtech industry has good reason for wanting advertisers to feel like we’re in control: it takes the heat off them and puts advertisers to work instead.

Google was the first to catch on to the appeal of introducing new brand safety features that absolve them of further responsibility. When UK agencies discovered that their ads were funding extremists, Google was summoned to Parliament and asked to come back with a fix. Two months later, they rolled out their Big Idea: “Page-level enforcements for more granular policy actions.”

“To allow more precise enforcements, and provide you with feedback about policy issues as we identify them, we’re introducing page-level enforcements. A page-level enforcement affects individual pages where violations of the AdSense Program Policies are found. As a result, ad serving is restricted or disabled on those pages. Ads will continue to serve where no policy violations have been found, either at the page- or site-level.” (emphasis ours).

It was a solidly underhanded maneuver. On the one hand, page-level enforcement offered more granular control. On the other hand, no marketing team in the world has time or resources to seek out and block individual pages on every website on the internet.

But that hasn’t stopped adtech from running with the marketing ploy that more controls mean you’re in control.

Granularity, or the concept of giving advertisers ever more filters and settings, has since become a core strategy in an industry that realizes they can continue to invite disinformation and extremism into their inventory (in the name of choice) while putting the onus on advertisers to sniff it out and block it themselves (in the name of control).

You have the power to choose exactly where you want your ads to go, they say. It’s just functionally impossible to use.

It sure looks like we’re not in charge

That kid is not actually flying a plane and you are not calling the shots on your ad placements. Ad tech has locked us into a system that makes us believe that marketers are in control. Sure, they let us push the buttons…

  • What topics or categories do we want to avoid?

  • What is our risk tolerance? 

  • Do we want to only be on “positive sentiment” content?

But we don’t know what the buttons do. A handful of product and engineering teams do — and they are making the real strategic decisions behind the scenes: They decide what content is deemed positive and negative. They decide what risky content looks like. They decide what a piece of content is about and apply this across billions of articles. And from what we’ve learned through their leaked data, it doesn’t look like it even works.

But let’s pretend for a moment that it did. Have you ever tried to rate an article the way that brand safety technology does? Try it now. Here are some articles:

  • A CNN article about a Black Lives Matter protest

  • A Daily Mail article about Elliot Page, who recently came out as trans

  • The Root’s obituary of Mary Wilson, a co-founding member of the musical group The Supremes

  • Al Jazeera’s coverage of multiple attacks in Afghanistan

  • One of the many Epoch Times articles about bunnies

How would you rate these articles for brand safety?

  1. What topic or category would you put each article in? (Use the IAB Tech Lab’s Content Taxonomy for reference.)

  2. How would you rate each article for “risk”? (Low, medium, or high)

  3. How would you rate each article for “sentiment”? (Negative, neutral, or positive)

We couldn’t even apply this rubric by hand to one article without scratching our heads.

And if you were to send this exercise to your friends, we’re certain that each of you would come up with different answers, and none of you would be sure you were right. As readers, we all interpret articles differently.

What’s the end game here? We’re not in control. We’ve given it all up to someone else.

We need control, not more controls

If adtech gets any more granular, it may disappear up its own butt. Now, we don’t want that. That leaves us with one option: start zooming out.

As marketers, it is our job to connect with consumers. Every day, marketers add or subtract from our brand equity ‘bank accounts’ by associating and disassociating with ideas, causes, people, and products.

When we advertise on respected publishers, we gain a little bit of their brand equity. When we advertise on disreputable publishers, we lend them a bit of ours. Ad placements are a powerful way to live out our brand values.

Our clients — mostly marketers and people who work in brand and reputation — get this right away. Here’s how they might break down the above list of articles with ease:

  • CNN is brand SAFE for most brands because it adheres to journalistic standards

  • The Daily Mail is brand UNSAFE for many brands because it can be racist and misogynist and it doesn’t consistently adhere to journalistic standards.

  • The Root is consistently brand SAFE for most brands because it adheres to journalistic standards. 

  • Al Jazeera is brand SAFE for most brands because it adheres to journalistic standards.

  • The Epoch Times is consistently brand UNSAFE because it regularly publishes disinformation.

The ultimate decision depends on each team’s brand values. Do they need any more granular controls for this? No.

Stay safe folks. Check your ads. 

Thanks for reading!

Nandini & Claire


Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin

Look at the mess we made PLUS our new podcast!

We get sh&t done. Here’s what’s next — and how you can help.

This is a special one, folks. It’s our 24th issue of BRANDED. It has been just over 1 year since we launched this newsletter. Happy birthday to us! Stay with us for a BIG SURPRISE at the end. 😉

But first, can we brag a little? We started BRANDED with zero expectations. It seems like everyone has a newsletter these days. What would make ours special anyway?

Well, our newsletter ended up rocketing from 0 to 7,000 subscribers… and helped us shape a year of international coverage of the advertising industry.

We partnered with the smartest industry minds to break stories on adtech secrets.

We became the loudest advocates in the adtech industry for the news industry.

We launched our first-of-a-kind company, Check My Ads! 🎉 , providing brand safety training for global brands, agencies, and ad exchanges.

  • Our launch was covered by Fast Company, AdWeek and AdExchanger.

  • We spoke to Afdhel Aziz in Forbes about our vision for Check My Ads.

  • We chatted with Kevel about why clients come to us and how we work with them.

We made headlines with Sleeping Giants. 

We helped the world make sense of the Facebook ad boycott.

We thought leadershipped.

We started doing interviews together 👯‍♀️.

We led direct action work that led to direct results.

And just in the past two weeks...

Our newsletter became a platform for change and started to move the needle in the industry… and then this month, there was a coup. 

Did our advertising dollars fund an insurrection?

We don’t know what you saw that day, but what we saw was years of ad industry negligence come to life. Our advertising dollars have been fueling disinformation and extremism, and we finally saw the outcome play out in real life on Capitol Hill on January 6th.

It is still too easy to make money online creating disinformation and chaos. And it’s still too easy for our ad dollars to fund it. This isn’t just a brand or reputational risk anymore. It’s a risk to public safety and our democracy — and that means it’s everyone’s business. 

For the first time, shareholders at two publicly-held companies are filing resolutions compelling them to check their ads to see whether their budgets are being funneled towards extremism.

“Advertisers are not passive bystanders when they inadvertently finance harm,” they wrote. “Their spending influences what content appears online.”

That urgency is what's driving us to our next big launch.

Sponsor our new podcast IMMEASURABLE

For the past year, we’ve been talking mostly to the industry. But the adtech industry needs the full weight of public scrutiny for us to create change at the speed we’re looking for. That’s why we’re launching a podcast this year.

We’re going to be talking about how the ad industry is financing extremism and it’s going to be… well, it’s going to be really fun, honestly. This is a heavy subject and we know how to bring this issue to the public.

Do you want to advertise on this podcast? Great! You don’t have to visit any ad exchange or media agency. Just hit reply on this email. The first 5 sponsors who come in higher than $19,999 get an entire episode to themselves. 

If you’re saying “OMG! SEND ME SPONSORSHIP DEETS!” send us a quick message and we’ll get right on that. And if you’re not in the position to sponsor, please pass along the word! 


As always, thank you for reading,

Claire & Nandini



These Fortune 500 keyword blocklists are defunding the news

This unprecedented leak by Integral Ad Science shows us the news stories the world’s biggest companies refuse to fund — and it gets weird.

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

  • Dark pool sales houses are now officially listed as a national security threat in a paper released by Harvard Kennedy School and the German Marshall Fund. BRANDED broke this story with Zach Edwards last July. 🎉

  • Nandini’s Twitter story about Uber’s $100 million discovery of ad fraud went viral. Guess no one noticed when it happened in 2017? The story shot to #1 on Hacker News and was featured in Pluralistic and Input Magazine this week.

  • Nandini’s commentary about buttflap pajamas got her quoted in Vice, Business Insider, and Gizmodo.


A keyword blocklist is kind of like a brand’s secret diary. 

It reveals all their collective corporate neuroses: their deepest fears, associations they want to avoid, and the topics they’re uncomfortable with. It’s a whispered set of instructions to their vendors, telling them exactly where they don’t want to be seen on the internet.

So naturally, a leading brand safety company has been leaking them.

Last BRANDED, we told you that Dr. Krzysztof Franaszek, founder of Adalytics.io, stumbled upon the proprietary classifications brand safety vendors use to determine what content is “brand safe.” But that’s not all. Thanks to Integral Ad Science we can see their clients’ keyword blocklists, too. 

👉 You can access Krzysztof’s findings here. Note: We’re looking at the entries marked “[Brand]_NEG” or “[Brand]_Negative”. These are most likely the blocklists.

Originally, blocking “bad words” was a rudimentary way to prevent ads from appearing on potentially unsafe content on the web. But for some major brands, the practice has morphed into what appears to be a random collection of nouns they’re not vibing with.

For example, it looks like Boeing has “pop culture” and “television” on its blocklist, Microsoft has a blanket ban on “interracial,” JP Morgan is blocking “Taylor Swift,” and T-Mobile doesn’t want to be anywhere near the word “Florida.”

But let’s get serious. These lists, which are often applied by brand safety tech regardless of context, tell us exactly which types of stories are being wholesale defunded by major companies. 

For marketers, managing keyword blocklists is just another mundane task in a busy schedule, but every word we throw on the blocklist has enormous implications for which news stories get funded. In fact, the practice of keyword blocking blocked $3 billion from the global news media in 2019. It threatens the survival of our news media ecosystem.
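To see why context-blind keyword blocking misfires, here’s a minimal sketch of a naive word-matching blocklist filter. This is our own illustration, not any vendor’s actual code; a biology story trips the same wire a violent-crime story would:

```python
# Naive keyword blocking, sketched for illustration only -- not any
# vendor's actual implementation.

BLOCKLIST = {"death", "shooting", "crash", "racism"}

def is_blocked(article_text: str) -> bool:
    """Mark an article 'unsafe' if any blocklist keyword appears, regardless of context."""
    words = (w.strip(".,;:!?\"'()") for w in article_text.lower().split())
    return any(word in BLOCKLIST for word in words)

print(is_blocked("Researchers studied programmed cell death in fruit flies"))  # True
print(is_blocked("Local bakery wins award for best croissant"))                # False
```

The filter cannot tell reporting *about* a topic from the topic itself, which is exactly the failure mode these leaked lists reveal.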

What are brands thinking when they make blocklists, and where is it really getting them? Today, we’re looking at Mastercard’s keyword blocklist and working our way backwards.

What you’ll see is how easy it is to unwittingly weaponize brand safety tech against news outlets, even when you have endless resources and your CMO is President of the World Federation of Advertisers.

So this is what a blocklist looks like

If you want to follow along with us, here’s “Mastercard_BlockList2_Dec2020” from Krzysztof’s dataset. This was the keyword list that Mastercard seems to have used for Politico.com.

For your convenience, we’ve organized them here by category: 

Places

barcelona; baghdad; iraq

Major crises

banking crisis; charlottesville rally; crash; plane crash; hurricane; flood; brexit; irma;

Words that describe economic bad times

bear stearns; crashes; foreclosure; fraud;

Words that describe violence

deadly; death; deaths; died; dead; deceased; massacre; casualty; shooting; explosion; gunfire; missile; gunman; gunmen; shooter; air strikes; ambush; bomb; hostage; child casualties; terrorist; terrorism; attacker; hijack; raid; daesh; al qaeda; daca; isis; isil; nazi; alt right; molester; child molester; pedophile; 

Words that suggest sex exists

sexual; pornography

Words about literally everything that happened in 2020

racist; racism; white nationalist; discriminated; protest; discrimination

Bad men

kevin macdonald;  matt lauer; hitler; john skipper; alex linder; jerry richardson; 

Capitalism 

keystone pipeline; citibike; fined; fines; 

????

shithole; warns; advisory; refugee; amtrak

“Harvey”  🤷🏻‍♀️ 

So what’s happening here?

First, despite saying they stand against racism, it looks like what Mastercard actually stands against is the keyword “racism” — that they’re withholding financial support from any story or journalist that uses the word “racism.” Not exactly a great way to support the Black Lives Matter movement.

Second, it looks like Mastercard wants their ads kept away from topics related to websites that promote violence. But what they’re actually doing is erasing their ads — and their ad dollars — from reports of the most pivotal events in 2020. 

Third, it looks like no one’s reviewing these blocklists before approving them for the next month. At one point, they didn’t want to advertise alongside stories about the destruction caused by Hurricane Irma. But that storm has been over for three years now... they can take it off the list.

We sent a request for comment to Mastercard last night. If they come back with an explanation we’ll update this post at branded.substack.com. 

Why are they blocking all this? 

It looks like Mastercard doesn’t want to be associated with financial and economic hardship. This does make some sense. They may not want to put themselves within a screenshot’s distance of a news story about foreclosures. 

But the rest of the list suggests they have bought into the toxic “best practices” peddled by the ad tech industry.

For their part, IAS recommends that you target “hero-related pandemic content” and purposely avoid placing ads on stories that may make people feel “negative.” Industry organizations like Trustworthy Accountability Group and Brand Safety Institute are also constantly rolling out surveys like this one to scare advertisers into believing that advertising on “sensitive” news stories may harm your brand.

But these surveys barely pass the common sense test. If you saw a Mastercard ad while reading a story in the Washington Post about a mass shooting, would you think that Mastercard supports murder? Would you refuse to use Mastercard all of a sudden? No, no one cares.

Not only do these claims defy logic, no CEO or ad industry representative has ever been able to provide an example of how advertising on reputable news coverage has led to any business consequences. 

It’s literally never happened. And yet here we are, listening to whatever advice this is:

“But Claire and Nandini, it’s not Mastercard’s job to fund the news”

No, it’s not. It’s their job to run effective marketing campaigns. 

Mastercard’s keyword blocklists reveal a problematic industry-wide practice: marketers are thinking about everything they want to block, without thinking about where they do want to show up on the internet.

Here are the consequences of that:

You block yourself from the highest-engagement places on the web. Mastercard’s marketing team probably doesn’t realize how much prime real estate and quality eyeballs they’re forfeiting through their blocklists. When people read the news, they read all the way through. They engage. They share.  “Just in case” blocking keeps you away from the people you want to reach most.

Your budget gets funneled towards fake news. What happens when you feed enormous sums of money into your campaigns but also block all the good, reputable sites because you’re afraid of current events? That money has to end up somewhere at the end of the month, and that somewhere is likely to be fake news, disinfo, and plagiarized sites run by unscrupulous people who know how to avoid keyword filters.

You miss branding opportunities. Coca-Cola didn’t block “Berlin Wall” when the wall fell. They showed up and handed out free cokes, and became an enduring part of the story.  People remember brands that show up during pivotal moments and times of crisis. Any blue-chip brand should be investing in social discourse. It’s just good practice.

It’s a bad idea to defund reality

By now, you may be thinking “Wow, keyword blocking is terrible. Mastercard should really think about switching to something more nuanced! Like page-level intelligence that dynamically scores individual pages in real-time to decide whether it’s brand safe.”

Yeah, it’s called “contextual intelligence” and that doesn’t work either. Krzysztof found that another brand safety vendor, Oracle Grapeshot, has blocked Mastercard ads from three-fourths of New York Times articles.

Sure, the technology doesn’t work, but neither does the logic we use to operate it. Running our marketing operations like a bunch of engineers isn’t getting us the results we want. It’s almost like we have to start thinking like marketers.

If you or your agency works with a brand safety vendor, here’s what you should do:

  • Review your keyword blocklist. Ask for a copy of it and send it back with heavy edits. This list should be short. Like, less than 7 words short. 

  • Build an inclusion list. When Chase Bank reduced their inclusion list from 400,000 to 5,000 websites, their performance stayed the same. What websites would you include in your list of 5,000?

  • Get the Adalytics extension (free). Krzysztof’s extension allows you to see which brands are blocking the articles you read. It’s pretty cool!

  • Check your ads! If you’re not sure how to draw the line for what is and is not appropriate use of your ad budget, get in touch with us.
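A rough sketch of the first step above, the blocklist review, assuming you can get your list as plain text. The keywords are pulled from examples in this issue, and the cap reflects the “less than 7 words” rule of thumb:

```python
# Blocklist-audit sketch (our own illustration, not a vendor tool).

def audit_blocklist(keywords, cap=7):
    """Deduplicate a keyword blocklist and report whether it stays under the cap."""
    entries = sorted({k.strip().lower() for k in keywords if k.strip()})
    return {
        "entries": entries,
        "count": len(entries),
        "within_cap": len(entries) < cap,  # "less than 7 words short"
    }

report = audit_blocklist(["Irma", "death", "racism", "Florida", "crash",
                          "hurricane", "protest", "Taylor Swift"])
print(report["count"], report["within_cap"])  # 8 False
```

Every entry that survives the cut should come with a written reason, and anything tied to a long-past news event (like “irma”) is a candidate for removal.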

Thank you for reading!

Nandini and Claire 


Thank you to special guest @kfranasz.

Inside the chaos of brand safety technology

Integral Ad Science, Comscore and Oracle are leaking the top secret classifications they use to block ad revenues from the news.



Every day, a handful of tech companies decide how billions of advertising dollars will be spent on the web. We don’t see these decisions take place, but brand safety algorithms scan every page and every piece of content we look at to decide whether it’s “safe” before serving an ad.

These millions of little verdicts add up. They determine who on the web gets monetized — and who gets blocked.

It’s a big responsibility, and, it appears, one they do not take seriously. While brand safety tech companies have been extremely secretive about how it all works, it turns out they have also been unwittingly sharing their own proprietary data all this time.

Dr. Krzysztof Franaszek of Adalytics contacted us with a startling discovery last month: he was able to see how brand safety companies classify every individual news article he reads by right-clicking on “inspect” in Google Chrome. He could see exactly which brands were blocking which articles, because the keyword blocklists of global brands were also sitting out there. In other words, there’s a leak. A pretty big one. 

Three brand safety companies —  Oracle (Grapeshot & Moat), Integral Ad Science and Comscore — forgot to encrypt their data, giving us our first real look into how they categorize, block and move your advertising dollars across the web.

This issue is the first in a series of posts that will explore Krzysztof’s findings.

We’re definitely not supposed to see this

Krzysztof tells us he was in Google Chrome’s Developer Tools console when he noticed the following scripts loading:

  • admantx.com (owned by Integral Ad Science)

  • mb.moatads.com (owned by Oracle Moat)

  • zqtk.net (owned by Comscore)

  • gscontxt.net (owned by Oracle Grapeshot)

While digging around, he realized he was seeing the real-time ratings behind every article he was reading on the web, from the New York Times to Vice. In other words, he could see the automated signals they use to decide whether to allow or block an ad on a webpage, all within a fraction of a second and in total secrecy.
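Anyone can replicate the first step of this discovery. Collect a page’s resource URLs (for instance from DevTools’ Network tab, a HAR export, or `performance.getEntriesByType('resource')` in the console) and scan them for the vendor hosts above. A rough sketch, with hypothetical URL paths:

```python
from urllib.parse import urlparse

# The four brand safety vendor hosts named above.
BRAND_SAFETY_HOSTS = {"admantx.com", "mb.moatads.com", "zqtk.net", "gscontxt.net"}

def brand_safety_vendors(resource_urls):
    """Return the hosts from which a page loaded brand-safety scripts."""
    found = set()
    for url in resource_urls:
        host = urlparse(url).hostname or ""
        # Match the vendor host itself or any subdomain of it.
        if any(host == h or host.endswith("." + h) for h in BRAND_SAFETY_HOSTS):
            found.add(host)
    return found

# Hypothetical resource list copied from a news article's network log:
urls = [
    "https://mb.moatads.com/example.js",       # hypothetical path
    "https://cdn.example-news.com/app.js",     # hypothetical first-party asset
    "https://usa.gscontxt.net/main/channels",  # hypothetical path
]
print(brand_safety_vendors(urls))
```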

👉 NOTE: Krzysztof has published his research and methodology in full here.

With the help of URLScan.io and Internet Archive, he was able to retrieve the following page-level values assigned to national news publishers... 

Brand safety floor categories

Commonly known as “the dirty dozen”, these are the categories that both the 4As and IAB Tech Lab urge advertisers to mark as “never appropriate.”  These include adult content, arms, crime, death or injury, online piracy, hate speech, military conflict, obscenity, illegal drugs, spam, terrorism, and tobacco.

Risk level

“Low,” “medium” or “high risk” content. This data appears to map to IAB Tech Lab’s Content Taxonomy.

Brand safe

Grapeshot had explicitly marked some articles as “gv_safe.” It appears that Grapeshot has an option for brands to only advertise on pages they mark as “safe.” 

Keyword blocklists

Integral Ad Science has been leaking several of its clients’ keyword blocklists, including Fortune 500 companies in the financial, telecom and consumer goods sectors. We will talk about this in our next issue of BRANDED.

This is significant. We know that brand safety technology blocked ~$3 billion globally from the news industry last year. We know how one major company's COVID-19 ad keyword blocking forced trusted news sources to forfeit up to 55% of ad placements on their pandemic-related news. And occasionally, we can see with our own eyes when an ad has been blocked from the news.

But no one, not even news publishers themselves, knows the extent to which their articles are being blocked from monetization. Today, we’re looking at the brand safety machine whirring in real-time — and it does not look good.

Hello, is this thing on?

Brand safety companies have just one job: to keep their clients off hateful and extremist content. 

Much of the work has already been done for them. In September, the Global Alliance for Responsible Media adopted a framework that nails down pretty clear definitions of what hate speech and extremism are.

Before that, the American Association of Advertising Agencies (or the 4As) came up with the Brand Safety Floor Framework, which is made up of the aforementioned dozen dealbreakers that most brands have made clear they don’t want to be on.

All they have to do is implement those requirements. Here’s what they’re doing instead:

Systematically defunding national news outlets

It appears that brand safety technology cannot tell the difference between actual offensive content and journalists reporting on the issues to inform the public. Krzysztof’s data shows us that Oracle has aggressively blocked the news:

  • Grapeshot marked nearly one-third (30.3%) of New York Times articles as unsafe. 

  • Moat marked one-fifth (21.4%) of The Economist as unsafe, including an article about molecular cells that was likely classified under “Death, Injury, or Military Conflict” because it mentioned “programmed cell death.”

👉See the full list of block rates for news outlets here

Even at the newspaper of record, certain beats appear to be entirely unsafe. At the New York Times, nearly every article written by these top reporters was marked unsafe:

  • Rukmini Callimachi, a four-time Pulitzer Prize finalist who covers ISIS and violent extremism (91.7% unsafe)

  • Jan Ransom, who covers criminal courts and jails in New York City and who covered the trial of Harvey Weinstein (92% unsafe)

  • Marilyn Stasio, who writes about crime fiction for the Book Review (89.7% unsafe) 

  • Ali Winston, an investigative reporter who covers the NYPD (97% unsafe)

👉 See the full list of block rates of New York Times reporters

If they’re this good at catching reporters, they must be amazing at catching actual hate and extremism, right? Not exactly.

They’re giving extremists an unlimited hall pass

Having taken Integral Ad Science’s “Context Control” tool for a spin a few months ago, we knew that brand safety tools appear to be unable to filter for extremist and white supremacist websites. Krzysztof’s study confirms that it gets worse.

Both Moat and Grapeshot are allowing brands to advertise alongside conspiracy theories, COVID disinformation, election disinformation, and much more.

  • One America News Network (OANN.com), a critical vector of election disinformation, was 88.5% safe.

  • Hannity.com, whose figurehead has denied and downplayed the pandemic since spring, was 60% safe.

  • TownHall.com, whose coverage of the “Wuhan Virus” has been racist at best, was 69.5% safe.

👉See the full list of block rates for news outlets here.

Crucially, Krzysztof did not find a disinformation parameter in Grapeshot, IAS, Comscore, or Moat. They don’t appear to be filtering for the one thing that does cause brand safety crises.

They’re not even consistent

The Brand Safety Floor Framework offers a consistent set of definitions for brand safety companies to implement.

At a minimum, they should all be catching the same “bad” content. But we’re not even seeing that. On the Wall Street Journal, Moat’s and Comscore’s algorithms agreed on what is brand unsafe only 59% of the time, which raises the question: what exactly is this technology good for?
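That agreement figure is simple to reproduce in principle: given two vendors’ verdicts on the same articles, percent agreement is just the share of articles where the labels match. A sketch with made-up verdicts, not Krzysztof’s actual data:

```python
def percent_agreement(labels_a, labels_b):
    """Share of items that two classifiers label identically."""
    assert len(labels_a) == len(labels_b), "need verdicts on the same articles"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Made-up "unsafe" verdicts for ten articles (True = blocked):
moat     = [True, True, False, False, True, False, True, False, True, False]
comscore = [True, False, False, True, True, False, False, False, True, True]
print(percent_agreement(moat, comscore))  # 0.6
```

Note that raw percent agreement doesn’t correct for agreement by chance; chance-corrected measures like Cohen’s kappa exist for exactly that reason.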

Brand safety is out of control

The implications of this research are breathtakingly anti-democratic. Ads are the currency of the digital economy, and brand safety technology companies have been acting with impunity because no one has the information to challenge them.

We don’t know how many media outlets have been run out of existence because of brand safety technology, nor how many media outlets will never be able to monetize critical news coverage because the issues important to their communities are marked as “unsafe.”

Brand safety is a product looking for a problem to solve. As they pivot from keyword blocking (which always sucked) to their fully opaque “contextual intelligence” solutions, there is little evidence that any of it works.

The CMO of a multibillion dollar company recently told AdAge:

“Nobody knows anything...There's a lot of tricks these companies can do to make their products look like they're working, and they work—until they don't.”

We get it. It’s convenient to not know how your budget is being spent. The internet is big, and it’s easy to hand over these decisions to someone else. It helps us avoid uncomfortable, politicized conversations about what media environments are appropriate for our brands. 

But brand safety requires us to do two things: 1.) keep our ads away from hate speech and 2.) fund our news ecosystem. When you know your vendors are failing at both, can you afford to look away?

You cannot automate your way to brand safety

In our next issue of BRANDED, we will examine keyword blocklists of the Fortune 500. Some of these blocklists are so long, they have effectively cut off the entire news industry.

Until then, there are three ways to act: 

Dig around the brand safety data yourself

Krzysztof’s research is available in AirTable throughout his piece, so you can play around with the data yourself. Let us know if you see anything interesting!

Read it here

Get the Adalytics browser extension

Krzysztof’s browser extension lets us analyze the advertisers, for once. You’ll be able to see which brands are targeting you the most with ads, figure out how many (ir)relevant ads you’re seeing per day, and estimate how much companies are paying for your attention.

In the future, you will be able to share your ad data for money, and help marketers understand if their ads are actually being shown to the intended audiences. 

Download it here

We are accepting clients

We work with global brands to develop brand safety playbooks, so you can draw your own line and operationalize your brand values in your media spend. We are booking into March 2021.

Learn more about what we do here


With that, we’ll close out 2020 with a hearty thank you to you, our readers. This has been a wild year, and we’re grateful to be in your inbox.

See you in the new year!

Nandini (@nandoodles) and Claire (@catthekin)

Yes, the adtech bubble is going to burst anytime now.

Here’s how we’re going to survive it.




We recently picked up a copy of Tim Hwang’s Subprime Attention Crisis for a look into the slow-mo implosion of the ad tech economy, which he believes is a matter of when, not if. This does not surprise us. It doesn't surprise anyone who understands the mechanics of ad tech.

What makes this a juicy, pint-sized read is how Hwang succinctly lays out his argument: what’s happening in the underbelly of adtech isn’t just a generic grift. It’s an extremely specific grift that we have seen before. It did not go well for us last time!

“When a hot, overpriced commodity is discovered to be effectively worthless, panic can set in, causing the market to implode,” writes Hwang. 

Hmm, where have we seen this before?

Are we really doing this again? 😐

From the inside, adtech looks exactly like the subprime mortgage industry right before 2008. Hwang practically goes down a checklist, point by point telling a story of junk assets and pathological confidence.

The plumbing: Programmatic advertising runs on the same kind of algorithmic, real-time, high-speed trading model used to buy and sell mortgage-backed securities.

Commodification: “What is different about the present-day online advertising system is the extent to which it has enabled the bundling of a multitude of tiny moments of attention into discrete, liquid assets that can then be bought and sold frictionlessly in a global marketplace,” writes Hwang.

Opacity: Marketers can’t see what’s happening within their ad campaigns, so there’s nothing stopping ad tech platforms from inflating the value of ad placements. Opacity helps middlemen hide substandard inventory.

These conditions, Hwang writes, create “perverse incentives,” and can encourage players “to continue pushing the bright horizons for a marketplace despite knowing that major structural problems exist.”

They sure can. In fact, the ad tech industry is looking a lot like a $300 billion house of cards right now because none of us know how much anything is actually worth. 

You won’t hear the ad tech industry talk about The Bubble because they are invested in an economy that is built around it. Entire companies have been built, bought, and sold based on made-up metrics.

But Hwang, who’s standing close to the edge without going over, sees all kinds of things you can’t see from the center. He paints a picture of a flourishing, creative practice hijacked by bankers and brokers who transformed advertising into a soulless trading desk.

We said yes to more efficient marketing. What we got was a system of mass surveillance that produces an enormous amount of waste and shows no evidence of being better than what we had in a pre-programmatic world. 

Ironically, this has imperiled the advertising industry itself. Hwang argues: “There is, if anything, a strong ethical imperative to allow the collapse of global surveillance capitalism rather than attempting to save it, because it might clear the deck for something better to emerge.”

Let’s not forget there’s a strong marketing imperative too, because a system that is full of waste, defunds the news, and funds disinformation isn’t a system that is working for us.

Don't cry because it's over. Smile because it happened.

This might be hard to hear, but the digital advertising supply chain never even cared about you, babe. It was never built for us.

The bubble will burst, Hwang assures us. But we can make it a “controlled demolition” if we start thinking now about what life will look like after the collapse.

Instead of mourning the end of a thing that never worked for us, marketers should start to think about what we might want in a new system. But we have to know what we want.

OK, so what do we want???

Now is the time to ask: what do we want the future of advertising to look like? Last time, we took what adtech gave us. This time, we should be clear about our requirements. DARE WE DREAM? 

Here’s where we would start:

  • No more vanity metrics — We cannot live and die by views, clicks and conversions. We need to invest in understanding our customers as people, not numbers, and move towards real attention, brand recall, and trust.

  • Embrace direct relationships — We need to get as close to the publisher as possible. Reduce the middlemen to reduce market opacity. 

  • Inclusion, not exclusion — We should advertise intentionally. That means knowing where we place our ads instead of hearing about our placements from random Twitter users.

Subprime Attention Crisis is a series of observations, and an unfinished story. It’s up to us to write the next chapter.

Marketing as we know it today was built on a series of design choices. And if a thing can be designed, it can be redesigned. A collapse wouldn’t just be the end of something, but the beginning of something new.

Thanks for reading,

Nandini and Claire


Did you like this issue? Then why not share it? Please send tips, compliments and complaints to @nandoodles and @catthekin.

Share
