The Truth Behind Branded Search Ads

In the world of PPC advertising, bidding on your branded keywords is routine.

There are countless articles from the advertising community reinforcing this position, arguing that branded keywords:

Help protect your brand positioning

Generate relatively inexpensive traffic

Protect against competitors if they’re bidding on your brand

Offer users a better experience if ad extensions (sitelinks, etc.) are used—features that aren’t available with organic search

But being the curious group that we are, we can’t help but ask: Are we in a branded search echo chamber?

In 2011, a study from Google examined how the number of organic clicks changes when search ads are present and when search ad campaigns are turned off. This study is frequently referenced in discussions of whether search ads cannibalize organic traffic, and its conclusion—that a “full 89% of the traffic generated by search ads is not replaced by organic clicks when ads are paused”—has become our industry’s collective gospel. But should it be?

When one of our clients came to us to test the efficacy of their branded search ads and uncover whether it was really worth including them in our advertising strategy, we jumped at the chance to explore Google’s conclusion and get to the truth behind branded search.

The Hypothesis

This particular client has a strong brand and is already a popular choice in their category, making them skeptical of spending money on their own branded keywords when they already rank #1 organically. Specifically, we wanted to test whether the majority of ad clicks from branded search campaigns came at the expense of clicks on their organic listings. Through a complex process of collecting, extracting, and analyzing data from Google Analytics (GA), BigQuery, and Google Data Studio, we worked together to prove or disprove this hypothesis.

The Brand Holdout Test

Naturally, the best approach to this problem was an A/B test: comparing two otherwise identical groups that differ in a single variable, so any difference in outcome can be attributed to that variable.

Here’s how we set up our test—and how to set up your own:

Step One: Create two test groups
Since our client is in the US, we split the population in half based on session data in GA, and assigned states to each side of the split to create even test groups.
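For illustration, here’s a minimal sketch of one way to build even groups, assuming you’ve already exported per-state session counts from GA. The greedy partition and the session counts below are our hypothetical example, not necessarily the exact method or data used.

```python
# A greedy partition: assign each state (largest first) to whichever
# group currently has fewer total sessions. The session counts here
# are hypothetical placeholders, not real GA data.
def split_states(sessions_by_state):
    holdout, control = [], []
    holdout_total = control_total = 0
    for state, sessions in sorted(
        sessions_by_state.items(), key=lambda kv: kv[1], reverse=True
    ):
        if holdout_total <= control_total:
            holdout.append(state)
            holdout_total += sessions
        else:
            control.append(state)
            control_total += sessions
    return holdout, control

sessions_by_state = {"CA": 48200, "TX": 35900, "NY": 33400, "FL": 29700, "OH": 12100}
holdout_states, control_states = split_states(sessions_by_state)
print(holdout_states, control_states)
```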

Step Two: Pause some ads
In 25 states, the “holdout” states, we paused branded search ads, while in the remaining 25 states, the branded ads kept running.

Step Three: Determine the timeframe & data collection method
The estimate from our analytics team was that the test would have to run for a period of 27 days to reach statistical significance, and we’d collect our data in GA.
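For context, here’s the kind of back-of-the-envelope power calculation that can inform a duration estimate like this one, sketched with statsmodels. Every input below (baseline rate, detectable lift, daily traffic) is an illustrative assumption, not the client’s actual figure or our team’s exact method.

```python
# Rough power analysis: how many sessions per group (and therefore days)
# are needed to detect a small lift at 95% confidence and 80% power.
# All inputs are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.040    # assumed conversion rate without branded ads
detectable_lift = 0.004  # smallest lift worth detecting
effect_size = proportion_effectsize(baseline_rate + detectable_lift, baseline_rate)

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80
)

daily_sessions_per_group = 1500  # assumed branded-search traffic per group
print(f"~{n_per_group:,.0f} sessions per group, "
      f"~{n_per_group / daily_sessions_per_group:.0f} days")
```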

Data Collection

While the initial setup of this test in Google Ads was relatively straightforward, measurement based on data collected through GA posed a challenge. We used GA to track the volume of organic sessions on branded terms, and when session counts are high, GA samples data, reporting estimates calculated from a subset of sessions rather than exact counts. To analyze this test appropriately, we needed unsampled data to ensure granularity and accuracy. Here’s the impact of sampling on a session counts report in GA (this is an ad hoc query, so GA applied its sampling limits):
It was evident that we couldn’t extract data from the GA user interface to analyze this test, and we also couldn’t use existing ETL vendors due to the same sampling issues. We needed a different solution.

But we recognized a way to overcome sampling: using the GA API to extract data at a daily level keeps each query small enough to return unsampled results. We used existing Python libraries, as well as custom code, to export data from GA into BigQuery for further post-processing and analysis. Because the reporting for this test needed to happen on an ongoing basis, we built this data pipeline:
The Python wrapper around the GA API was unique in how it allowed us to query data: each GA query was built as a JSON structure, which could easily be modified to support different use cases. That flexibility ensured our data team could analyze the information and conclusively determine the efficacy of branded search ads.
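The wrapper itself is internal, but a minimal sketch of the core loop might look like the following. The view ID, table name, date range, and credentials path are placeholders; the JSON request body is the standard GA Reporting API v4 format, queried one day at a time to stay under sampling thresholds.

```python
# Minimal sketch: pull unsampled daily session counts from the GA
# Reporting API v4 and load them into BigQuery. View ID, table name,
# dates, and credentials path are placeholders.
from datetime import date, timedelta

from google.cloud import bigquery
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=creds)
bq = bigquery.Client()

def fetch_day(view_id, day):
    """One-day query; the JSON body is easy to modify for other use cases."""
    body = {
        "reportRequests": [{
            "viewId": view_id,
            "dateRanges": [{"startDate": day.isoformat(), "endDate": day.isoformat()}],
            "metrics": [{"expression": "ga:sessions"}],
            "dimensions": [{"name": "ga:region"}, {"name": "ga:channelGrouping"}],
        }]
    }
    report = analytics.reports().batchGet(body=body).execute()["reports"][0]
    # If GA sampled the response anyway, it says so explicitly.
    if report["data"].get("samplesReadCounts"):
        raise RuntimeError(f"Sampled data returned for {day}")
    rows = []
    for r in report["data"].get("rows", []):
        region, channel = r["dimensions"]
        rows.append({
            "date": day.isoformat(),
            "region": region,
            "channel": channel,
            "sessions": int(r["metrics"][0]["values"][0]),
        })
    return rows

day = date(2020, 6, 1)
while day <= date(2020, 6, 27):
    bq.load_table_from_json(fetch_day("XXXXXXXX", day),
                            "project.dataset.daily_sessions").result()
    day += timedelta(days=1)
```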
Before we dive into our analysis, a note: We believe in proving directional accuracy when it comes to marketing tests like this—our goal is to identify and confirm trends that can inform better decision-making for budgeting and tactics.

The Analysis

With the data flowing appropriately into BigQuery, we set up reporting using Google Data Studio to analyze results as a split test.

But after digging into this analysis, we determined we needed a more sophisticated approach to measurement and analysis for two reasons:

1. Although we attempted to control external factors, the two groups were not an even split.

2. More importantly, there was a significant difference between organic and paid landing pages. Organic traffic landed on the homepage where there were multiple CTAs and user paths. Paid traffic, however, was directed to a specific conversion-focused landing page with just one clear CTA (a form submission).

Measuring conversion rates in the control vs. holdout groups was therefore not a valid approach for this test setup. Our alternative solution: measure the incremental traffic driven by search ads in the same region during periods when branded ads were running vs. periods when they were turned off.

Here’s a look at the reporting dashboard:
This method was partly inspired by a 2012 study from Google, in which researchers examined how the ranking of an advertiser’s organic listings on the search results page affects the incrementality of ad clicks, expressed through Incremental Ad Clicks (IAC) and estimated by Search Ads Pause models.

Comparing the percent change in sessions between periods when the ads were running and periods when they were turned off, within the same group, was a much better measure of performance: external factors applied more equally to each period, with the exception of market conditions changing over time (a competitor could, for instance, change their bidding strategy during the test).
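As a sketch, the comparison itself is simple once the daily sessions table exists in BigQuery. The table, column names, and date ranges below are the same hypothetical ones from the pipeline sketch above, with a 27-day ads-off window chosen purely for illustration.

```python
# Sketch: average daily sessions per region, ads-on vs. ads-off
# periods, and the percent change between them. Table, columns,
# and dates are hypothetical placeholders.
from google.cloud import bigquery

sql = """
SELECT
  region,
  AVG(IF(date BETWEEN '2020-06-01' AND '2020-06-27', sessions, NULL)) AS ads_on,
  AVG(IF(date BETWEEN '2020-05-04' AND '2020-05-30', sessions, NULL)) AS ads_off
FROM `project.dataset.daily_sessions`
GROUP BY region
"""
df = bigquery.Client().query(sql).to_dataframe()
df["pct_change"] = (df["ads_on"] - df["ads_off"]) / df["ads_off"] * 100
print(df.sort_values("pct_change", ascending=False).head())
```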

The Results

Our findings were clear: When branded search ads were running, we saw 5.13% more sessions than when ads were turned off. And while some of this traffic was indeed cannibalized from organic, the overall impact on traffic volume was positive.

For this client, even though their organic traffic is already high, we proved that the investment in branded ads provided a useful, net-positive result as part of their ongoing marketing efforts.

Creating Your Own Brand Holdout Test

Continuous testing and optimization should be par for the course in any digital advertising strategy—it’s essential to measuring advertising’s impact on revenue.

Before embarking on your experiment, there are four things you need to consider to ensure it’s successful and provides useful insights you can act on:

Brand strength. This may be an obvious caveat, but it’s worth stating: a test like this requires a high-ranking brand with substantial daily, weekly, or monthly traffic (depending on the length of your test). Without that, any trends will be harder to identify.

Data quality. In our test, we knew our campaign naming structure ensured we could accurately pinpoint branded campaigns, allowing us to confidently identify and pause them. This kind of test demands accurate attribution tracking for branded campaigns.

Causality. External factors are always a consideration in marketing analyses like this one—running a test during a global pandemic, for instance, could skew results, so factor that into your approach.

Spend. As with traffic volume, the amount you spend on securing paid branded traffic can also impact your testing approach. If you’re not investing enough in branded campaigns, it’ll be more difficult to see directionally accurate trends or changes.

In digital advertising, it’s always worth proving what works and what doesn’t. But as budgets tighten and tactics are put under the microscope, a brand holdout test can help ensure your ad spend is going exactly where it should be.

Don’t wonder how advertising helps you achieve your goals—prove it.
