Google Ads Unleashed | Winning Strategies for E-Commerce Marketers

How to Test If Your Google Ads Really Work (5 Steps)

Season 2 Episode 111


Ever wondered if your Google Ads are actually driving incremental sales—or if those conversions would’ve happened anyway? 

In this episode of Google Ads Unleashed, Jeremy breaks down the concept of incrementality and shares a 5-step framework to test it for yourself. You’ll learn how to build a clear hypothesis, set up a geo holdout test, and measure results without falling into common traps.

Get your free 30-minute strategy session with Jeremy here: https://www.younganddigital.marketing/

Scale your store with 1:1 coaching: https://www.younganddigital.marketing/1-2-1-coaching

Hello and welcome back to Google Ads Unleashed, guys. I hope everyone is doing fabulously this Monday. I want to give you an update following one of the podcasts I did a few weeks ago with Marcel which, if you haven't listened to it, go back, because it was an absolute banger. What Marcel and I talked about was the concept of incrementality, so if you're not aware of what incrementality is, here's a quick rundown. Incrementality describes the effectiveness of our advertising: how much of the advertising is actually generating additional sales, and if that advertising didn't happen, what baseline of sales would we have made anyway?

Let's say, for instance, you're measuring 100 conversions in Google Ads and you get 100 sales in the store. This is, of course, a very reductive example, but if you remove all of the advertising and suddenly all of those sales are gone overnight, then Google Ads would be 100% incremental as a channel, because it provides a 100% uplift in sales depending on whether it's on or off.

As you've probably realised yourself, there are lots of types of traffic that may be more or less incremental, and that's why we focus on particular types of traffic in our advertising efforts. Take new versus returning customers. If someone searches for your product and they're already a customer of yours, how much is advertising to that returning customer really influencing their decision to buy from you again? Probably very little, so you'd conclude that this type of advertising is not very incremental. Whereas if you focus your advertising on someone who has never heard of you and get them to your page for the first time, you'd imagine that this advertising really is adding incremental sales to your brand.

Something you probably instinctively know and have done forever, without ever really thinking about it, is separating branded and non-branded spend. That's why you have a separate brand campaign, or exclude brand from your prospecting campaigns, your top-of-funnel campaigns: you inherently realise that a lot of the people who search for your brand would probably have bought anyway. They might not have clicked your competitors' ads; they might have clicked an organic search result and bought. And of course you want to make sure your non-branded advertising really is non-branded, because that is how you truly gain new customers.

So very often you'll be running brand ads and wondering whether branded traffic actually adds incremental value to your marketing mix, or you'll want proof of how much it does. We talked a lot about incrementality with Marcel, and since then I've jumped into action, so I want to talk you through how we would run an incrementality test on branded traffic within Google Ads.
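To put a number on that on/off example, here's a minimal sketch of how you might calculate incremental lift. This is my own illustration rather than anything from the episode; the function name and the 85-sale baseline figure are hypothetical.

```python
def incremental_lift(sales_with_ads: float, baseline_sales: float) -> float:
    """Share of sales that would disappear if the channel were switched off."""
    if sales_with_ads == 0:
        return 0.0
    return (sales_with_ads - baseline_sales) / sales_with_ads

# The reductive example from above: 100 sales with ads on, 0 with ads off.
print(incremental_lift(100, 0))    # 1.0  -> the channel is 100% incremental

# A branded-search style scenario: most buyers would have converted anyway.
print(incremental_lift(100, 85))   # 0.15 -> only 15% of those sales are incremental
```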
What we always do is start with a hypothesis. We have a client who sells a product worldwide, including in the United States, and the United States is a big country, so it provides a really good battleground for a nice test. The hypothesis we have is that branded traffic adds little incremental value to our marketing mix, and that we could therefore improve blended return on ad spend if we cut this type of spend out of the ad account. That is always the first of the five steps: you have a hypothesis that you want to test, because otherwise it's not an experiment. It's the same when you do an experiment in physics: you say, I believe that if I drop this apple from this height, it's going to fall at the rate gravity pulls it, and then you measure the outcome and see whether your hypothesis is correct or incorrect. It's the same here. Our hypothesis is that branded traffic adds little incremental value, and that we could improve our client's results if we cut it out.

In the second step, you decide what kind of test you want to run to check that hypothesis, because again, it's the same as in physics, or in statistics or mathematics: you don't test everything with the same method, and there are different tools for testing different things. If you want to test a pH level, you get a pH reader or one of those testing strips; if you want to test whether there's carbon monoxide somewhere, you get a specific test for that. It's the same with incrementality: you have several tools at your disposal, and the tool we're picking here is a holdout test, specifically a geo holdout test. This is one of the classic methods: you withhold ads in a certain geographical area, or from a certain cohort of users, to understand what impact that has on them, or rather, what impact the advertising continues to have on the group of users who keep being exposed to it.

The third step is to decide what you want to measure and how. First of all, ask yourself what expectation you have of this test. In this example we've decided on a geo holdout test because our expectation is that if we withhold the ads in 50% of the states, our blended ROAS in that selection of states is very likely going to be better than anywhere else. There are certain things you want to consider here. First, estimate what impact this will have on total sales, so you have an idea of what's going to happen. Then decide how you're actually going to measure it: you'll keep track of your paid performance, of course, but what do you actually expect to happen in the 50% of states where you no longer run branded ads? Are organic sales going to go up, for instance? Is that the only thing you want to look at, or do you want to look at organic and paid sales together? And what period of time do you want to run this for? A week is probably too little, and two weeks is probably still too short, so we've decided on four weeks, because that's a time frame that gives us statistically valid data.
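Steps one to three so far amount to writing the plan down before anything is switched off: the hypothesis, the type of test, the window, the metrics and the expectation. Purely as an illustration (the field names and values below are my own, not something Jeremy specified), that plan could be captured as simply as this:

```python
from dataclasses import dataclass

@dataclass
class HoldoutTestPlan:
    """A written-down plan covering steps one to three of the framework."""
    hypothesis: str
    test_type: str
    duration_days: int
    metrics: tuple[str, ...]    # what gets tracked, and on what cadence
    expectation: str

plan = HoldoutTestPlan(
    hypothesis="Branded traffic adds little incremental value; cutting it improves blended ROAS",
    test_type="geo holdout",
    duration_days=28,           # the four-week window mentioned in the episode
    metrics=("brand spend, daily", "total revenue", "blended ROAS", "organic revenue"),
    expectation="Blended ROAS in the holdout states beats the states still running brand ads",
)
print(plan)
```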
How do you want to present the results: a graph, Excel? Do you want to track on a weekly basis or a daily basis? What's the way you want to present this data? You also need to decide on a good test region, because the United States is pretty big and the states all have very different buying behaviour: Californians are very different from Texans, who are very different from New Yorkers. So you have to pick a sensible set of states to run this in, and I would say you want to withhold in at least 25% of the total area, otherwise it's not really valid. That holds regardless of where you run it, whether it's Germany, the UK or Australia. You also want to avoid states with outlying behaviour: Hawaii or Alaska, for instance, would be bizarre choices to withhold in, because they probably don't behave like the rest of the country. You don't want a test group of big states set against a few small ones. You also want to decide how much ad spend you're withholding, and ask yourself whether you're able to measure this experiment at all. These are all the considerations you want to think through in the third step.
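Here's a rough sketch of those region-selection considerations in code: exclude the outlying states, pair the rest by size so the two groups stay comparable, and check the withheld share clears a minimum threshold. The revenue figures, the state picks and the 25% cut-off applied to revenue share are all illustrative assumptions on my part.

```python
# Hypothetical weekly revenue by state (illustrative figures only)
weekly_revenue = {
    "CA": 42_000, "TX": 21_000, "NY": 19_500, "FL": 18_000, "WA": 9_000,
    "CO": 8_500, "GA": 8_200, "OH": 7_900, "AZ": 7_400, "NC": 7_100,
    "HI": 900, "AK": 600,
}

OUTLIERS = {"HI", "AK"}  # atypical buying behaviour, so leave them out entirely
candidates = {state: rev for state, rev in weekly_revenue.items() if state not in OUTLIERS}

# Rank by size and alternate, so test and control end up roughly comparable
ranked = sorted(candidates, key=candidates.get, reverse=True)
test_states = ranked[0::2]      # brand ads withheld here
control_states = ranked[1::2]   # brand ads keep running here

coverage = sum(candidates[s] for s in test_states) / sum(candidates.values())
print("test:", test_states)
print("control:", control_states)
print(f"revenue share withheld: {coverage:.0%}")
assert coverage >= 0.25, "holdout region too small for the test to be valid"
```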
Once you've decided all of that, there are further things to consider. For instance: is there any competition to factor in? Is there any seasonality? What method do you want to use to calculate the test results? Do you want to measure this by total ROAS, by the profit per state where you withhold the ads, on total revenue, or on paid versus organic? There are lots of ways to do it. And then you run it and you get closer to the truth.

In this case, I don't have the exact setup in front of me right now, but basically we're starting tomorrow with this particular client. We have chosen five states where we're withholding the ads, where we're not running brand anymore, and we have also chosen five other states as a control group. What we're doing is monitoring the brand spend on a daily basis for four weeks. We're also monitoring the total turnover of those states over the next four weeks, the blended return on ad spend of those states over the next four weeks, and the level of sales that organic brings in over the next four weeks. That's what we'll be looking at.

The reason we've chosen five other states of comparable size is that we can then compare our chosen states, where we've withheld the ads, against those other states and see whether there's any other behaviour that might explain something like a downswing in sales. Let me really hammer this point home, because this is how you test the validity of the experiment. Let's say California always has double the number of sales that Texas does, and you withhold in both. If, after the test, California still has double the sales of Texas, then you know there hasn't been any other influence or factor that had an impact on revenue. If instead you see that the ratio has changed drastically, that's probably down to something else, or to some sort of measurement error, because you would expect that withholding in both of those states would leave their sales in the same relationship to each other.
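The California/Texas check boils down to comparing a sales ratio before and after the test window. A tiny sketch of that comparison is below; the numbers and the 15% tolerance are invented for the example, not figures from the episode.

```python
def ratio_stable(pre_a: float, pre_b: float, post_a: float, post_b: float,
                 tolerance: float = 0.15) -> bool:
    """True if the sales ratio between two comparable states barely moved across
    the test window, which suggests no outside factor distorted revenue."""
    pre_ratio = pre_a / pre_b
    post_ratio = post_a / post_b
    return abs(post_ratio - pre_ratio) / pre_ratio <= tolerance

# California normally does double Texas's sales; brand was withheld in both.
print(ratio_stable(pre_a=200, pre_b=100, post_a=190, post_b=96))   # True  -> ratio held, result looks clean
print(ratio_stable(pre_a=200, pre_b=100, post_a=150, post_b=100))  # False -> something else moved revenue
```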
There may be some limitations to incrementality tests as well. One thing to say first is that the whole idea is always to get closer to the truth. One limitation could be that, as an advertiser, you might be shooting yourself in the foot: you've probably been running brand for ages, and then you find out it hasn't really been incremental whatsoever and you've wasted tens of thousands of pounds, which of course is not great. But I would always frame it like this: knowledge is really, really important, and this is now something you can avoid in future, regardless of the outcome of the test.

Another limitation could be that the thing you're testing makes such a small impact on total revenue that you can't really measure a change in sales. Let's say paid search is only 5% of the total marketing mix. If total sales go down by 0.5% when you withhold brand, is that just a small fluctuation, or is it really because you withheld brand? Sometimes it's very hard to run these tests because the statistical validity just isn't there. A further limitation might be that you don't actually understand how to interpret the results, or that you lack the tools to make a statistically sound statement about whether the test has been successful or not. So those are a few things to consider.

All in all, to sum it up: the first step is to make a hypothesis. The second step is to choose the type of test you want to run; typically this would be a conversion lift study or a holdout test. Third, decide where you want to perform the test, for how long, how you want to measure it, and what your expectation of the test is. Fourth, consider the things that could hamper it, like seasonality, the expected incrementality, competition, and the method you'll use to calculate the results. And fifth, make sure you're aware of the limitations. Then, once you've performed the experiment, see where it takes you.

This has been a bit of a complicated episode, so hopefully you stuck with me. If you have, please give the podcast a like and subscribe. You can always reach out to me: you'll find me, Jeremy Young, on LinkedIn, you can find me on the website younganddigital.marketing, or you can send me an email at jeremy@younganddigital.marketing, or just send me a nice message, that's fine too. I would love to chat to you. As always, this has been Jeremy Young, your personal Google Ads expert, and I wish you a happy and productive week ahead.
