Apr 1, 2022
Here are the resources we covered in the episode:
Investopedia
LinkedIn's advice for optimization
NEW LinkedIn Learning course about LinkedIn Ads by AJ Wilcox
Contact us at Podcast@B2Linked.com with ideas for what you'd like AJ to cover.
AJ Wilcox
You're running and testing your LinkedIn Ads. But how do you know
when your test is complete? When something isn't working? How do
you know when it's time to pivot? We're covering deep testing
strategy on this week's episode of the LinkedIn Ads Show.
Welcome to the LinkedIn Ads Show. Here's your host, AJ Wilcox.
AJ Wilcox
Hey there LinkedIn Ads fanatics. So we've all been told that we
need to always be testing with our ads. Well, sometimes it can be
hard to know when our tests are conclusive. Or when it's time to
move on to a new test, or even what we need to be testing. Well,
if you test too long, you end up missing opportunities for more
learnings. And if you test too short, you risk coming to the wrong
conclusion, which can really be costly on your future performance.
So this week, we're gonna dive deep, we're going to talk about the
different types of tests that you can run, and how to tell when
they're complete. Make sure to listen to the end, because I'm going
to be sharing my methodology for deciding which tests to run next,
after you found conclusive results from your previous test. So
first off in the news, I got a chance to talk to a friend who's
part of a really cool beta for LinkedIn right now. It's called the
audience insights beta. And essentially, what it is, is a really
granular breakdown of the attributes that make up a matched
audience. You can think of it as a really helpful analysis
of your target audience, as well as a great tool for better
understanding the ways that LinkedIn targets. The way that it works is
you'll go into your matched audiences section, and you'll select
one of those audiences, then this can be any sort of a matched
audience, it could be a website retargeting audience, or anyone
who's submitted a form, or anyone who's visited your company page,
you get the idea. Then you click a button that says, generate
insights and it will open up a dashboard about that audience. And
what you get here is a whole bunch of different facets and
breakdowns of what makes up your audience. It'll show you your
existing audience size. And it will tell you how many of those
people fit into different categories. There's interests, so this is
where you can find out which interests your target audience
is tagged with. And this can help you with your interest
targeting, determining whether to use it, or how many or which
types of interests to use. As a side note, I hardly ever use
interest targeting because it's such a black box. But now with
this, I actually feel a lot more comfortable in finding and using
interest targeting. There's organic content, so you can see the
trending content that is most engaging to this exact audience. You
can see the location, and this is the profile location of where
members of that audience are located. There's demographics, there's
education, there's job experience. And this gets really exciting
because it'll show you the seniority breakdown of your target
audience, your job functions that fit within them, your years of
experience, and even more. And as you probably know, when you're
building a campaign, over in the right rail we get a little bit of
an audience size breakdown, but this is really that on steroids.
Audience Insights is a supercharged version of it. And then as
you're exploring here, it's really quick to create a campaign based
off of the targeting you're exploring, which is pretty cool. When
this feature sees full general audience release, we will definitely
let you know more about it. But for right now, I wanted to give you
a quick heads up on what's likely coming and how excited we are
about it.
AJ Wilcox 3:36
Okay, on to the testing topic. Let's hit it. So first off what is
pivoting? You may have heard the Silicon Valley term to pivot. A
business needs to pivot. When a business doesn't have product
market fit, companies can pivot or adjust their strategies to find
the right fit. You've probably also heard the axiom of fail fast,
and that originates from Silicon Valley as well. And the concept is
that by taking too much time doing the wrong thing, or a less
effective thing, you risk so much more than if you were to just
make a quick painful one time adjustment and get to that product
market fit much quicker. The same risks are present in our ad
testing. If you're testing two different ad concepts against each
other to the same offer, but that offer is bad, what you're doing
is you're wasting weeks of good potential performance that you
could have from running a better offer. So definitely, we should
always be testing something. And to be clear, not every test will
be exactly what you want. Some tests will fail and others will win.
And some will just be inconclusive, or some will teach you
something but it's just not important. So pivoting is essentially
knowing when something needs to be changed, or when to conclude
your current test. You can pivot because something is working. You
can pivot because something's not working. Or you can pivot just
because it's time to test or try something new.
AJ Wilcox 5:00
So we're going to do something we haven't done before on the
podcast, I'm going to bring on a guest to explain a certain
topic. So please welcome Chris Dayley, CEO of Smart CRO, who's
going to explain the concept of scientific testing and statistical
significance. Alright, we're doing something that we haven't done
here on the podcast before, I got to bring in my friend Chris, who
is a conversion optimization expert and a longtime friend. We met
probably 11 or 12 years ago, maybe even more than that, when we
were both doing SEO at the time. And anyway, this is Chris Dayley,
who runs Smart CRO. And, Chris, I brought you on
because we're going to be talking a little bit about statistical
significance and obviously, this gets into the stats side and the
math side of marketing, where many marketers who may have come from
the more creative side may not have experience. So first of all,
tell us about yourself. And then I'll ask you more of the meaty
questions.
Chris Dayley 6:00
First of all, thanks so much for having me on the show, man. You
know, I'm one of your biggest fans and so I feel flattered to be on
the show. And, you know, like you said, I've got, you know, more
than a decade of background in digital marketing, I pivoted to
conversion rate optimization about 10 years ago. And I've been
running a conversion rate optimization agency for the last eight
years. I think it's a fun fact that AJ and I actually started our
agencies within a week of each other. And I called AJ because I
wanted to pitch a company that he was working at. And he's like,
oh, I'm actually not there anymore, I started an agency. And I was
like, me too. Cool. So yeah, I've been doing conversion optimization
for about the last eight years. And actually, I hated statistics
when I took statistics in college. Probably one of the reasons I
ended up dropping out of college. But since I started doing conversion
optimization, I've actually really fallen in love with a lot of the
statistics because of how applicable it is, and I'm excited to
dig into this stuff with you.
AJ Wilcox 7:01
So cool. Well, and the reason why I brought you on, Chris, I mean,
every time I'm talking about statistical significance, or anything
stats related, it's always parroting something I've heard on one of
your podcast appearances. I think I've probably listened to 80 or more podcasts that
you've been a guest on, and I've gotten to hear you speak at so
many different conferences, and I'm basically just parroting stuff
that I've heard from you. So I wanted to bring you on to ask these
questions. Because I mean, why parrot what someone else said, why
not just go right to the source? So tell us, first of all, what is
statistical significance? What's the definition? And I guess why it
matters?
Chris Dayley 7:35
Yeah. So well, let me first say why it matters. So anytime you are
measuring data, right, like when you're running ads, for example,
and you see that one ad has a 50% conversion rate, and the other
one has a 10% conversion rate. There's all sorts of questions that
come to mind, once you hear that this one has a better conversion
rate than the other one. You know, what most marketers would want to
know is, well, how reliable is that? How much data do you actually
have? Are you talking about, you got 10 clicks on both of them, and
one of them had five conversions and the other one had two, because
that's not a very big data set, which makes that data not
super reliable. Or in other words, there's a huge risk or chance
involved in saying that one thing is a winner and one thing
is a loser when you have such a small data set. And so statistical
significance is really a statistical calculation of how
confident you are that your results are not due to just random
chance, right? Because, again, if you have 10 clicks on two
different ads, and one of them has five conversions, and one of
them has two, they obviously have a super, super different
conversion rate there. But there's a huge likelihood that you might
have just had like two people on the first ad that were awesome.
And they could have landed on either ad and converted. And so
you're not really sure if it's due to the ad, or just due to the
fact that a couple of qualified people saw those ads. So anyways,
the reason that statistical significance matters is you need to
know with certainty that when you say an ad, or in my case, if you
say that a variation of a landing page is better, you need to be
pretty confident that that result will hold true because there's
all sorts of risk involved if you assume that the ad that
got five conversions is better than the ad that got two conversions. And you
start basing all of your marketing around that first ad, like let's
say that that first ad had a video and the second one had an image,
if you base all of your future ads off of the fact that you think a
video worked better. But it turns out that actually if you had run
that test for longer, the image would have performed better. You're
going to really screw yourself over in the long run, you're going
to end up operating under false assumptions. And so, again,
statistical significance is just a way to say with confidence that
what you think is a winner is actually a winner.
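To make that concrete in numbers, here's a minimal sketch of a two-proportion z-test, one common way to calculate this kind of confidence. This isn't a tool from Chris or B2Linked; the function and variable names are my own.

```python
from math import sqrt
from statistics import NormalDist  # Python 3.8+

def significance(successes_a, trials_a, successes_b, trials_b):
    """Two-sided two-proportion z-test.
    Returns the confidence (0-1) that the difference between the
    two rates is not just random chance."""
    rate_a = successes_a / trials_a
    rate_b = successes_b / trials_b
    pooled = (successes_a + successes_b) / (trials_a + trials_b)
    se = sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
    if se == 0:
        return 0.0
    z = (rate_a - rate_b) / se
    return 1 - 2 * (1 - NormalDist().cdf(abs(z)))

# Chris's example: 10 clicks on each ad, 5 conversions vs. 2 conversions.
# Despite the big gap in conversion rate, the confidence is only ~84%.
print(round(significance(5, 10, 2, 10), 2))  # ~0.84, well short of 95%
```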
AJ Wilcox 10:05
Oh, yeah. Alright, so one thing I've heard you talk about is that you
like to determine your statistical significance to that 95% confidence level.
Do they call it a confidence interval? I forget what it's
called.
Chris Dayley 10:16
Yeah, confidence interval or P value or whatever you want to call
it. There's lots of different terms for it. But yes.
AJ Wilcox 10:23
So why do you run your test to a 95% significance level? Are there
other cases in marketing where you'd suggest a 90% or an 80%? Or do
you recommend the 95 for all of us?
Chris Dayley 10:34
Yeah, that's a good question. And let me break that down into a
couple things. 95% statistical confidence means that you're 95%
certain that this winner is actually a winner, right? And the
reason that I like using 95%, as sort of a minimum threshold is
obviously 100% would be ideal, right? To be 100% certain, but it
usually takes a lot of traffic or a lot of data to get to 100%
statistical confidence, unless you have a huge difference in
numbers, right? Like, if you have 10,000 visitors that saw one ad,
and you have 10 clicks, and you have 10,000 visitors that saw
another ad, and you have 1,000 clicks, you'll have 100% statistical
significance, because the difference, the discrepancy is massive.
But again, if you're testing ads, for most datasets, you're going
to end up with like, you know, 10,000 views and 500 clicks and
10,000 views, and 550 clicks. And, yes, the second ad had 50 more
clicks, but there's only a 10% difference. And so it's gonna take a
lot more data to know, okay, was that for real? Was there actually
something better about that variation? Or if you keep running for long enough,
are they just going to even out? So 95% is a high enough
confidence that there's a very low chance of calling
something a winner that's not a winner. If you say this is a winner,
there's only a 5% chance that
you're wrong. Right, which is, there's still a chance and it'd be
great if there was zero chance, but I mean, my philosophy has
always been if you're testing enough, like, if you are constantly
running AB tests, on ads, or whatever, yes, maybe 5% of the wins
that you called were false positives. But if you run enough tests,
out of every 100 wins you're gonna end up with 95 true winners and maybe five
that weren't true winners. But overall, by and large, you have a
very, very high win rate there, right? That's the first thing:
95%, for me, is high enough that I feel confident, but not
so high that it's impossible to reach, the way 100% usually
is. So the second part of your question is do you
have to go with a 95% statistical significance. And I say no to
that, I don't always run tests until I get a 95% and here's why.
The closer the data is, so again, if you have 500 conversions on
one, and 530 on another, you could be stuck at like an 85, or an
80% statistical significance. You might be stuck there for weeks,
because there's lots of things that may happen. One variation
might get a few more conversions one day, which is going to
decrease your stat sig, and then the next day you might have a lot
more conversions on the other variation, which is gonna increase your stat sig,
and so the statistical significance number is going to fluctuate
over time. So I usually add in a second rule, it's
like my backup rule. So I like to shoot for 95% statistical
significance. But if I end up with a variation that has been
winning consistently for a period of two weeks, and I still don't
have a 95% stat sig, then I will still call it a winner. Because
even though I might only have an 85% stat sig, if I have a
winner that has been consistently performing well, then I will use
that longevity of data to support it. Okay, yes, I only have an
80% stat sig here, so there's a 20% chance I might not be calling a
true winner, but the data looks pretty reliable, right? Like the test is
being consistent. So: 95% if I can get it, and if not, do I have
consistent performance?
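Chris's two criteria can be sketched as a simple decision rule. This is an interpretation, not an official threshold from Smart CRO; the parameter names and the 80% backup floor are assumptions, and it expects a significance() function like the earlier sketch.

```python
def call_winner(daily_cumulative, significance,
                target=0.95, backup_days=14, backup_floor=0.80):
    """Sketch of the decision rule described above.
    daily_cumulative: list of (conv_a, n_a, conv_b, n_b) tuples, one
    per day, each cumulative from the start of the test."""
    conv_a, n_a, conv_b, n_b = daily_cumulative[-1]
    sig = significance(conv_a, n_a, conv_b, n_b)
    leader = "A" if conv_a / n_a >= conv_b / n_b else "B"

    # Primary rule: 95% statistical significance on the full data set.
    if sig >= target:
        return leader

    # Backup rule: the same variation has led every day for two straight
    # weeks, and significance is still reasonably high.
    if len(daily_cumulative) >= backup_days and sig >= backup_floor:
        daily_leaders = ["A" if ca / na >= cb / nb else "B"
                         for ca, na, cb, nb in daily_cumulative[-backup_days:]]
        if len(set(daily_leaders)) == 1:
            return daily_leaders[0]

    return None  # inconclusive -- keep the test running
```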
AJ Wilcox 14:23
Oh, that's great. All right. So question for you then. What I love
about testing to statistical significance, is we as marketers
aren't shooting from the hip. We're not just gut checking all of
our marketing, because that can obviously lead you down pretty bad
roads. I know a lot of marketers do, but I don't recommend it. It
allows us to approach this scientifically and actually be certain
that you're learning stuff along the way. But how then do you know
when you've reached statistical significance, because the LinkedIn
ads platform isn't going to tell you, you don't get to register
your AB test anywhere and have it monitoring? What tools do you use
or what would you make available to yourself to watch this and
grade your AB tests?
Chris Dayley 15:01
Yeah, good question. There's lots of free tools. I mean, if you
Google statistical significance calculator, there's tons of free
calculators that you can use out there. I was showing you before
this call that I've actually just developed my own inside of a
Google Sheet, where I use an API to pull all of the
raw data from Google Analytics, and then I calculate my own
statistical significance. Even though the tools that I use do
calculate it for me, I still like to have my own statistical
significance calculations. You can grab tools online, and if you
have a way of plugging in the raw data from LinkedIn, then you can
calculate it. You can also just go in and like, you know, for
example, Neil Patel, whether you like Neil Patel or not, he's got
a free tool on his site where you can just plug in the numbers. If
it's an ad, that's the number of impressions you have and the number
of clicks you have, or the number of clicks you have and the number
of conversions you had. Whatever it is, you're going to plug in the
number of "traffic" and the number of conversions for each of your
variations, and then it will give you
a statistical significance calculation. So I mean, like I said,
free tools, easy place to start, if you're not calculating
statistical significance now, just go and grab the data from two of
your ads and pop them into one of these tools. And it will
calculate the statistical significance for you. The one other thing
that I'll just say, you know, you'd mentioned that it's
easy to shoot from the hip as a marketer, and statistical
significance is a great way of ensuring you're not doing that. It
also helps to put some checks in place so that
you don't call tests too quickly. Because I know whether you are an
in house marketer or an agency marketer, you always
want to show your boss or your client wins as quickly as you can. And
you want to mitigate the risk, you don't want to be running a test
that is losing money for your company or your client for very long.
And so the reason that I see people end tests too quickly, is
because they're like, Yeah, but if that variation continues to
perform that way, it's going to lose us a lot of money, or the
opportunity cost is so high, because I could be generating so many
more conversions from this other ad. And so statistical
significance is a good way of like putting a check in place for
yourself. So I always tell my clients, we're at least gonna run
tests for a minimum of a week. Even if we see something just like
blowing this other variation out of the water, we're still gonna
give it a week, because things can change in a few days. And so you
want to run experiments for long enough that you see some
historical data in there. And the stat sig will help with that.
AJ Wilcox 17:43
What was so shocking to me when we were talking, this has been
years and years ago, but you were showing me one of your tests for
a giant enterprise company. And you were showing an AB test. And we
were looking at this graph over time, and we could see that by like
day five of your test, variation B had statistical significance, it
was the winner by like 30%, or something high. And that may not
sound high to you, I know you get higher. But then you showed me
the continuation of that graph. As the test kept going into week
two, all of a sudden, variation A took over with, again,
statistical significance, and it was winning, and then it reverted
back to B. So what I love about what you're saying is run the test
for long enough, but realizing that stats can be misleading just
because human behavior can change. But we really should be, I
guess, tracking things that will stand the test of time, as well as
just hitting our statistical significance.
Chris Dayley 18:38
And I would say don't even calculate statistical significance until
you have at least a week's worth of data. Because if you calculate
stat sig on day one of a test, I almost guarantee, you'll get a
calculation that says you have 100% statistical significance,
because it's gonna be like, Hey, we have five conversions on this
one, and none on this other one that will give you a 100%
statistical significance. But it's such a small data set, it would
be stupid to like call a winner with that small of a data set. So
like I said, I don't even look at stat sig until at least a week in,
because it really doesn't mean anything until then.
AJ Wilcox 19:16
Yeah, and especially on a platform like LinkedIn, where every day
is a little bit different. I know that a weekend day performs very
differently from a Monday, and I know the difference between a Monday
and a Tuesday. They're close-ish, but they're very different. And
then you have the difference between a Friday, totally different.
So you don't want to run for a partial week, especially to the
LinkedIn audiences, when every one of those days has a little bit
different of a personality. Love the idea of running for at least a
week, love the idea of two weeks so you have two of each kind of
day. And I love the idea of making sure that you're running whole
days, so you didn't start your test midday one day.
Chris Dayley
Yep, absolutely.
AJ Wilcox 19:55
All right. So kind of a fun little announcement here. Chris and I
were talking before the call about creating a joint tool that we
can then share with this audience. So make sure that down in the
show notes, you'll see the link to both of our LinkedIn profiles.
Make sure you're following us. So you'll get the free tool when we
release it. We don't know how long it's gonna take, I have a crazy
idea in my mind that I don't even know if it's possible. But
whatever we come out with, I know it's gonna be cool. But Chris,
where can people find you? Where can they follow you? Where do you
put your stuff out? How do they get in touch with you? Just take us
wherever you want.
Chris Dayley 20:26
Yeah, so the only social media platforms I'm on are LinkedIn and
Twitter. So you can find me on Twitter, it's @ChrisDayley. Last
name is D A Y L E Y. Or you can find me on LinkedIn. I'm not on
Facebook, not on Instagram. And then my company website is
smart-cro.com. You know, and again, I focus on website and landing
page AB testing. And so if you're wanting to go from testing your
ads to testing your landing page, your website, that's definitely
something I'd be happy to chat with anybody about.
AJ Wilcox 21:00
Awesome, Chris, thanks so much for enlightening us, we'd love to
have you back on the show at some point when I can think of
something else that we need your commentary on. But thanks again
for just being willing to come on and sharing your abundant
knowledge.
Chris Dayley 21:12
I will talk to you anytime you want to talk, AJ. So thanks for
having me on the show.
AJ Wilcox 21:15
All right party on.
AJ Wilcox 21:17
So Chris and I talked about different tools for calculating stat
sig. In the show notes, you'll see a couple of links to some tools
that we've used to calculate it that you can try out. And by way of
instruction, here's how you'll use them. What you'll see is an A
and a B, and for each one there's essentially a box for your results
and a box for the total those results came out of. This can be kind
of confusing, but if you want to test the statistical significance of
the click through rates on two different ads, in the top box for your
ad A variation you'll put in the number of clicks, and in the bottom
box you'll put in the number of impressions that ad A received. Then
the same thing for ad B: in the top box the number of clicks, which is
the number of results, and on the bottom the number of impressions,
so the number that it's out of. If you want to test conversion rates
between two offers, it's the same type of thing, it's just that in the
top box you're going to put in the number of conversions or leads, and
in the bottom box you're going to put in the number of clicks. That's
going to show you your winner and the statistical significance, if
there is any, between the conversion rates. You could take this way
further. If you have enough data on, let's say, sales qualified leads
or proposals sent, you could do the same thing with the number of
those results on top and the number of leads, or whatever it is,
underneath. Okay, so now you know how to use these tools, go check
them out, go try them, and evaluate some of the tests that you're
running.
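If you'd rather run those same inputs in code than in a web form, they map directly onto a stat sig function like the earlier sketch. The numbers below are made up purely for illustration.

```python
# Assumes the significance() sketch from earlier. Numbers are invented.

# CTR test: results = clicks, total = impressions.
ctr_confidence = significance(180, 12_000,   # ad A: clicks, impressions
                              150, 12_000)   # ad B: clicks, impressions

# Conversion test between two offers: results = leads, total = clicks.
cvr_confidence = significance(45, 600,       # offer A: leads, clicks
                              28, 610)       # offer B: leads, clicks

print(f"CTR test confidence: {ctr_confidence:.0%}")
print(f"Conversion test confidence: {cvr_confidence:.0%}")
```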
So I guess my first question is, how do you know when you have
enough data to actually make a decision about your tests?
LinkedIn has a page in their help section that
we've linked to in the show notes, so you can go read it. But
basically, they say, you want to always be testing, which we
definitely agree with. LinkedIn says every one to two weeks, pause
the ad with the lowest engagement, and replace it with new ad
creative. Over time, this will improve your ad relevance score,
based on indicators that LinkedIn members find that ad
interesting, such as clicks, comments, and shares, which will help
you win more bids. Since bid actually means something important,
when they say "which will help you win more bids," I think what
they're probably trying to say is "which will help you win more
auctions." But we'll let them make that clarification. LinkedIn also
recommends including two to four ads in each campaign, because
campaigns with more ads usually reach more people in your target
audience. I would disagree with the majority of that advice. What
we found is that the learning phase when you launch ads, usually
lasts about one to one and a half days. So if you have ads with
really poor engagement, after let's say, your first two days, it's
usually pretty safe to say, there's something wrong with these ads,
we can take action now by pausing them and taking them off the
table. That being said, even if click through rates really aren't
great. Sometimes we'll keep them running just so that we can suss
out the conversion rates because obviously, getting leads and
getting a good cost per lead is way more important than the amount
of engagement that an ad gets. But of course, we always do want
good click through rates whenever we can. I'm also not in a hurry
to pause the low engagement ads, since we're always using
LinkedIn's option of optimizing the ads in the campaign to those
that have the highest click through rate because that's going to
send almost all of the impressions to the higher performing one
anyway. So having another ad in there that's just kind of dead
weight, it's getting ignored anyway, so I'm not in a huge hurry,
but it's okay if you want to. We've talked about this before on the
show, but I don't recommend including more than two ads per
campaign, since what it does is dilute your AB test. If
you're running an ABCD test, but your ad A gets 60% of the
impressions and ad B gets 30%. And the last 10% are split between C
and D. That doesn't make for a very good test with a lot of data,
we would ideally want a lot more data spread around all of those
variations. I get it, LinkedIn asks us to put more ads in a campaign
because it gets around the frequency caps and allows your ads to be
shown more often, which will get you to spend more money. But I
usually care a lot more about ads getting good performance than
about just spending all of my budget. Okay,
here's a quick sponsor break. And then we'll dive into what you
should watch for to evaluate your tests.
The LinkedIn Ads Show is proudly brought to you by B2Linked.com, the LinkedIn Ads experts.
AJ Wilcox 25:56
If the performance of your LinkedIn Ads is important to you,
B2Linked is the agency you'll want to work with. We've spent over
$150 million on LinkedIn Ads, and no one outperforms us on getting
you the lowest cost per lead and the most scale. We're official
LinkedIn partners and you'll deal only with LinkedIn Ads experts
from day one. Fill out the contact form on any page of B2Linked.com
to chat about your campaigns, we'd absolutely love to chat with
you.
AJ Wilcox 26:22
Alright, let's jump into what to watch for in your tests. First of
all, you want to set your threshold. You want to decide what the
parameters of your test are going to be. One parameter you could
set is to say, I'm going to run this test for a certain number of
weeks or months or days. We heard Chris talk about how he wants to run
for at least a full week. And with LinkedIn specifically, I would
suggest running for at least two full weeks, you do also want to
make sure that you are working from whole days, which means you'll
want to start your test as close to midnight in the UTC timezone as
possible. And then finish it around UTC midnight whenever you're
finishing the test. But of course, if you see that the results are
crazy different, like you have two offers where, after a week and a
half, one of them is converting at 40% and the other is converting
at 6%, you don't have to finish the rest of your time period test.
As long as the data is there and you can tell, yes, definitively,
this offer A that's converting at 40% is way better, you can
determine your winner a little bit sooner. Another parameter you
could set for your test is to say we're going to allocate a certain
amount of budget towards this; you can say 3,000 euros is going
towards this test. We see a lot of marketers do this because their
bosses give them a certain amount and they have to apportion it out
and budget it across different things that they want to learn. This
is certainly possible, but just make sure that by the time you're
done spending that budget, you are running a statistical
significance calculator across it to make sure that the results
that you got can actually be trusted. Another way that you can set
a parameter here is saying how much data you want to generate. So
you might say, we want to run this test until we have 120 leads, or
400 clicks or anything like that. Again, you just want to make sure
that the parameter you set here for the amount of data you want, is
actually enough to make a difference. You may also set a threshold
of stat sig between two ad variations on the click through rate
level. And that's going to come pretty fast actually, because what
you're doing is you're showing clicks compared to impressions
across two different ad variations. And you could get that
statistical significance quite quickly. You could take that a step
further and run a test based on statistical significance at the
conversion rate level. So now you're seeing which offer converts
better. With even more data, you could do the same thing,
statistical significance based off of which ad or which offer gets
the highest number of marketing qualified leads. Another step
further, based off of sales qualified leads, or proposals, or closed
deals. Now, if you want statistical significance between two ad
variations or two offers all the way to the close deal, you will
need to be spending a lot of money, this is in the millions per
month in order to get here or you have to have been spending for
years. I just want to level set you just in case you're thinking
that sounds really fun. But if you're spending, you know, $5K a
month or something, that's probably not realistic, and I would stick
more to statistical significance at the conversion rate level.
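One way to pin those parameters down before you launch is to write them out as a simple test plan. This is just a sketch; the field names are assumptions, not a LinkedIn or B2Linked format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestPlan:
    """The stopping parameters discussed above, written down up front."""
    name: str
    min_days: int = 14                  # run at least two full weeks
    max_budget: Optional[float] = None  # e.g. 3000.0 (euros)
    min_leads: Optional[int] = None     # e.g. 120 leads before judging
    min_clicks: Optional[int] = None    # e.g. 400 clicks before judging
    stat_sig_target: float = 0.95       # confidence needed to call a winner

plan = TestPlan(name="Offer A vs Offer B", max_budget=3000.0, min_leads=120)
```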
A lot of times we'll end up running pretty much the same ad
variations, the same AB test across a lot of different campaigns.
And so rather than trying to achieve statistical significance
within each one of those campaigns, where we're looking at a small
number of clicks and a small number of impressions, with a
simple pivot table in Excel we can combine the performance of all
of the ads in the account that were ad A and all of the ads that
were ad B and add them all together. And then we're going to achieve our
statistical significance so much faster. You can do the same thing
with your costs per click. Measure which ads or which offers get a
better cost per click. This obviously doesn't mean nearly as much
as your leads, or close business does, but it is something you can
test. Generally, the ads with the higher click through rates are
going to get the lower cost per click. But if you're spending
enough, something really good to test is your conversion rates.
Which ad gets a higher conversion rate? Which ad variation gets a
higher conversion rate? Which offers get a higher conversion rate?
Which audiences get a higher conversion rate? These are all things
that you can test again the same way with stat sig. If you're getting
data back from your sales team on lead quality, or if you have a
lead scoring algorithm set up, you can judge your tests based off
of lead quality or traffic quality that's coming from a certain
audience. Then if one of your audiences is producing a higher lead
quality, then you'll know that you can adjust your audiences: use
more of the targeting that's winning and less of the targeting that's
bringing in the crappy quality. One word of warning here, though,
is that with any social advertising, one issue that we're always
going to face is ad saturation, which means changing performance
over time. If you try to run the same test, and you run it for two
months, chances are at the beginning of that two months,
performance will look pretty good. But then about halfway through
the test, you'll see performance falling, and then by the end, it
might be abysmal. So if you try to lump those two months of
performance together, you're going to get something that looks
pretty average or maybe even bad. But what you'd miss is that the
first two weeks or the first month that it ran, it was really good,
and you should want to do more of that. As a general rule of thumb,
I found that your ads or your offers will saturate after usually
about 28 to 33 days. But how do you know? Well, I like to go into
the performance chart and look at campaign performance since the
day of launch. And I like to look at click through rates over time.
As the same people see your ads over and over and
over, or they're exposed to the same offers every time they're on
LinkedIn, they're going to be much less likely to click over time,
and you'll see those click through rates drop. So with your tests,
make sure that you're changing things up enough, or you're starting
new tests, before your last test fully saturates and you watch
performance drop over time. Sometimes I'll be running a test, and I
stop the test not because it's finished, or I've achieved stat sig,
but because there's something else that's a higher priority
that I want to learn. And I think that's just fine. If the
opportunity cost of waiting for a test to finish is higher than the
upside of what you're going to get out of learning something from
the new test. Don't be afraid to either nix it or put that test on
pause. And what you should know is, there are different kinds of
tests that you can do. Some are easy, some are hard. But any test
that we do that's closer to the money is going to teach us
something more valuable. What I mean by that is testing things like
ad copy. Sure, you can improve results by 5 to 15%, with different
ads and different imagery. But by changing the offer, you can
double, triple, or quadruple your results. By working with and coaching
your sales team to get them in the right mindset to nurture the
leads that you're generating from LinkedIn, that can improve your
ROI by 10, 20%. But obviously, the closer you get to the money, the
longer those tests are going to take.
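For anyone who prefers code to Excel, here's a rough equivalent of that cross-campaign pivot-table roll-up in pandas. The CSV filename and column names are assumptions about how you've exported your LinkedIn data, and significance() is the earlier sketch.

```python
import pandas as pd

# Assumed columns in an export of per-campaign ad performance:
# campaign, variation ("A" or "B"), impressions, clicks, leads
df = pd.read_csv("linkedin_ads_export.csv")

# Roll every ad A and every ad B up across all campaigns.
totals = df.groupby("variation")[["impressions", "clicks", "leads"]].sum()

a, b = totals.loc["A"], totals.loc["B"]

# CTR-level stat sig across the whole account (clicks out of impressions).
ctr_sig = significance(a["clicks"], a["impressions"],
                       b["clicks"], b["impressions"])

# Conversion-level stat sig (leads out of clicks).
cvr_sig = significance(a["leads"], a["clicks"],
                       b["leads"], b["clicks"])

print(totals)
print(f"CTR stat sig: {ctr_sig:.0%}, conversion stat sig: {cvr_sig:.0%}")
```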
AJ Wilcox 33:36
So here are some of the types of tests that we like to run. There
are ad tests and the first ad test that we like to run is same
image, same headline, but we vary the intro in the ads. We like to
test motivation there. So an example I like to use is maybe one of
those makes them feel like the hero and the other one warns them
that if they don't take some sort of action, they'll look bad or be
disgraced. But you can definitely also do imagery or video ad
tests, keeping the intro and the headline the same, but just
varying the visual. Testing offer against offer. So an ebook against a
guide, or a checklist versus a cheat sheet, a webinar versus a case
study. These are all good examples of offer tests you can run. What
about how often should you fail before you decide that it's time to
pivot and change your entire strategy? I'll give up on an offer if
I've run three AB tests of messaging against it, and all six of
those ads have failed. If that's the case, after our best effort,
I'm certain that the offer just isn't that great. There's no amount
of lipstick that I can put on that pig and make it look pretty. I
guess this is gonna be my rule of threes because the same thing
applies if I've tried three different offers in the same kind of
vein. And if none of those offers work, then I'm going to guess we
either don't have the right audiences or we don't have product
market fit or we just haven't figured out what it is that this
audience cares enough about. I just got a chance to speak at Social
Media Marketing World in San Diego last week. And one of the
speakers that I heard said something really interesting. We solve
migraine problems, not headache problems. And what that means is
your offers, they really do have to solve something really
significant, that's causing a lot of pain, because someone's not
going to go out of their way to go and sign up for something, or
talk to a sales rep about something or download a guide about
something that is just kind of a meh problem. If it's a headache,
they can work through it. If it's a migraine, you have to stop
everything and focus on it. So how do you then determine what your
next test should be after you've finished one? If I have a brand
new offer, my first test is almost always going to be an intro
versus intro ad test against the same offer. I want to
find out which motivation, or which way of calling out to them, gets
their attention best. If I've been running the same offer for more than a
month, then my favorite test to line up is an image versus image
test. And this is because if people have been seeing the same image
over and over for a month, they're going to saturate, they're going
to say, Ah, I've already seen that, and not pay attention to it.
But if you can change up the imagery significantly, you'll get
people to take a second look. And they may realize, ooh, this
actually would be good for me. If you know what your audience likes
already, you can start to do offer versus offer tests. So use the
same motivation, the same callouts, but push them to one offer or
another. Let's say you have two different offers. One is a guide
that teaches them how to solve a certain problem. And the other
guide teaches them how to investigate and analyze some of the
results they're seeing. Test offer against offer and find out which
is their bigger headache, or which one's their migraine. Maybe some
of you have done market research. This is more on the PR side of
marketing. But we get to do a lot of this with the level of testing
that we can do on LinkedIn. Because the targeting is so good, we
can break our audiences up into these little micro segments that
act like little focus groups. So maybe you're trying to decide do
operations folks, or do IT folks resonate more. Which one is our
better customer? Do manager level seniorities interact with us in a
different way than chief level or VP level? These are all tests
that you can run simply by breaking these audiences up into
separate campaigns and measuring their results against each other.
The advice that I always give to my team is make sure that you keep
a testing journal. This could be a Google sheet, it could be a
physical notebook that you keep next to your desk, whatever it is,
this is going to be a record of every test that you're
running, and you want it to have a few things. First of all, you
want to put the date. Second of all, you want to put the expected
outcome of it. For instance, you might say I'm testing offer A
against offer B. My hypothesis, so you include the hypothesis. My
hypothesis is that offer B is going to perform better because I
think it provides more value. Next you want to write down your
parameters. So are you testing for a certain amount of time or
after a certain amount of budget? And then lastly, you have to take
action on this, you can't just leave the notebook there and never
come back. So I like to put something on my calendar. On Friday at
three o'clock, I'm going to go back and reevaluate this week's
test. I'm going to go back to that testing journal and write
everything down. Once you have several tests, you want to share
these things, share them with your team. Freak, reach out and share
them with me. Anything cool that you learned about your audience,
or your offers or pain points, or messaging, these are all valuable
things. These are hard fought victories. You need to remember them
and share them so that you can then go and create new offers that
take advantage of them, and new ad copy that takes advantage of those
learnings. And then you'll have higher performance from then on
out. So I can't encourage you enough. Definitely make sure that
you're keeping a testing journal so you can make sure that you are
taking advantage of all of your learnings. Alright, I've got the
episode resources for you coming right up. So stick around.
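If a spreadsheet or notebook isn't your style, the same journal entry can live in code. Here's a minimal sketch with the fields described above; the structure and names are my own.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TestJournalEntry:
    """One row in the testing journal: date, hypothesis, parameters,
    a follow-up date, and eventually the result."""
    started: date
    test: str                      # e.g. "Offer A vs Offer B"
    hypothesis: str                # expected outcome and why
    parameters: str                # duration, budget, or data threshold
    review_on: date                # calendar reminder to evaluate
    result: Optional[str] = None   # filled in after the review

entry = TestJournalEntry(
    started=date(2022, 4, 1),
    test="Offer A vs Offer B",
    hypothesis="Offer B wins because it provides more value",
    parameters="Run two full weeks, then check stat sig",
    review_on=date(2022, 4, 15),
)
```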
Thank you for listening to the LinkedIn Ads Show. Hungry for more? AJ Wilcox, take it away.
AJ Wilcox 39:32
Alright, here are our resources from this episode. First of all,
Chris Dayley, you'll see down in the show notes, we have links to
his website, his Twitter and his LinkedIn. You'll also see the link
to my profile as well so you can follow me for when we come up with
that really cool LinkedIn Ads test evaluation tool, or whatever we
want to call it, something that calculates statistical significance
ongoing over time. You'll also see the links to two different
statistical significance calculators, one on Investopedia and one
on HubSpot, as well as the link to LinkedIn's advice for how to
optimize and run tests. If you are new to LinkedIn Ads, or if you
have a colleague who is, definitely check out the link to the
LinkedIn Learning course that I did with LinkedIn. It's by far the
least expensive and the highest quality of any LinkedIn Ads course
out there to date. Look down at your podcast player right now,
whatever you're listening on, and make sure you hit that subscribe
button, especially if you want to hear more of this in the future.
If you hated this, I don't know why you're still listening. But
yeah, you probably don't have to subscribe, but I hope you do
anyway. Please rate and review the podcast, and anyone who leaves a
review, I'll give you a shout out live on air. And of course with
any feedback, any questions about the podcast, suggestions, you can
reach out to us at our email address Podcast@B2Linked.com. And with
that being said, we'll see you back here next week, cheering you on
in your LinkedIn Ads initiatives.