Oct 6, 2020
Show Resources:
Split test calculator
Ep 07 - LinkedIn Ads Account Organization
Ep 15 - Benchmarking Your LinkedIn Ads
LinkedIn Learning course about LinkedIn Ads by AJ Wilcox: LinkedIn Advertising Course
Contact us at Podcast@B2Linked.com with ideas for what you'd like AJ to cover.
Show Transcript:
Bro, do you even test? Yeah, we're talking AB testing your LinkedIn Ads today. Let's do this.
Welcome to the LinkedIn Ads Show. Here's your host, AJ Wilcox.
Hey there LinkedIn Ads fanatics. LinkedIn Ads provides the absolute best testing ground for learning about your audience and your account. No other ad platform can match it. And over the years, we here at B2Linked have honed our account structure and testing strategy. And I'm about to reveal it all for free. The only thing I ask in return is that you ship me bottles of your local root beer. I'm just kidding. It's truly free. But I wouldn't hate on receiving root beer shipments.

In the news, LinkedIn launched its new site redesign on desktop. And this probably went out to a third or a fourth of all LinkedIn members. So chances are that most of you by now have it. But I'll tell you, it's not a huge departure. It's very much a redesign, but not necessarily adding any new functions or really changing anything, except making the whole background gray and having rounded buttons instead of more squared off buttons. I'm kind of ambivalent on the whole thing. But I do appreciate how it doesn't really look like Facebook all that much anymore. It sure seemed like they were going pretty Facebook there for a while. Along with this release, LinkedIn released LinkedIn stories. So again, maybe most of you who are listening right now have this, I was part of that initial test group with LinkedIn stories. But quite frankly, I'm really scared to actually hit the button and release my first story, because I haven't ever used them on Snapchat or Instagram, or anything else. Stories is totally a new format of sharing for me. And I'm really excited to see how business to business does these well, because I really don't even know where to start. Maybe by the time you hear this, I'll have created my first story. The part that I'm most excited about with LinkedIn stories is if you listened to Episode 16, with Ting Ba from LinkedIn, she alluded to the fact that we were going to have stories first, and then we would have stories as an ad product. And every time I talked to someone at LinkedIn about stories, I asked them the same question, which was, is stories inventory exactly the same inventory as the newsfeed or does it have its own inventory? And the reason why I'm so curious about this is because when LinkedIn releases a new ad format that has its own inventory, what happens is that inventory all starts with zero competition, meaning a lot of times we get significantly cheaper traffic for sometimes a year or two. So I love getting new ad inventory. And sure enough, what I found out after stories released is that this is its own inventory. So it starts with a fresh auction at zero competition. So as soon as these ads come out, man, I want to be the first one to launch one.

I wanted to shout out a couple reviews here, both of them on Apple Podcasts. Sydney&Marketing says, "It feels like cheating. There is so much valuable information that is shared. It feels like it's breaking the rules. His podcasts are beyond helpful." Sydney&Marketing, thank you so much for leaving a very kind review. The next one is from Jaston on Apple Podcasts as well. He says, "What a digital marketing podcast should be. The Golden Age of podcasts has arrived. And there are so many great marketing podcasts. But few are as valuable as AJ Wilcox's LinkedIn Ads Show. I have known and followed AJ for a number of years. And I was excited to hear that he had launched a podcast, especially on such an interesting topic as LinkedIn Ads. AJ is the pro of LinkedIn Ads.
And so it just makes sense that he would find a way to easily communicate all the latest trends, info and insights. What I most appreciate about the podcast is that it jumps right into the applicable information around LinkedIn and applicable information around LinkedIn Ads. My favorite information that's been shared are LinkedIn Ads Roadmaps, Common LinkedIn Ads Media, Common LinkedIn Ads Performance Metrics, and the Best Ways To Get LinkedIn Ads To Perform. If you have interest in running LinkedIn Ads, then I highly recommend the podcast. If you are simply interested in hearing great discussions around digital media, then I think it has a lot of value. Thanks for the great podcast, AJ."

Now, from the username Jaston, I know that this is Josh Aston, and he's the founder of Above the Fold Digital. He's a fantastic senior marketer who totally understands the process from driving traffic all the way down to analytics and CRM integrations. He's the real deal. So Josh, thank you for the kind review. I also want to let you know that I regularly rope in the team here at B2Linked to pull research and help inform the topics that we bring up. So a big thank you to the B2Linked team. I know AJ Wilcox gets a lot of credit in the reviews and online. But this is very much a joint effort. So thank you to the whole team, I don't want you to think that I'm hogging it. All right, I want to feature you. So please do leave a review wherever you leave podcast reviews, Apple Podcasts, Stitcher, anywhere, I'll find it and share it out and give you a shout. All right, on to testing methodologies. Let's hit it.
5:23
Testing Approach
Now when I say AB test, I don't necessarily mean a strict, textbook AB test. I'm basically just talking about testing two things against each other. We try to do this in a smart way, so that we're comparing things that will actually tell us the result we're looking for, and we try to keep the data as pure as we possibly can. I say this because one
of my favorite AB tests, which isn't a pure AB test is where you
test one offer against another offer. So let's say that you have a
webinar coming up. And then you want to test that against a report
or an E-book, you wouldn't call this a pure AB test, because both
of these offers are very different. So you're going to have
different imagery, different ad copy. But because the offer makes
the biggest difference in your conversion rates, I still think this
is super valuable. So if our clients have multiple different pieces
of content or offers, that's probably the first test that we'll do.
But then once we found a winning offer, we'll start doing pure A B
tests. And these will be the same offer with ad versus ad. The
first thing I like to test in the actual ad copy, if they're both
going to the same offer, is intro vs. intro. And there are a few
flavors here, you can have two different calls to action. So you
can test which call to action gets people to take action. You can
also test pain point against pain point, or motivation against
motivation. And this can help you understand what your audience
responds to best. What are they most motivated by? Is it fear based
or aspirational? Is it statistics or clickbait? Whatever it is, you can find that out with an intro vs. intro test. The next thing we'll test is headline vs. headline. I wouldn't do a headline vs. headline test until you've already tested your intro, because that generally has the biggest effect on your click through rate. Then
as soon as we see performance for an ad version starting to
decline, we'll do an image vs. image test where we take the
best performing intro and headline and test two different images
and see if we can find what kind of image catches people's
attention best. And then let's say you settle on the best
combination of intro, headline, and image. You could even go really
advanced and do a landing page vs. landing page test, same ad copy
going to two different landing page experiences. So you can test
what elements of the page are getting people to convert better or
worse. As you're doing these tests, go back and listen to Episode
15 on benchmarks, because that's where you'll learn what benchmarks
to watch for. You know, let's say you have two tests and one is
outperforming the other, but if they're both performing below
LinkedIn's average, then you might want to end the test early and
scrap it. And there are several different metrics that you can
watch to see how your ads and offers are doing. And I'll start from
the least impactful metrics, and then we'll work towards the most
important.
What To Look For:
8:27
Impressions
So the first is impressions. If you see that one ad is receiving
more impressions than another, what that teaches you is basically
what LinkedIn thinks of the ad. If both of your variations are
getting a similar amount of impressions, then you know that both
have a similar relevancy score, so LinkedIn is giving them traffic evenly because it doesn't see a big difference in click through rates. And that's okay. The next metric to watch is
actually your click through rate. And this is how you can tell what
ad copy and what motivations are good at getting the customer's
attention. Again, if your click through rates are similar, let's say one is 0.8% and the other is 0.75%, those are so close I wouldn't say, oh yeah, let's go with whatever got 0.8%, they really could be equivalent. But if one is 0.4% and one's 1.2%, you've got a winner.
Cost Per Click
The next is watching your cost per click, because your cost per
click really is a combination of your audience competition, how you're bidding, and how your ads are performing. And so especially across audiences, if you see significant changes in your cost per click, that teaches you something about either how that audience is responding to your ads, or the differing levels of competition among those audiences.
9:48
Conversion Rate
Next is your conversion rate. Your conversion rate is going to
teach you how effective your offer is at getting people to take
action, and how attractive it is. We see really good conversion rates on LinkedIn as being 15% or higher. So compare that against your variations and see how you're doing and which one wins.
Cost Per Lead
The next is your cost per lead. And that is how much of an outlay it requires to generate a lead, a conversion, an opt in,
really whatever it is that you're going for. If you're spending a
lot, or have been spending for a long time, then you could measure
your ad differences by cost per lead, and tell which one is better
at getting people to convert.
Click Through Rate
So if you're testing things like ad copy, an image, or someone's motivation, the thing you're probably paying attention to is going to be the click through rate. And in order to get enough data about
your click through rates to make sure that they're significant, we found that, at least if you're in North America, you're usually going to be spending between $300 and $1,000 across those two ad variations. Make sure that you get enough volume so the results really are significant here. If you're paying attention to the cost per click across your tests, you probably want to spend about the same amount, that same $300 to $1,000, but make sure that
this goes over at least one calendar week. And that's because costs
per click can vary significantly by day. So you don't want to have
just a partial week and then have a messed up data set. And your
cost per click will also vary as your ad performance adjusts over
time as well. Now, if you've got larger budgets, you're probably
going to be paying more attention to the conversion rate. And this
is where I highly suggest that you get to, because the conversion rate of your offer is a much better predictor of how well your ads are going to be performing than any of your ad copy. It's
deeper in the funnel and therefore more impactful. Your conversion
rate is really going to vary based off of the type of conversion
that you're promoting or call to action that you're asking for. If
you do what we recommend, which is starting with a content offer,
something like free gated content, you will likely average between about 10% and 15% conversion rates. And at those conversion rates and average costs per click, within the first thousand dollars in ad spend you should have a good feel for whether an asset is an ultra high converter, a total dog, or somewhere in the middle.
And then by the time you've spent about $5,000, you'll likely have
statistical significance on your cost per conversion, and your
conversion rates all the way to the 95% confidence interval. And
this should tell you whether that offer is really hitting the mark
or not, and whether you should do more like it. Also, this many leads should give you a pretty decent feel for how the sales team is measuring and responding to the lead quality. And it should be a
high enough quantity of leads that your conversion rates from let's
say, marketing qualified lead to sales qualified lead actually
become meaningful. And of course, remember that the value behind
LinkedIn Ads is in the lead quality. So ultimately, how those leads perform in the lower funnel stages is really what matters. Now, if you're pushing people right to what I call a high friction asset, like
right to a demo request or a free trial, or talking to sales, or buying something, that will likely average somewhere between about a 1.5% and a 4% conversion rate. And because that conversion rate is
so much lower than if you were going to be releasing content,
you'll have to spend significantly more to reach the same levels of
significance. So keep that in mind, you might spend $5,000 to reach significance around content assets, and you might have to spend $25,000 to get the same thing around demo requests.
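For reference, here's a back-of-the-envelope sketch in Python of the standard two-proportion sample-size math behind that gap. The conversion rates, the detectable lift, and the cost per click below are illustrative assumptions, not LinkedIn benchmarks, so treat the output as directional rather than a replacement for the spend guidelines above.

```python
from scipy.stats import norm

def clicks_needed_per_variation(p1, p2, alpha=0.05, power=0.8):
    """Approximate clicks needed in EACH variation to detect the difference
    between conversion rates p1 and p2 with a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a 95% confidence level
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Illustrative numbers only: a content offer converting ~15% vs ~10%,
# and a demo request converting ~3% vs ~2%, at an assumed $9 cost per click.
cpc = 9.00
for label, p1, p2 in [("content offer", 0.15, 0.10), ("demo request", 0.03, 0.02)]:
    clicks = clicks_needed_per_variation(p1, p2)
    print(f"{label}: ~{clicks:,.0f} clicks per variation, ~${2 * clicks * cpc:,.0f} total spend")
```

The absolute dollar figures swing a lot with your cost per click and with how small a lift you care about detecting, but the roughly five-to-one gap between the low-conversion-rate offer and the high-conversion-rate offer is the point.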
13:55
Statistical Significance
Now I've been talking about statistical significance and it really
is important to define it for our purposes here. What it is, is a
statistical measure that's mathematically complex. And the output
is kind of confusing. One time I built it manually in Excel and now
I only use online calculators. But what it really boils down to is that the difference between the two test variations you're running is unlikely to have happened by chance. Or in other words, that the winner
of your test is actually the winner and not just a statistical
anomaly. For AB tests in digital marketing, I use a 95% confidence
interval, which is pretty high, but it's not the highest you can
go. And what that would mean is we're 95% certain that the winner of the test, the one that appears to be the winner, is actually the winner. So if you need to make decisions faster, or maybe you have a lot less data, you can get to statistical significance faster if you use something lower, like a 90% confidence interval. If you want to calculate this, there are hundreds of online calculators if you just Google it. But my favorite so far is splittestcalculator.com. And if you scroll down to the show notes, you'll see a link to that. It
makes this process super straightforward. So if you're testing the conversion rates between, let's say, two different assets, what you do is you take variation one's number of clicks, and then you input variation two's number of clicks, and then down below, you put variation one's number of conversions, and then variation two's number of conversions. When you hit calculate, it'll tell you each conversion rate, and whether or not the difference is significant to that 95% confidence interval. And statistical significance is really hard for me to say, so from here on out, I'm just going to call it stat sig. That'll be a lot easier on my tongue. So if you're testing the stat sig around your click through rates, just put impressions in that same calculator instead of clicks, and then put clicks instead of conversions.
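If you'd rather script that check than use the calculator, here's a minimal sketch of the same kind of two-proportion z-test. This is not splittestcalculator.com's actual implementation, just the textbook version, and the campaign numbers at the bottom are made up. For a click through rate test, pass impressions where it asks for clicks and clicks where it asks for conversions, exactly as described above.

```python
import math
from scipy.stats import norm

def split_test(clicks_a, conversions_a, clicks_b, conversions_b, confidence=0.95):
    """Two-proportion z-test: is the gap between the two conversion rates
    unlikely to be chance? For CTR tests, pass impressions/clicks instead."""
    rate_a = conversions_a / clicks_a
    rate_b = conversions_b / clicks_b
    pooled = (conversions_a + conversions_b) / (clicks_a + clicks_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (rate_a - rate_b) / std_err
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
    return rate_a, rate_b, p_value, p_value < (1 - confidence)

# Made-up example: variation A converted 60 of 400 clicks, B converted 35 of 410.
rate_a, rate_b, p, significant = split_test(400, 60, 410, 35)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.3f}  stat sig at 95%? {significant}")
```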
The surprising way that stat sig works is that your results will oscillate in and out of statistical significance quite regularly. So you want to make sure that you don't call the test too early. To keep from doing that, you may want to set an amount of spend that you're comfortable with as part of the test. In statistics, we call that a horizon. Then go and check for significance as soon as you hit the horizon, and that should help protect your results, making sure you don't choose the wrong variation. And right now, I'm thinking through a way to have a live
significance calculator for our LinkedIn Ads, so we can track when
results become significant and when they've oscillated out. So if anyone's done anything like that, reach out and let me know, I'd love to chat with you about that. Okay, here's a quick sponsor break, and then we'll dive into exactly how we structure an account for testing.
16:52
The LinkedIn Ads Show is proudly brought to you by B2Linked.com,
the LinkedIn Ads experts.
17:01
If the performance of your LinkedIn Ads is important to you,
B2Linked is the agency you'll want to work with. We've spent over
$130 million on LinkedIn Ads, and everyone you talk to here is a
LinkedIn Ads expert. So you'll get the very best advice with no
sales pitch. We're official LinkedIn partners, and we're the only
LinkedIn Ads agency confident enough to share all of our secrets
publicly on a podcast. Fill out the contact form on any page of
B2linked.com, to chat about your campaigns, or heck send a
postcard, our mission is always to make you look like the hero.
Alright, let's jump into a bit more on AB testing, as well as reveal our secret sauce in how we structure accounts for testing.
17:43
There's a whole litany of AB tests that you can run. So depending on what it is that you're testing, here's what you can expect each one to affect. If you're running an image test, expect that to alter
your click through rate. So if you're testing images, one against
another, don't watch for the difference in your conversion rate,
your conversion is so far away from the image that you probably
won't see that much of a correlation. So watch your click through
rate there. Images are much better at getting someone's attention,
leading to a click. The next is if you are doing an intro or a
headline test, because this is ad copy, it will very much affect
your click through rate. But depending on the call to action, and
how valuable you write the ad copy to be, you can actually affect
conversion rates from here. So maybe test against your click
through rate as well as your conversions. If you're testing
differences on your landing page, or even on your LinkedIn lead gen
form, you'll see those results directly affecting your conversion
rates. And then if you are testing your offers, the offer itself
really permeates through the entire funnel. A really attractive offer makes for really attractive ad copy, because a really good offer is really easy to write really good ad copy around, so you'll see that effect in your click through rates. But then, of course, the
offer really is what it is that you're calling someone to action
on, you're telling them this is what I want you to do. And so that
will also influence your conversion rates. Back when I used to run
a lot of Google Ads, I ran quite a few of what I called AAB tests.
So what I would do is come up with an A and a B variation. But then
I would launch an exact copy of the A. So there were technically
three ads in that ad group, but two of them were identical. And the
reason that I did this is because I could look at the difference
between my identical ads. And that would help me understand how
much variance I could expect in my data. So for instance, if click
through rates were close, then I could assume that measuring my AA
against my B would be a worthwhile pursuit. But if click through
rates were very different, I knew I really couldn't trust the
difference between my A and my B nearly as much. Now I'm ashamed to say that I don't do very many AAB tests on LinkedIn. So if you are currently running these, please reach out and tell me about it. I would love to see those results.
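If you do try this on LinkedIn, here's roughly what that sanity check looks like, sketched in Python with made-up numbers. The idea is just that the gap between the two identical A ads gives you a feel for how much of your A versus B gap is noise; the three-times multiplier below is an arbitrary rule of thumb, not anything official.

```python
def ctr(impressions, clicks):
    return clicks / impressions

# Made-up AAB data: a1 and a2 are identical ads, b is the challenger.
a1 = ctr(12_000, 78)   # ~0.65%
a2 = ctr(11_500, 72)   # ~0.63%
b = ctr(11_800, 95)    # ~0.81%

noise = abs(a1 - a2)              # spread between the two identical ads
gap = abs((a1 + a2) / 2 - b)      # apparent difference between A and B

if gap > 3 * noise:               # arbitrary rule of thumb, not a formal test
    print("The A vs. B gap is well outside the noise between identical ads.")
else:
    print("The A vs. B gap is within the noise. Keep spending before you call it.")
```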
All right, on to the exciting stuff.
20:14
B2Linked Approach
So what I'm about to present to you is the official, patent pending (the patent is really not pending) B2Linked approach to account setup. We've honed this same account testing strategy over the years, and it has worked really well for our AB testing and for pulling insights out. What we do is we take the target audience, and we
break them into small micro segments. Go back and listen to Episode
Seven, if you haven't already, and dive into how that works. Then
what we do is we launch the same A and B variations into each of
those separate campaigns. So each campaign is running its own AB
test. But it's the same AB test that's running in every other
campaign across the account, get it? Now, because the whole account
is running the same two ads, at any point in time we can roll all
the results up to the whole account level and look at the
difference between A and B variations, totally ignoring the fact
that these are all across multiple audience segments. At any point
in time, you're only 35 seconds away from getting this data by hitting export from Campaign Manager, throwing it into Excel, creating a pivot table, and boom, you're looking at it.
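Excel pivot tables are the fastest way to do that roll-up, but if you'd rather script it, here's a rough pandas equivalent. The file name and column names are placeholders for whatever your Campaign Manager export actually uses, not LinkedIn's exact headers.

```python
import pandas as pd

# Placeholder file and column names; rename to match your actual export.
df = pd.read_csv("campaign_performance_export.csv")  # one row per campaign + ad variation

# Roll everything up to the ad-variation level, ignoring audience segments,
# to read the account-wide A vs. B result.
by_variation = df.groupby("ad_variation")[["impressions", "clicks", "conversions"]].sum()
by_variation["ctr"] = by_variation["clicks"] / by_variation["impressions"]
by_variation["conversion_rate"] = by_variation["conversions"] / by_variation["clicks"]
print(by_variation)

# Roll up by campaign (each campaign is one audience micro segment)
# to read the audience vs. audience comparison on the same two ads.
by_audience = df.groupby("campaign_name")[["impressions", "clicks", "conversions"]].sum()
by_audience["ctr"] = by_audience["clicks"] / by_audience["impressions"]
print(by_audience.sort_values("ctr", ascending=False))
```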
And because each campaign is running the same ads, we can attribute the difference between each of those campaigns to just the difference in the audience. So it's an AB test of audience vs. audience. So you might
find something like hey, the click through rates are higher with
director level than they are with VP level folks. Or maybe cost per
click is lower when we target the audience by job function than
with job title. And then you could use those learnings to find out
how you get more traffic or cheaper traffic or better quality of
leads. You can roll that data up to the account level at any time
at any granularity and find insights fast. You can roll all of your
similar types of targeting up, you can roll all of your similar
seniorities up to learn about them, the world is your oyster. And
then of course, as each of those individual campaigns spends and amasses data, you'll eventually be able to draw conclusions about them individually, like how this targeting type and this segment of the population converts on this exact offer. So really, I told you
that this episode was all about AB testing. But the way I'm
describing it to you is actually setting up a series of AB tests,
that's AB tests around your ads, and AB tests around your
audiences. And simultaneously running a giant multivariate test of
ads against individual audience micro segments. Boom, talk about
under promise over deliver, whoo, give myself a high five. If you
are a total stats geek, and you're geeking out with me on this,
then I'm sure you're going to be lured in by this concept that you can tell LinkedIn you want to rotate creative evenly, instead of optimizing to the best click through rate. So fight that urge. The reason why is that it's really a misnomer in the way that LinkedIn presents it. It's not going to show both ads evenly to
that audience. What it's going to do is enter them both into the
auction evenly. But of course, LinkedIn is going to give different
relevancy scores to each of those ads, meaning that the one that
performs higher is going to get shown more, it's going to win more
auctions, and it's going to win them at a lower price. And the poor
performer won't show as often. So if you select this option, you'll
find that even though you're rotating evenly, you still end up with
a mismatch in your impressions and your clicks. So I call the
rotate evenly option, the charge me more and show me less button. I
don't like pushing that button. But there is a solution for this issue coming. LinkedIn announced that they're going to have a split test option where you'll effectively be able to split test your different ads against each other, and it actually will split them 50/50. I keep asking LinkedIn how this works with the auction and how it avoids falling into the same trap as the rotate ads evenly option, but no one's gotten back to me on that one yet. I figure
it's probably implemented the same way that Facebook's is because
Facebook has an AB testing option as well. But I haven't found
anyone who can tell me how Facebook pulls it off, either. So if any
of you know, reach out, I would love to hear how this interacts with
the auction. All right, I've got the episode resources coming up
for you right now. So stick around.
24:51
Thank you for listening to the LinkedIn Ads Show. Hungry for more?
AJ Wilcox, take it away.
25:02
Resources
Alright, got some great resources for you. The first is that split
test calculator. It's called splittestcalculator.com. It's a great
one to go and pop your stats from your campaigns into and see if they're significant or not. The next is Episode 15, all about benchmarking. If you haven't listened to it, you'll see that link right in the show notes below. And you'll also see a link
to Episode Seven, all about account structure. This will give you a
little bit more insight into how we structure an account for manageability and for testing. And if you're new to LinkedIn Ads, or you have a colleague who is, point them towards the LinkedIn Ads course on LinkedIn Learning. There's a link to that below as well, and it's a great intro course into LinkedIn Ads. And because it's on LinkedIn Learning, it is ultra cheap. It is literally 1/24th of the cost of having me come and train your team on exactly the same material. Make sure you look down at your podcast player and
hit that subscribe button to make sure that you've got every one of
these episodes coming at you. And please do rate the podcast. And
then if you leave us a review, I would love to shout you out as
well. And then finally, if you have any ideas or questions or
topics that you'd want to cover here on the podcast, reach out to
us at Podcast@B2Linked.com. And with that being said, we'll see you
back here next week, cheering you on in your LinkedIn Ads
initiatives.