“How do my ads look?”, “What are the best ad sizes and locations for my site?”, “Can I get some opinions or advice on ad placements for mobile?”
These are some of the most common questions asked by publishers who are trying to monetize their website, or improve their current monetization strategy. It’s great that people are asking for advice and looking for ways to improve their sites – and there are many forums and experts around that offer great tips. However, not all solutions work for every publisher.
Each site is different: different content niche, different traffic sources, different user base, different mobile user percentage. All these differences have a huge impact on ad performance. How can you be sure that the ‘best’ ad placement according to one person or article is the best for your site too? Are you certain that ad sizes that work for someone else will work equally as well for you?
A lot of publishers place their ads based on what ‘looks’ good (to them), or seems like it will perform well, rather than actually figuring out what is objectively the best ad combination for their site.
Rather than basing your decisions on your own opinion (or the opinions of others), you can use ad location testing to find what actually works best for your site’s users. Continuously testing different combinations of ad sizes and locations has the added benefit of generating valuable data that can show you which pages make the most money – helping you create content with higher user engagement.
Let’s dive a little more into the specifics of how ad testing helps you get the most out of your site:
The principal benefit of testing is an increase in your site’s ad income.
A lot of publishers focus the majority of their time (as they should) on creating content and increasing traffic. But not many publishers are thinking about optimizing the traffic they are already receiving.
A common mistake is thinking that a site is already at its optimal level of performance. Complacency leads to stagnation, and this is especially true with ad combinations. What worked last year won’t work as well today, and what works today may not work as well next year. Visitors can become so used to seeing ads in the same location every time that they essentially become ‘blind’ to them.
That’s where ad testing comes into play!
Let’s look at why it’s important: First and foremost, ads need to be seen and/or engaged with in order to serve any purpose to the advertisers who are paying to show them on a site. Their prominence is what generates revenue for you. Secondly, ads with good viewability and high engagement signal to advertisers that impressions on your site are worth the money they are paying, which typically increases the price they are willing to pay you to show an ad.
To find the combinations that improve viewability and increase engagement, you’re going to need to test what the users respond to best. Basically, you cannot guess; you need to test to find what works.
The question then becomes: what approach to take to do ad testing?
Now that we’ve established why it’s important to test, let’s look at how it’s done. We’ve highlighted the two most common forms of testing below: A/B testing and Multivariate Testing.
A lot of people have heard of A/B, or split testing. This is where you change a single variable and run traffic to both, to monitor how the two variations compare in performance.
For example, which ad performs better – a 970×250 billboard ad at the top of the page, or a 300×1050 super sidebar ad? Running traffic to both variations will show which works best!
A/B testing is good because it’s fairly simple to set up and it’s a nice introduction to testing in general. You can set up the tests, compare the two sets of data and then iterate from there. For example, you can run Test A against Test B, and then run Test B against Test C and so on. This approach will give you some valuable data about ad sizes and locations that work better than your current set-up.
Once you have the hang of A/B testing, you can begin running multiple tests. For example, Test A vs. Test B and Test C. Fairly soon (if you include mobile, tablet and desktop) there are hundreds of variables all affecting overall revenue. And there are unlimited possibilities for new tests. As an example, you could test the sidebar ad to see if a 300×250, 300×600 or 160×600 works best.
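Before declaring a winner in an A/B comparison, it helps to check that the difference isn’t just noise. One standard check (illustrative here, not part of any particular ad platform – the click counts below are made-up numbers) is a two-proportion z-test:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is variation B's click rate
    significantly different from variation A's?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled click rate under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: 7,000 sessions each, 140 vs. 182 ad clicks
z = two_proportion_z(140, 7000, 182, 7000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level
```

If |z| stays below 1.96, the two variations haven’t been shown to differ at the 95% confidence level, and the test needs more traffic before a decision.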
While A/B testing is a great start, you are limited to testing one variable at a time. And manual A vs B testing is a laborious process.
Along with testing ad sizes, you need to test ad colors. Add into the mix ad locations, the number of ads per page, viewport size, and browser/OS, and before you know it you could potentially be compiling a list of billions (yes, billions) of individual variations in the testing mix. At this point, it’s probably better to take a different approach to testing.
Unlike A/B testing, multivariate testing tests multiple variables all at once. So, rather than just comparing the size of a single ad on your website, you might change the locations, sizes and number of ads on the page to see how they compare.
Using a multivariate testing approach, it’s possible to create a myriad of test variations of your site with different ad combinations. Since all the elements on a page affect one another in some way, we recommend this approach for maximum results. For example, changing the color, size, and number of ads on all devices means you have a lot to keep track of…
And this is one of the downsides of multivariate testing – by involving everything from ad sizes to locations to colors, it can become very tedious and unwieldy very quickly. Monitoring all of the variables and how they affect the other elements on the page is a huge undertaking that requires a lot of time, effort and patience. Sometimes too much data is almost as bad as too little data.
For an example of how quickly the number of variations can grow, take a look at the table below. With each additional ad unit you test, the number of possible combinations grows exponentially:
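The growth is easy to sketch. Assuming (purely for illustration) that each ad placeholder on a page can show one of four candidate sizes or no ad at all – five options per placeholder – the count of distinct page variations multiplies by five with every placeholder you add:

```python
# Illustrative sketch: combinatorial growth of test variations.
# Assumption (not Ezoic's actual model): each ad placeholder has
# 4 candidate sizes + the "no ad" option = 5 possibilities.
OPTIONS_PER_PLACEHOLDER = 5

for num_placeholders in range(1, 11):
    combinations = OPTIONS_PER_PLACEHOLDER ** num_placeholders
    print(f"{num_placeholders:2d} placeholders -> {combinations:>10,} combinations")
```

By ten placeholders that’s already nearly ten million combinations – before you even account for colors, devices, or traffic sources.
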
With any testing process, you need to be patient. Data is needed to arrive at statistically confident results, and it takes time to collect that data.
Be careful of making decisions before the results are solid (statistically confident). Acting too early can lead to inaccurate conclusions. To explain this, let’s look at a simple statistical example:
There are 10 marbles in a bag. Five of them are black and five of them are white. On the first try, you reach in and pull out a black marble. You put the marble back in the bag and pull out a second marble. This one is black as well. Then a third – which is also black. If you were to stop here, you might claim that all the marbles in the bag are black – or at least the majority of them. However, if you were to run the same test ten thousand times, you would eventually come to the realization that you have a 50% chance of pulling a black or white marble. What this illustrates is that if the sample size (three marbles) is too small, it can lead to a false conclusion. Or at least, it’s not painting an accurate picture of the true situation.
Ezoic found that roughly 7,000 unique sessions of a single experimental variation will give you a 95% confidence level for those particular results. This data can then be used to make decisions about what works best and what to test next.
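The marble example is easy to simulate. The sketch below (a toy model, not a reproduction of Ezoic’s methodology) draws marbles with replacement from a bag that is truly 50% black and shows how small samples can wander far from the truth while large samples converge on it:

```python
import random

random.seed(42)  # fixed seed so runs are reproducible

def estimate_black_fraction(num_draws):
    """Draw marbles with replacement from a bag that is truly 50% black,
    and return the observed fraction of black marbles."""
    black = sum(random.random() < 0.5 for _ in range(num_draws))
    return black / num_draws

# A tiny sample can be badly misleading; a large one converges on 0.5.
for n in (3, 30, 300, 100_000):
    print(f"{n:>7} draws -> observed black fraction {estimate_black_fraction(n):.3f}")
```

With 3 draws the estimate can easily come out as 1.000 or 0.000; by 100,000 draws it sits very close to 0.500 – the same reason ad tests need thousands of sessions before their numbers can be trusted.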
If you have a good amount of traffic, we encourage you to run a lot of tests. The more the better. It’s not possible to predict what combination of ads will resonate well with your visitors, so don’t be afraid to try entirely new combinations that you wouldn’t normally try!
Here are a few things to take into consideration when you’re testing.
In every testing process there needs to be a scientific ‘control group’. The control group provides a reliable statistical baseline of data to compare the results against. In most cases, you can use your current site as the control and the new variations as the test versions. If the test versions beat your ‘control’ – then you’re making progress.
There are huge issues with not having a control. You aren’t able to base your results on anything reliable. And because there are so many variables that affect a site’s performance, you need to know that it’s the changes you’re making to the site that are affecting the performance, rather than some other variable.
Let’s look at a few variables that can affect your testing:
Seasonality: December is typically the best month for advertising revenue, since advertisers are willing to pay a pretty penny to attract customers before the holiday season. If you were to make changes to your ad combinations in January and compare them to the performance in December, it would not be an accurate comparison, since the seasonality would play too large a part in the results and skew your conclusions.
This is where the control group comes in. It allows you to compare the performance – like for like (apples for apples if you like) – at any given time rather than looking back historically. That’s why it’s important to run tests concurrently, rather than historically.
Traffic sources: Users who arrive on your site from Facebook, Pinterest or Twitter tend to interact differently from visitors who arrive from Google or Bing (organic search). It’s important to know your traffic sources when setting up ad location/size tests. Knowing your traffic sources lets you optimize your site for each type of user.
Device categories / Viewport Size: With a significant shift in user habits towards mobile usage, it’s important to test on all devices – desktop, mobile and tablet (desktop pageviews are falling and mobile views are increasing year on year). As more variables (like screen size, OS and browser type) are introduced to the digital landscape, it’s vital to take those variables into consideration when testing (the ads that work on an iPhone 6+ might not be optimal on an iPhone 7). In January 2016, Ezoic saw 150m visits and the largest screen size segment was only 15% of the total. This fragmentation of screen sizes makes testing much more complicated.
Demographics: Age and Gender of a site’s overall user base. It’s not unusual for a website to have varying user demographics over the course of a year. A literature site, for example, may get the bulk of its traffic from students during term time (as they cram for term papers and finals), but in the summer, when these users are at the beach, you may get more visits from an older generation who have time to pursue hobbies. What works for one set of users may not work for another.
Operating Systems & Browsers: Not very long ago everyone used Windows/Internet Explorer or iOS/Safari. These days there are multiple browsers (Chrome, IE, Firefox, Safari, Silk, Opera) and multiple operating systems (iOS, Android, Windows, Fire, OpenWeb, Symbian, BlackBerry etc.). These all affect how pages load and have to be taken into consideration when testing.
Ad Exchanges or Ad Networks: The quality of the advertisers has a great influence on yield. Some providers are more reliable and consistent than others. Ad exchanges like Google’s AdX allow multiple ad networks and advertisers to compete in real time for each and every impression on your site. This bid pressure is good for your CPCs and CPMs, and for income overall. It’s much better to be an ad network ‘shopper’ than an ad network ‘loyalist’, because quite often ad networks resell your inventory on an ad exchange or to another ad network anyway. The closer you get to the advertiser, the fewer middle-men there are and the more money you make.
The only way to tell if your tests are successful or not, is to monitor the data. Obviously your overall income is the top indicator of ad performance, but there are other things to take into consideration as well:
Don’t look at the performance of a single ad on a single page
All ads on a site dilute one another. This is super important to remember when you are testing entirely new ad combinations. If you focus solely on improving the performance of one ad, you could actually end up harming overall revenue by affecting its relationship with the other ads on the site. Too often we see a larger ad unit being tested against a smaller ad unit, and the larger one might win on its own numbers – but by diluting the performance of the other ads on the page, it pushes your session income down. Not good.
We recommend monitoring overall revenue performance (session income) rather than evaluating on an ad-by-ad or page-by-page basis.
Which leads us into the next topic…
Don’t optimize for RPM
Ad optimization is most effective when you monitor it per user session, not per page.
Keeping a user engaged on your site has become more important than ever. If you encourage your user to view more pages, the number of ads viewed will increase along with overall revenue. Many publishers use RPM as their guiding data point for optimization, but it isn’t an accurate measurement of monetization performance.
Let’s take a look at the example below. Which would you prefer?
Site A – Page Views Per Visit: 1.5
10,000 visits × 1.5 pv/v = 15,000 page views
15,000 page views at a $10.00 eCPM = $150.00 income

Site B – Page Views Per Visit: 2.5
10,000 visits × 2.5 pv/v = 25,000 page views
25,000 page views at an $8.00 eCPM = $200.00 income
As you can see, the site with the lower RPM (eCPM) is earning more.
At Ezoic, the metric for measuring revenue is EPMV, or earnings per 1000 visitors.
EPMV reflects a wide range of contributory measurements (CPC, CPM, RPM, etc.) and also takes into account user experience.
It is important to realize that RPM doesn’t tell the whole story. For this reason we recommend optimizing your revenue using EPMV as your guiding metric because it’s impervious to seasonality and ad density. This data point will enable you to optimize your revenue from each individual visitor to your site.
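The arithmetic behind the example above, plus the EPMV calculation, can be sketched in a few lines (the visit counts and eCPMs are the illustrative figures from the example, not real data):

```python
def income(visits, pageviews_per_visit, ecpm):
    """Total ad income, where eCPM is earnings per 1,000 page views."""
    pageviews = visits * pageviews_per_visit
    return pageviews / 1000 * ecpm

def epmv(total_earnings, visits):
    """EPMV: earnings per 1,000 visitors."""
    return total_earnings / visits * 1000

site_a = income(10_000, 1.5, 10.00)  # higher eCPM, fewer pages per visit
site_b = income(10_000, 2.5, 8.00)   # lower eCPM, more pages per visit

print(f"Site A: ${site_a:.2f} income, ${epmv(site_a, 10_000):.2f} EPMV")
print(f"Site B: ${site_b:.2f} income, ${epmv(site_b, 10_000):.2f} EPMV")
```

Site B’s lower eCPM is outweighed by its extra page views per visit, which is exactly what EPMV captures (an EPMV of $20 vs. $15 here) and per-page RPM misses.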
User experience metrics are important too!
People visit websites because there is a perceived value to be gained from going to the site. They may be looking for a helpful article, a how-to video, or some other form of content that your site provides to the online world. They are NOT visiting the site to view the ads you are serving – they are looking for the content.
It is therefore extremely important to take into account how your users interact with your site at all times. When you are testing, it’s important to consider UX metrics as well as income metrics, as they are symbiotic.
Ezoic defines and measures user experience (UX) as time on site, bounce rate, and page views per visit. Measuring these metrics tracks how users are interacting with the site. Changes in these UX metrics can signal that your ad locations are affecting how users are responding, which can have a major impact on rankings and revenue down the road.
Here is an example: You move some ads around and some of them are above the fold on your highest traffic page. You find that your bounce rate has suddenly spiked up. This could be a sign that there is a problem with your ad locations. If this is the case, it could be that the ad placements are too aggressive and are driving the users away before they engage with the content properly (which is a shame!)
This is why it’s extremely important to measure not only the revenue for each user that comes to you, but also the average time on site, page views per visit, and bounce rate.
In fact, it’s a good idea to focus on improving user experience metrics as much (or even more than) improving ad revenue. Our data from working with thousands of sites shows that increased UX metrics and high engagement with a site can ultimately mean improved rankings and more money for you in the long-term. Search engines (and by extension, advertisers) will see that your site provides value to users around the web and be more inclined to reward you for the content.
Most publishers are selling themselves short by ending their testing regimen too soon.
After running a few experiments, they find a ‘winning’ combination and stick with it, bringing their testing efforts to a close. Although the winning combination may perform better than previous ad placements, stopping there is a short-sighted end to what could have been a process of continuous improvement.
As we’ve discussed, testing a site’s ad locations requires patience and the ability to go by the numbers rather than subjective opinion. It’s important to track which device your users are visiting from (whether it’s mobile, desktop or tablet), the traffic source and the time of year – to name just a few. If you think about all the variables that come into play, it’s easy to see why continuously testing your site is difficult and yet so important.
How Automated Ad Location Testing Can Help
Continuously testing your site will have a positive impact on the user experience and ad revenue. However, we understand that manual testing can become a huge burden on your time. It is very, very challenging to do all of this by hand (nor should you have to!). That’s why Ezoic was built.
Ezoic automates the process of continuous multivariate testing for publishers.
The benefits of testing a site (increased ad revenue) should come without having to devote the majority of your time to setting up and monitoring experiments. It’s much better to simply pick your overall goals (user experience goals or revenue goals), choose the different ad sizes, locations, and colors you want to test, and then let a computer do the rest for you.
Here are the kinds of results we’ve seen with Ezoic recently:
Ad testing need not include page layout changes – you can test ads without having to change your page layout or theme.
You’re in control of what is tested, and there are technologies out there (like the Ezoic system) that enable you to simultaneously test thousands of different combinations. Since ad testing can be automated, you’ll have more time to focus on your site’s content and adding value to your users.
Whether you choose to test with Ezoic, manually on your own, or using another self-service option like Optimizely, we hope you have understood just how important it is to continuously test your site’s ad locations and other important elements. Visitors to a site can provide valuable data, allowing you to make content decisions with confidence that you’re boosting the engagement of the whole site (as well as the ads).
Before long, you won’t need to ask others where they think you should place ads on a page. You’ll be able to tell them that you picked the best ad sizes and locations available – based on the best arbiter of choice there is – a site’s own user data.
Getting started with Ezoic is really easy, and should take 20-30 minutes. All you need to do is integrate, pick and choose different ad sizes and locations you want tested, and turn it on. Then Ezoic’s system will auto-test to find the ad combinations that increase your overall income!
Let Ezoic take care of finding the right sized ads and the perfect location for each. You focus on your content!
Get a free ebook on ad testing here:
Content supplied by Ezoic – the world’s largest website optimization platform for independent publishers.