Deciding which tests to run, how to run them, and how to audit ad yield optimization test results requires AdOps expertise. The main goal of these yield optimization tests is to improve your session RPM and maximize your profits.
In this article, we will give some tips to help you make the most out of running yield optimization tests, and decide on which types of tests you should run in your specific situation.
The first step to your yield management journey is deciding which tests to run for maximum monetization. There are many factors in play when it comes to choosing what test you should run, such as the type of site you have and your audience.
In general, the best way to figure out which yield tests need to be run is to do a brainstorming session with your team.
- Start with your overall goals and where the gaps are, then brainstorm features you could A/B test to improve performance and user experience. If you have a dedicated team that focuses on split tests, they might want to be in the meeting too.
- Use the Pareto principle to prioritize—the most common conversion rate issues are usually at the top of the list.
- Keep it simple! A/B testing success rates decline as tests become more elaborate.
Before you can call a test successful, compare its results against historical data to determine whether the change did, in fact, increase revenue. To make the most of A/B testing, run the control and test conditions simultaneously, splitting traffic between them. If your setup supports it, a pure A/B test like this is almost always preferable to a simple before-and-after comparison.
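Splitting traffic between conditions can be as simple as hashing a visitor ID into a bucket, so each visitor consistently sees the same condition for the life of the test. Here is a minimal sketch of that idea; the function name, variant labels, and 50/50 split are illustrative assumptions, not a specific vendor's API:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("control", "test"), split=(0.5, 0.5)):
    """Deterministically bucket a visitor so they always see the same condition."""
    # Hash the visitor ID to a fraction in [0, 1].
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    # Walk the cumulative split shares to pick a variant.
    cumulative = 0.0
    for variant, share in zip(variants, split):
        cumulative += share
        if fraction < cumulative:
            return variant
    return variants[-1]

print(assign_variant("visitor-1234"))
```

Because the assignment is a pure function of the ID, a returning visitor lands in the same bucket without any server-side state.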
The test would be run over a set period of time during which your site’s traffic would be subjected to the yield test’s conditions. While it’s clear that testing ad types is important to get the best results, it can be hard to know where to begin. How do you choose which ad types to test, and how do you decide what’s a good metric for measuring your ad efficiency?
The first step is to choose a metric that’s going to be most indicative of how much revenue will be impacted by the change in the ad type.
If you’re testing different ad styles in your sidebar, for example, then click-through rate might not be the best metric. What you really want to know is how much extra revenue each style of ad brings in.
When you’re looking for ways to improve the effectiveness of your ads, it can be tempting to focus only on the headline numbers since the test started: what percentage of visitors converted on ad A versus ad B, and which ad generated more revenue. This information gives you a rough idea of what to expect from each ad in practice, but relying on it alone will cause you to miss important context.
It is important to perform a pre- and post-analysis to see how your revenue metrics were affected by the test, as well as what their trend lines looked like before and after.
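A minimal version of that pre/post check is just comparing the average daily RPM before and after the change. The daily figures below are hypothetical placeholders for your own reporting data:

```python
from statistics import mean

def pre_post_lift(pre_rpms, post_rpms):
    """Percentage change in average RPM after the change went live."""
    return (mean(post_rpms) - mean(pre_rpms)) / mean(pre_rpms) * 100

pre  = [2.10, 2.15, 2.05, 2.20, 2.12]   # daily RPMs before the change
post = [2.30, 2.28, 2.35, 2.25, 2.33]   # daily RPMs after the change
print(f"RPM lift: {pre_post_lift(pre, post):+.1f}%")
```

In practice you would also plot both windows to confirm the lift isn’t just a continuation of a pre-existing trend (seasonality, a traffic spike, and so on).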
When it comes to testing your Ad Yield Optimization (AYO), there are a few factors that determine how long your tests should be, and what the volume of traffic should be running through your test conditions. In general, the length of your test will depend on the volume of traffic you have running through your ads.
When you’re testing AYO, you want to make sure you have enough data points to compare the performance of your control group to your test group and have enough data to build a strong case for making changes. You also want to make sure you don’t run your tests for too long, as that could cause you to miss out on opportunities.
A high ad refresh rate generates data faster than a low one. For example, if ad slots refresh every 5 minutes, you will reach your target sample size in less time than if they refresh every hour.
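The effect of refresh rate on test length can be sketched with a back-of-the-envelope estimate. All inputs here (sessions per day, session length, target impression count) are assumptions you would replace with your own numbers:

```python
import math

def days_to_sample(target_impressions: int, sessions_per_day: int,
                   avg_session_minutes: float, refresh_minutes: float) -> int:
    """Rough estimate of days needed to collect target_impressions,
    given how often an ad slot refreshes within a session."""
    # One initial impression plus one per completed refresh interval.
    impressions_per_session = 1 + avg_session_minutes // refresh_minutes
    per_day = sessions_per_day * impressions_per_session
    return math.ceil(target_impressions / per_day)

# Assumed: 50k sessions/day, 10-minute sessions, 100k impressions needed.
print(days_to_sample(100_000, 50_000, 10, 5))   # refresh every 5 minutes
print(days_to_sample(100_000, 50_000, 10, 60))  # refresh every hour
```

With these assumed numbers, the 5-minute refresh reaches the sample target noticeably sooner than the hourly one, which is exactly why refresh rate should factor into how long you schedule a test.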
It’s also smart to set up alerts that inform you when there are significant changes in traffic volumes between your two conditions—this will help ensure that the traffic volume is large enough to get accurate results. Finally, always keep an eye on your high-performing ads—you never know when something might change in terms of what your users are interested in.
You can run multiple A/B tests simultaneously if your system is built to do so. In order to provide statistically significant results, you’ll need a high volume of traffic depending on the test’s conditions.
If you run more tests concurrently, less traffic will run through each one, and reaching statistical significance will take longer. Starter to mid-level publishers are usually not set up to run simultaneous yield optimization tests, so it’s best to start with one test at a time.
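The dilution effect is easy to quantify: every concurrent test divides the traffic pool, and every extra arm divides it again. The figures below are illustrative assumptions, not benchmarks:

```python
def days_to_significance(daily_sessions: int, sessions_needed_per_arm: int,
                         concurrent_tests: int, arms_per_test: int = 2) -> float:
    """Each concurrent test divides the traffic pool, so significance takes longer."""
    sessions_per_test = daily_sessions / concurrent_tests
    sessions_per_arm_per_day = sessions_per_test / arms_per_test
    return sessions_needed_per_arm / sessions_per_arm_per_day

# Assumed: 40k daily sessions, 60k sessions needed per arm.
print(days_to_significance(40_000, 60_000, 1))  # one test at a time
print(days_to_significance(40_000, 60_000, 3))  # three concurrent tests
```

Under these assumptions, running three tests at once triples the time each one needs to reach significance—which is the practical argument for starting with a single test.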
The purpose is to see if there are better combinations that will bring in more revenue than what you’re currently running.
To run your tests successfully, you need to set clear expectations for what you’re trying to accomplish with a test and then make sure that you meet or exceed those expectations. You should also have a clear idea of what an ideal outcome is for whatever metric is being tested. Always have a specific goal in mind before running any test so that you know when to call it quits. When things go well, stick with your tests until they clearly prove themselves successful or unsuccessful, with those outcomes clearly defined in advance.
When the time comes to review results and decide whether to continue or end a test, look beyond just the final numbers.
The best way to make sure you get meaningful results is to consistently monitor your revenue metrics like RPMs, CPM, etc as the test is running.
As a result, you can catch tests that negatively impact revenue right away, or alert you if something isn’t working correctly.
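A simple mid-test guardrail is to flag the test arm whenever its RPM drops too far below control. The threshold and readings below are illustrative assumptions, not a recommended policy:

```python
def check_rpm_alert(control_rpm: float, test_rpm: float,
                    drop_threshold: float = 0.10) -> bool:
    """Return True if the test arm's RPM is more than drop_threshold below control."""
    if control_rpm <= 0:
        return False  # no baseline to compare against
    drop = (control_rpm - test_rpm) / control_rpm
    return drop > drop_threshold

# Hypothetical mid-test readings:
print(check_rpm_alert(2.40, 2.30))  # ~4% down: within tolerance, keep running
print(check_rpm_alert(2.40, 1.95))  # ~19% down: alert, investigate
```

Wiring a check like this into a daily reporting job lets you stop a revenue-damaging test early instead of discovering the loss at the end of the test window.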
Once your test has run its course and reached statistically significant traffic levels, you can review the results.
Once your tests have run, group the results into two categories: clearly successful and clearly unsuccessful.
Despite running tests for a specific period of time, your overall ad yield optimization strategy is always evolving. This is why optimization is crucial!
Achieving the right balance in ad yield optimization is something that everyone struggles with. The constant back and forth between KPIs—it can be a headache.
The success of your ad yield optimization program won’t be determined by how well it works in the beginning. It will be determined by how long it continues to work as you make changes to your setup and traffic sources.
You’ll also be able to prioritize the most valuable optimizations for the highest priority KPIs, so you can maximize the impact of each change you make.
This set of best practices will give you a solid foundation for managing your testing program. Still confused about running these tests? Don’t worry!
MonetizeMore’s 250+ AdOps experts will be there for you to plan test ideas, run them, and determine the winners that will help you maximize your ad revenue in a sustainable manner. We’ve always stayed on top of split testing!
Our Yield Ops team uses historical performance data, AI, and machine learning to predict trends and optimize your revenue faster than any other AdTech company.
In other words, if you’re looking for a high-performing Google-certified publisher partner for your next campaign, look no further.
Here’s the course that 300+ pubs used to scale their ad revenue.