Affiliate Marketing Case Study: Where you learn how never to lose
Nelson Mandela is credited with the phrase "I never lose. Either I win or I learn." This phrase should be the motto of a good media buyer and in this case study, we will put it into practice.
Most case studies focus on campaigns that work, with a high ROI. Yet on specialized media buying forums, most of the questions revolve around campaigns that don't work or never get off the ground, and they usually concern the data itself: do I have enough data to make a decision? How can I be sure I'm making the right choice? And so on.
In this affiliate marketing case study, we will see how a total absence of conversions can still be used for learning, how to set up simple budget rules so your tests don't ruin you, and how to organize those tests so you never waste your time.
A little bit of theory: the principle of rare occurrences
When we talk about conversion rate, we tend to think of it this way: a 4% conversion rate means that out of every 100 clicks I should get 4 conversions, one every 25 clicks. The reality is quite different. You may get 2 conversions on the first 2 clicks, then nothing for 78 clicks, then 2 more conversions in the last 20 clicks.
Conversions are rare occurrences: they do not arrive at the regular intervals an average would suggest. Only over a large number of events (clicks) does the observed rate converge towards the true rate.
You would therefore need a lot of volume in each campaign to trust its results, which potentially means spending a lot of money before you get an answer. We will see how to minimize these costs and still get reliable answers.
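A quick simulation makes this concrete. The Python below is purely illustrative (it is not part of the campaign setup): with a true 4% conversion rate, individual runs of 100 clicks are lumpy, but over a large number of clicks the observed rate converges to the average.

```python
import random

def simulate_clicks(n_clicks: int, cvr: float, seed: int) -> list[int]:
    """Return the click indices (1-based) at which a conversion occurred,
    assuming each click converts independently with probability `cvr`."""
    rng = random.Random(seed)
    return [i + 1 for i in range(n_clicks) if rng.random() < cvr]

# With a true 4% rate, 100 clicks "should" give 4 conversions,
# but individual runs cluster and leave long gaps.
for seed in range(3):
    conversions = simulate_clicks(100, 0.04, seed)
    print(f"run {seed}: {len(conversions)} conversions at clicks {conversions}")

# Over a very large number of clicks, the observed rate converges to 4%.
big = simulate_clicks(1_000_000, 0.04, seed=42)
print(f"observed rate over 1M clicks: {len(big) / 1_000_000:.4f}")
```

This is exactly why a handful of clicks with zero conversions tells you very little on its own.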
Design your campaign as an experiment
To be successful, you must first build your campaign in a very rigorous way and, above all, give it a goal.
Of course, the goal of any campaign is to earn money, but if doing just anything were enough to make money, everyone would be a media buyer (and rich). Usually, the creation of a campaign is triggered by a founding event: we have an intuition that a new vertical can work well, we see competitors launching campaigns on a particular source, or a particular offer emerges. Each of these elements should lead to a campaign designed so that, with or without positive results, we can learn what works and what doesn't.
For an experiment to work, it must first be based on a question. Let's take a concrete example: in today's campaign we tested a vertical we were not used to working with, sweepstakes.
So the question is simple: can I run sweepstakes on propellerads without any prior knowledge? And more importantly, how am I going to build the knowledge needed to make this campaign work without ruining myself?
The setup requires choices that let me answer the question I set myself above.
First of all, a worldwide campaign seems complicated to me: if I target all geos, I will have to spend even more to get reliable answers, so I choose to restrict myself. My first choice will therefore be to focus on a single geo. For that, I can't just flip a coin; I have to base the choice on a logical approach.
So I turn to my favorite affiliate marketing network, lemonads, and make a first selection of Affiliate Programs:
I sort by vertical and realize that two countries are particularly well represented: the United States and Italy. A strong presence of Affiliate Programs indicates that there is a market, and therefore that advertisers did not launch there at random.
With this information, I now try to evaluate the difficulty of these geos, to see whether I can "easily" carve out a place in the sun. To do this, I turn to the traffic source (in this case propellerads; we'll see why later) and compare what it tells me about both countries in terms of volume and cost:
Here I see a large number of daily impressions for both geos on my shortlist. I also note that the optimal CPM is very different from one geo to the other: 3.293 for the United States versus only 0.662 for Italy. That is a ratio of almost 1 to 5, even though US volume is almost 6 times greater than Italian volume.
What does this tell us? There are two possible hypotheses: the first is that the United States is much more profitable, the second is that competition is much stronger there. A third and more likely hypothesis is that it is a mix of the two, and that on an unfamiliar vertical the US market will therefore be much harder to penetrate.
At the end of this quick analysis, I realize that Italy can give me sufficient volume at competitive bids for less money than the US. But the question behind my experiment is how to learn as much as possible with the reduced budget I have. For the moment, I am not trying to scale or maximize profitability.
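The geo comparison above is simple arithmetic; a couple of lines of Python (purely illustrative) reproduce it from the two CPM figures quoted:

```python
# Optimal CPMs quoted by the traffic source for the two shortlisted geos.
cpm_us = 3.293  # United States, cost per 1000 impressions
cpm_it = 0.662  # Italy

cost_ratio = cpm_us / cpm_it
print(f"US traffic costs {cost_ratio:.2f}x more per 1000 impressions")  # ~4.97, "almost 1 to 5"

# Put differently: a fixed test budget buys roughly 5x more impressions in Italy.
budget = 10.0  # any fixed amount, for illustration
print(f"impressions per {budget} spent: "
      f"US {budget / cpm_us * 1000:,.0f} vs IT {budget / cpm_it * 1000:,.0f}")
```

Since the goal of this first campaign is data per dollar rather than absolute volume, the cheaper geo wins.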
The source of traffic
The traffic source was chosen based on several criteria.
First, geographical coverage: we wanted a worldwide source so we would have traffic whatever geo we chose.
Second, the source had to be known for the quality of its traffic, but also for its affordable rates.
Finally, we wanted formats that showcase the advertisers' own marketing, so we could measure its intrinsic performance.
The Propellerads popunder quickly became the obvious choice, as it met all our criteria.
Choice of offers
The choice of offers was made in collaboration with my affiliate manager. I asked him for the offers with the highest eCPM in the country I was targeting. This is often a good starting point. My other constraint was to cover a wide range of products within sweepstakes: I wanted the offers to speak to different categories of the population. (This point is discussed in more detail in a later section.)
Campaign configuration (tracker side)
The configuration of the campaign also had to give every offer an equal chance, without risking showing the same offer twice to the same visitor.
To solve this, here is how the campaign looks in Voluum:
As you can see, there is only one URL for this campaign, and the offers compete directly with each other within it. Combined with a frequency cap of 1 per 24 hours, this ensures that each prospective customer sees a unique presentation of the offers.
The relative weights of the offers are then managed directly by Voluum's algorithm. This detail is also a way to minimize losses: by letting the self-optimization algorithm work 24 hours a day, we get constant monitoring of the campaign, with decisions taken as soon as they are mathematically justified. This is particularly important and helps keep our losses down.
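Voluum's auto-optimization algorithm is proprietary, but the general idea of shifting weight towards better-performing offers can be sketched with Thompson sampling, a standard technique for this kind of problem. Everything below (offer names, click and conversion counts) is invented for illustration:

```python
import random

def thompson_pick(stats: dict[str, tuple[int, int]], rng: random.Random) -> str:
    """Pick an offer by Thompson sampling: draw from each offer's
    Beta(conversions + 1, clicks - conversions + 1) posterior and keep
    the highest draw. `stats` maps offer name -> (clicks, conversions)."""
    best_offer, best_draw = "", -1.0
    for offer, (clicks, convs) in stats.items():
        draw = rng.betavariate(convs + 1, clicks - convs + 1)
        if draw > best_draw:
            best_offer, best_draw = offer, draw
    return best_offer

rng = random.Random(7)
stats = {"offer_a": (500, 3), "offer_b": (500, 1), "offer_c": (500, 0)}
picks = [thompson_pick(stats, rng) for _ in range(1000)]
# The best-converting offer gets most of the traffic, but the others
# still receive some exploration in case the early data misled us.
print({offer: picks.count(offer) for offer in stats})
```

The key property, and the reason such algorithms minimize losses, is that weak offers are throttled progressively rather than waiting for a human to check the dashboard.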
Campaign configuration (source side)
On the source side, we also had to find a configuration that met our needs. The propellerads panel, and more precisely the options offered when creating a campaign, gave us plenty of possibilities. Let's see how to make the most of them:
This simple screen already gives us a good idea of the first steps.
Of course, in our case, geographic targeting is mandatory, and we have already explained the reasons for a frequency cap of 1 per 24 hours.
At this stage, we can already mention another important detail: implementing adequate tracking so that we can later use the data to make decisions. We therefore chose to track costs, campaign IDs, and unique IDs, in order to build a targeting list later on.
The second set of decisions was to choose the traffic that would be most likely to provide an answer:
So we selected only users with high activity, and only propellerads' direct traffic.
Another important point was the selection of devices:
In this test phase, we mainly wanted to keep mainstream devices, to avoid any incompatibility with the offer. Its page is responsive, but we don't know how it will behave on a BlackBerry or a PlayStation.
So we sketched a snapshot of the average customer and made sure our traffic selection matched that typical profile. With tracking in place, we can also determine which factors drive performance and which do not.
Finally, we decided to build marketing capital by creating a retargeting list of the people who convert in this first campaign. By keeping such a list, I will be able to use it to launch similar campaigns on other products.
Definition of the budget
To make our campaign ready to launch, we just had to decide on the budget to allocate to this first test.
This decision, like all the others, is taken with a goal in mind: produce data representative enough to make decisions while minimizing costs.
We have two variables to play with: the overall budget and the unit price. The upper bound of the budget is generally set by the rule of the multiple of the max CPA.
Indeed, we consider that if a multiple of the max CPA has been spent without a single conversion, then the campaign will not be able to reach its target CPA even after optimization.
This is based on the concept of distribution:
These bell curves show the distribution (in %) of the conversion rate: in blue when the observed rate is 1% and in yellow when it is 2% (over 500 displays). What does this mean in plain terms? When we observe a 1% conversion rate over 500 displays, there is a strong probability that the real rate lies between 0% and 2% (the "high" parts of the bell), and the closer a value is to 1%, the more probable it is.
You may say that's all well and good, but how do I turn this into actionable data? Simply by understanding the relationship between the distribution and the number of displays. If no conversion occurs over a large number of displays, then a conversion has little probability of appearing over an even larger number. This simply and empirically validates the rule of multiples.
One last point remains to be understood. There is a direct relationship between the conversion rate and the number of displays needed to validate it: the higher the conversion rate, the fewer displays are required. This is why most test campaigns are based on high-conversion funnel events (i.e., events at the beginning of the funnel). It is preferable to test an offer on CPL rather than CPA: even if the unit payouts are lower, the number of conversions is much higher.
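The reasoning above can be made concrete with two small helper functions. This is an illustrative sketch assuming independent impressions, not a feature of any particular tracker:

```python
import math

def p_zero_conversions(cvr: float, displays: int) -> float:
    """Probability of seeing zero conversions in `displays` independent
    impressions when the true conversion rate is `cvr`."""
    return (1 - cvr) ** displays

def displays_to_rule_out(cvr: float, confidence: float = 0.95) -> int:
    """Displays needed so that, if the true rate were at least `cvr`,
    zero conversions would occur with probability at most 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - cvr))

# At an assumed 0.10% conversion rate (1 per 1000 displays):
print(p_zero_conversions(0.001, 1000))   # ~0.37: one multiple proves little
print(p_zero_conversions(0.001, 3000))   # ~0.05: three multiples are telling
print(displays_to_rule_out(0.001))       # ~2995 displays for 95% confidence

# A 10x higher conversion rate needs roughly 10x fewer displays,
# which is why testing on CPL beats testing on CPA.
print(displays_to_rule_out(0.01))        # ~299 displays
```

This also shows why spending exactly one max CPA with no conversion is weak evidence: there is still about a 37% chance of that outcome even when the offer converts at the assumed rate.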
Coming back to my campaign, I believe, based on my experience, that it has the following points, positive or negative, impacting conversion:
- A popunder source, which implies low targeting and therefore uncertain user interest
- Only the most active users targeted
- Presence of auctions on the market
- Presence of offers on the market
- Direct display of already optimized advertiser marketing
So I estimate that I will get a conversion rate of around 0.10%, i.e. 1 conversion per 1000 displays. Under this assumption, I have no cost problem as long as 1000 displays cost me less than one conversion pays. My payout is 1.40 euros, so my maximum bid is 1.40 per 1000 displays.
Based on this last graph, a bid of 1.096 dollars would get me at least 50% of the traffic (and conversions) per day. This bid is well below my maximum, so I have found my CPM. I still have to define the overall budget of the campaign.
With what we have already defined, I take only a very moderate risk by setting a budget of 25 times my max CPA. I will therefore have a budget of 35 euros.
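Under the assumptions stated in the text (payout of 1.40, assumed conversion rate of 0.10%), the bid and budget arithmetic looks like this:

```python
# Assumptions taken from the text, not universal constants.
payout = 1.40        # revenue per conversion, in euros
assumed_cvr = 0.001  # 1 conversion per 1000 displays

# Break-even CPM: the most we can pay per 1000 displays and still break even.
max_cpm = payout * assumed_cvr * 1000
print(f"max CPM: {max_cpm:.2f}")  # 1.40 -> the chosen bid of 1.096 sits safely below

# Test budget as a multiple of the max CPA (here the payout):
# 25 x 1.40 gives the 35-euro budget used in the campaign.
budget = 25 * payout
print(f"budget: {budget:.2f}")
```

If the assumed conversion rate turned out higher, the break-even CPM would rise proportionally, leaving even more headroom above the chosen bid.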
I could still shrink this budget, but I want to be sure to collect a large amount of data, and the acquisition cost is already low.
The launch is therefore carried out with this configuration, but provides disappointing results:
In the absence of conversions, we must understand what is happening, so we apply our checklist:
1) Is the geographical targeting respected?
- Yes, we saw it in the previous screenshot.
2) Are there placements that absorb all the volume without performing?
- Studying the following screenshot, we see that, even if some placements deliver more than others, the sampling is quite correct:
3) Is the distribution by offer correct?
- We can see that the distribution is also very homogeneous:
4) Are the visits due to bots?
- Across all the screenshots, nothing shows a high rate of suspicious visits.
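For readers who like to automate, the four checklist questions can be sketched as a small diagnostic function. The field names and thresholds below are invented for illustration; they are not a real tracker API:

```python
def diagnose_no_conversions(stats: dict) -> list[str]:
    """Run the four checklist questions against aggregated campaign
    stats and return the list of checks that fail (empty = all pass)."""
    issues = []
    # 1) Is the geographical targeting respected?
    if stats["target_geo_share"] < 0.95:
        issues.append("traffic leaks outside the target geo")
    # 2) Does one placement absorb all the volume without performing?
    if stats["top_placement_share"] > 0.50:
        issues.append("a single placement absorbs most of the volume")
    # 3) Is the distribution by offer correct?
    shares = stats["offer_shares"]
    if max(shares) - min(shares) > 0.20:
        issues.append("offer rotation is uneven")
    # 4) Are the visits due to bots?
    if stats["suspicious_visit_rate"] > 0.10:
        issues.append("suspicious visit rate is high")
    return issues

stats = {
    "target_geo_share": 0.99,
    "top_placement_share": 0.35,
    "offer_shares": [0.26, 0.25, 0.25, 0.24],
    "suspicious_visit_rate": 0.02,
}
print(diagnose_no_conversions(stats))  # [] -> checklist passes, so rethink the format
```

When every check passes, as in this campaign, the delivery mechanics are sound and the problem lies in the format or the offer itself.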
Once the checklist has been applied, we have to face the obvious: in the current format, the popunder, the offers are not adapted. We must therefore find another way to measure the willingness of this source's customers to participate in sweepstakes. So we put forward a new hypothesis: if we changed the format, could we create a higher level of engagement?
Launch of V2
On this point, we will not review the configuration in detail but simply explain the choices that have differed and why.
First of all, we had to measure prospects' interest in sweepstakes-type Affiliate Programs, so we changed the format and moved to push notifications. Why this choice? The answer is simple: a push notification requires a click from the user, and therefore indicates, even in the absence of conversions, whether user interest is present.
So we produced several creatives, allowing us to test interest not only in sweepstakes in general but in each product individually. This time we created 3 campaigns, each promoting a single product. The principle is to measure the CTR on each product.
We also had to change geography. The reason was simple: we had chosen Italian products because their landing pages were relatively generic. Since we now wanted to create real continuity, we needed product-oriented LPs, so we turned to Finland, which offered LPs of this type.
Apart from the format (and thus the creation of adapted creatives) and the splitting of the campaign into 3, one for each product, the configuration is similar in every respect:
So we again have a budget of 30 dollars (10 for each campaign).
Interpretation of the results of V2
The v2 provides more interesting results this time:
The very first notable result is the CTR: it indicates strong user interest in sweepstakes-type programs. We have therefore validated that this type of offer can be launched successfully on this source.
The other element is that there is a hierarchy of CTRs, with an iPhone offer appealing to a wider audience, so we can steer offers towards the technology universe. Finally, the last notable point is that this time we unblocked the conversion counter, thanks to a more product-oriented funnel.
We hope this case study has shown you how, without prior knowledge and with a very limited budget (60 dollars), you can genuinely develop knowledge about campaigns and steer them towards profitability. As in our first case study, it is rigor and method that let us build reliable experiments that are not biased by unexpected data.
This method is contained in our 6-point loop:
- Creation of the experiment
- New hypothesis
- Back to step 2
Do not hesitate to tell us in a comment if you want more studies of this type.