Everybody wants results from digital marketing.
But what works for you, your business, and your customers is different from what works for everyone else.
You can follow formulas and systems, and they’ll work up to a point, but you’ll always need to test, review results, and optimize.
Sometimes you find something awesome and creative that drives huge results…
But new creative often doesn’t work.
It’s risky to try new things.
Testing and learning is expensive.
Not testing and not creating isn’t the answer either: you’ll never get noticed and you’ll never learn. But there’s a cost to learning.
And too much creativity and novelty is risky.
So how do you manage risk while testing and trying to achieve great results?
The more you spend on digital ads, the more you have to be aware of this.
As we attempt to expand while continuing to get good results, it becomes more and more important to manage risk around creative testing.
How Do You Maximize Profits With Ad Spend Allocations?
You have to make creative decisions and allocate your ad spend against creative in a way that balances the need for two things:
#1 Profitability (high results or low costs, whatever KPI we measure it by at the time): any goals we have for appointments, revenue, or cost per new customer must be achieved at the same time that we create and learn.
#2 Novelty: to push our KPIs forward, we need varying degrees of novelty in the creative. It’s the amount of novelty, the degree to which it diverges from what has been proven to work, that increases the risk, raises the cost of testing, and lowers profit while testing.
At times, we work with a simple system of allocating ad spend between:
A. BEST: proven ads (and when I say ads, we may also mean landing pages, depending on how traffic distribution is set up) that achieve our best KPI performance so far. We allocate as much spend as possible here to reach the overall KPI goal, while reserving a certain amount for the “TEST” group. In the beginning, when nothing is proven, everything is TEST. And the definition of BEST changes as the KPIs improve.
– vs. –
B. TEST: new ideas that are unproven. Many of them will not perform and will be discontinued. Some will end up in the BEST group.
As a rule of thumb, we can recommend a ratio of anywhere from 50:50 to 80:20 BEST:TEST.
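To make the rule of thumb concrete, here’s a minimal sketch in Python. The function name and the example budget are hypothetical illustrations, not figures from any specific account:

```python
def split_budget(total_budget, best_share=0.7):
    """Split ad spend between proven (BEST) and unproven (TEST) creative.

    Per the 50:50 to 80:20 rule of thumb, best_share normally falls
    between 0.5 and 0.8. Before anything is proven, use best_share=0.0,
    since everything starts in the TEST group.
    """
    if not 0.0 <= best_share <= 1.0:
        raise ValueError("best_share must be between 0 and 1")
    best = total_budget * best_share
    test = total_budget - best
    return {"BEST": best, "TEST": test}

# Example: a hypothetical $10,000 monthly budget at a 70:30 ratio
for group, spend in split_budget(10_000, best_share=0.7).items():
    print(f"{group}: ${spend:,.0f}")
```

As KPIs improve and more ads graduate into BEST, the same split keeps applying; only the contents of each group change.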
However, in more complicated situations like TPW, I recommend we look at more groups, as shown in this image…
#1 OLD Proven Creative
Proven profitable, or best performing creative so far
Keep in mind that the definition of proven is relative.
KPIs improve over time, and the definition of what’s best changes.
Best investment for ensuring KPI goals
#2 NEW Slight Variations on Proven Creative
e.g. changing one bit of text or one image in an ad or landing page
Lowest risk of poor performance for new creative
Highest assurance of profitability for new creative
#3 NEW Bigger Variations on Proven Creative
e.g. a big landing page layout change, or changing multiple things at one time
Medium risk of poor performance for new creative
Medium assurance of profitability for new creative
#4 NEW Totally New Creative
e.g. totally new ideas, themes, messages, formats and customer pathways unlike previous tests
Some amount of totally new creative is required, but because its performance is, on average, the lowest, it should receive the smallest allocation.
Highest risk of poor performance for new creative
Least assurance of reaching KPI goals
An example spend ratio could be…
The idea is to put spend in every group to allow for diversity but to allocate spend conservatively to reduce KPI performance risks.
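As a sketch of that principle across the four groups, here’s a small Python example. The percentage shares below are illustrative assumptions only, not the ratio from the image and not a recommendation for any specific account:

```python
# Hypothetical allocation across the four creative groups; the exact
# shares should come from your own KPI goals and risk tolerance.
ALLOCATION = {
    "old_proven": 0.60,          # #1 OLD proven creative
    "slight_variations": 0.20,   # #2 slight variations on proven creative
    "bigger_variations": 0.15,   # #3 bigger variations on proven creative
    "totally_new": 0.05,         # #4 totally new creative (smallest share)
}

def allocate(total_budget, allocation=ALLOCATION):
    """Distribute total ad spend across the four creative groups."""
    if abs(sum(allocation.values()) - 1.0) > 1e-9:
        raise ValueError("allocation shares must sum to 1")
    return {group: total_budget * share for group, share in allocation.items()}

# Example: a hypothetical $20,000 monthly budget
for group, spend in allocate(20_000).items():
    print(f"{group}: ${spend:,.0f}")
```

Note that every group gets some spend (for diversity) while the riskiest group gets the least, which is the conservative allocation described above.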
It’s critical to use this four-category system for new ad, landing page, and other tests as you go forward and increase ad spend.
If you don’t, you risk spending too much on the riskiest creative; you may learn a lot, but you won’t produce satisfying results at the same time.