I test constantly in an effort to boost my response rates. So it's no wonder that some people have asked me about multivariate testing. Specifically, they want to know what it is, how it works, and what "Taguchi" is.
First of all, Taguchi is just one form of multivariate testing. (I'm not a mathematician, so bear with me.) Dr. Genichi Taguchi's work originally related to the manufacturing industry, and it later proved beneficial in the car industry.
Not direct marketing. (At least, not at first.)
Dr. Taguchi's formula was only recently extrapolated to direct marketing, advertising, and now Internet marketing, and it has become the basis behind multivariate testing and its popularization as a whole.
More often than not, the name "Taguchi" is a buzzword bandied about, even by people whose multivariate tests do not use Taguchi's method specifically. The formula is rather complex, and I'll leave it to those more capable than me to explain it to you.
But in this article, let me explain the basics of multivariate testing in layman's terms, as best I can. I hope not to lose you along the way. Fingers crossed.
Here's what multivariate testing is…
Split-tests are normally based on two different versions of one ad. That's why they're often called “A/B Split Tests” (or “A/B Split Runs”). The object is to determine which ad — either version “A” or “B” of a salesletter, for example — pulls the greatest response.
Normally, marketers test one thing at a time: whether it's the headline, the price, the offer, the copy, the color, the guarantee, or whatever. Each of these is a “variable.”
In direct mail, for instance, marketers will send a small run of version “A” to a specific number of people, and version “B” (often, this is done simultaneously) to another group. Each version will test one variable at a time to determine which version pulls the best.
You then take the winning ad (i.e., the one with the highest conversion of leads to sales), which becomes what is called the “control,” and use it to mail to the rest of the list.
Depending on how big your list is, you can do a variety of these split-tests before you determine the control and launch your winning ad. (As you can see, that can be limiting.)
Online, it's a little more effective, because you can have a program that randomly pulls up either one of these two versions for each visitor, and have it calculate the results after reaching a specific sample size (i.e., a certain number of visitors and/or sales).
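To make that concrete, here's a minimal Python sketch (all names and conversion numbers are made up for illustration) of how such a program might randomly serve one of two versions to each visitor and tally conversions:

```python
import random

# Tally of visitors and sales for each version (hypothetical counters).
stats = {"A": {"visitors": 0, "sales": 0},
         "B": {"visitors": 0, "sales": 0}}

def assign_version():
    """Randomly serve version A or B to each new visitor."""
    version = random.choice(["A", "B"])
    stats[version]["visitors"] += 1
    return version

def record_sale(version):
    """Call this when a visitor who saw `version` buys."""
    stats[version]["sales"] += 1

def conversion_rate(version):
    v = stats[version]
    return v["sales"] / v["visitors"] if v["visitors"] else 0.0

# Simulate a sample: suppose A truly converts at 3%, B at 2%.
true_rates = {"A": 0.03, "B": 0.02}
random.seed(42)
for _ in range(10_000):
    version = assign_version()
    if random.random() < true_rates[version]:
        record_sale(version)

winner = max(stats, key=conversion_rate)
print(winner, round(conversion_rate("A"), 4), round(conversion_rate("B"), 4))
```

In a real test you'd also want a large enough sample, and ideally a statistical significance check, before declaring a winner.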
But here's the dilemma — or the limitation, in other words.
Split-tests are linear in fashion.
With your ad, you can test two variations of one particular variable to determine the winning version. Once you're done, you take the winning version, choose a new variable, and run a subsequent test. You then move on to the next test. Rinse and repeat.
The object is to constantly beat your control.
Objectively, you can only test one variable at a time, so that you can precisely determine what caused a boost in conversion. If you tested multiple variables simultaneously, you would be at a loss as to which variable actually caused the jump.
With a long salesletter, you have hundreds of possible variables, too. You can test headlines, lead copy, deck copy, colors, pictures, positions, layouts, prices, premiums, offers, order forms, order buttons, captions, bullets, guarantees, testimonials, ad nauseam.
But that's not all. You also have multiple variations of each variable — such as different versions of a headline. (The variations of one variable are called a "factor.") So in these cases, you can run a 3-, 5-, or even 10-way split test (e.g., 10 different headlines).
Call it an “A/B/C/D/E, etc” split-test, in other words.
But that's not the problem. In traditional split-tests, you typically test one factor at a time (different versions of one variable). Each test is done individually. And sequentially.
With sequential split-tests, however, the problem is that it's possible a variable in a subsequent test could have produced a higher response with the loser of a previous test.
Sounds crazy, I know. But let me give you an example to illustrate.
For instance, you test two different versions of a headline. After a number of sales, you determine that headline “A” outpulled headline “B.” So naturally, you conclude that the copy with headline “A” has become your control, and you move on to the next test.
In a subsequent split-test, you decide on testing two different prices. After a while, your test has determined that price “B” is the winner and converts more sales. This means that your new control is now the copy made up of headline “A” and price “B.”
Following me so far?
A question remains: what would happen if you tested prices with the loser from the previous split-test — that is, testing the two different prices with headline "B"? Would price "B" still be the winner? The response could have been greater. Or maybe not.
But the problem is, you don't know.
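To illustrate, here's a small Python sketch with invented conversion rates, where the headline and the price interact. Sequential testing settles on one combination, while the true best combination actually uses the losing headline:

```python
# Hypothetical true conversion rates for each headline/price combination.
# Note the interaction: price B only shines when paired with headline B.
true_rates = {
    ("A", "price A"): 0.020,
    ("A", "price B"): 0.022,   # sequential testing lands here...
    ("B", "price A"): 0.018,
    ("B", "price B"): 0.030,   # ...but this combination is the real winner.
}

# Step 1: test headlines at the original price. A beats B, so A "wins."
headline_winner = max(["A", "B"], key=lambda h: true_rates[(h, "price A")])

# Step 2: test prices with the winning headline only.
price_winner = max(["price A", "price B"],
                   key=lambda p: true_rates[(headline_winner, p)])

sequential_pick = (headline_winner, price_winner)
true_best = max(true_rates, key=true_rates.get)

print(sequential_pick)  # ('A', 'price B')
print(true_best)        # ('B', 'price B')
```

The sequential winner converts at 2.2% here, while the combination sequential testing never tried converts at 3%. That's the interaction effect the text is describing.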
Sure, chances are remote that the losing headline would affect the price test. And sure, you can go back and test prices with the losing headline if you want to be sure.
But the problem is when tests become more complex. For example, what happens if you want to run multiple split-tests with other variables, like guarantees or bonuses? What happens if you want to test several variations of each variable in each test?
If you want to run more split-tests on other variables, with up to 10 or more variations per variable (like, say, eight different headlines, five different prices, four different guarantees, nine different offers, 12 different product pictures, etc.)…
… Then you can see how limiting such tests can be if you do it in a sequential fashion.
For one, running the tests one after another would take forever — let alone going back and testing subsequent winners against the losers from previous tests. And second, the potential permutations become vast and considerably more complicated.
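To put numbers on that, take the hypothetical factor counts from the example above. The quick arithmetic below (a Python sketch) shows how fast the combinations multiply:

```python
from math import prod

# The hypothetical factor counts from the example above:
# 8 headlines, 5 prices, 4 guarantees, 9 offers, 12 product pictures.
variations = [8, 5, 4, 9, 12]

# Every possible combination of those variations:
combinations = prod(variations)   # 8 * 5 * 4 * 9 * 12

# By contrast, sequential split-tests only ever try one variable at a
# time, so they touch far fewer versions -- and see no interactions.
sequential_versions = sum(variations)

print(combinations)          # 17280
print(sequential_versions)   # 38
```

Thirty-eight sequential versions versus 17,280 possible combinations: that gap is exactly why testing everything one variable at a time can't cover the full space.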
That's the flaw with traditional split-testing. It's linear, you can only test one variable at a time, it takes a heckuvalot of time, and it doesn't test different permutations — or, in other words, different “combinations” — between all the different variables and factors.
This is where multivariate testing — and software based on it — comes in. The goal is to discover the combination of factors that outpulls all the other possible combinations.
(I hope you're still following me.) 😉
Of course, it won't determine this by testing everything. After all, not a lot of people can drive that much traffic, anyway. But with sophisticated software nowadays, it can make fairly good predictions based on the science of probabilities.
The software can scientifically “guess” an optimal combination of variables…
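To give a rough flavor of the idea, here's a simplified Python sketch — not Taguchi's actual orthogonal-array method, and every factor name and rate below is invented. It tests only a fraction of the combinations, averages the results for each variation, and predicts the best combination from those averages:

```python
import itertools
import random

random.seed(7)

# Hypothetical factors: 3 headlines x 3 prices x 2 guarantees = 18 combos.
factors = {"headline": ["H1", "H2", "H3"],
           "price": ["P1", "P2", "P3"],
           "guarantee": ["G1", "G2"]}

# Invented "true" contribution of each variation to the conversion rate.
effects = {"H1": 0.010, "H2": 0.015, "H3": 0.008,
           "P1": 0.005, "P2": 0.009, "P3": 0.004,
           "G1": 0.002, "G2": 0.006}

def observed_rate(combo):
    # True rate plus a little random noise, as a stand-in for real traffic.
    return sum(effects[v] for v in combo) + random.gauss(0, 0.001)

# Test only 6 of the 18 possible combinations, chosen at random.
all_combos = list(itertools.product(*factors.values()))
sampled = random.sample(all_combos, 6)
results = {combo: observed_rate(combo) for combo in sampled}

# Estimate each variation's average performance across the sampled tests.
def average_for(variation):
    rates = [r for combo, r in results.items() if variation in combo]
    return sum(rates) / len(rates) if rates else float("-inf")

# Predict the best combination from the per-variation averages --
# including combinations that were never actually tested.
predicted = tuple(max(options, key=average_for)
                  for options in factors.values())
print(predicted)
```

The real methods are far more careful about which fraction of combinations to test (that's what Taguchi's orthogonal arrays are for), but the core move is the same: measure a subset, then predict the best full combination.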
This goes beyond a simple two-way split test. What marketers like myself want to know is what combination of different variables will have the strongest probability of success.
(That's what "multivariate" means. Of course, multivariate testing — and particularly Taguchi's method — is a little more scientific than this. There are many software packages in existence, most of them robust and pretty expensive since they're mostly geared toward large corporations.)
But one such software “for the little guy” is Google Website Optimizer. It's free, and you can get it with your Google Analytics account. Their help section is replete with videos on how to set your tests up, how to test, and how to interpret your results.
There are many tutorials on Google Website Optimizer, too.
As you know, I'm a fanatical tester. And I'm already running quite a few split-tests with it. I love the results using this new software and recommend it. I even share some of my impressions and reveal some of my results to my members at The Copy Doctor site.
Now, if you're not testing at all, please don't fret over all this stuff.
For now, your best bet is to at least start testing.