September 27, 2012 - FILED UNDER Agile Business
Agile Marketing Series: Test Everything, Test Anything
Show me the money.
“What about all this story stuff?” they’ll say in their super-serious business meetings. You know the ones—those affable bottom-liners in the blue shirts who don’t care nothin’ about what they call “fluff.”
“Character and plot? Harumph! We want tangible, results-optimized, market-driven, next-generation solutions…not marketing pablum!”
Funny how the serious prefer their own brand of silliness.
Okay, maybe it’s not that bad.
But the point remains. How can you go into the dark realms of linear thinking with agile storytelling under your arm and not get torn to shreds?
Here’s a fact: Unless they’ve done the homework, most people at your company will not be prepared to meet you halfway on your agile turf, so you have to meet them on theirs.
Luckily, you know numbers and letters. You know data can tell the tale the same as words. You (should!) know math and story are the same thing.
As a modern Renaissance woman (or man), you’re comfortable with both ways of expressing this world:
“Digerati like chimpanzee water skiers on Expedia more than Taylor Swift’s hair promo on CMT” and “We can reject the null hypothesis, as data show A had a .76 probability of higher response than B within the target market segment.”
If you use the following formula as a conceptual guide to testing two ideas against a hypothesis, you will automatically know how to use statistics more than the average water-skiing chimpanzee.
This z-test equation shows two samples of data, each with a mean (or “average”) value (x), a sample size (n), and a standard deviation (the “o” with the combover; it’s called “sigma,” right? Hmm… where the heck is that symbol on my keyboard?!):

z = ((x1 − x2) − d0) / √(σ1²/n1 + σ2²/n2)
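That formula drops straight into code. Here’s a minimal sketch in Python; the function name and every number below are invented purely for illustration:

```python
from math import sqrt

def two_sample_z(x1, x2, sigma1, sigma2, n1, n2, d0=0.0):
    """Two-sample z statistic: ((x1 - x2) - d0) / sqrt(sigma1^2/n1 + sigma2^2/n2)."""
    return ((x1 - x2) - d0) / sqrt(sigma1**2 / n1 + sigma2**2 / n2)

# Hypothetical conversion rates: Ad #1 averaged 5.2%, Ad #2 averaged 4.7%,
# each shown to 10,000 people, both with a standard deviation of 0.22.
z = two_sample_z(x1=0.052, x2=0.047, sigma1=0.22, sigma2=0.22, n1=10_000, n2=10_000)
print(round(z, 2))  # → 1.61
```

Identical samples give z = 0; the further the two averages drift apart (relative to the noise), the bigger z gets.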
Data is everywhere.
You wouldn’t be reading this blog post if you didn’t work (or want to work) in an atmosphere where data is crunched, bandied about, and put into stunning charts on a daily basis. I’m not going to sell you on using data. It’s self-evident.
However, as an agile marketer, you’re not a statistics expert. The good news is, you don’t have to be. There are plenty of software packages and people in your firm (or ready to be hired) who are numbers experts. They can tell you why you should carry the number-crunching out to the n-th degree.
I disagree with the n-th degree.
It’s generally not required, because what you want to test is whether you can say A worked better than B. What worked—not why it worked.
“Why” is a rabbit hole. When you argue about “why” in marketing, you find yourself in the most inane discussions about whether the buttons should be blue or red, or whether the call-to-action should be “Learn More” or “Find Out More.”
Frankly, it doesn’t matter.
Many people make a fat living telling you they already “know” what works, but they don’t. They are bluffing. Most of their “knowing” comes from the stale crumbs on the leftover plate of direct marketing from the 1980s. Have you seen an infomercial lately? Those ads “work”…on an audience of people who last used critical thinking skills in the 1980s. Here’s a favorite parody of mine: “We Got That B-Roll.”
And a strong call to action works, unless it doesn’t. Besides, what does “strong” mean? Forget these old-school, conventional marketing debates! They serve one main purpose: to fill a week.
Instead of debating, run a hypothesis test, see what works, and move on, wiser men and women. Test a CTA that reads “Free Purple Monkey Here.”
Test everything. Test anything.
Example: Chimpanzees vs. Taylor Swift
Let’s say we have two ads directing people to do something we can measure, and we put both ads on two different media outlets:
Ad #1: Chimpanzees water skiing
Ad #2: Taylor Swift combing her hair
Media Outlet #1: Expedia.com
Media Outlet #2: CMT
Assume we chose images, headlines, and media outlets based on the stories we came up with for our target audience using different characters, plots and perspectives. Now, it’s time for the horserace!
We want to test which of these two ads works best.
If we wanted to go crazy, we could do fifty versions of each ad with multivariate this-and-that and perform regression analyses to isolate variables and—no. That’s the rabbit hole again.
In the rabbit hole, we argue about who knows more about statistics with someone who definitely knows more than we do, and then everyone gets confused and puffed up and we wind up in what David Lynch calls the Suffocating Rubber Clown Suit of Negativity.
Instead, let’s stay out of the computational world and in the mathematical world where we can thrive.
A hypothesis, by definition, cannot be proven. It can only be disproven.
So when we want to test Ad #1 against Ad #2, we do NOT say, “Hypothesis: Ad #1 will be better at getting sales than Ad #2.”
Instead, we say, “Hypothesis: Ad #1 will be no better at getting sales than Ad #2,” and then test to see if that’s FALSE.
If the hypothesis proves FALSE, then that means there is a difference between the two ads. How big a difference? That’s what the statistic z in the equation tells us:
z tells us how FALSE our hypothesis is.
Maybe the difference between the two ads is as random as a coin flip: z = 0, no measured difference, and a 50/50 chance either ad comes out ahead. Or maybe the difference is great, say z = 3.09 (the spot where the z-table area reads .499), meaning Ad #1 was 99.9% likely to get more sales.
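If you don’t have a z-table handy, the lookup is a one-liner in Python’s standard library. A sketch (the helper name is mine, not a standard API):

```python
from statistics import NormalDist

def confidence_ad1_beats_ad2(z):
    """Area under the standard normal curve up to z: the one-tailed
    confidence that Ad #1 really did get more sales than Ad #2."""
    return NormalDist().cdf(z)

print(round(confidence_ad1_beats_ad2(0.0), 3))   # coin flip: 0.5
print(round(confidence_ad1_beats_ad2(3.09), 3))  # 0.999
```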
(x1 − x2) is the difference between the two averages.
In the Chimps vs. Taylor Swift case, we would find the average of the sales conversion rates on Expedia and CMT for our Chimpanzee ad (x1) and subtract the average of the conversion rates for the Taylor Swift ad (x2).
n1 and n2 are the sample sizes.
How many people total saw the Chimpanzee ad (n1) and the Taylor Swift ad (n2).
σ1 and σ2 (the o’s with the combovers) are the standard deviations.
These calculations indicate how “spread out” the data is. If people on Expedia and CMT converted at about the same rate on Chimpanzees, the standard deviation would be a smaller number than if tons of people on Expedia converted versus only a few on CMT.
And you can forget d0.
In the hypothesis, “Ad #1 will be no better at getting sales than Ad #2,” the words “no better than” mean d0 is equal to zero.
On the other hand, if I wanted to bet the Chimpanzee ad will be twice as good at getting sales as the Taylor Swift ad, then d0 would have a nonzero value, but I’m too tired to figure out what it would be. Bring in a serious stats guy for that one!
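Putting all the pieces together for the Chimps vs. Taylor Swift horserace, here’s a sketch in Python. Every number below is made up purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results of the test
x1, n1, sigma1 = 0.052, 10_000, 0.22  # Chimpanzee ad: mean conversion rate, viewers, std dev
x2, n2, sigma2 = 0.047, 10_000, 0.22  # Taylor Swift ad
d0 = 0.0                              # the "no better than" hypothesis

z = ((x1 - x2) - d0) / sqrt(sigma1**2 / n1 + sigma2**2 / n2)
confidence = NormalDist().cdf(z)  # one-tailed confidence the chimps really won

print(f"z = {z:.2f}, confidence the chimps won: {confidence:.1%}")
```

With these invented numbers, z lands around 1.6, roughly 95% confidence: interesting, but not yet enough to crown a winner, so you’d keep the test running.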
Now you got game.
The point of all this is not to become a statistics expert, but rather to know enough to create a culture at your company that enjoys the payoff of mathematics without getting lost in the maze of computation.
It’s not a matter of figuring out who’s right. Remember? A hypothesis can only be proven false, not true. Besides, it’s not one big Truth we can find, but rather a bunch of little truths that come and go. Make a game of it!
Imagine how fun it would be to get together with your team one Friday per month to hand out prizes for who was right in the ongoing, raging debate of Water Skiing Chimpanzees vs. Taylor Swift’s Hair.
Or maybe one coupon gets redeemed more than another, and the losing side takes the winning side out for lunch. Or if blue buttons beat red buttons, those who bet on blue get to go shoe shopping.
Do whatever you like, as long as you rid your team of statements like, “I think flashing orange buttons get the highest response.” Please. That’s nonsense.
The beauty of agile marketing is the coexistence of creativity and analysis.
After all, storytelling is just fluff without something to back it up. And testing is boring as hell without something to bring the data to life. Neither is compelling or credible without the other, as the world requires flashes of insight as well as rational analysis.
But, alas, not everyone will welcome or thrive in an atmosphere where the lion lies down with the lamb. It seems to be not a matter of intelligence, but of how we give and receive feedback.
More about that next time.