“Show me some examples of what is right and wrong in CRO…”; that (paraphrased) sentence came as one of the comments on my last post on whether CRO should be reframed as FoRM and it got me thinking.
Sounds like a simple enough request, but the more I thought about it, the harder it got to actually answer. So I’ll take you through the steps I went through in thinking about how to respond to my commenter:
My very first thought was:
Well of course there are some websites that are obviously terrible and there are some that are undeniably awesome…
But unfortunately this isn’t as straightforward as it first appears, because I am making those judgements based on a phrase I detest: “best practice”. At a basic level, if someone’s sole justification for implementing a change or a test is “it’s best practice”, it means that they haven’t done the work required to actually justify the change, but more on that later. Sticking with “best practice” for now, let’s use an example to demonstrate this:
Now there isn’t a single person in the CRO industry (or indeed in any other industry!) that is likely to say that Ling Cars follows “best practice” for a website. But that doesn’t make the website bad. And frankly, Ling makes a lot of money through her website, so actually, her website is very good. Because…
“Goals Define Success”
…Ling Cars has a single goal: leasing cars to as many people as possible. And based on the company’s financials, that suggests it’s pretty bloody good. So that got me onto the track of “goals define success” in my search to answer “What is right and wrong in CRO?”.
If you want to guarantee that your optimisation programme will be “wrong”, a surefire way to do so is to not set any goals for it; as the old adage goes:
If you can’t define success, you’ll never achieve it
And the most “right” CRO goals are ones that go beyond what the tests themselves can achieve. The goal of a test might be to increase booking conversion, or to lift average order value among purchasers from the UAE, or even to reduce the number of live chat conversations started by new visitors. But each of these goals should be questioned more deeply to get a grasp on the business drivers behind them…
“Looking Beyond The Test”
…and companies/consultancies/individuals that do look past the obvious are the ones that are more likely to get CRO “right”.
Let’s take one of the examples from above and play it out:
Person A: Our goal is to reduce the number of live chat conversations started by new visitors
Person B: Why is that a key goal for your optimisation programme? [Always good to ask questions]
A: Live chat agents are expensive, and they are costing us a lot of money [Reasonable answer, goal doesn’t sound illogical]
B: But don’t new visitors who talk to live chat agents convert at 4x the rate of those who don’t chat…? [Oooo, data…]
A: Ah yes, you’re quite correct; scratch that test idea, let’s move on to something else… [Checkmate]
Now this example sums up my first few points quite well; those getting CRO “right” aren’t making decisions based on “best practice”, they are setting goals, but they are also evaluating the business drivers behind those goals before they ever actually start testing anything.
Is There a Summation Coming…?
Yes there is. Sort of.
Most of what is seen as CRO “right”-ness is the “we tested this one simple change and made millions of £/$/€ etc.” But in reality, it is those that are asking the right questions of their own business goals that are really setting themselves up for success. And a lot of that comes from having a company culture of wanting to test and learn and to genuinely challenge the assumptions that their business models are based on.
So I completely understand why my commenter was asking for “tangible” examples of what is right and wrong in CRO, but in all of my experience, I cannot honestly provide them. CRO “right”-ness stems from a business, or a driven individual, that has the right mindset before they ever start doodling mock-ups of new basket pages.