It’s been ten years since Google popularized the A/B/n test, and in each subsequent year, analysts have predicted that this would be the year that personalization dominates marketing strategy.
Consensus is that personalization is effective. Harvard Business Review wrote that having an experimentation capability can lead to a competitive advantage. Still, CXL reports that personalization was missing from nearly half of all 2018 marketing budgets.
So what’s happening?
According to CXL’s 2018 State of Conversion Optimization report, marketers said that half the challenge of building a functioning CRO program came down to five concerns:
- Establishing a process
- Learning and training
- Educating clients
- Proving ROI
- Time and budget constraints
For demonstration’s sake, let’s assume each concern is equally weighted. In that case the first three – which together account for 30% of the overall challenge – can be grouped under “procedural or operational training”, or simply “education”.
To be fair, any business practice with a 30% learning curve is a hard sell, but as an industry we have also flailed around a bit in a market that’s taken longer than expected to mature – and that has only added to the confusion.
For example, consider the types of statistical tests available to businesses and the people who run them. Start with the test types: the most common are A/B tests (built on either Bayesian or frequentist statistics), followed by multivariate and multi-armed bandit tests (the latter typically rely on Bayesian updating and are computationally more complex than a simple binary comparison). Whichever is chosen, the goal is usually to realize an incremental performance gain or an insight that leads to future improvements.
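To make the bandit idea concrete, here is a minimal Thompson-sampling sketch in Python – the two conversion rates and the trial count are hypothetical, chosen only for illustration. Each variant keeps a Beta posterior over its conversion rate, and traffic gradually drifts toward whichever variant the evidence favors.

```python
import random

# A minimal Thompson-sampling bandit sketch. Each variant keeps a
# Beta(successes + 1, failures + 1) posterior over its conversion
# rate; on every trial we sample once from each posterior and show
# the variant with the highest draw, so traffic shifts toward the
# apparent winner as evidence accumulates.
def thompson_choose(stats):
    """stats: list of [successes, failures] per variant."""
    draws = [random.betavariate(s + 1, f + 1) for s, f in stats]
    return max(range(len(draws)), key=lambda i: draws[i])

random.seed(42)
true_rates = [0.04, 0.06]               # hypothetical conversion rates
stats = [[0, 0] for _ in true_rates]
for _ in range(5000):
    arm = thompson_choose(stats)
    converted = random.random() < true_rates[arm]
    stats[arm][0 if converted else 1] += 1

traffic = [s + f for s, f in stats]
print(traffic)  # the better variant typically ends up with most traffic
```

Note how, unlike a fixed-split A/B test, the allocation itself adapts as evidence accumulates – that adaptivity is what makes bandits attractive, and also what makes them harder to reason about.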
Now onto the people involved.
According to the CXL report, about 75% of CRO professionals are under the age of 40, and about 82% have fewer than 6 years of experience in the space. Despite this, CRO professionals are challenged to discern the best approach given a desired outcome, and then employ a ‘best-fit’ model that mitigates the risk of error and maximizes the potential for positive outcomes. To complicate matters further, half of them must design this solution without a declared budget.
Given this context, it’s clear why binary tests are the most recommended and widely used option – but are they the right choice? Unlike multi-armed bandit tests, which algorithmically shift traffic toward winners over time, a binary test only declares a winner when the p-value (the probability of observing a result at least as extreme as the one measured, assuming the null hypothesis is true) falls below the significance level α (the acceptable probability of rejecting the null hypothesis when it is in fact true).
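As a sketch of that decision rule, here is a two-sided, two-proportion z-test in plain Python; the visitor and conversion counts are made up for illustration. The test declares significance only when the computed p-value falls below the chosen α.

```python
from math import sqrt, erf

# A minimal sketch of the frequentist decision rule behind a binary
# A/B test: a two-sided, two-proportion z-test. All counts below are
# hypothetical, chosen only for illustration.
def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

alpha = 0.05
p = two_proportion_p_value(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"p = {p:.4f}; reject H0 at alpha={alpha}: {p < alpha}")
```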
The time required to reach statistical significance depends on the significance level the experimenter will accept, the size of the effect being measured, and the number of trials the site’s traffic can supply. As a result, some businesses – especially small retailers that can’t afford the risk of displaying underperforming creative for too long – make the mistake of stopping the test early, which can turn an inconclusive experiment into a net loss.
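To see why the time requirement bites small retailers in particular, here is a rough trials-per-arm estimate using the standard normal approximation at 5% significance and 80% power; the baseline rate and target lift are hypothetical.

```python
from math import ceil

# Rough trials-per-arm estimate for detecting an absolute lift in
# conversion rate, using the normal approximation with alpha = 0.05
# (two-sided, z ~= 1.96) and 80% power (z ~= 0.84).
# The baseline rate and lift below are hypothetical.
def sample_size_per_arm(p_base, lift, z_alpha=1.96, z_beta=0.84):
    p_var = p_base + lift
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil(((z_alpha + z_beta) ** 2 * variance) / lift ** 2)

n = sample_size_per_arm(p_base=0.04, lift=0.01)
print(n)
```

At a 4% baseline, detecting a one-point lift takes several thousand visitors per arm under these assumptions; a low-traffic store could wait weeks to accumulate that, which is exactly the pressure that tempts experimenters to stop early.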
Some respond that A/B testing should stand for Always Be Testing. The premise here is that if it’s not working, you might not have iterated enough. Sometimes this is true, but the results of a controlled test shouldn’t be taken out of the context of time and place. Retailers would be wise to approach this gambit with caution.
Vendors have done a brilliant job of building tools that put advanced statistical methods in the hands of everyday marketers, and they’ve moved mountains to build and educate a market for them. However, the fever pitch of results and case studies coming from agencies, vendors, and individuals alike sets an odd tone: one that fails to ground expectations in a practice that depends on discipline to be effective – however liberating and forgiving modern tools may be.
Doing it Right
The most glaring challenge with retail personalization is apathy on the part of the brand doing the testing. This is partly because the practice may lack strong downmarket appeal: a small enterprise must weigh the difficulty of establishing such a function (an opportunity cost) against the likelihood that doing so will produce quantifiable positive outcomes (though there’s still plenty of value in qualitative ones). And it is partly because that equation often leads to a halfway effort, where brands either spread the work thinly across many people or hire marketers to lead a function that might be better led by someone with stronger quantitative skills.
With that in mind, how might you meet the retail personalization challenge? Here are some thoughts:
- Reorient expectations: Small brands should probably view personalization as a means of gaining customer insight rather than a source of incremental revenue. Knowledge is power, and at an early stage the learnings from a test can be far more valuable than an x% lift in conversion. Larger brands with larger sample sizes can flirt with more quantitatively oriented testing, but they, too, should treat CX experiments and broader personalization as hypothesis tests rather than a revenue stream.
- Continue to train and educate: It’s important to have a skilled and experienced practitioner design personalization tests. There is some sophisticated reasoning involved with designing experiments that minimize risk (including that of confounding variables like whether the test was paused part-way through) and maximize the probability that you’ll accurately test your hypothesis.
- Create a culture of experimentation: Customers’ dispositions are fluid, and many retail brands use data from personalization campaigns to detect shifts in consumer trends. Doing so, however, requires change and implies a certain operational readiness. For instance, it may be worthwhile to prepare a design team in advance to produce multiple variations of the same creative in support of a testing strategy, rather than taking them by surprise. From a people-management perspective, it’s also worth deciding whether to distribute the responsibility for testing among existing resources or to appoint or hire a champion with the requisite skills, experience, attention to detail, and enthusiasm to run the function – these choices can make a world of difference to the program’s overall success.
With such a wide variety of use cases, aspirations, and goals, it’s difficult to generalize about the practice of retail personalization. This article proposes that retail personalization can be more effective when brands properly frame their expectations of the outcomes these experiments can deliver, and that brands benefit further from devoting more resources to hiring and developing skilled practitioners to execute their optimization strategy.
If you’d like to learn more about retail personalization and conversion optimization, check out Episode 03: Retail Personalization of the PluraCast® eCommerce Podcast. In this episode, we’re joined by Brian Massey, co-Founder & Sr. Conversion Scientist at Conversion Sciences, a data-driven conversion optimization agency, who shares his insights into data-driven marketing and design.