
Pro Tips on Upgrading Your Optimization Strategy


You’ve got A/B testing all figured out. You’ve got a steady rhythm of tests going, and you’re seeing great results. But now you’re at the point when you want to make A/B testing a component of, rather than the entirety of, your optimization efforts. Maybe you want to move on to personalization, because you agree with the 94 percent of companies that say personalization is critical to current and future success. Or maybe you want to test your mobile app, because you know that just two years ago, Black Friday saw a 62 percent increase in global mobile eCommerce volume. Whatever your reason for adding more sophisticated tests to your toolbox, Jared Hellman, an optimization expert with Blue Acorn, has got you covered. We sat down with Jared to get his thoughts.

Tell us the best way for a marketer to develop a testing strategy that goes beyond the basics.

If you have several concepts that fall within the same part of a funnel, you can drive a powerful experiment by grouping them. Bonus points if you can tie those concepts together thematically (e.g., increasing exposure for a particular set of keystone brands on an online department store’s gallery page). Testing purists may shake their heads at this approach because you lose the ability to correlate lift with one singular change. However, sometimes you need to move the needle quickly, and small, incremental tests lack the heft to prove revenue and conversion lift.

What tools do you use?

It definitely helps to have a tool like Hotjar, Clicktale or Crazy Egg at your disposal when looking for optimization opportunities. However, the beauty of the best-of-breed tools provided by Monetate, Optimizely and Qubit—Blue Acorn is certified on, and recommends, all three platforms—is that the sky really is the limit for executing testing or personalization concepts. Of course, you need smart, specialized people supporting the ideation and execution of those concepts. Crafting an optimization strategy is a layered, difficult process; you have to know the right types of data to gather from a discovery perspective and be able to translate that data into actionable hypotheses. Developing for optimization is also difficult, especially if you’re dealing with problematic code. Any developer who has gone to battle with native functions that endlessly override and break their variation JS can tell you this is difficult stuff. If you have a team that can handle it, that’s great, because there is no substitute for an experienced optimization team.

What are some tests you like to run?

My favorite tests are those that solve pressing business questions. We had a client who had been running a flash-sale-based pricing model where pretty much everything he was selling was marked down drastically—an MSRP of $199.99 became a sale price of $39.99, for instance. He knew this model wasn’t sustainable, so we ran a series of tests to see what, if any, MSRP made the most sense to display. We ultimately applied learnings from those tests to completely revamp his pricing structure. Having the data to back up the drastic change took the risk out of the endeavor.

Another client, who was operating on a resale model, had a quarterly directive to drive additional leads to his marketplace. However, this client didn’t want to compromise overall revenue in the process. This was tricky because most of the elements the client was adding (via tests) were, by nature, disruptive to the consumer experience: changing the context of an experience from consumer to reseller runs the risk of completely pulling the user out of the shopping mindset. In situations like this, testing is a terrific tool for learning, through data, how to strike a balance between constructive and disruptive.

What are the biggest mistakes you see when people transition to more sophisticated tests?

I’ve seen a few instances of clients coming to us after getting way in over their heads while trying to drive their optimization program internally, especially those dealing with older sites fraught with legacy code issues. Coding experiments for those sorts of sites is a really difficult proposition unless, and sometimes even if, you know what you’re doing. A lot of the time, on-site JS will update the DOM and overwrite everything you do, or change the markup and break your styling. It can get pretty hairy. Ultimately, it’s important to understand that as you scale the sophistication of your optimization program, you need to ensure you have enough seasoned developers (and strategists, QA engineers, designers, etc.) to support your efforts.
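The problem Jared describes, site scripts clobbering a variation's DOM changes, is commonly handled by writing the variation to be idempotent and re-applying it whenever the page rewrites itself. Below is a minimal, hypothetical sketch: a plain object stands in for the DOM so the code runs outside a browser, and the render loop plays the role a MutationObserver callback would play on a real page. All names here (`heroButton`, `applyVariation`, `siteRender`) are illustrative, not from any real testing platform.

```javascript
// Hypothetical sketch of defending an A/B variation against site scripts
// that rewrite the page. A plain object stands in for the DOM node so the
// example runs in Node; in a browser, the re-apply step would live inside
// a MutationObserver callback attached to the real element.

// The "page element" the variation targets.
const heroButton = { label: "Add to Cart", theme: "default" };

// The variation: idempotent by design, so re-applying it is always safe.
function applyVariation(btn) {
  if (btn.label !== "Buy Now") btn.label = "Buy Now";
  if (btn.theme !== "promo") btn.theme = "promo";
}

// A native site script that clobbers the element on every render pass.
function siteRender(btn) {
  btn.label = "Add to Cart";
  btn.theme = "default";
}

// Simulated render loop: the site overwrites the change, and the variation
// re-applies itself, mimicking an observer firing after each mutation.
applyVariation(heroButton);
for (let pass = 0; pass < 3; pass++) {
  siteRender(heroButton);     // site JS breaks the variation
  applyVariation(heroButton); // observer callback restores it
}

console.log(heroButton.label); // "Buy Now" survives every render pass
```

In a real browser, you would attach a MutationObserver to the target node and call the idempotent apply function from its callback, which is a standard defensive pattern for variation code running on sites with aggressive native JS.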

Have certain tests worked better in certain industries?

Not really. Customer personas can be vastly different from one store to another, even within the same vertical. Customers at a big box store like Lowe’s probably don’t shop the same way as those at a Signature Hardware. The sales cycle for one may be shorter than the other; the way people find products and the level of investigation into product details may be totally different. It’s really important, no matter what industry you’re in, to develop customer personas for your particular brand instead of running tests that others in your industry are trying.

Is it a good idea to scale optimization?

Yes. You’ll uncover deeper opportunities for driving revenue gains. For a long time, optimization services, and conversion testing in particular, lived in the dark, musty confines of really small incremental tests, such as headline tweaks and button color or button location changes. There is some value in these smaller elemental tests, but the results are often too minor to really drive lift. While an optimization program can and should be multi-faceted, its real merit lies in its ability to affect sustainable boosts to conversion and revenue.

Scaling from smaller tests to more holistic efforts to fix key friction points on your site (whether it’s the PDP, cart, etc.) is an intuitive first step. Keep in mind that you don’t have to rely on your optimization tool exclusively for these larger efforts. It may make sense to deploy these changes to your code base directly and use your tool only to measure the lift of the new layout. Even if you’re acting outside the bounds of your A/B testing technology, optimization as a concept is a great means of driving the kind of thinking that creates true lift.

About Jared Hellman:

Jared is an Account Manager at Blue Acorn, where he champions strategic initiatives for clients. He’s also a recovering restaurant owner and English major. Ask him about Coen Brothers movies and how to redesign your PDP.

About Amy Hourigan:

A skilled marketing and communications professional and national award-winning business writer, Amy Hourigan joined Blue Acorn in 2015 as Director of Marketing. Amy excels at marketing strategy and execution, branding, and crafting clear, compelling communications that drive people to take action.
