Conversion Rate Optimization (CRO) Interview Series: Anna Talerico
Can you tell us a little about your role at Ion Interactive? What's your conversion rate optimization (CRO) philosophy?
As the executive vice president here at ion I manage the sales, marketing and client services teams. It's a great place to be sitting because I can make sure these three teams are working very much in unison. I think that ultimately benefits our customers a great deal.
My CRO philosophy is really simple: Test. Test continuously. Test to statistical significance. Learn what you can from the results, and then move on to the next test wave. Constantly be seeking improvement in both conversion rate and conversion quality (average order value, lead score, etc.).
What are the first things you look at in starting a CRO project – e.g., messaging, certain site components, the company/value proposition itself?
When we are helping customers improve their conversion rate we always need to do a quick scan of the landscape. Where are they today? What is going on competitively? What are the sources of traffic and the messages that visitors are clicking on to get to the pages? What's already been tested? Then we evaluate the current landing pages for the tactical basics—is the right content on the page? Is the call to action clear and valuable? Does the page make us feel good? Is it easy to convert?
Looking at this gives us a baseline to springboard from.
How do you determine which elements to test?
I think this can often be the hardest step to take. Sometimes, there are obvious gaps to fill on the landing page that can be done quickly, such as getting a call to action above the fold. But deciding what to test strategically can be more difficult. That's why I like to pull together two or three very different landing pages and run an A/B test on them. Rather than debate which elements to test first, it's important to just start testing. So, you pick the two or three tests that feel the most logical and you start running.

Once you have traffic coming into tests and you can analyze visitor behavior and conversion results, the rest of the tests start flowing. One test leads to the next. Sometimes you need to test offers and messaging, and sometimes it will be layout or copy or forms. It doesn't really matter. What matters is that you just start. I don't want people to get hung up on launching with the exact right test, because that can lead to paralysis. The great thing about testing is that it gives you instantaneous feedback and results.
What, generally speaking, are the highest impact elements to test?
I don't think there is any particular element that is the highest impact. I think layout is important—where the call to action is on the page, how it is positioned relative to the content, the words we use around the call to action. This can be important. But so can a headline or a form. What works well for one company might not work well for another.
What I don't think is high impact is too much fine-tuning. If you are testing 10 shades of headline colors, you are probably too far into the weeds and need to step back to look at the big picture.
What are some of the most under-tested aspects of websites?
Conversion actions that can live in and around content. I think to really boost conversions on a non-e-commerce site you have to think out of the box about where you are asking visitors to become leads or contacts. Most sites have calls to action somewhere on the page (download, buy, contact us, etc.), but rarely is that integrated well into the content—like a simple form sign-up at the bottom of an article, for example.
On e-commerce sites, there is still so much work to be done to create great shopping experiences, but you also have to create great persuasion pages. A lot of e-commerce sites are informational, but not persuasive. That's where a strong landing page can really help lift online sales—a landing page's only job is to persuade.
How do you determine statistical significance?
I don't! I let our conversion optimization software LiveBall do that math for me. A testing platform should have measurement in place that shows you the margin of error on any test result and the likelihood that any test combination will win, and that lets you evaluate results at different levels of statistical significance. I do think it's best to test to a 90% or higher level of confidence. I am not convinced that 80% confidence is rigorous enough—I have seen plenty of false positives.
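The math a testing platform does here is typically a standard two-proportion z-test. This is a minimal sketch of that calculation—not LiveBall's actual implementation—using only the Python standard library:

```python
from math import sqrt, erf

def ab_test_confidence(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: the (two-sided) confidence that
    variants A and B truly differ in conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Convert the z-score to two-sided confidence via the normal CDF
    return erf(z / sqrt(2))

# Hypothetical test: A converts 120/2000 (6.0%), B converts 160/2000 (8.0%)
conf = ab_test_confidence(120, 2000, 160, 2000)
```

With these made-up numbers the confidence comes out above the 90% bar the interviewee recommends; at 80% you would declare winners on much noisier differences, which is exactly how false positives creep in.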
Aside from sales, what are some other worthwhile conversion goals to target and track?
It’s really important to track micro-conversions because they often lead to sales.
Look at the visits to the pricing page and how many of those visits go on to convert to a lead or a sale. Look at the conversion rate of white paper downloads, or e-book downloads. See how many users who convert at that level go on to become sales. Also look at which traffic sources are sending users that convert at the micro level, and then how many of those people go on to become sales. This information is great for developing long-term strategies.
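The tracking described above is straightforward funnel arithmetic: divide each stage's count by the stage before it. A minimal sketch, with hypothetical stage names and counts:

```python
def funnel_rates(stages):
    """Given ordered (stage_name, count) pairs, return the
    step-to-step conversion rate for each transition."""
    rates = {}
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        rates[f"{name_a} -> {name_b}"] = n_b / n_a
    return rates

# Hypothetical funnel: pricing-page visits -> leads -> sales
funnel = [("pricing_visits", 5000), ("leads", 400), ("sales", 60)]
rates = funnel_rates(funnel)
# e.g. 400/5000 visits become leads, 60/400 leads become sales
```

Segmenting the same calculation by traffic source shows which channels send micro-converters who actually become sales, which is the long-term insight the interviewee is after.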
Is there anything you can do to meaningfully test a page or section of a site that has little to no chance of achieving statistical significance (for instance, small and local businesses)?
Small and local businesses can test to statistical significance; it will just take longer. The important thing is to be patient and wait it out. Start with a simple A/B test, and let it run. I think for small traffic you need to run a true apples-to-oranges test (two entirely different concepts against each other)—that often leads to faster results. If you are running two similar versions of a page against each other in a test, it may take a very long time to get results.
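A standard sample-size approximation makes the apples-to-oranges point concrete: the traffic required grows roughly with the inverse square of the lift you are trying to detect. This sketch is illustrative only (z-values hardcoded for roughly 90% two-sided confidence and 80% power), not a method the interviewee prescribes:

```python
def sample_size_per_variant(p_base, lift, z_alpha=1.645, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative
    lift over a baseline conversion rate p_base."""
    p_test = p_base * (1 + lift)
    # Sum of the two binomial variances
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return int((z_alpha + z_beta) ** 2 * variance / (p_base - p_test) ** 2) + 1

# 3% baseline: a bold 50% relative lift (the "apples to oranges" case)
# versus a subtle 10% lift (two similar page versions)
n_bold = sample_size_per_variant(0.03, 0.50)
n_subtle = sample_size_per_variant(0.03, 0.10)
```

With these assumed numbers, the subtle variation needs well over ten times the traffic of the bold one—which is why two very different concepts reach significance so much faster on a low-traffic site.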
There are also a lot of fun things local businesses can test. They can swap in and out seasonal images, include headlines based on location, and test images of local geography.
How can a new site (or one that hasn’t been tracking conversions) overcome the challenge of not having any historical data?
There is not much they can do except start tracking conversions as quickly as possible. Set up a strong analytics program—know which variables are important to measure, monitor the data in real time, and optimize once statistical significance is achieved.
Does the conversion rate optimization process differ in hyper-competitive versus less competitive markets? How?
I don't think the process itself differs. I think conversion rate optimization is business-critical no matter what your market is. If you aren't testing, you aren't learning and you aren't improving. Without testing you are leaving conversions on the table.