With years of experience in search and conversion rate optimisation, industry leader Peep Laja helps enterprise ecommerce businesses make more money by analysing precisely where a website is “leaking” money, using conversion optimisation and a variety of web analytics tools.
He makes the case that spending more money on traffic doesn’t necessarily mean higher conversions on a badly optimised website. Head of his own agency, Peep runs one of the world’s most popular CRO blogs, ConversionXL, and regularly hosts popular coaching programs and conferences.
Be sure to sign up for Elite Camp, Estonia: a 3-day traffic + conversion event Peep is organising on 12th–14th June (there are only 4 places left).
02.19 Peep Laja Introduction
06.51 Identifying leakage points with web analytics
12.49 CRO for ecommerce vs traffic
19.25 Typical CRO process
24.24 Google Analytics and user testing
34.16 Psychological triggers
38.06 Statistical significance and validity
42.52 Analysing reports
43.46 Recommended tools
– Evaluate each design screen (homepage, category page, product page…)
– Relevancy (Pages meet expectations)
– Value / Value proposition
– Friction (Remove anything causing negative thoughts)
– Distraction (Single goal for each page)
– Eliminate bugs and experience issues
– Check cross browser and cross device compatibility
– Track everything in Google Analytics
– Recruit 5-15 people representative of the target audience
– Give specific and broad tasks to execute
– Surveys for visitors on pages with highest “leak”
– Follow-up email surveys for recent buyers with open ended questions
– Use of Google Analytics
– Identify biggest “leaking” pages
– Tracking click areas
– Use of scroll map to measure scroll depth
The advantage of click maps is that you also see where people are clicking on things that are actually not links.
The conversion rate you should care about is whether the one you have now is better than the one you had last month.
Kunle: What’s more important: driving more traffic or improving conversions? How do you find and fix areas in your store and ecommerce funnel that are leaking money? What is a detailed and definitive way of carrying out conversion optimisation and split testing on an ecommerce site? My guest on today’s show runs one of the most popular CRO blogs out there as well as a CRO agency. He’s going to answer these questions and shed more light on his agency’s highly detailed ecommerce conversion optimisation process, and how they establish hypotheses and test them. So stay tuned.
Hi 2Xers, welcome to the 2X ecommerce podcast show. I’m your host, Kunle Campbell and this is the podcast where I interview ecommerce entrepreneurs and online marketing experts who will help uncover ecommerce marketing tactics and strategies to help you, my fellow 2Xers and listeners double specific ecommerce metrics in your online stores. So if you’re looking to double metrics such as conversions, average order value, repeat customers, traffic, and ultimately sales, you are in the right place.
On today’s show, it is a great pleasure to have with me Peep Laja, a conversion rate optimiser many of you will have heard of or read about. He’s the founder of ConversionXL, one of the world’s most popular CRO blogs, which also doubles as a conversion rate optimisation agency and training company. Peep is not only an entrepreneur and conversion optimiser, but also a trainer who runs regular courses, coaching programs and two annual conferences, ConversionXL Live and Elite Camp in Estonia, the latter coming up really soon. Welcome to the show Peep.
Peep Laja: Thanks for having me.
Kunle: Could you take a minute or two to tell our listeners about yourself, for those who don’t know about you?
Peep Laja: Well, I help companies make more money. They bring me in to help their efforts when they have a bunch of traffic or they pay a bunch of money to drive traffic on their site and now want more people to buy stuff, sign up for something etc. I’ve been doing this for quite a while now, I run my own agency. We work mainly with enterprise companies in the US, but also some smaller ones in different parts of the world. In the past, I’ve run my SaaS company, I’ve done sales, I used to be in SEO, I used to be a professional fund raiser for a non-profit, all kinds of things.
Kunle: Quite a packed background. A lot of experience. You said you used to be in SEO, and I noticed you’ve been blogging on ConversionXL since October 2011. Why did you move to CRO? I know you ran a website development agency at some point, and you still do. So why the move from SEO to CRO to help enterprise companies make money?
Peep Laja: I quit the SEO stuff quite a while ago already, back in 2008. My prime years in SEO were 2006-2007. Back then, ranking on the first page of Google for something was not that hard. What I noticed was that even when I was able to double the traffic to my clients’ websites, their sales did not double. In fact, sometimes they hardly even moved. That made me realise that there’s something more to this online marketing thing than just SEO and traffic, and that’s when I got interested in CRO. At first, I was doing it for myself, doing direct response marketing and running my own tests. Then I ran my own SaaS company and was doing CRO for myself, until I learned that that’s what everyone’s struggling with: converting visitors into buyers. That’s even harder than getting traffic. Of course, you need both. When I started my web development agency, at first the value proposition was that “we build sites that sell”. Basically we were building sites from scratch using best practices and whatnot. Then I discovered the lie that best practices are not really best practices; they’re more like common practices. Just because you build a site based on best practices doesn’t mean it’s going to be the best performing site; there are so many other things that go into it. It’s not like you copy Amazon and expect Amazon-like results, it doesn’t work like that. So I discovered how context specific sites are. We stopped doing development work for various business reasons: people hired us to boost their sales, but the development part was always taking the lion’s share of time in the projects. So there was a mismatch between why people hired us and what took the longest, and we were most excited about conversion optimisation. So three years or so ago I came back from a conference and told my team, “Alright, I’ve had it”.
We were killing all development services and fully focusing on conversion optimisation. They were like “What? That’s 80% of our business right now”. I said “No, I have a vision, we’re going to do this”, and we did it. The first year after changing our business model to fully focus on CRO, my revenue per employee went up 60%, so it was a good decision and we never looked back.
Kunle: Did you have to let go of some existing clients?
Peep Laja: We finished the projects for all the clients we had, we just stopped taking in new development clients.
Kunle: Very interesting, because I can very much relate in regards to doubling SEO traffic without doubling conversions or sales. At the end of the day, business owners and retailers want a return on their investment. You talked about making money, and every time I hear you speak, you talk about websites leaking money. In the context of an ecommerce funnel, where are the most common leakage points from a CRO and funnel standpoint?
Peep Laja: For ecommerce sites it could be anywhere. On some sites, people get to the category pages, but for some reason they’re not really clicking through to the product pages. It could be the paradox of choice: “there are so many products, I cannot choose”. Or maybe they get through to the product page and you see there are a lot of page views on product pages but hardly any clicks on the actual cart buttons. So the product page is not really doing the job of persuading a person to add a product to the cart. Or maybe they are getting to the cart but not starting the checkout, never getting past the cart page. So you have to remember that this is very individual. Every ecommerce site has specific problems, not generic problems. So we always have to look at website performance in terms of layers. How many page views do we have on category pages, product pages, clicks on cart adds, the cart page and the different checkout steps? That’s how we can figure out where the biggest leak is.
Kunle: Ok, so it’s on a case by case basis. I’ve also heard you talk about the fact that you initially spend 20-40 hours in Google Analytics finding these leaks. From that standpoint, what section of Google Analytics do you spend most of your time in? Is it the audience, the acquisition, the behaviour or the conversion section, or all of them?
Peep Laja: All of those except acquisition, which matters less for my conversion purposes. There are so many things to figure out, so I do spend 17-20 hours at the start of each new conversion project, let’s call it that. The first thing, of course, before you even look at Google Analytics, is to conduct a health check to make sure that, A, the data is correct, and you won’t believe how many Google Analytics configurations report false data because of a crappy setup, and B, that everything that needs to be measured is being measured. Once that’s taken care of, we can look at the leaks. Obviously you look at the funnel performance. Typically people start measuring funnel performance from the cart to the purchase, to see where people are dropping off. Then you also want to measure the flow from the homepage to the category pages to the product pages, keeping in mind that people might enter directly onto category pages or product pages, so you need to do some math here. Here I’m mainly playing with the Behaviour > All Pages report and calculating unique page views as well as unique visitors.
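The “layers” analysis Peep describes can be sketched as a simple drop-off calculation over unique page views per funnel stage. The stage names and numbers below are hypothetical, standing in for figures you would pull from Google Analytics:

```python
# Hypothetical unique page views per funnel stage, ordered top to bottom.
funnel = [
    ("homepage", 50_000),
    ("category page", 32_000),
    ("product page", 21_000),
    ("add to cart", 4_200),
    ("cart page", 3_600),
    ("checkout", 2_100),
    ("purchase", 1_300),
]

# Drop-off between each pair of adjacent stages: the biggest drop is
# where the site is "leaking" the most money.
for (step, views), (next_step, next_views) in zip(funnel, funnel[1:]):
    drop = 1 - next_views / views
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

With these example numbers, the product page to add-to-cart step shows by far the largest leak, which is where the investigation would start.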
Kunle: Would you look at category pages on a case by case basis or holistically? Because some etailers would have over 100 categories.
Peep Laja: If the design template is the same, you look at them as a whole. Of course, it might be that different product categories or brands perform differently. In that case, if we want to analyse per product category, let’s say we analyse apparel and jewellery separately, then of course we need richer data. We need a bunch of custom data configured in Google Tag Manager, so you’ll be able to see that kind of rich data.
Kunle: Now that you mentioned Google Tag Manager, is that your default? Is that your preferred set up within Google Analytics?
Peep Laja: A tag manager for sure. Google Tag Manager is free, it’s great, it gets stuff done, and of course you need to work with an analytics implementation person. You need somebody on your team who really “speaks” Google Tag Manager; it requires above-average skills. It’s not like you’re a renaissance man who does everything yourself. Or at least you need a service provider who helps you set all this stuff up. For ecommerce sites without an in-house person, I always recommend that they at least bring in a one-time implementation person who will set it up in Tag Manager: enhanced ecommerce, all kinds of rich data, custom data that might be important.
Kunle: What do you want to see as a CRO? What settings would you instruct the analytics person to configure for an ecommerce setup?
Peep Laja: As a conversion person, you work with what you have. The most important things, of course, are revenue per visitor and conversion rate, and those basic things are part of your standard analytics config. Then you have average quantity, average price and so on. The reason we need all those things is split tests. If you’re running split tests, you want to see how one treatment or variation is affecting all these different metrics. It might be that you create an alternative product page layout and you see that there’s no difference in conversion rate, but revenue per visitor is way up. Instead of buying on average three products, now they’re buying something like 4.2. So we know exactly which metric our treatment affected, and we can figure out how it did that and maybe leverage this even more. Do more things to get people to buy more products in one session.
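The point about a variation moving revenue per visitor without moving conversion rate can be illustrated with a small calculation. The visitor, order and revenue figures are made up for the example:

```python
# Hypothetical split-test results: variation B converts at essentially the
# same rate as A, but earns noticeably more per visitor.
variations = {
    "A": {"visitors": 10_000, "orders": 300, "revenue": 27_000.0},
    "B": {"visitors": 10_000, "orders": 302, "revenue": 37_750.0},
}

for name, v in variations.items():
    cr = v["orders"] / v["visitors"]    # conversion rate
    rpv = v["revenue"] / v["visitors"]  # revenue per visitor
    aov = v["revenue"] / v["orders"]    # average order value
    print(f"{name}: CR {cr:.1%}, RPV ${rpv:.2f}, AOV ${aov:.2f}")
```

Looking only at conversion rate, this test would read as a tie; revenue per visitor and average order value reveal which metric the treatment actually moved.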
Kunle: That makes a lot of sense. So you have the key conversion metrics there, you run your tests against them and see what changes off the back of that. We’re going to move into CRO for ecommerce. In the context of ecommerce, what sort of ecommerce businesses benefit from CRO? I know this is quite a general question, but from a traffic standpoint, how much traffic should websites generate to make them more or less a candidate for CRO, if that makes sense?
Peep Laja: I think you are confusing optimisation and testing here. Testing is a part of optimisation, but it’s not optimisation. Every single website can do optimisation. Small sites that struggle to get traffic should care even more about optimisation, because if you’re only able to get, let’s say, a couple of thousand visitors to your site each month, then in order to make money you need to squeeze every single dollar out of those visitors, so it’s even more important. Now when it comes to testing, of course, math comes into play and you need a certain sample size in order to run tests. At the very minimum, you would need maybe 400-500 transactions per month; that’s the low end. Of course it depends: if you run a test with a big impact, you might get away with a smaller sample size. Let’s say you run a test that’s +200%, you could get by with less data, but of course one doesn’t see +200% tests very often.
Kunle: Do you need a total of 400 transactions for each test in version A and version B?
Peep Laja: Total. My personal minimum is 250 transactions per variation, which means 500 total per test.
Kunle: And that’s a starting point?
Peep Laja: Yes. That means you should be able to run 1 test per month, and sometimes even 1 month is not enough, because it all depends on your existing conversion rate and the impact of the winning test. Does it add +5% or +15%? The higher the uplift, the smaller the sample size you need. So it’s all math, and you can calculate this in advance: go on Google and type in “A/B test sample size calculator” and you will find plenty. You put in your current traffic and conversion rate, change the uplift per test, maybe 5%, maybe 10%, and it will tell you how many visitors per variation you would need to run a test. So you can see if it would take 2 weeks or 2 years.
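The kind of calculator Peep describes can be approximated with the standard normal-approximation formula for comparing two proportions. This is a sketch of the math behind such tools, not any specific calculator’s implementation:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(base_cr, uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-sided test,
    using the normal approximation for comparing two proportions."""
    p1 = base_cr
    p2 = base_cr * (1 + uplift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p2 - p1) ** 2)

# A 2% conversion rate needs far fewer visitors to detect a +15% uplift
# than a +5% one, which is why big effects get away with less traffic.
print(sample_size_per_variation(0.02, 0.15))
print(sample_size_per_variation(0.02, 0.05))
```

This makes the “2 weeks or 2 years” point concrete: shrinking the detectable uplift from 15% to 5% increases the required sample size roughly ninefold.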
Kunle: That makes a lot of sense. A significant number of our listeners are mid-tier online retailers, I define them as websites or businesses having a turnover between $5 Million and $70 Million. Why should they focus on conversions rather than customer acquisition for businesses of this size?
Peep Laja: It’s not a “this or that” equation here. Whether you spend money on adverts or on SEO, it doesn’t matter which one: if you spend money on traffic acquisition, what if you could spend as much money as you’re spending now, but get 30% or 50% more sales, or double? If you are spending money on AdWords, it really makes sense to take a portion of that budget and put it into conversion optimisation for a few months, maybe half a year, to get customer acquisition costs much lower. Now, with the same budget, you are able to acquire many more visitors and they convert better, so that’s how you dominate. If you are able to acquire customers cheaper and faster than your competition, what else would you want?
Kunle: For an online retailer who wants to, say, double their conversion rate off the back of traffic coming from AdWords, could you talk us through the CRO process or the testing process you would take for them to achieve that goal of doubling conversion rate?
Peep Laja: How much you’re able to increase your conversion rate depends on how crappy your site is. If it’s a highly optimised site, let’s say Amazon hires a CRO person, that person will not be able to double their sales in one year or whatever, because their site is already highly optimised. So everything is always contextual. A reasonable expectation is that you should be able to increase your conversion rate 5-15% per month. That’s a typical scenario. Of course if it’s a highly optimised site, it’s less; if it’s not very optimised, it should be way more. So how do you do it? Ignore best practices, and for the most part ignore what’s working for other people, because other people’s websites have other people’s problems. Your audience is different, you’re selling different products to a different audience, you acquire your customers in a different way, your price point is different; so many things are different. It’s a very contextual environment. It’s not about a list of tactics or magic bullets, like making the button bigger or doing this or that, so forget about those magic tricks as well. It’s about a process. If your site is crappy, it’s relatively easy to find those low hanging fruits, but once you’ve picked those, you’re left with “then what?”. So you want a data-driven approach. What does it mean to be data driven? Data doesn’t drive anything, people drive it. It’s not like you open up Google Analytics and it tells you what to do. You have to go in there, dig around and find answers to the questions that you have. It’s always people driven, and the key thing is to be able to pull insights out of data. Let me walk you through the typical process.
An ecommerce company hires me, I come in, what do I do? The first thing I start with is a heuristic analysis. A heuristic analysis is an experience-based assessment where I evaluate each design screen, let’s say homepage, category, product etc, against a certain set of heuristics. Those heuristics are: relevancy, especially for paid traffic. Does the page meet expectations? Does it have what I expect it to have and say the right things? Value. Do I understand the value that I would get from this page? Value proposition: what’s in it for me? Friction. What’s making me anxious, giving me doubts, causing hesitation? Anything that is causing negative thoughts. Distraction. Every single page should have a single goal. On a product page, I want users to add something to the cart. On a category page, the goal is to help people find a product they like so they will click on it. Distraction basically asks: what on this page is not contributing to people taking that one action? If there are elements like that, should they be there? Maybe we should remove them, or minimise them. And clarity, of course. Do I understand the copy? Do I understand what I’m supposed to do? Is it obvious? Is everything easy? And so on.
So I walk through the page and write down notes. Let’s say it’s an ecommerce homepage: no reason is given why I should buy from this site, so there’s a value issue. Maybe there’s a slider that changes image every 3 seconds. That’s a distraction; I’m not able to pay attention to anything because the image always changes. Maybe it has an amateur design, so under friction I write down “it has an amateur design, I don’t think I can trust this site”, and so on and so forth. These things that I write down are not facts; they are what I call “areas of interest”, and now I’m going to look for data. I use these areas of interest to figure out, first of all, the questions I want the data to answer. If I think this is an issue, is it really an issue? I need to come up with a question: “Are people put off by this design? Do they think it’s untrustworthy?”. Then I seek out qualitative data to either confirm or refute the hypothesis. Another thing I do as I walk through the site is pay attention to all the different things users can do; maybe on a category page there are filters where I can narrow down by price, colour, size, whatnot. Every interaction a user takes on a website should be recorded in web analytics through events. So I also mark down all these different things that people can do, because we want to be able to measure them; I end up with a list of things that should be measured. Now I go to Google Analytics, perform a health check and make sure that everything that should be measured is being measured. Every single action. Every single click. If there are forms to be filled out, I make sure we have form analytics in place so I can see the performance of the form on a form-field level and say “When we asked for a phone number, they hesitated for 3 seconds”, “In the email field, they made multiple corrections”.
The same goes for error messages: I want to see how often users see which error message on which URL.
So after the heuristic analysis, the second step is a technical analysis, where we make sure we are tracking everything in analytics. We also want to make sure that there are no bugs or user experience issues, because these are the biggest conversion killers. In Google Analytics, you can pull up browser reports. You always want to segment by device category, so you look at desktop or mobile or whatever. So I look at Internet Explorer conversion rates on desktop, for versions 11, 10, 9 and so on. Now I see that Internet Explorer 9 only converts half as much as Internet Explorer 11. Why is that? I don’t know, but it’s likely there are some cross-browser compatibility issues or maybe a user experience issue. So now I can use a tool like CrossBrowserTesting or BrowserStack to figure out if there are any bugs in the system, because these are your biggest money killers. Even the most persuasive design or copywriting won’t save a site that doesn’t work. So I eliminate bugs. Once I’ve identified whether there are any cross-device or cross-browser issues, I move on to web analytics analysis.
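The browser-segmentation check can be sketched as a small script over an analytics export. The session and transaction counts are hypothetical, and the 60% threshold for flagging a segment is an illustrative choice, not a rule from the interview:

```python
# Hypothetical per-browser sessions and transactions from an analytics export.
segments = {
    ("Internet Explorer", "11"): {"sessions": 8_000, "transactions": 240},
    ("Internet Explorer", "10"): {"sessions": 3_000, "transactions": 78},
    ("Internet Explorer", "9"):  {"sessions": 1_500, "transactions": 22},
    ("Chrome", "50"):            {"sessions": 40_000, "transactions": 1_280},
}

total_sessions = sum(v["sessions"] for v in segments.values())
total_transactions = sum(v["transactions"] for v in segments.values())
site_cr = total_transactions / total_sessions

# Flag any segment converting at under 60% of the site-wide rate: a likely
# candidate for cross-browser bugs worth checking in a testing tool.
for (browser, version), v in segments.items():
    cr = v["transactions"] / v["sessions"]
    flag = "  <-- investigate for bugs" if cr < 0.6 * site_cr else ""
    print(f"{browser} {version}: {cr:.2%}{flag}")
```

With these numbers, IE9 converts at roughly half the rate of IE11 and gets flagged, mirroring the example in the conversation.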
What can we learn from Google Analytics? In Google Analytics we can see what people are doing; we can see the impact and performance of every feature, every widget, every page. For instance, people who are using filters: are they more likely to convert? Less likely? Or the same, no difference? Which filter do people use the most? Then I know the order the filters should be in, how prominent they should be, all that stuff. Of course, we’ll identify where the site is leaking money. Is the biggest problem the cart or the product page, and so on? We also want to figure out not just the page type but the performance per actual URL. What are the high-traffic pages with a high bounce rate? Or high traffic, high exit rate? Exit rate is a less clear signal than bounce rate, because often people exit after they’ve had 2 or 3 disappointing clicks, so when there’s a page with a high exit rate, I want to see what page they saw before that, and before that. Now I know what people are doing and not doing, and where most of the money is leaking out, either per page type or per specific URL, and I want to get to the “why?”. At the same time, mouse tracking analysis is useful. A scroll map essentially shows you how far down people scroll; you can do that through Google Analytics as well, since with Tag Manager you can measure scroll depth. Or you can use a tool like HotJar or Inspectlet to show scroll depth. And also click maps: a visual representation of aggregated data of where people click. Again, if you have enhanced link attribution configured in Google Analytics, you can see that in Google Analytics as well. The advantage of click maps is that you also see where people are clicking on things that are actually not links. Something that people think should be links, but are not. That can be insightful. The most insightful of all is user session replays.
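Turning raw scroll-depth events into the scroll-map style summary mentioned above is a simple aggregation. The event counts below are hypothetical, assuming events fired at 25/50/75/100% thresholds (a common Tag Manager setup):

```python
from collections import Counter

# Hypothetical counts of scroll-depth events per threshold on one page.
scroll_events = Counter({"25%": 9_400, "50%": 6_100, "75%": 2_800, "100%": 1_150})
pageviews = 12_000

# Share of pageviews that reached each depth: a numeric "scroll map".
for threshold in ("25%", "50%", "75%", "100%"):
    reach = scroll_events[threshold] / pageviews
    print(f"reached {threshold} of the page: {reach:.0%} of pageviews")
```

A sharp fall between two thresholds suggests content below that point is rarely seen, which is exactly the kind of leak this analysis is meant to surface.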
You know your category page sucks and that people are not clicking through to the product page, so what are they doing on this category page? Well, now you can use a tool like Inspectlet or HotJar and watch a couple of hours of videos of how people use your category pages; very insightful. Then qualitative research. You want to do at least two types of qualitative research. One on your pages with the biggest leak; let’s say it’s the cart page. People are getting to the cart page, but they are not clicking “Proceed to Checkout”. Why is that? You can put a poll on that page, and if after 10 seconds they still haven’t done anything, pop this question: “Hey, what’s holding you back from proceeding to checkout or from completing a purchase right now?”. You’ll get a 2-4% response rate and a lot of insight into what’s holding people back. There’s a site I’m working on where 90% of the people answered the poll question in the same way, and the answer was “shipping costs”; they were too high. This is a response that no digital analytics data could ever give you. You can’t dig into analytics and figure out that shipping costs are too high. That’s why qualitative is so important. You want to do this for product pages and checkout too, everywhere your site is leaking. On a product page: “Hey, what’s holding you back from adding this product to the cart right now?”. Of course, some people get pissed off, but ignore them, because the amount of qualitative response you get will give you good insight.
You also want to do email surveys with people who just gave you money. If people bought something, two days later send them an email, or email people who bought something within the last 10 days. You ask them a set of questions. You want to ask them what was the one thing that nearly stopped them from buying, what doubts and hesitations they experienced before completing the purchase, what was their number one problem finding the right product to buy, and why they ended up buying from you and not the competition. Open-ended questions. Usually I ask 7, 8, 9 questions, all open-ended: no yes/no questions, no multiple choice. Also very insightful stuff. You also want to do user testing. Recruit people who represent your target audience and have them use your site; give them tasks and have them comment on everything out loud. You can use UserTesting.com or similar. You want to give them 3 types of tasks. A specific task, like “find a pair of black jeans in size whatever by this brand under $50”. Very specific, and see how they do. A broad task: “It’s your significant other’s birthday coming up, find them something they might like”. Again, pay attention to how they go about it. And finally you say “add it to the cart and complete the purchase”, because you want to see the final completion. You do this with 5-15 people. Very insightful stuff. Now, after doing all these steps, each specific activity, whether it’s user testing, heuristic analysis, or web analytics analysis, will give a different insight. You want to put all these insights, all the issues you identified, together into a master action sheet. I usually use a Google Docs spreadsheet. You list every issue and score them by importance, ranking them from 5 stars to 1 star.
5 stars would be a critical issue that is impacting almost all users and costing you a lot of money, like the shipping costs being too high. A minor usability issue, like a 3-4 second delay before the picture opens when you click on a product image on a product page, is not a huge issue; it might be a 2-star issue. So you assign a score, and then you categorise each issue, allocating it into one of five categories. One category is “test”. Say you found out that on the product pages the feedback was “I can’t really tell if I like these pants or not”: we should probably make the pictures bigger. That’s an obvious problem with an obvious test, so it goes into the test category. Or maybe there’s a problem you’ve found, shipping costs are too high, but you can’t really do anything about it; your courier is screwing you. That is perceived friction, it is an issue, and it goes into a bucket I call “hypothesise”: a known issue with no obvious solution. So now we usually need to get a team of people together and do a cross-disciplinary brainstorming meeting. Maybe we can increase the perceived value of the product, maybe we can increase all our prices and thus reduce shipping, whatever; be creative. Some issues are just no-brainers. Maybe people just can’t read your content because the font size is too small. Don’t test it, just increase your font size by 2px or whatever. And there might be instrumentation issues. A very common instrumentation issue in ecommerce is that clicks on “Add to Cart” buttons are not measured. Sites measure only visits to the cart page, but those are not the same thing, so it’s an instrumentation issue, a massive one.
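The master action sheet can be sketched as a sorted list of scored, bucketed issues. The entries are illustrative, not from a real audit; Peep names four of the five buckets in the conversation (test, hypothesise, just do it, instrumentation), so the fifth bucket name below is an assumption:

```python
# Illustrative issues with a star score (5 = critical) and a bucket.
# The "investigate" bucket name is an assumption; the interview names
# only four of the five categories explicitly.
issues = [
    {"issue": "Shipping costs perceived as too high", "stars": 5, "bucket": "hypothesise"},
    {"issue": "Add-to-cart clicks not tracked",       "stars": 5, "bucket": "instrumentation"},
    {"issue": "Product images too small to judge fit","stars": 4, "bucket": "test"},
    {"issue": "Body font size too small to read",     "stars": 3, "bucket": "just do it"},
    {"issue": "3-4s delay opening product image zoom","stars": 2, "bucket": "investigate"},
]

# Sort the sheet from critical to minor, the way the team would work it.
for item in sorted(issues, key=lambda i: -i["stars"]):
    print(f'{item["stars"]}* [{item["bucket"]}] {item["issue"]}')
```

In practice each row would also carry an owner column, so each person can filter the sheet to their name and work from the biggest issue down.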
Kunle: So is your spreadsheet split into what professional is going to be assigned the task?
Peep Laja: Usually I have a column in the spreadsheet where I assign by name. I sort all the issues by the stars, 5 to 1, and then the people who have to fix the issues just look at this, we call it the “Action Sheet”, filter it to their name and sort it from the biggest issue to the smallest. It can be analytics people, business people, your testing team and so on. Typically when I do this I have 15-30 pages of issues. If you go through this process and do it right, it usually lasts 2-4 weeks. It’s a lengthy process, and somebody once asked me, “What kind of a business do you have to run to actually go through all this hard work?”. Well, if you’re actually serious about what you do, how can you not do it? Everything in life is hard. Maybe your competitors are too lazy, they don’t want to do this hard work; well, the one who does the heavy lifting, the one who does the hard work, wins.
Kunle: Let’s summarise the key things. You said the heuristic analysis, then the technical analysis, and the third part was more or less the user testing. So those are 3 key areas, right?
Peep Laja: Yes. User testing, qualitative research (surveying people who are visiting the site and people who have recently bought from you), and web analytics plus mouse tracking. So six things.
Kunle: I have to say that’s the longest and most detailed answer I’ve got on the show so far, so well done. I want to talk about psychology: the role of psychology in CRO. I’m reading a book now called “Triggers”. The book talks about psychological triggers, and the references to triggers in psychology are always negative, like anxiety, depression and addiction. How do you play with psychology in conversion rate optimisation? I think that falls within the heuristic analysis, because you mentioned friction, value proposition, anxiety, negative distractions. How do you layer psychology in, or prioritise it? Is it split into negatives and positives: things that give pleasure versus things that cause pain or friction? Could you shed some more light on that?
Peep Laja: When we optimise a page, essentially we have 2 levers to play with. One is increasing the ability to take action: making it easier, more obvious, making the button we want people to click bigger and so on. Once we’ve taken care of that part, taking action is easy and obvious. The second lever is motivation: getting people to want to take action. This is where psychology comes into play the most. Which types of triggers will work best on your specific audience, or for the specific action you want people to take? We don’t know that in advance, so this is where testing comes in. In-house we have a master sheet, a 100-page document of all the psychological triggers you could use in context. Typically we go through it and pick one that seems to fit the specific context. If it’s a trust issue, and we’ve identified a trust issue through, maybe, qualitative research, then maybe we want to use social proof, like “We have served 5 million customers in the last 10 years”, or maybe we want to appeal to authority: “This is where famous people in our industry do their shopping”, or something like this, and start with that. It’s very hard to give you a specific answer because there are so many different types of triggers you could be using.
Kunle: I do get where you’re going in terms of driving shoppers to take action: you’re nudging them along from a motivational standpoint, or you’re trying to reduce their anxiety, and then you reference the list of 100 or more psychological triggers to narrow it down. It’s almost like being a surgeon: you’ve read your entire medical textbook, but when you see a case in front of you, you refer back to it.
Peep Laja: It is pretty much impossible, or highly difficult, to figure out which of these triggers is going to work best. First, stop assuming that you know what’s going to work best. People usually default to the Robert Cialdini principles: scarcity, urgency, those types of things. I’m no expert in social psychology, so I won’t say whether you should or should not start with those. What I’m saying is: have an open mind, test multiple different triggers, and if you know what issue you’re trying to solve, it’s so much easier to choose from the huge list of triggers that exists out there. So if you know it’s an anxiety issue, and you can identify what the anxiety is, it’s so much easier to pick something.
Kunle: Speaking of testing, what are your frustrations? What do you see optimisers do wrong? In other words, I’m asking what’s a proper way of testing, because I’ve been reading a lot of confusion between statistical significance and validity in tests.
Peep Laja: I think the main problem in testing is that people not well versed in statistics tend to end their tests too early. Sometimes it’s the fault of the tools, like VWO, Optimizely or whatever: they just say “Hey, statistical significance reached…” while your test has been running for, I think the record is 18 minutes, and you already have a winner. The problem with that is that when we run a test, we want to be sure that the test result is valid, the validity that you mentioned. Statistical significance only comes into play as a factor once two prior conditions have been met, because the people exposed to our experiment have to be representative of our actual traffic. So first of all we need to meet a sample size criterion: enough people have to be exposed to your experiment. We mentioned the sample size calculator early on; you need to calculate the needed sample size in advance. If the statistical significance is 95% or higher but your sample size is not big enough, it’s a meaningless number. So that’s criterion number one: the absolute sample size criterion needs to be met, and you should know the number before you launch the test. Criterion number two is the test duration: how long does the test last? Again, when we’re running a test, we’re taking a convenience sample, not a representative sample, so we need to make sure that our normal business cycle fits into the testing period. Every single weekday, weekend, phase of the moon, external events; maybe your business cycle is longer, maybe people are affected by pay day, so you want to run for four weeks to make sure that pay day is in there. Your blog and newsletter publishing schedule, how often you have promotions going on, all that stuff. Typically you want to run your test for at least two business cycles, which means you should not really run a test for less than 2 weeks, probably closer to 4 weeks.
So just bear in mind: when you run a test, you need the absolute sample size that you calculated in advance, and you run your test for two business cycles, which is at least 2 weeks, maybe 4. Only once those two conditions have been met do you look at the statistical confidence level. Of course, it is still a hot subject of debate in the statistics community whether the p-value can actually be used in this context. I don’t have a PhD in statistics, so I’m not going to have that debate here.
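The confidence check Peep describes, which is only meaningful once the sample size and duration criteria have been met, is conventionally a pooled two-proportion z-test. The sketch below is an illustration added for clarity, not Peep’s own tooling; the function name and the conversion figures are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value

# Hypothetical result: 300/10,000 conversions on control vs 360/10,000
# on the variant. "95% significance or higher" corresponds to p < 0.05.
p = ab_test_p_value(300, 10_000, 360, 10_000)
print(p < 0.05)
```

As Peep stresses, a small p-value by itself proves nothing if the test was stopped after 18 minutes: the pre-computed sample size and the two-business-cycle duration come first.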
Kunle: So two things. The sample size has to be sufficient. You alluded earlier to the fact that 250 transactions is a good number.
Peep Laja: Bear in mind there is no magic number. That’s a ball park, so you should always calculate it using a calculator where you put in your actual conversion rate, the lift you want to detect, those numbers, and see.
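The calculator Peep mentions can be approximated with the standard normal-approximation formula for comparing two proportions. A minimal sketch, assuming a two-sided 95% confidence level and 80% power; the function name and example numbers are made up for illustration.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(base_rate, min_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a given relative lift in
    conversion rate with a two-sided two-proportion z-test."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)            # e.g. a 10% relative lift
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# A 3% baseline conversion rate, hoping to detect a 10% relative lift:
print(sample_size_per_variant(0.03, 0.10))
```

Note how quickly the required sample grows as the detectable lift shrinks, which is exactly why the number must be computed before launch rather than eyeballed mid-test.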
Kunle: And the duration is quite important also?
Peep Laja: Very important, yes.
Kunle: Ok. I know you’re not one to be held to numbers, but I was curious to find out what your thoughts were on typical conversion rates for ecommerce in general, given the number of ecommerce brands you’ve worked with.
Peep Laja: This question, “What’s a good conversion rate?”, is a question that you shouldn’t really be asking. The conversion rate that you should be caring about is whether the one that you have now is better than what you had last month, because websites are highly contextual: your website audience is unique, your pricing is unique and so on. So it’s never an apples-to-apples comparison. If I told you that the average is 3–4%, what do you do with that information? It’s useless, really. The only thing that matters is that you’re beating your own benchmark.
Kunle: In terms of tracking and reporting of the funnel, how should retailers effectively report and track their sales funnel?
Peep Laja: I’m not sure I get the question. If it’s set up in Google Analytics, then there you go.
Kunle: Should they look at it on a monthly basis, a weekly basis? How often should they go into the funnel report and analyse the data?
Peep Laja: You should definitely have a regular check once a week or so. You should also configure email alerts for when something bad happens and the conversion rate suddenly drops by 20%. It might be that there’s a sudden bug or whatever; something is happening, and you want to catch it right away. So have a regular check-in time and configure alerts for any sudden changes. Of course, a sudden change might also be regular seasonality: a Sunday is typically worse than a Wednesday and so on. And when you run A/B tests, you need to look at all that data per test as well.
Kunle: I want to finally talk about tools you use. I know you’ve actually mentioned a lot of tools, but what are your most recommended CRO tools? What does your CRO toolset look like to effectively split test and carry out CRO on a regular basis on an ecommerce website?
Peep Laja: Web analytics, I’m a big fan of Google Analytics, hate Omniture. For split testing, I’ll use Optimizely. VWO is also good. There are so many other tools that I have not used, so these are the tools that I use. For mouse tracking, Inspectlet or HotJar. For on-site surveys, I use Qualaroo or HotJar. Email surveys, I use TypeForm. Of course you have to send the emails with whatever else you use, but to put the forms together I use TypeForm, or Google Docs sometimes. And for enhanced funnel analysis I use Heap Analytics.
Kunle: Does that plug in to Google Analytics?
Peep Laja: No, it’s a separate standalone tool. And if it’s a site that relies heavily on web forms, maybe a lead gen type of situation, then you definitely need a form analytics tool like Formisimo or something similar.
Kunle: What about ecommerce managers listening to this show? Obviously they’ve learnt a lot, given that we’ve spoken for about 1 hour and you’ve spoken most of the time, but for ecommerce managers who want to get into CRO, what books would you recommend to get into the world of CRO?
Peep Laja: It all depends on how deep you want to go. The data- and research-driven process that I was describing is in a book that I wrote called “Essentials of Conversion Optimisation”. It’s $2.99 on Kindle. That’s the only book on the data-driven research topic. If you want to learn more about testing, there’s a book by Chris Goward, “You Should Test That”; it’s more for novices, it’s not an advanced-level book. There aren’t really any advanced-level books, they are all kind of beginner to early intermediate level. If you really want to become a CRO ninja, I have a conversion course that you could take, but only if you want to invest.
Kunle: And your blog is amazing by the way.
Peep Laja: Yes, my blog, ConversionXL. We try to keep the level at the best in the world; that’s our standard for publishing.
Kunle: Before you say goodbye, could you tell our listeners about Elite Camp? I’ve signed up to Elite Camp, it’s running between 12–14 June in Estonia, and I’m going to see you in person. How many spaces are left at Elite Camp?
Peep Laja: I think there are only 4 spots left, unless there are any cancellations to come, so I’d recommend going to DigitalEliteCamp.com right now to have a chance of securing a spot. It’s a three-day traffic and conversion event, mostly focusing on the conversion aspect. It’s very international, with people from 25 different countries. It’s going to be great, come.
Kunle: And people can reach you on Twitter @peeplaja, or just search for Peep on Google and you’ll see him everywhere. It has been an absolute pleasure having you on the show, Peep, and thank you for sharing your insight on conversion rate optimisation.
Peep Laja: Thank you for having me.
Kunle: Cheers. Bye.