Show Notes
In this episode of Startups For The Rest Of Us, Rob and Mike talk about growing from $1k to $5k MRR, projecting SaaS revenue growth, refining your sales process, and more listener questions.
Transcript
Rob: In this episode of Startups for the Rest of Us, Mike and I talk about growing from $1,000 to $5,000 of MRR, projecting SaaS revenue growth, refining your sales process, and more listener questions. This is Startups for the Rest of Us episode 343.
Welcome to Startups for the Rest of Us, the podcast that helps developers, designers, and entrepreneurs be awesome at building, launching, and growing software products whether you built your first product or you’re just thinking about it. I’m Rob.
Mike: And I’m Mike.
Rob: We’re here to share our experiences to help you avoid the same mistakes we’ve made. What’s the word this week, sir?
Mike: I’m hoping that in another week or two, I’ll have my completed self-signup process in place. It’s been rather painful to rework those pieces of code just because it’s core to somebody signing up, so it has to work properly. I’ve found places where, if the signup process had failed at some point along the way, it left a bunch of orphan data, and I’m like, oh God. And then you start over and it leaves all these things in the database, and I’m just like, “Alright, I’ve got to start refactoring some of that stuff.” The cleanup is kind of a pain in the neck, but things are progressing pretty well at this point. I’m hoping to have that cleaned off of my plate in the next week or two.
Rob: That’s nice. One thing that we do, we have a two step sign up process for trial. If they do the first step but don’t enter their credit card, we cookie them. If they come back, we just present them with the credit card. We don’t want to make them do the first step again.
Mike: That’s sort of what I’m looking for right now but I’m looking for ways to create the account and have it as a placeholder without also creating a subscription inside the system. That’s the part that’s hanging me up a little bit. I’m trying to figure out exactly how to do that. The cookie idea is interesting but at the same time, I don’t necessarily want to leave their credentials sitting there in a cookie or someplace either.
Rob: Oh yeah, no. We store it in the database and then cookie them with an account ID or at least a user ID. We wouldn’t store that in a cookie. It’s just an idea. It’s a nice elegant way. Some people will come back and be like, “I can’t believe it. I didn’t have to enter my stuff again to get in.” It’s like a usability thing for people who get stuck in the middle with the credit card and then come back a few hours later and see that it’s already there at the credit card screen.
Mike: Yeah, that’s exactly what I’m looking for. I’m trying to work around it because, like I said, my data structures don’t really support that just yet.
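Rob’s cookie approach might be sketched roughly like this. The function names and the in-memory dictionary are hypothetical stand-ins for a real database and HTTP cookie; the key idea from the episode is that only an opaque account ID lives in the cookie, never credentials or card data.

```python
import secrets

# Hypothetical in-memory stand-in for the real accounts table.
accounts = {}   # account_id -> {"email": ..., "card_on_file": bool}

def start_trial(email):
    """Step 1: create the account record immediately, before any card is entered."""
    account_id = secrets.token_hex(8)
    accounts[account_id] = {"email": email, "card_on_file": False}
    # The response would set a cookie holding only the opaque account ID --
    # never the user's credentials or card details.
    return {"account_id": account_id}

def resume_signup(cookie):
    """On a return visit, look the account up by the cookie and jump
    straight to the credit-card step if the card is still missing."""
    account = accounts.get(cookie.get("account_id"))
    if account is None:
        return "show_signup_form"      # no partial signup found; start over
    if not account["card_on_file"]:
        return "show_credit_card_step"  # skip step 1, go straight to the card
    return "show_app"
```

A returning visitor who stalled at the card step lands directly back on it, which is the usability win Rob describes.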
Rob: Got it. That’s a bummer. As for me, you can probably hear that I’m sick this week. I’ve been working, but I just kind of feel hazy and everything is taking longer to get done. I hate being sick because I feel like I have a bunch of work to do. I’ve got things to do. Who’s got time to be sick these days?
Mike: My son was sick this past week, so he was out of school for one or two days.
Rob: That’s crazy. We went to California last weekend for Memorial Day and pretty much my whole family got it. It’s just a head cold, but we all got it. I don’t know. We had it within the first day we were there, so I feel like we caught it before we left. I don’t think you catch it on the plane and then wake up the next day with it; I think there’s a longer incubation period for most colds. So the timing was kind of bizarre, but it seems to be hitting each of us in its own way right now. But the show must go on. Am I right?
Mike: Yes.
Rob: Every week. Every Tuesday morning, we get this podcast out. Apologies in advance if my voice grates on people, I am heavily using the mute button to cough while you’re talking right now.
Mike: What are we talking about this week?
Rob: We have a nice backlog of listener questions once again and so I wanted to run through some of those. Actually, there are a couple that really aren’t questions. They’re just comments and accolades for us. This first one is a voice mail about growing from $1,000 to $5,000 of MRR.
Bryan: [Voice mail] Hey Rob and Mike, Bryan Fleming from Detroit City here. I just wanted to call and say thank you so much for the podcast you guys put out. Being here in Detroit, you just don’t talk to anybody who’s doing the SaaS stuff. It is so goddamn hard.
I’ll tell you what guys. A year ago, I was listening to your podcast, struggling along my SaaS business, doing about $1,000 a month. I listened to an episode where you guys were talking about how to know if your business is viable and sketching out your lifetime customer value on a napkin. I did that and it’s like a lightbulb went off. I moved to a yearly re-bill. I increased my prices and here I am a year later, my business went from $1,000 a month to $5,000 a month. I know you guys have seen ones that have grown better but I’m pretty proud of it.
Again, it’s all from listening to your guys’s episode, that was one in particular. I appreciate everything you guys do like I said. You’re the first one I go to on my podcast, really looking forward to it every week. Keep up the great work guys and when I do have a question, I’m going to call back in and hit you up on it. Take care. Bye.
Mike: That’s good to hear. It’s always nice to hear from listeners who listen to an episode, are able to act on the advice or commentary that we give, and take things to the next level to really move their businesses forward. I really appreciate hearing that from you, Bryan.
Rob: Yeah, that’s awesome. That’s really why we do it. The fact that anyone ever gets value out of anything you and I say, Mike, it’s just a miracle at this point. Our next question is about how to project revenue growth. It’s from a friend of the show, Craig Hewitt. He’s the founder of Podcast Motor at podcastmotor.com.
He says, “Hey guys, after launching my first SaaS product last week, I’m wondering what realistic ranges for growth scaling are. Heard Mike talking about doubling Bluetick revenue from $1,000 to $2,000 in a month on the last episode, it seems like a great goal but is that kind of expectation realistic? Does the same growth curve slope apply at scale? For instance, did Drip see 100% growth months after you were doing mid five digits a month? Be interested to hear your thoughts.”
Mike: For mine, I looked back over the revenue numbers for Bluetick after I started officially selling it to people. There were a couple of months where I did 100% month-over-month growth, but it was not common. It was like I would get 100% one month, doubling revenue, and then the month after that I’d get like 50%, or we’d just stay even in terms of growth.
There is this exponential curve that you end up getting. Depending on which numbers you’re specifically looking at, whether it is growth from one month to the next versus your total size, those are two entirely different things. Originally, I was saying, “Okay, I want to go from $1,000 to $2,000.” But even a couple of days later, I looked at that and said, “You know what? That’s actually not very realistic.” Because in my mind it was like, “Oh, I’ll be doubling my customer base.” But the reality is that doubling the customer base is not realistic, because it took me x months to get there.
What would be more realistic is if I were to say, “Look, I want to double the number of customers that I got last month.” Even that jump is probably too large. If it takes you four months to get to 20 customers, you’re probably not going to be able to add 20 customers in the next month, barring some major marketing push or a big launch or something like that. That’s not what I had planned. I realized a couple of days later that that was not going to be a realistic goal.
Paul Graham has some advice about the growth rates for Y Combinator start-ups. They’re looking for 5% to 7% growth per week and he says if you can hit 10% a week, you’re doing exceptionally well. If you can only manage 1%, that’s a sign you haven’t yet figured out what you’re doing.
In the early days, I think it’s a lot easier to get those higher percentage growth numbers, just because adding one person when you’ve got 10 is a 10% growth rate, but once you get to 100, adding one is only 1%. Again, you have to look at both the total size that you’re trying to grow to, then backtrack a little bit and look at how long it has taken you to get there, in order to start projecting forward and establish more realistic goals.
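To put those weekly percentages from Paul Graham in perspective, it helps to compound them over a year. The arithmetic below is just illustration, not numbers from the episode:

```python
# Compound a steady weekly growth rate over 52 weeks to see the annual multiple.
for weekly_rate in (0.01, 0.05, 0.07, 0.10):
    annual_multiple = (1 + weekly_rate) ** 52
    print(f"{weekly_rate:.0%}/week -> {annual_multiple:.1f}x in a year")
```

A steady 5% per week compounds to roughly 12.6x in a year, while 10% per week is about 142x, which is why those weekly rates are only seen in the very early, small-base days.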
Rob: Yeah, there is kind of early stage growth where you’re just scrapping for every individual customer and it can be a little harder to project at that point. But if you have a nice source of folks that you can bring in almost by hand where you have that 300 person launch list or something and you’re kind of individually choosing people out of that, then you kind of get a feel for how many you can logically bring in in a month.
Once you’re past that, let’s say you’ve launched and you start marketing. You have content marketing. You have Facebook ads. You have SEO. You have all the channels. You’ll start to see that the numbers just pan out. Based on the number of unique visitors you get to your site and what percent convert to a trial or a customer, you can look at those numbers and start projecting where you’ll be in a month, two months, three months, and then you’ll be able to see which levers you need to pull in order to increase that. This is where it becomes easier to project but harder to make a dent, because adding one, two, three customers doesn’t make a dent when you’re trying to grow $5,000 a month in MRR.
Now, to really dig in on Craig’s question. He’s asking, “Can you have 100% growth once you’re already doing $50,000 a month?” The answer is not typically. That’s not really a sustainable growth pattern. Don’t think of it in terms of percentages. Think of it in terms of flat dollar amounts.
In the early days, I would shoot for this: if you can get into four figures of monthly MRR growth, you’re doing okay. The early days is like sub-$10,000 or $15,000 of monthly recurring revenue. If you can grow at $1,000 or $2,000 a month during that time, I don’t think of it as a percentage. I just think of it as getting the $1,000 or $2,000.
If you can get to that and make it sustainable, so that it’s a flywheel in essence that grows at $1,000 to $2,000 a month, that’s a pretty decent start. If you look out a year, then you think, “Wow, I’m going to have grown $12,000 to $24,000 MRR.” And then what you do is look at how you now get to $4,000 or $5,000 of MRR growth, and then how you get to $10,000 of MRR growth.
When you first get there, your growth percentage will look huge. Let’s say you get to $20,000 MRR and you’re growing at $5,000 a month; man, you have a 25% growth rate. But then the next month, you’re going to have a 20% growth rate. And the month after, if your dollar growth stays flat, you’re going to be at about 17%. It’s going to get smaller each month, but if you’re still growing at $5,000 MRR, that’s actually a pretty damn good bootstrapped business. The percentages start having less of an impact at that point, or I should say less meaning, because if you continue to grow at $5,000 MRR every month, that’s a good business.
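Running the flat-dollar-growth scenario out precisely makes the shrinking percentage concrete; the starting figures here are just Rob’s illustrative numbers:

```python
# Flat dollar growth: a constant $5,000/month added on top of $20,000 MRR.
mrr = 20_000
monthly_growth = 5_000
for month in range(1, 4):
    rate = monthly_growth / mrr          # growth as a % of the current base
    print(f"month {month}: ${mrr:,} MRR, growing {rate:.1%}")
    mrr += monthly_growth
```

The dollar growth never changes, but the percentage falls from 25.0% to 20.0% to 16.7% as the base grows, which is why the absolute dollar amount is the more stable number to track.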
At a certain point, your churn will make it so that your $5,000 MRR growth gets smaller and smaller over time. That’s really a whole separate conversation. That’s when you hit a plateau, and there was an episode we recorded probably 100 episodes ago with Ruben Gamez about how to see plateaus coming and how to get over them.
I guess what I’d encourage you to do, Craig, when you’re thinking about this is to think about it in terms of absolute dollar amounts. People cite percentages because, A, it sounds impressive, or B, they don’t want to give absolute revenue numbers in public. When I used to talk about Drip’s revenue growth on the podcast, I didn’t want to say we’re growing at $5,000 or $10,000 a month, so I used the percentage instead.
But realistically, the absolute dollar amount, especially for a bootstrapper, in my opinion is what counts, because that dollar amount is what allows you to hire people, buy Facebook ads, and run the business, whereas percentages have so much less meaning. You do hear about Y Combinator startups, or even some bootstrapped startups, that get really good traction.
Growing 20% to 40% a month for a while is totally doable. It’s hard work and you really need to catch a flier. It’s a Cinderella situation, but it is possible for a time. Growing 100% month over month for more than just a few months, though, is unheard of. I think you would literally need to be an Uber or something to see growth like that. I hope that helps, Craig. Thanks for the question.
Our next question is from Chris from vendorregistry.com. He says, “I’ve been thoroughly enjoying your podcast. I heard you were looking for some questions. Vendor Registry’s SaaS platform and marketplace streamlines the $250 billion local government purchasing market by standardizing and centralizing traditionally paper-intensive workflows. Since we have so much green field ahead of us, coming up with new ideas takes only 10% of our time. The other 90% is spent debating which ideas to execute first. Ideally, we stop debating and let the market decide through AB testing. However, implementing AB testing requires money for the tools, dev time to build out two versions of everything, and enough users to generate actionable data, all of which are in short supply. How do you recommend we get started with AB testing given the resource constraints?”
Mike: I think the resource constraints that you mentioned are probably the place to start because you’ve talked about how there’s a couple of different places where you have these resource constraints. There’s time, there’s money, and then there’s the low numbers of users. I think the low number of users is the thing to really focus on just because it’s really difficult to do AB testing if you don’t have a large number of people that you can put through a particular part of your sales funnel and get the results that you need in a short enough time span.
If you’re running an AB test and it takes you 12 months to get a result, there are two problems with that. The first is that it takes you 12 months to iterate and do any sort of testing to move to the next level. The second is that, over such a large time span, those results are probably not really statistically meaningful anyway.
Even if you go through those calculations and think to yourself, “Oh, well yeah, this is statistically significant,” it’s probably not, just because of seasonality. It may hold up over the course of another year, but it’s not helpful. That’s the root problem. It’s just not helpful for your business in figuring out what’s really going on. I would not really look at AB testing at all.
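The traffic constraint Mike describes can be made concrete with a standard back-of-the-envelope sample-size estimate. The rule of thumb below assumes roughly 80% power at a 5% significance level, and the baseline conversion and traffic numbers are purely illustrative, not from the episode:

```python
def visitors_needed(baseline, lift):
    """Rough per-variant sample size to detect an absolute conversion lift.

    Rule of thumb: n ~= 16 * p * (1 - p) / lift^2, which approximates a
    two-sided test at ~80% power and a 5% significance level.
    """
    p = baseline
    return 16 * p * (1 - p) / lift ** 2

# Baseline 5% conversion, hoping to detect a 1-point lift (5% -> 6%):
per_variant = visitors_needed(0.05, 0.01)   # visitors needed per variant
total = 2 * per_variant                     # across both A and B
weeks = total / 500                         # at a hypothetical 500 visitors/week
print(f"{per_variant:,.0f} per variant; ~{weeks:.0f} weeks at 500 visitors/week")
```

At those assumed numbers, the test needs 7,600 visitors per variant, roughly 30 weeks of traffic, which is exactly the kind of timeline that makes A/B testing impractical for a low-traffic B2B funnel.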
I would probably look at other areas of your business and try to find places where you can really drive the business forward, whether that’s content marketing or advertising. You could try various advertising channels. You could try face-to-face meetings.
For this particular industry, it seems to me like it’s very much relationship driven. Yes, there are certain lists and things you can get on, but having relationships with people who can recommend your services is really what tends to drive business in a larger enterprise environment, which is kind of where Vendor Registry falls.
Rob: I’m not sure how much I have to add to that, other than it doesn’t sound like split testing is the way to go for you. You just need a lot of traffic to do it, and it does take a lot of time. It’s easier to split test things like headlines or a pricing page. I would not build out two versions of anything.
I think this is much more about talking to customers, potential customers and getting a small sample size but getting real feedback, whether that’s in person at conferences, or trade shows, or Skype calls, or emails, or whatever it takes. That’s going to be a faster and more effective way to try to figure out what to build next.
This is the conundrum of being a good product person: figuring out what to build next. I think a big element is that your vision of what it should do is largely gut feeling, based on your experience in the space and your read of the landscape. Part of that, though in my opinion it should not be the overriding part, is talking to customers, getting ideas, and getting a sense of where the market wants you to head.
I don’t think split testing is the way to go. You think about how we decide on what’s the next feature to build in Drip. We don’t build two different features and split test them. We make the decision. It’s based on a bunch of factors that we combine and have honed over the past couple of years. Thanks for the question. I hope that was helpful.
Our next question is from a listener who asked to remain anonymous. He says he’s been following the podcast for a few years and can’t get enough of it. He’s self-funded a B2B SaaS startup for about two years. He says, “I have done dozens of potential customer interviews and have been quite involved in helping beta users and testing the product myself. It touches on a sensitive business process that impacts other processes, so potential customers require a certain level of configuration and they are very afraid of bugs. I’ve had a few bad experiences with some beta users who keep asking for improvements and never sign up for the product. To be fair, the app has had some bugs here and there, as we were in a hurry to push new features. We may have gone too far in the opposite direction by asking customers to first sign a statement of work contract before doing any implementation. It is really slowing the intake of new customers, even when they are enthusiastic after we show them the demo. Now that we can slow down on new features, I’m hesitating on which priority to pursue. I have a number of things we’re looking at. Do you think we should remove any barriers from the sales process for the time being, focus on removing all bugs, and send the message that the app will have to be used mostly as is? Or do you think we should make screencasts, write tutorials, etc. to let users start on their own? I’m hesitant to do this because, based on their objectives and on their existing processes, they will have to use the app in different ways. For what it’s worth, the cheapest pricing tier is $300 a month and the medium tier is $900 a month.” What do you think, Mike?
Frankly, he asked, “Should I do this or that?” and I’m not sure that either of those is the right choice. I don’t think we need to stay within the boundaries he’s laid out.
Mike: I think this is an interesting question. The problem here is that you’re unsure of how to proceed, and it partly depends on where your revenue is. At those price points, if you’ve been working on it for a couple of years, my guess is that it’s probably profitable.
Given that, you probably want to put yourself in a position where the app is making enough money that you can pour money from the business back into it, reinvesting those profits not just to build the business but also to go back and address some of the quality issues or bugs that are in there. That’s just going to take more resources and more engineering time, and to get those, you need money. I would probably focus on anything that is going to bring in revenue.
It doesn’t seem to me like that’s doing screencasts and tutorials. Those are things that are helpful for users, but they’re not necessary to get somebody in the door as a paying customer. You can deal with a lot of issues like that as they come up, on an ad hoc basis or during onboarding sessions. There are ways to get around that without actually having to do the work up front.
Basically, do whatever needs to be done in order to drive those sales through, whether people need personal demos or hand-holding during a personal walkthrough or something like that. At those price points, my guess is you’re probably talking to people directly for most of your sales, especially if you’re going to do any sort of annual contract. Those are the places where I would focus, and then once you have that money coming in the door, turn around and start deciding where it should be allocated to help build the product in the right direction.
For people who are coming in, I think the interesting piece was the flip-flopping back and forth between, “Oh, there are some bugs. We should get them taken care of,” and going to the other extreme because people aren’t actually signing up even after you address the issues they brought up. What I’ve been doing is telling people, “Look, if you sign up, then I will treat that as a higher priority.”
I typically track feature requests during a demo. I’ll open a case inside my bug tracker and put the person’s name on it. But if they don’t sign up, it automatically becomes a lower priority to me, because it’s not a paying customer. They didn’t sign up for it. But if I get enough names onto a particular case, then I can probably justify spending the time and effort, because then it looks to me like it’s impacting sales.
It doesn’t sound like you are tracking any of that information right now. You’re really just micro-focused on the individual person you’re looking to onboard. I’d back off from that a little bit, try to take more of a macro view, and ask: is this a big enough problem that we should deal with it, or is this something that we can just log and come back to in the future?
Rob: My sense is that you have a couple of issues. If you have bugs in the app right now, that’s the first thing you need to take care of. I would put the brakes on everything and fix those, because if you get a reputation in this space, and it sounds like it might even be a small space where people know each other and are talking, if you get the reputation of being buggy, that doesn’t go away. It’s like your credit score. Once you wreck that score, it is really, really hard, if not impossible, to get it back. Of all this stuff, that’s the first priority I would buckle down on.
It’s kind of like having performance problems. We halt almost all feature development if we find that we’re running into performance problems, because you just can’t let that stuff go. It’s only going to get worse and it’s going to tarnish your reputation. While that’s in process, it sounds like you had folks who asked for improvements and never signed up. By improvements, if you mean bug fixes, then that’s fine. I would do those.
But if they’re asking for more features, something I would do in the early days of Drip is, if someone asked for a feature, I would say, “Is this the one thing that you need us to implement to be able to use Drip?” If they said yes, then I said, “Great. Sign up for a trial and we will have this implemented by the time your trial expires. If we don’t, I will comp your account until this feature is built.” We don’t do that anymore; with tens of thousands of users, you can’t. But in the early days, when we were scrappy, that’s the kind of stuff I was doing to try to get customers.
If someone had a list of 10,000 or 15,000 people, they were going to pay us a couple of hundred bucks a month. That was a big deal, and it sounds like for you, it’s totally worth doing. It’s interesting. I don’t know about a statement of work contract or any of that stuff. That, to me, feels a little cumbersome and a lot of overhead. But having them sign up and put a credit card on file is a step, and you’ll find that some people balk and don’t sign up, and that’s fine. They were never going to sign up. The ones who do are committed to it, and then you implement their features. I’ve just found that that’s a good way to do it.
Given your price point, unless you have a lot of inbound interest, I would personally be hand-holding and doing very much one-on-one sales. Given the range between $300 and $900, and I’m assuming you have a higher-end tier, your monthly average revenue per user is going to be in, let’s just say, the $400 to $600 range. That’s a nice chunk of change. That is definitely worth your time, or the time of someone you hire, to focus on those customers and get them started.
I would not tend to go toward self-onboarding at this point, just because of the price point and the value. I think the value those customers bring to your business is worth the time to spend, especially in these early days, to do the learning and to hand-hold everybody into the app. I hope that helps.
Our next question is actually not a question. It’s a compliment for us and the podcast. It’s from Alex Summerfield. He says, “I’ve been listening to your podcast for about three years. I remember when I first listened to the episodes, the ‘just thinking about it’ part of your intro really fit for me. I tried a couple of ideas but they never took off. After switching a couple of jobs and getting too busy to work on side projects, I finally got sick of my job and started consulting and working on a startup on the side. I listened to your latest episode, heard that ‘just thinking about it’ part, and it finally hit me that I’m no longer just thinking about it; I’m actually doing it. I just want to say thank you for the impact you’ve had on my life. All the advice has helped, and I really enjoy hearing the updates on your businesses.”
Mike: Thanks for the compliment, Alex. I do think that it’s really hard to get started, and I think a lot of it has to do with barriers that we encounter growing up in the social environments that we either come out of or are immersed in on a daily basis. Breaking away from the things that the people around you are doing is just really challenging, to be honest.
If you don’t interact with people who are starting their own businesses, or entrepreneurs, or even people doing software on their own, it can be very difficult to break the mould, and people look at you funny and ask what it is you’re doing, because you’re the oddball at that point. You’re weird. It’s nice to hear from someone like you who’s gone out there and actually started working on side projects with the intent to move forward and start your own thing.
Rob: A couple more questions. Our next one is from Rob. He says, “I’m part of an early-stage bootstrapped startup in the UK with two other co-founders. We are three to six months from launching a product and having any revenue. One of my co-founders is pushing for job titles/corporate roles to be assigned, like CEO, CTO, etc. Is it important to allocate corporate titles/roles? Is it best to get this sorted early in the life of a company, or is it just a distraction?”
Mike, as you think through this, I think corporate titles versus corporate roles are two totally different questions, so maybe we can tackle each of them individually.
Mike: I was going to mention that, because when you’re doing customer development and talking to a prospective customer, you may think that CEO or CTO sounds impressive, but for the most part, especially early on when you don’t even have a product yet, I feel like those titles detract from whatever it is you’re trying to do. People say, “Oh, you’re the CEO of this company that really has nothing,” and you’re asking them for help. Versus if you are the CEO of a company that has a full-blown product and has been in the market for three or four years, that has weight behind it.
That early on, I would just say founder or co-founder. I wouldn’t even worry about those corporate titles when you’re talking to customers or prospective customers. They’re basically meaningless at that point. Roles, though, are what people tend to think about when they think of titles: “Oh, the CEO does this. The CTO does this.” Internally, that’s where it matters, but externally, it doesn’t.
You want to have clear expectations of each other and of what you’re each expected to do inside the business. Externally, to your clients and customers, I would just say you’re a founder or co-founder. I wouldn’t go into what your actual title is, because it really doesn’t matter. But internally, you do need to be clear on what your responsibility is versus your co-founder’s, so that each of you knows what you’re supposed to be working on and when you should be getting guidance and input from the other person on the things you’re doing.
Rob: I don’t have much to add to that. I think that’s a very good summation of it. There are certain things that are distractions. I’ve found that using the phrase co-founder is really helpful, especially in the early days when folks know that you’re still being scrappy and doing customer development.
Last question for the day is about sending promotional emails to existing customers. It comes from Bruce. He says, “Thanks for the great show. I have a simple software product I’ve been selling online for several years. My customers all need to provide their email addresses so I can send their payment confirmation and so I can authenticate them when they log in to use my product. I don’t state anything on my signup page about what types of communications a new customer should expect to receive from me when they provide their email. I’m releasing some improvements and new features. Some of these improvements will benefit existing customers at no additional cost. Some new features I intend to sell to existing customers as add-ons. Is it okay to email my customers to tell them about the new features? They haven’t given explicit consent to receive email from me. I notice that whenever I sign up for a SaaS product, I receive a lot of emails from providers, even though I don’t recall opting in to any mailing list. If you guys think it’s ethical, legal, or wise to send emails to my existing customers, would you recommend I add an opt-in checkbox, or some small print stating that the customer should expect to receive occasional informational/promotional emails, and only send emails to customers who opt in or have had the chance to read the small print? Thanks in advance for your advice on this. The app is called countingdownto.com.”
I like the way he phrases it. There are legal implications, and then there’s the ethical side. “Ethical” seems a little strong; it’s more your own moral compass and how it feels. What are your thoughts on this, Mike?
Mike: I think we have to provide the standard podcast disclaimer that we’re not lawyers and you can’t rely on us for legal advice, especially given that this business is based in Canada so laws there are different than they are here in the US. Given that, my thoughts on this are that if somebody is signing up for your SaaS product, there are a couple of different types of emails that you could send to them.
When they purchase it and then you send them a receipt, that’s what’s considered to be a transactional email. They purchase something from you, you send them a receipt. That’s completely legit. You are almost expected to send that email and that does not fall under any sort of spam laws that I’m aware of. When you get past that and you start talking about newsletters, and product updates, and things like that, that’s where it gets into the gray area which I think that you’re a little concerned about.
If you’re running a SaaS, it seems to me like you almost have to have people on a newsletter of some kind or a product updates email list of some kind. That said, I would default to adding them to it but I would explicitly give them the ability to opt out at any time. Even if they are an ongoing customer, you might want to segment that list a little bit to give them a profile page that says only email me about product updates and include me in my newsletters. Two different options and then you segment your list based on those two things.
That way, if somebody wants completely out, and they only want the receipts from you, you can still send those email receipts to them. But if they also want to receive your newsletter and the product updates, you’re still going to be sending those to them. Again, just segmenting between those two types of people or if there are other segments that you want to include in that, I would do that, but I would opt them in by default and then let them choose otherwise if they want to. That’s probably the way that I would approach it for that.
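Mike’s segmentation idea can be expressed as a simple preference filter over your customer records. A minimal sketch, where the field names and sample data are hypothetical, not from any real product:

```python
# Hypothetical customer records with per-list opt-out flags.
# Receipts (transactional email) always go out regardless of these flags.
customers = [
    {"email": "a@example.com", "wants_newsletter": True,  "wants_product_updates": True},
    {"email": "b@example.com", "wants_newsletter": False, "wants_product_updates": True},
    {"email": "c@example.com", "wants_newsletter": False, "wants_product_updates": False},
]

def recipients(customers, list_name):
    """Return addresses still opted in to the given list."""
    return [c["email"] for c in customers if c.get(list_name)]

newsletter_list = recipients(customers, "wants_newsletter")
updates_list = recipients(customers, "wants_product_updates")
print(newsletter_list)  # ["a@example.com"]
print(updates_list)     # ["a@example.com", "b@example.com"]
```

The design point is that transactional receipts are never gated by these flags; only the newsletter and product-update sends consult them.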
But again, there are obviously legal implications as well. As to what Rob said about the moral implications of it regarding your moral compass, if you sign up for a SaaS, you expect to get emails telling you, “Hey, this is how this product can provide additional value.” I think what you’re probably seeing is that when you sign up for new products, people are sending you a lot of onboarding emails. I think those fall into that grey, fuzzy area where a SaaS vendor could easily overwhelm you with a lot of email to the point that you start to consider it spam more than anything else, especially if you decided not to use the product.
There is a difference with those onboarding emails where you don’t have the opportunity to opt out, because some vendors will send those and you can’t opt out of them. That’s part of their onboarding process. Depending on whether you allow people to opt out or not, that factors into it.
Rob: Yeah. In the US, CAN-SPAM says if you have a commerce relationship or customer relationship with someone, you can send emails to them related to the business transaction. Otherwise, I think there’s always this grey area. It’s like if you read the verbiage exactly, you could interpret it one way or the other. But in general, you do see SaaS apps where, if you sign up, they don’t explicitly make you opt in and they send email, and people are not being investigated by the FTC or FCC or whoever would investigate that. I think the precedent is that this is generally considered a legal thing.
Like Mike said, your moral compass is going to vary. There are people who will say, “Oh, I should explicitly opt in for every single email you’re going to send me.” People get so far into the protection of your email address and your inbox and that no one should ever email you and then on the other end of the spectrum, there are spammers. You gotta ask yourself what’s it worth. There’s some business value to you.
People generally do want to hear about feature updates, specifically feature updates to the app they’re using. Absolutely! Who doesn’t want to hear that? It’s pretty rare that people unsubscribe. Like Mike said, you should give them the opportunity to do that. Even if you’re sending things that are upsells, my guess is that’s going to be pretty valuable information for people, because if it’s an upsell, it’s likely going to have some value that a good chunk of your list is going to at least be interested in hearing about.
Everything you’ve mentioned in terms of content, it’s not like suddenly you’re taking customers and just starting to send them random blog posts. These are really applicable to what they’ve signed up for and what they’re paying you money for. I don’t personally have any kind of issue with you doing that. It sounds like you’re not going to be overly promotional based on how carefully you’re thinking about this, which some people don’t do.
Anyways, that’s my take on it as well. It sounds like Mike and I are pretty much in line on this. I think that’s the general consensus of the industry and where it’s at today.
Mike: Yeah. Coincidentally, it’s the first of the month that we’re recording on and I’m getting ready to send out an email to the people who are current subscribers for Bluetick to tell them, “Here’s the list of all the different updates that have been added over the past four to six weeks that you might be interested in hearing about, or maybe you didn’t know about this feature, or this was something that was requested by a couple of people.” Just to let them know, because there’s a difference between features that a specific person requested, where you can let them know when those things go in, and letting the rest of your customers know that that new feature has been added. Otherwise, they won’t know anything about it.
I’ve seen a couple of support requests come in and say, “Hey, is it possible for us to be able to do this?” I’m like, “Yup. I actually just added it a week ago and I haven’t gotten around to sending out the product updates from the past month.” I do think that it’s a good practice to get into. Just be sending those out, especially if you’re running a SaaS where it’s regularly changing and the expectation of the customers is that over time, that SaaS app is going to get better.
That matters a lot more I think in the early stages than when you’ve got a late stage SaaS where it’s like you’re not really adding a lot of features. You’re adding scalability and things like that. But if you’re adding features at a very fast pace, it’s probably best to email them on a monthly basis and say, “Hey, here’s all the new things that are going in that you’re getting essentially for free just for being a paying customer.”
With that, I think that wraps us up for the day. If you have a question for us, feel free to call it into our voicemail number at 1-888-801-9690 or you can email it to us at questions@startupsfortherestofus.com.
Our theme music is an excerpt from We’re Outta Control by MoOt used under Creative Commons. Subscribe to us in iTunes by searching for Startups and visit startupsfortherestofus.com for a full transcript of each episode. Thanks for listening and we’ll see you next time.
Episode 258 | 9 Common A/B Testing Mistakes
Show Notes
In this episode of Startups For The Rest Of Us, Rob and Mike talk about the common mistakes people make when A/B testing.
Items mentioned in this episode:
Transcript
Mike [00:00]: In this episode of “Startups for the Rest of Us,” Rob and I are going to be talking about nine common A-B testing mistakes people make. This is “Startups for the Rest of Us” Episode 258.
[00:07]: [music plays]
Mike [00:15]: Welcome to “Startups for the Rest of Us,” the podcast that helps developers, designers and entrepreneurs be awesome at launching software products, whether you’ve built your first product or you’re just thinking about it. I’m Mike.
Rob [00:24]: And I’m Rob.
Mike [00:25]: And we’re here to share experiences to help you avoid the same mistakes we’ve made. What’s going on this week Rob?
Rob [00:29]: You know, at this point, it’s almost October 1st, and that gives us just about six weeks to get our stuff done before Thanksgiving happens here in the States. And frankly, the world goes into kind of holiday mode, it seems. So I’ve really been thinking about what the next six or seven weeks holds. I know by the time this goes live, it’ll probably be mid-October. So you only have a month, but think about that with whatever you’re doing. If you have another two to three weeks left to develop a big feature that you wanted to launch, you’re probably not going to launch it until January at this point. Maybe you could launch it first week of December, but it’s just so choppy from here on out. And I know there’s still three months left in the year, but it just starts getting touch-and-go here over the next few months as kind of the retail space takes over. It’s great for eCommerce, not typically that good for B-to-B apps in terms of growth, and in terms of finding new customers. Because our heads are full of plans for hanging out with family and doing other things. And a lot of folks are not necessarily thinking about what’s the new app they can sign up for and have a new trial running over their Thanksgiving break or Christmas break. How about you, what’s going on?
Mike [01:35]: Well, I went on a personal retreat last weekend. And I found it really helpful. I came out of it kind of settled and at peace with myself on a number of different fronts, and more or less really raring to go and go after a couple of different key points that came out of my retreat. So kind of looking forward to the next six to twelve weeks and see what happens.
Rob [01:53]: When you say personal retreat, was it, you thought about work stuff or personal as well?
Mike [01:58]: Personal and work, so like I kind of dedicated some time to think about like things that were going on in my personal life and then also, in terms of like the business, and where I want to go and kind of what I want to do next. So I definitely sat down and analyzed a lot of things that have gone on over the past few years. Again, both on the personal front and on the business front. And then just helps me to make decisions on where to go next.
Rob [02:19]: And where are you going next?
Mike [02:21]: Well, it’s going to be validation for a couple of different ideas. And I basically went through a process that I had laid out in my book last year, and started ranking some of the different ideas and trying to figure out which one I should start validating first. So I kind of got it narrowed down a little bit. And right now, I’m going through the validation process with a bunch of people. I’ve had half a dozen conversations, and I’ve got at least half a dozen more scheduled over the next several days. And basically, like every day one of my tasks is to go through and start lining up more people to talk to for those conversations. So they’re going well so far. And I’ll just kind of see where it takes us. Until I get further through the process, I’ll probably just keep it quiet, I guess a little bit quiet for the time being. But I will talk about it probably a lot more on the podcast as things progress.
Rob [03:03]: Very nice. We have a slew of new iTunes reviews, and I won’t go through them all, but we had one from a couple of months ago, and it says, “I listen while I eat sandwiches. Really enjoy listening to the show, general candor and advice. Your knowledge is being used directly to introduce [Deli?] Empire, which includes a couple of different brands,” he says, “called Sandwich [Knob?] and Seafood [Dial?] to the world. Looking forward to catching up on the newer episodes.” Another review says, “The last couple of months has been like watching Netflix.” And EA760 says, “Binged on all Rob and Mike’s past episodes and now have to wait a whole week for new episodes. Each episode is full of so much great info for the self-funded entrepreneur. It’s the only podcast I listen to at 1x speed, because I’m constantly rewinding to take down notes. Thank you for the consistently invaluable advice and insight.” So thanks so much for your 5-star reviews. Even if you don’t want to write a full commentary like that, if you could log into iTunes, Downcast, Stitcher and plunk us a 5-star if you get any value out of the show, we would really appreciate it.
Mike [04:03]: One other thing to note: about a month ago, I did an interview with Matthew Paulson on Email Marketing Demystified. And his book just came out, so it’s listed on Amazon right now, and we’ll link that up in the show notes. But I just wanted to mention that because the episode did come out a little while ago. So before we dive into today’s episode, there is a listener question that we want to address. And this one comes in from Christian, and he says, “Blockers like Ghostery, Privacy Badger, etc. are blocking third-party widgets like Qualaroo, Mixpanel and others. As content blockers grow in popularity, do you think this will kill JavaScript widget products?”
Rob [04:35]: So I think that the answer is no. I do think it can have an impact on them. I also know that some of these third-party blockers are getting a little, I’ll say ambitious, and they’re blocking things that aren’t exactly ad widgets. And if you report it – like at one point, the Drip JavaScript got blocked by an ad blocker. And when I reported it, they said, “Oh, you’re not an advertising platform. That’s okay.” And they unblocked it. So I don’t have experience with Ghostery or Privacy Badger, to know if they’re doing this intentionally or if it’s accidental, because technically most of these guys don’t want to block non-ads. They really just want to block ad networks and that kind of tracking and cookie stuff. I think some of them are getting, like I said, a little ambitious. And if they start to dive in and block things like Qualaroo or Mixpanel and other things, they’re going to start breaking the web at some point. Because people are relying on these services often for some mission critical functionality. So in my estimation, number one, most people aren’t going to install these ad blockers. I mean, it’s a lot of people in our circles. But if you look at the total number of people on the internet versus the total number of people with ad blockers, it’s a tiny, tiny percentage. And then secondarily, I think that it’s kind of like you can go around the internet with cookies blocked and JavaScript blocked, but it breaks a lot of things. And that’s where these guys are going to have to find their line, because if they push it too far and it actually starts breaking things that the site needs to do its fundamental functions, then they’re going to have to back off with that. And I think there will be backlash if they push it too far. So to answer the original question, I don’t think these are going to kill JS widget products.
They may put a small damper on it on the short term as things adjust, but long-term, I feel like these things are going to continue.
Mike [06:17]: Yeah, I’m in agreement with you. I don’t think that they’re going to take over the world and get rid of every JavaScript widget product that’s out there. The fact of the matter is there’s a bunch of these products out there, and these types of products have been around easily for at least 10 years. And this is really just a variation of an ad blocker, and those have been out for at least 10 years, and I still don’t use an ad blocker. I can’t name anybody off the top of my head who does. I know that people do use them and I know that historically I’ve seen them advertised around and seen different places where they are present. But the bulk of the internet is not using them, and I just don’t think they’re going to become popular enough to make that significant of a dent in the businesses that they’re trying to, I guess, go against.
Rob [07:02]: So what are we talking about today for the main topic?
Mike [07:04]: Well, today’s topic is nine common A-B testing mistakes that people make. And this topic doesn’t come from any specific source, but I did use a bunch of different resources from Kissmetrics and QuickSprout and ConversionXL to essentially put together this list. So some of the things are repeated on their lists, but if you were to total up all of theirs, there’s probably 25 or so. It seemed like there was a bunch of low-hanging fruit that was highly overlapped between them. So I essentially concentrated on those for the outline for this episode. But the main gist of this is that a lot of people will advocate that you do A-B testing, but when you start getting into A-B testing, there is a ton of things that people will simply accept as true or fact. And unfortunately those things have a lot of subtle nuances to them. I think the main idea will become a little bit more clear as we start going through these different mistakes that people are making.
Rob [07:54]: Let’s dive in.
Mike [07:55]: So the first one, is testing random stuff. And everyone’s probably heard about the 41 shades of blue test at Google where they were testing 41 different shades of blue to figure out which one would convert better on one of the Google buttons. And I would classify that as essentially random stuff. Because I look at something like that, it’s like, why would you even test that? What would possess Google to do that? And the fact is that they’ve got that kind of time on their hands to be able to do something like that, and they don’t know what else to do. And my question would be, is that really the best place in your sales funnel to be testing? And in their case, it may very well have been, because they’ve only got a one-page website that they want people to click through on that button. So for them, it kind of makes sense. But if you’re going to start A-B testing, really figure out where you could make the biggest impact on your sales funnel. Don’t just randomly test stuff on your website, changing buttons from one color to another. Really look at your sales funnel and try to analyze where you could make the biggest impact. So for example, say you have a 10-stage sales funnel, you get 10 sales per month, and 60 people reach the last stage of your funnel. Well, if you can convert an extra 10% of those 60 people, that’s six more customers, and that raises your revenue by 60%. That 10% increase in the last-stage conversion rate raises your revenue by 60%. So the key piece is definitely figuring out where you can increase revenue; knowing where to spend your testing effort is really, really important.
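The funnel arithmetic in that example works out; here is a quick worked version using the hypothetical numbers above (60 people reaching the last stage, 10 currently becoming customers):

```python
# Worked version of the funnel example: small absolute changes late in
# the funnel can produce large relative revenue gains.
last_stage_visitors = 60
current_customers = 10

# Converting an extra 10% of the 60 last-stage visitors yields 6 more customers.
extra_customers = int(last_stage_visitors * 0.10)  # 6
new_customers = current_customers + extra_customers  # 16

revenue_lift = (new_customers - current_customers) / current_customers
print(f"Revenue lift: {revenue_lift:.0%}")  # prints "Revenue lift: 60%"
```

This is why the advice is to test the stage where a win moves the numbers that matter, rather than a button color on a low-impact page.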
Rob [09:25]: Yeah, and I mean, to answer your earlier kind of ponderance of Google is I think the reason they test all the shades of blue is because they have such an enormous volume, that they can, and they can get definitive results and they can repeat them. Whereas, with someone, if you have 10,000 uniques a month or 20-30,000 uniques a month, you can’t test that many things, and it does become a waste of time. So the point you’re making here is don’t test things that aren’t going to have a really big impact on your numbers, on the numbers that matter too, right? It’s what’s going to have the biggest impact on revenue or on trial count, and look at those. Focus on those first, because that is your low-hanging fruit. And you’re not off wandering, trying to test button colors, which at some point may be worthwhile, but with our meager 30-40,000 uniques a month, it might take you quite a long time to get any type of result from that.
Mike [10:13]: Yeah, and all of that is about, just having some sort of a methodology that you’re following and knowing where you can get the most reward for the things that you’re doing or identifying whether you’re at a local maximum to do broad testing or if you really need to narrow down your focus on to those little things, like as you mentioned, the button colors. I mean, maybe that’s the best place for your time. But again, just testing random things, is not a good strategy. Like, you have to have a strategy going into this.
Rob [10:40]: And the second common A-B testing mistake is assuming that tests other people have run are going to turn out the same for your business. So it’s basically looking at a website, reading a blog post about someone who ran a test where the red button outperformed the green button, and then just changing all your buttons to red without testing. The point here is that you shouldn’t take the stuff as face-value that you read on the internet. Not that it would be fake, but it’s just not going to work for your audience. So good examples of this are, do you ask for credit card up front, or do you allow credit card without a trial, right? So that could be one thing that you might read about and you hear the rules of thumb. We talk about them here on this show for sure, and there’s other discussions. And you can take that as a default and then test it. Same thing with button colors. There’s all this debate about what button colors work well and orange and yellow are often cited as the best. So that’s kind of the design rule of thumb that I would start with a my control, and then I would test from there – even long form versus short form, landing pages or home pages, same thing. You’ll hear certain copywriters swear by long form, and then you’ll hear people run tests and have there be no difference and even have the long form perform worse. We actually had this with Micropreneur.com landing page, where we had a short form, a long form and basically a medium form. And the middle one outperformed the other two in repeated split tests, and that’s never something that I’ve heard in particular, kind of a mid-form page would work. But once we tested it, we realized that was the best result for us.
Mike [12:10]: The third common mistake people make is not running tests long enough. Your tests need to be run long enough in order for it to be statistically significant. And I’m not going to go into the math behind that, but essentially you need to be reasonably sure and confident that enough people have gone through that test that you can look at that and say, yes, this is statistically significant. I would say rule of thumb for using something like this, is if you’re not sure what that really means, then you should have at least 100 conversions coming through on the tests that you’re running. And that’s not 100 visitors, it’s 100 conversions actually going through in both directions. And then you also want to run those tests for at least one to two weeks. Don’t run a test like over the weekend or for a partial week. Running it longer is generally better. Obviously there’s exceptions to that, but you want to run it for at least one week, if not at least two weeks, so that you’re not dealing with any sort of variation where you get this influx of traffic from a particular website, and then it drops off very quickly, because those types of events that happen can radically alter the skew of the numbers and the percentages and how those number map out inside of you’re A-B test.
Rob [13:13]: Yeah, the formulas for A-B testing and getting the statistical significance are a little bit complicated. And there is a website out there, and I forget who does this, but he basically split-tested the same two pages against each other, and they will have different statistical significance, like one will outperform the other by 10 or 20% with statistical significance. And that’s just kind of a downer to think about it, that split tests are not, they’re not fool-proof basically, right? You’re not going to get the exact same response from the same page if you split the audience. It is fallible. And that’s how I kind of think about split testing, is it’s like a good guide, it’s the best we can do, but there’s always room for error, not in terms of the math itself being wrong, but just that the way the audience is split is not going to be identical. And there’s room for it not to be statistically significant, even if it looks like it should be. It’s kind of easy to trip yourself up on that, I think. The fourth common A-B testing mistake is not killing insignificant tests. What you’ll notice is that most of the tests you run are not going to show statistically significant differences. And if you have a bunch of tests running or even if you have one test and it’s running for a long time, and there’s not a significant difference, then it’s basically a distraction, right? It could negatively influence other tests. It could also be a waste of your time because you only have so many visitors and so much time to run these tests that you really want bigger wins. You don’t want marginal ones that are going to take forever to identify themselves, but still not be significant. The nice part about getting something drastically different in terms of results is those tests tend to run very quickly, and those are the kind of wins you want to go after. 
So if you start a test and things are close, I tend to shut them down pretty early and try to do something more dramatic to get a more dramatic result.
Mike [15:02]: The flip side of that is that you don’t want to ignore small gains if they are significant. So if you’re able to get a small statistically significant gain over the course of two weeks or four weeks or something like that, there’s no reason not to implement it. Because if you can get a 1% day over day gain, then over the course of a year, that’s a 37x gain. So there are clearly benefits to doing that, but only if it actually is statistically significant. The fifth mistake is to not consider the needs of your prospects. So when you’re looking to determine which tests you’re going to run, think about how this test is going to impact your prospects. When they’re looking at a website that you’re changing, what information are you taking away or adding to the site, and specifically why are you doing that? How is it going to help them or how is it going to benefit them? Is it going to move them closer to a decision faster, or is it going to do so at the expense of making them feel tricked down the road and experiencing some sort of buyer’s remorse? That’s something you have to be a little bit careful of. And you could recognize it if you see higher conversions, but also higher churn down the road. And unfortunately you’re not going to see that churn until they get past the point of paying you, and they ask for refunds or cancel your service. So you will see those effects a little bit further down the road, and you may have to back pedal on some of your tests, especially if one showed a significant increase and you implemented it as a permanent change, and then down the road you see that churn. You’re going to have to back that out. But those are the types of things that you need to watch for, being mindful of what the person who is looking at the website is seeing.
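The compounding claim above is easy to sanity-check: a 1% day-over-day gain compounded for a year really is roughly a 37x multiple.

```python
# Compounding a 1% daily gain over a 365-day year.
daily_gain = 1.01
yearly_multiple = daily_gain ** 365
print(round(yearly_multiple, 1))  # ≈ 37.8
```

Of course, no real funnel improves 1% every single day; the point is just that small, genuinely significant wins are worth keeping because they compound.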
Rob [16:33]: Our sixth mistake is trying to move too fast. What we mean by that is testing too many things at the same time on the same page is really hard. This is multivariate testing; you need a lot of traffic to do this. And you kind of need even more of a plan because of the complexity you run into. Same thing with running too many simultaneous tests, right? If you’re testing headlines on the homepage and then something on the tour page and then something else on the pricing page, these things can interact with each other. And it’s easy to make a mistake and either misattribute something or, like you said with the fifth mistake, it’s easy to make a change and then not realize there’s a downward element to it that doesn’t show up for two or three months. So if you’re just starting out or you’re really not an expert in split-testing, this is something I always tell people to do: don’t jump into multivariate testing. Run one test at a time until you feel comfortable with it, and you feel like you’re able to make progress and interpret the results, before you jump in to try to do any type of multivariate stuff.
Mike [17:30]: The seventh mistake is to not use any widely accepted A-B testing tools. And there are a lot of A-B testing frameworks and libraries out there. Some people will decide that they’re going to build their own. I would question whether or not that’s a good use of your time, because there are so many good tools out there. In terms of the A-B testing frameworks and libraries themselves, I would be somewhat cautious about using something that was an open source library or something that was freely available, just in case the math behind it doesn’t work. And I did run into a case where I started using a library and then realized after the fact that some of my results didn’t seem to quite be matching up with what I would have expected, and I started digging. And I found out that the way they had implemented the A-B testing framework itself, was actually wrong. It wasn’t a real A-B test, it was actually kind of, I forget the specifics of it, but it literally did not work right. So you do have to be careful when you start going out and doing those things. The common wisdom is to basically do an A-A test where you’re testing the same thing against itself to be sure that the tool you’re using is a valid tool to be using. But there’s a lot of different things that you can use like Google Analytics, Optimizely, Visual Website Optimizer, Unbounce, all these types of things make your life easier when you’re trying to do A-B tests, and make sure you’re getting statistically significant results.
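The A-A test Mike describes is straightforward to simulate yourself. A minimal sketch, assuming a made-up 5% baseline conversion rate: both arms get the same rate, so any measured "lift" is pure noise, and a sound tool should call the difference insignificant most of the time.

```python
import random

def aa_test(n_visitors=10_000, base_rate=0.05, seed=0):
    """Simulate an A/A test: both arms share the same true conversion rate.
    Returns the observed conversion rate of each arm."""
    rng = random.Random(seed)  # seeded for reproducibility
    conv_a = sum(rng.random() < base_rate for _ in range(n_visitors))
    conv_b = sum(rng.random() < base_rate for _ in range(n_visitors))
    return conv_a / n_visitors, conv_b / n_visitors

rate_a, rate_b = aa_test()
print(abs(rate_a - rate_b))  # small, but rarely exactly zero
```

If the framework you’re evaluating declares a "winner" here with high confidence more than about 5% of the time (at a 95% confidence level), its math is suspect.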
Rob [18:48]: Our eighth mistake for A-B testing is that your tests should have a specific hypothesis. In other words, don’t run a test just to “see what happens.” Run it with a desired result in mind, typically this is to increase the number of people who click to the next page, or it’s to increase the number of trials that you get out of this, or increasing engagement with a particular page. The problem with running one just to see what happens is it’s hard to learn anything unless you actually have a goal in mind. So you might see the result, but you aren’t able to match it against your beliefs about what is actually going on. And frankly, if you’re just testing to see what happens, your time is probably better spent elsewhere rewriting copy on another page or just doing some other type of marketing activity.
Mike [19:30]: Yeah, I think the basic idea behind this is to really kind of challenge what your own beliefs are. To make sure that, one, you’re on the right track, and, two, that if you’re not on the right track, you can be corrected. As Rob said, if you’re just running the tests to see what happens, the reason you’re not learning anything is because there’s no opportunity for you to be wrong. And that’s really what you’re trying to do: find those places where you believe something to be true and you can either verifiably prove it, or verifiably disprove it. If you just do it to see what happens, there is no opportunity for that. And the last A-B test mistake that people make is to ignore the potential for skewed or broken tests. Don’t ever assume that all your data is correct. There are plenty of opportunities for bad tooling or for things to be blocked somewhere in the communication chain. There could be seasonal shifts. So for example, if you run a test in the middle of December, then there’s probably a very high likelihood that that test is going to be dramatically impacted by the amount of online shopping that’s going on. There are definitely seasonal things that can happen throughout the year. Over the summer, for example, people are searching for different things than they are in the winter. You can also run into issues where, if you’re sending out emails to your email list, those subscribers are going to have a bit of familiarity with who you are and what your website looks like. So those people are going to have different conversion rates than a new visitor to your website. And then of course, you have to also deal with the fact that there are sometimes parts of your website that are just going to be broken in a particular web browser. And you may or may not know that.
So those are the types of things that you need to be at least mindful of and realize that A-B testing itself does generally work mathematically, but there are always those things that you have to be careful of because it’s not a foolproof mechanism. It’s a tool, and like any other tool, it can be broken in certain ways. So you have to be careful that you just don’t accept everything as fact, and dig a little bit to make sure that nothing’s going wrong. So to do a quick recap, the nine common A-B testing mistakes are: 1) Testing just randomly instead of having a plan. 2) Assuming that other tests are going to be valid for your business. 3) Not running tests long enough. 4) Not killing insignificant tests quickly enough. 5) Not considering the needs of your prospects. 6) Trying to move too fast. 7) Not using widely accepted A-B testing tools. 8) Not having a specific hypothesis in mind when doing A-B tests. 9) Ignoring the potential for skewed or broken tests.
Rob [22:02]: If you have a question for us, call our voicemail number at: (888) 801-9690, or email us at: questions@startupsfortherestofus.com. Our theme music is an excerpt from “We’re Outta Control” by MoOT used under Creative Commons. Subscribe to us on iTunes by searching for “startups.” And visit: startupsfortherestofus.com for a full transcript of each episode. Thanks for listening, we’ll see you next time.