Why we rolled our own AB testing tool at POP


We’re happy to announce that we’ve unleashed our custom-rolled A/B testing tool, Winston, to the masses. While we’re currently only using it in our development and test environments, we’re satisfied enough with its current alpha stability to let others know they can dig in and have at it.

With all of the options for A/B testing out there, you may ask why on earth we would roll our own. Some of our core reasons are as follows:

Crowdsourcing software benefits everyone.

By building this library and making it publicly available, we’ve opened the doors to accept bug reports, bug fixes, feature requests, enhancements, and more. By promoting an open source environment, we hope to help future startups get up and running more quickly with battle-tested solutions. This is a community effort; we just happen to be championing it.

We like to keep things in-house.

If we run into problems, it’s much easier to diagnose and fix them ourselves than to rely on third-party support. This is definitely a trade-off, as we increase our maintenance burden in favor of reduced reliance on others. We believe this approach leads to better stability, security, and uptime as our libraries mature. The biggest gain, in our opinion, is that we can debug our own code. There’s nothing worse than finding a bug in someone else’s software and not being able to patch it.

We control the data.

There’s much to be said for controlling your own data. It’s easy to manipulate, and we can mash it up with the plethora of data we collect on our site to improve our service to our customers. There’s no need to use API clients to hit third parties and request datasets; we can just query our own datastore.

We can build tools around our library.

Let’s say we want marketing or a growth hacker to be able to create their own tests and variations. With our own custom tool, and because we’re active practitioners of API-first development, we can add that flexibility fairly quickly. For instance, we’re looking at building a dashboard to manage Winston tests and variations as well as view overall test performance.

We like to learn new things.

We built Winston in our free time. You know, that little amount of time you have between trying to get hockey-stick growth and building out the next big feature. Having previously used a number of popular third-party testing tools, we just weren’t satisfied. Some of them had poor load times and blocked our page render; others were too busy trying to ruin our Angular.js-heavy UI with custom generated HTML pages. The end result for us was to do a bit of research and create a tool that was both minimalist and practical to use. This library isn’t rocket science, nor is it as full-featured as the alternatives, but it fits a very targeted niche for us: run test variations and track performance over time. As a startup that is rapidly iterating and testing new ideas, minimalism has been our mantra.
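The core of that niche, serving each visitor a consistent variation, is a small amount of code. As an illustrative sketch (not Winston's actual API), one common approach is to hash the user id together with the test name, so assignment is deterministic without storing per-user state:

```python
import hashlib

def assign_variant(user_id: str, test_name: str, variants: list) -> str:
    """Deterministically assign a user to a variation.

    Hashing user_id with test_name means the same user always sees the
    same variation for a given test, and different tests split users
    independently. Names here are hypothetical, for illustration only.
    """
    digest = hashlib.md5(f"{test_name}:{user_id}".encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(variants)
    return variants[index]

# A user gets the same bucket on every request:
variant = assign_variant("user-42", "signup-button", ["control", "green-cta"])
```

Tracking performance then reduces to logging an impression when the variation is served and a conversion when the goal is hit, keyed by test name and variation.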

How do you know Winston is safe/accurate?

As with all implementations of A/B testing, the number one rule is not to trust your test results at first glance. We here at POP treat our tests like a series of hypotheses and experiments. You need to take results with a grain of salt, as any number of factors could affect your outcome. For instance, a surge in traffic from a different locality may skew your results. It’s these types of factors that we’d eventually like to build into Winston.
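One concrete way to avoid trusting results at first glance is a standard two-proportion z-test on the conversion counts. This is a generic statistical check, not something the source describes Winston as implementing; the sketch below assumes you have raw conversion and visitor counts per variation:

```python
import math

def z_score(conversions_a: int, visitors_a: int,
            conversions_b: int, visitors_b: int) -> float:
    """Two-proportion z-test statistic.

    Compares conversion rates of variations A and B under the null
    hypothesis that they are equal. |z| > 1.96 suggests the difference
    is significant at roughly the 95% confidence level.
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    # Standard error of the difference in proportions
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se
```

For example, 100 conversions out of 1,000 visitors versus 150 out of 1,000 yields a z-score of about 3.4, well past the 1.96 threshold. Even then, a significant z-score can't rule out confounders like the traffic surge mentioned above, which is why that grain of salt still applies.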

If you’re interested in Winston, the source code is openly available on GitHub. We’d love additional contributors!
