What is Validated Learning and how can it be applied to Kanban?

I’ve been reading The Lean Startup by Eric Ries and he has introduced me to the concept of Validated Learning. Validated Learning is the practice of effectively measuring the accuracy of your assumptions: you use the results of the validation to understand whether an assumption was correct and, if so, continue on to the next test. If not, you decide whether your strategy, assumption or feature needs to be improved, or whether to change direction.

The main thing that I found interesting was an example of how this could be integrated with Kanban, a practice that I have been an avid user of since launching Mousebreaker.com in 2008.

The example spoke of the founder of Grockit, an online teaching site that enables students to learn either socially with other students online or individually. They considered a user story not actually delivered until it had been confirmed as a success through the measurement of Validated Learning.

In addition, each feature is aligned to the product’s strategy and therefore has a set of assumptions associated with it. The process of Validated Learning either proves or disproves those assumptions.

If it is found that a feature has not been a success and has not actually improved the product, then that feature is removed.

In the example, Grockit uses A/B Testing and cohort analysis to validate the success of the feature being delivered by the development team.

A/B testing allows you to serve different versions of pages, or parts of pages, to different proportions of your users and measure the impact on a metric such as registrations or visits to sections of your site – although you can measure much more than this. You can then determine which version works best and make the winner available to all users.
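
As a rough illustration of the mechanics – this is my own sketch, not Grockit’s implementation, and the cookie name and showSignupForm function are invented for the example – an A/B test can be as simple as randomly assigning each new visitor to a variant, persisting the assignment so they always see the same version, and reporting the variant to your analytics tool:

*** CODE ***

// Assign the visitor to variant "A" or "B" once, then remember the choice.
function getVariant() {
  var match = document.cookie.match(/(?:^|; )abVariant=([AB])/);
  if (match) {
    return match[1];  // Returning visitor: reuse their existing variant.
  }
  var variant = Math.random() < 0.5 ? "A" : "B";
  // Persist the assignment for 30 days so the test stays consistent per user.
  document.cookie = "abVariant=" + variant +
                    "; max-age=" + (30 * 24 * 60 * 60) + "; path=/";
  return variant;
}

var variant = getVariant();
showSignupForm(variant);  // Hypothetical function that renders version A or B.

// Report the variant so that registrations can later be segmented by it.
pageTracker._trackEvent("ABTest", "SignupForm", variant);

*** END CODE ***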

Cohort analysis is very similar but takes place a step before the A/B testing. You may, for example, use a number of different marketing channels with a number of different messages. Each user that arrives via a particular channel or message is associated with that channel or message and then tracked throughout their lifetime on your product. This allows you to understand, for example, the cohort group’s lifetime value, so that you can understand the value of those messages or that channel.
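
As a hedged sketch of how that tagging might look using a visitor-scoped Google Analytics custom variable (a mechanism covered in more detail below) – the slot number and variable name here are my own choices: read the campaign source from the landing URL and store it against the visitor, so every subsequent visit from that user carries their original channel.

*** CODE ***

// Read the acquisition channel from the landing page URL, e.g. ?utm_source=newsletter
var source = window.location.search.match(/[?&]utm_source=([^&]+)/);

if (source) {
  pageTracker._setCustomVar(
    5,                             // An assumed spare slot - there are only five.
    "Channel",                     // The name of the custom variable.
    decodeURIComponent(source[1]), // e.g. "newsletter": the channel the user came from.
    1                              // Visitor-level scope, so it persists across visits.
  );
}
pageTracker._trackPageview();

*** END CODE ***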

You can use each of these methods on its own or together, and measure against the metrics that matter to your business. What you are looking for is actionable data: something you can make a decision on, or use to understand the value of a change.

I really like this concept; however, it does seem to go against the basic principles of Kanban. Kanban is an agile methodology that is used, in part, to eliminate waste during the build process of a product through practices such as limiting work in progress and kaizen – the constant improvement of the process.

My question is whether removing a feature that is not deemed a success counts as wastage; and if it is, could that have been avoided?

Whenever I have built a web site and we have launched a new feature, we have measured and iterated based on the data collected until it has worked – a form of validated learning. But if a feature has not worked, I have seldom, if ever, removed it. My main thought on this is that removal would waste the time spent developing the user story.

Perhaps I would have removed the feature if it had a detrimental effect on the metrics used to validate the functionality.

I would be happy to do this as long as it informed decision making in the future. This is what Eric Ries is arguing: a user story or feature that is being developed is actually a test of a theory or an assumption. You are assuming that this new feature will improve the product and will improve a core measurement of your business.

If it does not do this, why keep the feature live? If it does not deliver enough improvement, why keep it live? As long as the lessons learned help to change the strategy or assumptions around your business, you should be able to reduce wastage, as your strategy will be directly influenced by validated learning through the use of your product.

I really like this concept and will keep this in mind when I next get the opportunity. I will definitely be posting about it once I have some personal evidence of this in practice.

Please comment below if you have used Validated Learning in practice and let me know how you have found it.

What is Cohort Analytics?

Cohort Analytics in this context is the measurement of how often a user returns to a website over a given period of time.  The better you understand how well you retain your users, the better you will understand how to monetise them – which is what we are all chasing in digital publishing.

This is not the same as the standard vanity stats for return visits or visitor retention that you may find in Google Analytics or Adobe’s SiteCatalyst; cohort analysis gives you a more detailed understanding.

Also, if done correctly, you can use cohort analysis to measure not only general users of your website but also registered users, logged-in users, and users that purchase or convert in some other way.

Cohort analytics is quite a new concept in digital publishing.  In the past, CPMs were high enough that publishers only had to worry about unique users, visits and pages. But with the advent of Google AdSense and Facebook Ads, advertisers can now target audiences, so publishers need to focus on other ways to measure and monetise their audiences.

How to measure retention

Cohort Analytics is not something that is available out of the box with most standard web analytics tools.  Unfortunately it takes a bit of hacking to get it to work with Google Analytics – even then, it is quite limited as you can only measure over five units of time.  This will all become apparent shortly.

In addition, this explanation will only give you a basic overview for general tracking.

Google Analytics allows you to configure custom variables – of which there are five – and these variables can persist over a number of visits.

See Google’s documentation on custom variables for more detail.

A custom variable should be set for each time period that you wish to track.  In this instance we will track user retention over a rolling five-month period.

Here is some example code for month one using the first of five custom variables:

*** CODE ***

pageTracker._setCustomVar(
      1,                   // Slot #1 for the first month of the rolling window.
      "Month",             // The name of the custom variable.  Required parameter.
      "January 2011",      // Sets the value of "Month" to "January 2011" for this visitor.  Required parameter.
      1                    // Sets the scope to visitor level, so the value persists across visits.
);
pageTracker._trackPageview();

*** END CODE ***

Once this code is on your site, it will need to change each month. The two parameters that need to change are the first (the slot number) and the third (the value): the slot increments and the value becomes the new month when month two begins.

*** CODE ***

pageTracker._setCustomVar(
      2,                   // Slot #2 for the second month of the rolling window.
      "Month",             // The name of the custom variable.  Required parameter.
      "February 2011",     // Sets the value of "Month" to "February 2011" for this visitor.  Required parameter.
      1                    // Sets the scope to visitor level, so the value persists across visits.
);
pageTracker._trackPageview();

*** END CODE ***
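
If editing the snippet by hand every month becomes a chore, you could derive both parameters from the current date instead. This is my own variation on the hack rather than part of it; it cycles through the five slots on a rolling basis:

*** CODE ***

var now = new Date();
var monthNames = ["January", "February", "March", "April", "May", "June",
                  "July", "August", "September", "October", "November", "December"];

// Whole months elapsed since the start of tracking (January 2011 here).
var monthsSinceStart = (now.getFullYear() - 2011) * 12 + now.getMonth();

pageTracker._setCustomVar(
      (monthsSinceStart % 5) + 1,                            // Cycle through slots 1-5.
      "Month",                                               // The name of the custom variable.
      monthNames[now.getMonth()] + " " + now.getFullYear(),  // e.g. "February 2011".
      1                                                      // Visitor-level scope.
);
pageTracker._trackPageview();

*** END CODE ***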

Once this has been implemented correctly and has gathered enough data, you will hopefully see some nice results. In addition, if you are clever with your naming and strategy, you will be able to measure much more than this.

The results

[Image: Cohort Analysis – table of monthly retention percentages]

Hopefully you will see something like the image above after a period of time.  The table shows how many users return in the months following their first visit.

The reason the Month 1 statistics are all 100% is that all users in Month 1 are new. Month 2 shows how many of Month 1’s users returned in Month 2, Month 3 shows how many returned in Month 3, and so on.

By fully understanding how long your users keep coming back to your site, you can really start to focus on some new metrics.

You will be able to work out the Lifetime Value (LTV) of your users, which helps you decide how much you can afford to spend on marketing. By understanding this, you can ensure that your marketing stays profitable.
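
As a worked illustration (all of the figures here are invented for the example), LTV can be estimated from the retention curve and the average revenue each active user generates per month:

*** CODE ***

// Retention rates from a cohort table: month 1 is always 100% (1.0).
var retention = [1.0, 0.4, 0.25, 0.15, 0.1];  // Hypothetical figures.
var revenuePerUserPerMonth = 0.50;            // Hypothetical: £0.50 of ad revenue.

// The expected number of active months per user is the sum of the curve.
var expectedMonths = 0;
for (var i = 0; i < retention.length; i++) {
  expectedMonths += retention[i];
}

var lifetimeValue = expectedMonths * revenuePerUserPerMonth;
// 1.9 months x £0.50 = £0.95, so acquiring a user for less than
// £0.95 should keep your marketing profitable.

*** END CODE ***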

You can also start to focus your development attention on lengthening the lifetime value of your users.