Should Managers Attend Retros?

Recently Luis Goncalves published a blog post about retrospective smells. It’s a good article and worth a look. Although the article in general is interesting and I agree with most of it, I disagree with one point being called an antipattern: “Line Managers want to attend”.

He quite logically deduces that although they might well attend with good intentions, their participation can interfere with team members’ confidence in raising issues and talking freely about the topics at hand. Leaving managers out of retros seems like a proper cure, but I get the feeling we are treating a symptom of a much bigger underlying issue – lack of trust.

Even managers can feel left out

It’s quite common for team members, and employees in general, to be careful when managers are around, especially if they are new to the team or the company. But instead of teaching people to avoid direct communication, it should be fostered. It takes time and effort to build that trust, but eventually the team should feel comfortable talking about real issues directly and openly with management. A properly facilitated retrospective is the perfect place to practice and train this open and direct communication.

This trust will eventually work both ways and it’s extremely important for transparency and openness in the company culture. Still, it will take time and there is no benefit in trying to force it, so I would actually recommend doing retros sometimes with and sometimes without management, depending on how well it goes, and bringing the management in more and more as the trust starts to build up.

There is another benefit in having management present, namely that they are more likely to support and help solve the issues if they understand “why” something should be done instead of just “what”. People can bring their whole expertise and experience to the table only if they have a proper understanding of the situation, and participating in retros can really help in achieving that. People (managers included!) commit to tasks or plans a lot better if they take part in designing them, or better yet: let them be part of the solution.

The retrospective is the place for process improvements, and probably the most important improvement you could ever achieve is fixing the communication gap between the team and the management.

Finally, I’d like to point out that most of the retrospectives I have facilitated had a manager attending and it never seemed to be a problem. Maybe I’ve just been lucky, but I personally intend to push my luck and will keep inviting everyone to the retros.

Illustration by Martha Bárcenas

A Retro in Practice

I recently had a chance to facilitate a retrospective for a fellow team in my department. That team had been through a lot of changes and the longest-serving member had been there for less than a year. So it really looked like a good time to invest a bit of time in really opening the conversation channels.

I had not facilitated a proper retrospective in a long time, this was a completely new team for me, and I hardly knew any of them, so I did have a few concerns:

  • Would everybody feel confident enough to speak up and voice their concerns?
  • Would they really engage and concentrate on the topic at hand?
  • Would I be able to explain the exercises properly after such a long time and in English?

To address the first concern I decided to do a safety check in the form of an ESVP vote. I also wanted to underline the purpose of the retrospective and what we wanted to achieve. I also planned to make a point about being fair/just and show the little cartoon.

For the second point I decided to write some slides to explain the exercises. I usually prefer to have as few technical devices in the room as possible and I like to make a point about leaving mobile phones and laptops aside, but this time I decided to make an exception.

Here is the plan I made for this retrospective, which was scheduled to take 3 hours. Apart from the exercises I will share my time tracking and some comments on how I felt it went.

1. Set the Stage

1.1. Safety check / ESVP vote

To understand how people felt about this retro I wanted to see how comfortable everyone was, so I asked each attendee to classify themselves as an Explorer, Shopper, Vacationer or Prisoner (ESVP).

How did it go?

Luckily, the results were really encouraging: only one gave an S and everyone else gave an E.

1.2. Unlikely Superheroes

I borrowed the idea for this exercise from one of my favorite TV shows: Whose Line Is It Anyway. In that show comedians improvise: they give each other silly superhero names and then act out a scene trying to solve a silly crisis suggested by the audience. I changed it a bit and only asked each attendee to think about their role and contributions to the project, give themselves a superhero name, and identify their superpower and weakness. People tend to be really good at coming up with funny metaphors, and this sounded like a fun way to get people thinking about how they see themselves in the team, while also identifying their own shortcomings in a relaxed and humorous manner.

How did it go?

I think it worked out quite fine: we got some pretty funny names and superpowers, and a few laughs as well. This exercise definitely helped to create a relaxed atmosphere and we were ready to move on. In total the first stage took only 15 minutes.

2. Gather Information

2.1. Timeline

The timeline tends to be my de facto way of gathering information about what has happened. This time I planned to give the team 10 minutes to write down 5-10 events or things that each found meaningful and that had an impact on their morale or attitude towards work (each on their own, so everyone wrote their own notes). After that I would ask each one to step forward and quickly explain each note before putting it on the timeline I would sketch on the whiteboard. Apart from the time axis, the timeline had a morale axis, meaning that events considered positive would be placed higher and negative events lower.

How did it go? (time spent: 40 minutes)

In general this seems to be an instant hit. Everyone always has plenty to say and this is a great way to make everyone participate. Even the guy who had been with the team for only 2 weeks had something to say. Some even had way more than 10 notes and, as we seemed to have plenty of time, I let them put them all up.

(break, 15 min)

2.2. Identify patterns

Next I would ask them to form small teams of 2-3 people and to have a closer look at the timeline to identify patterns and themes that emerge. Then we would together list them on a separate wall, and group and merge them into one combined list of higher-abstraction-level themes and topics to talk about. When it comes to timekeeping I did drop the ball here: this took a lot more time than anticipated.

How did it go? (time spent: 1h 00min)

In general the small teams got underway fast and started to pick out themes and patterns. When we then wanted to merge the lists from each team, the conversation really started booming. There seemed to be endless things to discuss and pretty much everyone seemed to be participating.

2.3. Point voting for top 3

After all the topics were discussed we would need to pick the top three for further analysis. Everyone would get 3 votes and the 3 topics with the most votes would be picked.

How did it go? (time spent: 10min)

Well, it’s simple enough and it got the job done.

3. Generate Insight

3.1. Why-Map

It’s a combination of the 5 whys and a mind map. The idea is to create 3 teams (one per topic) and have them analyze the reasons leading to the current situation regarding the topic. The reasons are many and they are not linear, so what you end up with is a mind-map-like graph of reasons. I still recommend following each initial path down to at least 5 whys.

How did it go? (time spent: 10min)

4. Planning Future Actions

4.1. The perfect world

This is inspired by the Toyota Kata, and the idea is to think about how things would look if they were perfect. This exercise also concentrates on the topics chosen in the earlier exercise. I find it very valuable to think about where you want to go before trying to come up with steps to get there.

How did it go?

Skipped due to lack of time.

4.2. Planning game (planned duration: 20 minutes)

Ask the teams to plan 2-3 concrete steps to get slightly closer to the perfect situation from the previous exercise. In the end, present the tasks to the whole team and take responsibility for them.

How did it go? (time spent: 10min)

At this point we were really pressed for time, so I had to simplify and stretch the time limit. In the end we did have action points for every team, and some pretty good ones too.

5. Closing

5.1. Feedback

I planned to write 3 questions on a whiteboard and then ask everyone to write their answers on separate notes. Here are the questions:

  1. What did you like best in this retro?
  2. What did you dislike?
  3. On a scale from 1 to 5 (1 great – 5 horrible), how much of a waste of time was it?

How did it go? (time spent: 5min)

Most seemed to want to give their feedback orally and publicly, saying mostly positive things. A few actually gave me their feedback on paper. Mostly people seemed to like it, and the criticism concentrated on the lack of time for the planning and the poor time management.

Self-reflection

The criticism over time management was spot on, and that seems to always be an issue for me. Somehow I always get excited when the team really starts to talk about things and it’s really hard for me to stop it, especially in a case like this where I felt it was the first time the team actually talked properly about non-technical stuff. Still, it would make sense to reserve proper time for planning the actions, to be able to validate and review them properly.

Why Scala?

Recently we started a new project and I’m happy to say we had quite a lot of freedom to choose the tech stack we wanted to implement it with.

Technically the project did not seem too difficult: basically just aggregating data from a few different web service APIs. So it looked like a good chance to take a small risk and try something new. We decided to go with Scala and Play Framework, and here is why:

Reactive Model

I had previously done some smaller projects using node.js and event-driven programming. I definitely think that “reactive” is the way to go and it makes sense to learn to do it properly. The thing I was missing was a proper type system, which leads to…

Static Typing

Scala’s type system is extremely powerful, and type inference allows for compact and concise code.
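As a small illustration of what the inference gives you (this snippet is my own sketch, not from our project):

```scala
// The compiler infers every type on the right-hand side
val xs = List(1, 2, 3)               // inferred as List[Int]
val pairs = xs.map(x => x -> x * x)  // inferred as List[(Int, Int)]
val total = pairs.map(_._2).sum      // inferred as Int

// Mixing types is caught at compile time: calling .sum on a
// List[String] would simply not compile
println(total)  // prints 14
```

No type annotations were needed, yet everything is fully statically typed.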

Functional

After (too) many years of mostly Java development it was definitely time for something more powerful. Scala is pretty much as functional as a language can be. Some purists may argue that it’s not purely functional like Haskell, but in real-world situations Scala is as functional as they come.

Specifically, pattern matching deserves a mention as one of my favourite features. If you manage to specify most of your data model in case classes, life gets a lot easier.
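To illustrate, here is a minimal sketch (the Shape model is hypothetical, not from our project):

```scala
// A tiny data model expressed as case classes
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(width: Double, height: Double) extends Shape

// Pattern matching destructures the case classes directly, and because
// Shape is sealed the compiler warns about any unhandled subtype
def area(shape: Shape): Double = shape match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

println(area(Rect(3, 4)))  // prints 12.0
```

The match is exhaustive and checked by the compiler, which is exactly the kind of safety net you miss in a dynamically typed language.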

Cake Pattern

The cake pattern seems to be the go-to way of wiring Scala apps together. It basically allows you to do modular design and dependency injection without using any library. It does include a bit of boilerplate, but I think the advantages of statically typed, compile-time-checked dependency injection are worth that price over using a separate library.
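A minimal sketch of the pattern (the component names are illustrative, not from our codebase):

```scala
// Each component trait declares its dependency as an abstract member
trait UserRepositoryComponent {
  def userRepository: UserRepository
  trait UserRepository { def find(id: Int): String }
}

// The self-type says: this layer can only be mixed in together
// with a UserRepositoryComponent
trait UserServiceComponent { this: UserRepositoryComponent =>
  def userService: UserService
  class UserService {
    def greet(id: Int): String = s"Hello, ${userRepository.find(id)}"
  }
}

// The "cake" is baked by mixing the layers together; a missing
// dependency is a compile error, not a runtime failure
object Registry extends UserServiceComponent with UserRepositoryComponent {
  val userRepository = new UserRepository { def find(id: Int) = s"user-$id" }
  val userService = new UserService
}

println(Registry.userService.greet(1))  // prints Hello, user-1
```

Forgetting to mix in `UserRepositoryComponent` would fail at compile time, which is the whole point compared to runtime DI containers.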

Java Interoperability

If Java has one strength, it’s the plenitude of well-tested libraries. Using Java libraries from Scala is trivially easy.
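For example, standard Java APIs can be called directly with no glue code (a small sketch; java.time requires Java 8):

```scala
// Plain Java APIs work as-is from Scala
import java.time.LocalDate
import java.util.concurrent.ConcurrentHashMap

val date = LocalDate.of(2015, 1, 31).plusDays(1)
println(date)  // prints 2015-02-01

// Java collections can be bridged to Scala ones with one import
import scala.collection.JavaConverters._
val map = new ConcurrentHashMap[String, Int]()
map.put("a", 1)
val scalaMap = map.asScala
println(scalaMap("a"))  // prints 1
```

There is no FFI, no wrappers to generate; Scala classes and Java classes live in the same bytecode world.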

Maturity

Scala recently turned 10 years old and the language is definitely mature enough. It still evolves, but the latest stable releases are worthy of their name and actually stable.

Play Framework has reached version 2.3.7 and the accompanying Activator makes starting projects very easy. Activator has a pretty decent template mechanism and you get a bunch of templates to choose from when you start a new project.

sbt, the Scala build tool, has evolved along with Scala. It’s regularly updated and has a working plugin system. It comes with a nice REPL. It might not look fancy, but it gets the job done.

When it comes to IDEs you basically have two fine choices: the Eclipse-based Scala IDE and IntelliJ. I personally found IntelliJ’s Scala & Play plugins to work better and eventually settled on that. The only downside is that the Play plugin requires the registered (paid) version.

Performance

Scala compiles to Java bytecode, so the performance is just as good. Static typing also allows the compiler to optimize better. Just have a quick look at these benchmarks: http://benchmarksgame.alioth.debian.org/u32/compare.php?lang=scala&lang2=clojure
I chose to compare Scala to another modern JVM language with dynamic typing. Of course this is just a small sample, but Scala is faster across the board. It’s good to keep in mind that performance should usually be one of the last criteria when selecting a language, but it’s nice to know that when push comes to shove, Scala will deliver. There is a reason why internet giants like Twitter and LinkedIn chose Scala.

Summary

There you have it: our reasoning for choosing Scala. About 4 months into the project it still looks like a good choice. Don’t get me wrong, it has not been a walk in the park and we’ve had some difficulties and problems, but that’s the topic of an upcoming post.

JSON values to typed ids in Scala & Play

I recently published a post about how to deal with JSON objects using Scala’s case classes and Play Framework. To keep up with the theme, here is another post about the same topic, but this time specifically about types.

The JSON format does have types, but they are not visible in the actual content, and how values are mapped to data types at the receiving end depends a lot on the developer’s first look at the incoming data. The majority of content seems to be in String format, and that seems like a safe choice: after all, you can fairly safely represent a number as a String as long as you don’t process the data in any meaningful way. You would not be so lucky doing it the other way around.

This logic seems pretty valid, especially if you are not certain about the format of the data. Maybe the value just happened to be a number this time, but it might include a letter next time, and then using anything but String would result in a runtime exception, and we definitely don’t want that. Going all-in with String does have some unfortunate side effects, though, and some problems might creep into your code.

def doIt(someId: String, someOtherId: String, foo: String, bar: String)

It’s really easy to mix up the parameter order when you have methods like this, and what’s the point of having static typing if you mostly deal with strings anyway? In the worst case it will “kind of” work, but the results are wrong. Moreover, code like this is a pain to refactor.

So instead of a mess like this, wouldn’t it be nice to deal with properly typed values? (From here on I concentrate on id values; if you need to pass around this many values you probably have other design flaws as well.)

def doIt(someId: SomeId, someOtherId: SomeOtherId, foo: String, bar: String)

Now we have also regained the ability to trust the developer’s best friend – the compiler. Of course there is nothing new or fancy about wrapping values in classes, but what turned out to be tricky was to maintain the handy JSON parsing that comes with Play Framework and case classes without too much boilerplate. So, how do we actually achieve this?

First solution: Implicit conversion to typed id classes

First of all we implemented a proper base trait representing any of the typed id case classes we will declare later on.

trait BaseId[V] {  val value: V  }

Based on that basic trait we can then implement another trait for every value type we want to support.

trait StringBaseId extends BaseId[String]
trait NumberBaseId extends BaseId[BigDecimal]

What we now need is an implicit conversion from the JSON primitive type to an instance of a typed id implementation. We do this using an implicit class for every primitive type we support. For example, String-based ids we implement like this:

implicit class StringTypedIdFormat[I <: BaseId[String]](factory: Factory[String, I]) 
    extends Format[I] {
  def reads(json: JsValue): JsResult[I] = json match {
    case JsString(value) => JsSuccess(factory(value))
    case _ => JsError(s"Unexpected JSON value $json")
  }
  def writes(id: I): JsValue = JsString(id.value)
}

The provided factory will be used to instantiate a concrete implementation based on a String. The Factory type is declared as follows:

type Factory[V, I <: BaseId[V]] = V => I
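The NumberTypedIdFormat used further below is not shown in the original snippets; by analogy with the String version, it could look something like this (my sketch, assuming the same Play JSON imports):

```scala
// Analogous implicit class for BigDecimal-based ids
implicit class NumberTypedIdFormat[I <: BaseId[BigDecimal]](factory: Factory[BigDecimal, I])
    extends Format[I] {
  def reads(json: JsValue): JsResult[I] = json match {
    case JsNumber(value) => JsSuccess(factory(value))
    case _ => JsError(s"Unexpected JSON value $json")
  }
  def writes(id: I): JsValue = JsNumber(id.value)
}
```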

In the next step we declare concrete id classes for each id type we need. Staying consistent with the requested method signature from the earlier example, we write:

case class SomeId(value: String) extends StringBaseId
case class SomeOtherId(value: BigDecimal) extends NumberBaseId

Both case classes extend the corresponding base traits. The case classes representing the JSON objects would look like this:

case class SomeObject(id: SomeId, name: String)
case class SomeOtherObject(id: SomeOtherId, name: String, value: BigDecimal)

To get an implicit conversion between JSON and those two case classes, we must provide implicit reads and writes functions, or JSON combinator formats (https://www.playframework.com/documentation/2.3.x/ScalaJsonCombinators):

implicit val someObjectFormat: Format[SomeObject] = 
  Json.format[SomeObject]
implicit val someOtherObjectFormat: Format[SomeOtherObject] = 
  Json.format[SomeOtherObject]

These formats wouldn’t work yet, because we still need an implicit conversion between JSON and the typed id case classes. We can finally provide those formats using the helper classes previously declared. For the two ids we declare:

implicit val someIdFormat: Format[SomeId] = 
  new StringTypedIdFormat[SomeId](SomeId.apply _)
implicit val someOtherIdFormat: Format[SomeOtherId] = 
  new NumberTypedIdFormat[SomeOtherId](SomeOtherId.apply _)

The default apply method of the id case classes can be used directly as the declared factory to create concrete instances. We can now implicitly convert between JSON primitive id values and our typed id case classes in Scala.

A simple test to demonstrate the conversion:

val someObjectAsJson: JsValue = Json.parse("""
  {
    "id":"111",
    "name": "someName"
  }
""")

"Parsing generic id object" should {
   "SomeId will be parsed correctly" in {
     val test = someObjectAsJson.as[SomeObject]
     test.id === SomeId("111")
   }
}

We can further simplify the id format declaration by adding an additional method to Play’s Json object:

object TypedId {

  //implicit conversion to extended json object
  implicit def fromJson(json: Json.type) = TypedId

  //extended format function
  def idformat[I <: StringBaseId](fact: Factory[String, I]) = 
    new StringTypedIdFormat[I](fact)
  def idformat[I <: NumberBaseId](fact: Factory[BigDecimal, I]) = 
    new NumberTypedIdFormat[I](fact)
}

Now we can replace the format declaration with the following simpler version:

implicit val someIdFormat: Format[SomeId] = 
  Json.idformat[SomeId](SomeId.apply _)
implicit val someOtherIdFormat: Format[SomeOtherId] = 
  Json.idformat[SomeOtherId](SomeOtherId.apply _)

We still need to declare the factory method because, due to type erasure, we can’t instantiate the concrete id class at runtime.

We are not completely happy with this solution, because we still have to declare a concrete case class as well as an implicit format for every id class we create. We are looking for a more generic way to declare such ids, and it could look something like this:

def doIt(someId: StringId[SomeObject], someOtherId: NumberId[SomeOtherObject], foo: String, bar: String)

By the way: all the base implementations can be found on GitHub.

Mike Toggweiler, a partner @ Tegonal co-authored this post.

Parsing JSON with over 22 fields using case classes

We recently started to develop a new product using Play Framework on the back end and AngularJS on the client side. The back end is rather light and mainly consists of aggregating data from different web services. We do have a local database, for which we chose MongoDB. So what we have is many sources of data, and pretty much all of them provide data in JSON format. Needless to say, JSON parsing has to work flawlessly.

For the most part that worked great. With Play you can create a case class with all the same fields as the corresponding JSON and then just create a format for it in the companion object. This is easy and elegant and works great… until…

…until you notice that one of the web services you are calling returns more than 22 fields, which is a problem because Scala does not allow more than 22 fields in a case class. Normally that’s fine, and usually such a count is a red flag for poor design, but sometimes the web services you use simply have more than 22 fields in their response objects.

case class FooBar(
   field01: String,
   field02: String,
   field03: String,
// ... many fields ...
   field22: String,
   field23: String)

// compilation errors

object FooBar{
   implicit val foobarFormat: Format[FooBar] = Json.format[FooBar]
}

Luckily there is a pretty neat way around this, and it might even improve your design. You can use nested case classes: all you need to do is implement a wrapper reads and writes for the class, but first you create case classes that represent subsets of the fields of the response.

case class Foo(
   field01: String,
   field02: String,
// ... many fields ...
   field09: String,
   field10: String)

object Foo{
   implicit val fooFormat: Format[Foo] = Json.format[Foo]
}

case class Bar(
   field11: String,
   field12: String,
// ... many fields ...
   field21: String,
   field22: String,
   field23: String)

object Bar{
   implicit val barFormat: Format[Bar] = Json.format[Bar]
}

So now we have smaller classes that contain subsets of the original fields, and the only thing missing is a “wrapper” class that represents the complete object. This is rather simple, and in this case it has two fields, namely the two classes just defined.

case class FooBar(
   foo: Foo,
   bar: Bar)

object FooBar {
   implicit val foobarReads: Reads[FooBar] = (
      (JsPath).read[Foo] and
      (JsPath).read[Bar])(FooBar.apply _)

   implicit val foobarWrites: Writes[FooBar] = (
      (JsPath).write[Foo] and
      (JsPath).write[Bar])(unlift(FooBar.unapply))
}

Now in Scala you can access the fields quite neatly using dot notation, but the JSON is serialized back to the original flat format. This parsing can easily be tested.

val foobarJson: JsValue = Json.parse("""
  { "field01":"value01",
    "field02":"value02",
    ...
    "field22":"value22",
    "field23":"value23"}
  """)

"Parsing json to nested case classes" should {
  "work just fine" in {
    val foobar = foobarJson.as[FooBar]
    foobar.foo.field01 === "value01"
// ...
    foobar.bar.field23 === "value23"
  }
}

Power to Powershell

The somewhat “new” and fancy PowerShell does seem to have some rather nice features. This is not to say it’s in any way better than a real Unix shell, but you can get some pretty neat stuff done with it in a rather simple manner. One of the old problems I’ve had (other than being stuck on Windows) is that whenever a new version of Java comes out, I need to juggle between different versions depending on which project I am working on.

There are 2 environment variables you need to change to switch the current Java version on the command line:

  1. JAVA_HOME
  2. Path

The normal way to add the Java commands to the path is via the JAVA_HOME environment variable (i.e. JAVA_HOME\bin). The problem is that the path is resolved when you start PowerShell, which replaces all the environment variables with their values, so changing JAVA_HOME is not enough. You also need to update Path, and updating both manually is quite tedious.

However, PowerShell provides a way to define functions in your profile, and they are a quite perfect way to manage Java versions.

function java8 {
  $env:JAVA_HOME="C:\Program Files\Java\jdk1.8.0"
  $env:Path=$env:JAVA_HOME + "\bin;" + $env:Path
}

function java7 {
  $env:JAVA_HOME="C:\Program Files\Java\jdk1.7.0_25"
  $env:Path=$env:JAVA_HOME + "\bin;" + $env:Path
}

This is certainly not perfect: if you change the Java version many times, the Path will get quite long, but I don’t consider that much of a problem. I usually fire up a new instance anyway, and then Path is reverted back to the original.

Emergent Leadership @ XP 2014


Some time ago I got confirmation both for my conference trip to one of the most important agile conferences on the old continent – XP2014 – and that my lightning talk proposal was accepted. I plan to do a quick introduction to Emergent Leadership, which I blogged about in my previous post.

A lightning talk should be perfect for this, as what I want is to stir up a bit of conversation and question static team structures and roles. Agile is not what you do, it’s what you are.

It does not hurt that this conference is hosted in historically one of the most intriguing cities in the world.

So what I still need to do is clarify my thoughts and decide how I want to present the topic. 5 minutes is not much, but it can be plenty if well used.

http://www.xp2014.org/

Emergent Leadership

Managing and leading are two very different concepts, but you often see terms like “manager” and “leader” used interchangeably. I personally define them very differently, and in my opinion there is a very clear distinction.

In short “managers” are appointed and leaders emerge.

Managing refers to managing conditions, and good managers create conditions where leadership can naturally emerge. Some people are natural leaders and they will take the lead unless their leadership skills are suppressed by management.

Leaders are very context dependent. In a software team, different people can emerge as leaders during different development phases, and even when designing different parts of the system it may well be that the people who best understand the requirements and the situation naturally take the lead.

A good manager should understand when to cede control because someone else is a better fit to lead. Arbitrarily holding on to power will only interfere with the team’s capability to perform. Ceding control requires confidence and trust in the team.

Emergent leadership is a natural phenomenon. Take a travelling flock of birds, for instance. A wedge of cranes has a leader: somebody has to fly at the point. The leader position is, however, rotated naturally.

The form of the wedge is fairly stable, but some other flocks change their form dramatically and peculiar shapes and patterns can emerge. “When birds fly in flocks, they often arrange themselves in specific shapes or formations. Those formations take advantage of the changing wind patterns based on the number of birds in the flock and how each bird’s wings create different currents. This allows flying birds to use the surrounding air in the most energy efficient way.” (http://birding.about.com/od/birdbehavior/a/Why-Birds-Flock.htm) Should an agile team think its form should stay constant?

Leadership implies followers. When action is not driven by authority, as when a manager tells the team to do something, it’s naturally a good measure to check whether the actions are agreed upon. If you have a good idea and people are willing to follow you because they agree with you, that makes you a leader.

Emergent leadership is an ongoing self-organization activity by the team. Self-organization is one of the fundamental agile principles, but it’s not always understood as an ongoing process. Emergent leadership is a natural phenomenon in a truly self-organizing team. An agile team should be more like a flock of birds, prepared to change form to best benefit from the “currents” that every team member’s skills create at any given moment.

Leading By Anything But Example Is Futile

Since childhood we learn by copying our parents’ behavior. We hardly ever listened to our parents; instead we copied their actions, attitudes, religion, political views, behavior models, vocabulary, everything. This behavior adoption is also known as “learning” and it goes on without much critical thinking until we reach our teens and start rebelling against everything they say.

This mindless copying actually seems to be one of the big differences between humans and our closest relatives in the animal kingdom: chimps. I remember watching a documentary on the differences between human and chimpanzee children. Both were shown a predefined set of movements to open a box that contained a reward (candy). If the set of movements included unrelated moves that did not help open the box, the chimpanzees actually skipped those moves and went straight for the candy, whereas the human children repeated the whole set, including the movements that did not seem to have any function.

Humans are great at repeating rituals that seemingly have no function at all, and this comes very naturally to us. Imitation is the most important built-in learning mechanism we have, and this is why the best way to lead and teach is by example.

Managers are appointed, but leaders emerge. The two are commonly seen as being pretty much the same, but I see them quite differently. Firstly, a manager is appointed as an authority with a certain control over the team. This does not imply leadership. Leadership must be earned. Leadership implies followers, and nobody willingly follows someone they don’t trust and respect. Respect and trust must be earned. In the best case the manager will eventually win over the team, earn their respect and trust, and become a leader. One way to gain trust is to give trust. Trust your team and give them a chance to rise to the occasion.

The best a manager can do for her team is to be an enabler, a servant of the team. It is the manager’s job to help the team do their job and make sure they have what they need. That includes knowing what needs to be done. The team can function without the manager, but the manager is nothing without a team. A good manager allows the team to grow and self-organize.

It’s essential for a manager to foster teamwork and team spirit and to steer clear of internal competition within the team. Internal competition has traditionally been seen as a good way to encourage and motivate members to do a better job, but this has been proven wrong. Management guru W. Edwards Deming has written about the subject at great length, and any manager, new or experienced, should pick up his books and try to learn from them. It’s much better to reward team members for working together for the benefit of the whole team than to give individual praise, which can easily lead to envy, defensive attitudes and rotten team spirit.

The manager needs to show by example how to give and receive feedback. He has to show that it is OK to give feedback, and he has to be able to take it as well. He needs to make sure the team understands what is expected of them, but it goes both ways: he needs to learn what the team expects of him. It is not enough to say it, you have to do as you say.

Feedback in general is something we all need and want if we really want to improve and learn. Even bad feedback is information. Giving feedback to his peers and his own managers is a good way to show that everyone can do it. Feedback is communication, and it is essential for any organization that wants to be a proper learning organization. Any manager who wants to be a true leader has to set an example and act accordingly, because actions speak louder than a thousand words.

Minimum Viable Product has to be Viable

The Minimum Viable Product is not a new idea and it has been around for a while now. Like any buzzword that takes off, it has had its fair share of confusion and plain misinterpretation around it. Some people see it as an overly simplistic idea where you get started with the absolute minimum.

While it is still a good idea to get the first round of feedback from colleagues, friends and family, as they can certainly provide very useful early feedback even for something as simple as a login page, a login page is hardly a product and in no way viable. A Minimum Viable Product is not just minimum, it is VIABLE.

A steering wheel alone is not a viable car

In short, the idea is to build a simplified version of a product with only the key functionality implemented, and then put it out there in the market to start gathering the all-important real end-user feedback.

This makes sense not only because it saves time, but because it may also lead to a vastly better product. A simple product tends to be simple to use, and the majority of software features are never or hardly ever used. An analysis based on the Pareto principle can be applied to software engineering and leaves us with a statement that says “20 percent of the features provide 80 percent of the value”. Features that are never used are widely considered one of the greatest causes of waste in software engineering.

Unused features still have to be implemented, making the code more complex than it needs to be. More lines of code mean more bugs and harder, more time-consuming maintenance. Bug fixing effort will go into bugs that affect features users don’t need, and so on.

Scrum co-creator Jeff Sutherland also makes a point about this. He talks about value curves, explaining that after delivering the 20% of highest-priority stories the value produced starts to decline, and it’s time to re-prioritize to get a new value curve. The Scrum methodology fits well into developing a minimum viable product.

The fundamental challenge with a Minimum Viable Product is not the minimum part: a login page is hardly a product, and it still has to be VIABLE. This leaves us with the biggest question: what is viable? This can be very hard to define and varies greatly depending on the existing market. When creating something new in a virgin market, something very simple can be enough. When there are already players in the given market, viable can mean a much more complete product.

Still, history has shown that consumers tend to value good user experience over a bunch of features. Just look at what happened with the iPhone: when it first came out, it did not have copy/paste, no multimedia messages, no 3rd-party apps, and hardly anything we take for granted now.

To get started we have to make assumptions about customer demand and needs. The Minimum Viable Product helps us validate these assumptions. There are various ways our assumptions can be wrong:

  1. The customers don’t want the given product.

  2. Our assumption of minimum viable is too restrictive and the product is not useful to customers.

  3. The customers would settle for less than was originally thought.

Only number 1 is clearly a result capable of killing the business.

Harvard Business Review offers two versions of the Minimum Viable Product: validating and invalidating. This post mostly touches on the validating minimum viable product. You can read the whole article here: http://blogs.hbr.org/2013/09/building-a-minimum-viable-prod/.