State of JVM languages

Java is still the top dog on the JVM, but there are plenty of alternatives for programmers looking for a change. How valid are these options, though? It's hard to make a case for learning a fringe language apart from personal enjoyment, so do these alternative languages offer valid career paths? For that, a language needs a critical mass of developers to motivate tool makers to create proper tools for it. Where there are users, libraries will follow.

I selected three of the most talked-about JVM languages apart from Java: Scala, Clojure and Kotlin.

Scala is the oldest of the three, released back in 2004. Clojure followed in 2007, and Kotlin is the most recent, unveiled in 2011 and reaching version 1.0 in early 2016.
 

New Interest

Past 12 Months:

[Google Trends chart. Blue: Scala tutorial, yellow: Kotlin tutorial, red: Clojure tutorial]

Link to most recent graph.

Past 5 years:

[Google Trends chart: past 5 years]

Scala still seems the most interesting to newcomers. Kotlin's popularity clearly spiked in mid-2017, but the hype has slowed down a bit since.

Job Market

How useful are these languages in the job market?

LinkedIn Job Search:

software engineer scala => Showing 5,540 results
software engineer clojure => Showing 684 results
software engineer kotlin => Showing 586 results

engineer scala => Showing 7,701 results
engineer clojure => Showing 778 results
engineer kotlin => Showing 433 results

data scala => Showing 10,076 results
data clojure => Showing 758 results
data kotlin => Showing 254 results

Based on the LinkedIn worldwide job search, Scala is mentioned in roughly 10 times more job ads than Clojure. Kotlin seems to be catching on quite quickly; official Android support apparently drives adoption. I would be surprised if it did not overtake Clojure in popularity in the next 6 months.

Scala benefits from the growing data science/engineering market, as it's one of the most important languages in that domain alongside Python. Quite a few data processing tools (Spark, Kafka) are written in Scala, making it the most natural fit for working with them.

Salaries

How well do these jobs pay?
It's hard to find reliable data on how jobs in given languages pay. Googling around, I found this article:
https://gooroo.io/GoorooTHINK/Article/16300/Programming-languages–salaries-and-demand-May-2015/18672#.WfEECxOCzXE
It looks like Clojure pays pretty well, and clearly better than Scala. Both, however, pay clearly better than the numerous Java or JavaScript jobs. Jobs that require functional language knowledge are still fairly few, but if you manage to land one, you will be pretty well compensated.


Getting Started With Spark 2.x Streaming and Kafka

I've been digging into Spark more and more lately, and I had some trouble finding up-to-date tutorials on getting started with Kafka and Spark Streaming (especially for Spark 2.x and Kafka 0.10), particularly ones that let you run your own code easily. While running streaming jobs with spark-shell is not really recommended, I find it very convenient for getting started, as you don't even need to compile the code.

Up and running with Kafka

First things first, you need a Kafka producer running. You can find the official quickstart guide here: https://kafka.apache.org/quickstart, but for the sake of simplicity I will repeat the relevant parts here.

1. Get the Kafka distribution:

2. Run ZooKeeper

  • $ [kafka_home]/bin/zookeeper-server-start.sh [kafka_home]/config/zookeeper.properties

3. Run the Kafka server

  • $ [kafka_home]/bin/kafka-server-start.sh [kafka_home]/config/server.properties

4. Create topic

  • $ [kafka_home]/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

5. Run the Kafka console producer

  • $ [kafka_home]/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

You can skip creating the consumer.

Setup Spark 2.x

First we need to download and set up Spark.

1. Get Spark

2. Verify Spark works

  • You can verify that spark-shell works by launching it: [spark_home]/bin/spark-shell
  • Use CTRL + C to quit

Create Spark Streaming Application

Let's create a new folder for our streaming applications. Let's call it "kafka-spark-stream-app".
So now the folder structure should look something like:
/[kafka_home]
/[spark_home]
/kafka-spark-stream-app

Let's create a file for the word count streaming example. Use any text editor to create a file called /kafka-spark-stream-app/kafkaSparkStream.scala:

import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// Create the context with a 1 second batch size
val ssc = new StreamingContext(sc, Seconds(1))

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "use_a_separate_group_id_for_each_stream"
)

val topics = Array("test") // must match the topic created for the console producer above

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)

val lines = stream.map(_.value)
val words = lines.flatMap(_.split(" "))
val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
wordCounts.print()
ssc.start()
ssc.awaitTermination()

To run this, we still need to add a few jars to the classpath, so let's create a subfolder /kafka-spark-stream-app/jars and put two jars there:

Both can be found, for example, on mvnrepository.com. So now the folder structure should look like this:
/[kafka_home]
/[spark_home]
/kafka-spark-stream-app
/kafka-spark-stream-app/kafkaSparkStream.scala
/kafka-spark-stream-app/jars/kafka-clients-0.10.2.1.jar
/kafka-spark-stream-app/jars/spark-streaming-kafka-0-10_2.11-2.1.1.jar

*In case you use a different version of Spark, make sure you have the corresponding version of the spark-streaming-kafka library as well.

Running the Spark streaming script

Now all that is left is to run the script, assuming you have Kafka running as set up in the beginning of this article.

$ [spark_home]/bin/spark-shell --jars ../kafka-spark-stream-app/jars/spark-streaming-kafka-0-10_2.11-2.1.1.jar,../kafka-spark-stream-app/jars/kafka-clients-0.10.2.1.jar -i ../kafka-spark-stream-app/kafkaSparkStream.scala

Basically we just tell spark-shell that the script requires those two jars to run and where they are, and with the -i switch we tell spark-shell to run the script from the given file. Now post some text from the Kafka console producer and watch the Spark streaming application print out the word counts of the given phrase.

Followup

Of course it does not make sense to run any real Spark streaming application like this, but it's very convenient to be able to run scripts without having to set up a proper Scala project. I will follow up on how to wrap the application in a proper sbt project soon.
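In the meantime, here is a minimal build.sbt sketch of what such a project could look like (versions match the jars used above; the project name and Scala version are assumptions):

name := "kafka-spark-stream-app"

version := "0.1.0"

scalaVersion := "2.11.11"

// Spark itself is "provided" when you run via spark-submit;
// the Kafka connector must be packaged with the application.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.1.1" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.1"
)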

Should Managers Attend Retros?

Recently Luis Goncalves published a blog post about retrospective smells. It's a good article and worth a look; you can find it here. Although the article in general is interesting and I agree with most of it, I do disagree about one point being an antipattern: "Line Managers want to attend".

He quite logically deduces that although they might very well attend with good intentions, their participation can interfere with team members' confidence in raising issues and talking freely about the topics at hand. Leaving managers out of retros seems like a proper cure, but I get the feeling we are treating a symptom of a much bigger underlying issue – lack of trust.

Even managers can feel left out

It's quite common that team members, and employees in general, are careful when managers are around, especially if they are new to the team or the company. But instead of teaching people to avoid direct communication, it should be fostered. It takes time and effort to build that trust, but eventually the team should feel comfortable talking about real issues directly and openly with the management. A properly facilitated retrospective is the perfect place to practice this open and direct communication.

This trust will eventually work both ways, and it's extremely important for transparency and openness in the company culture. Still, it will take time, and there is no benefit in trying to force it, so I would actually recommend doing retros sometimes with and sometimes without management, and bringing the management in more and more as the trust builds up.

There is another benefit in having management present: they are more likely to support and help solve the issues if they understand why something should be done instead of just what. People can bring their whole expertise and experience to the table only if they have a proper understanding of the situation, and participating in retros can really help in achieving that. People (managers included!) commit to tasks or plans a lot better if they take part in designing them, or better yet: let them be part of the solution.

The retrospective is the place for process improvements, and probably the most important improvement you could ever achieve is fixing the communication gap between the team and the management.

Finally, I'd like to point out that most of the retrospectives I have facilitated had a manager attending, and it never seemed to be a problem. Maybe I've just been lucky, but I personally intend to push my luck and will keep inviting everyone to the retros.

Illustration by Martha Bárcenas

A Retro in Practice

I recently had a chance to facilitate a retrospective for a fellow team in my department. That team had been through a lot of changes, and the longest-serving member had been there for less than a year. So it really looked like a good time to invest a bit of time in opening the conversation channels.

I had not facilitated a proper retrospective in a long time, this was a completely new team for me, and I hardly knew any of them, so I did have a few concerns:

  • Would everybody feel confident enough to speak up and voice their concerns?
  • Would they really engage and concentrate on the topic at hand?
  • Would I be able to explain the exercises properly after such a long time and in English?

To address the first concern I decided to do a safety check in the form of an ESVP vote. I also wanted to underline the purpose of the retrospective and what we wanted to achieve. I also planned to make a point about being fair/just and show the little cartoon.

For the second point I decided to write some slides to explain the exercises. I usually prefer to have as few technical devices in the room as possible, and I like to make a point about leaving mobile phones and laptops aside, but this time I decided to make an exception.

Here is the plan I made for this retrospective, which was scheduled to take 3 hours. Alongside the exercises, I will share my time tracking and some comments on how I feel each part went.

1. Set the Stage

1.1. Safety check / ESVP vote

To understand how people felt about this retro, I wanted to see how comfortable they were.

How did it go?

Luckily, only one person gave an S and everyone else gave an E, so the results were really encouraging.

1.2. Unlikely Superheroes

I borrowed the idea for this exercise from one of my favorite TV shows: Whose Line Is It Anyway. In that show comedians improvise, give each other silly superhero names and then act out a scene trying to solve a silly crisis suggested by the audience. I changed it a bit and only asked each attendee to think about their role and contributions to the project, give themselves a superhero name, and identify their superpower and weakness. People tend to be really good at coming up with funny metaphors, and this sounded like a fun way to get people thinking about how they see themselves in the team while also identifying their own shortcomings in a relaxed and humorous manner.

How did it go?

I think it worked out quite fine; we got some pretty funny names and superpowers and a few laughs as well. This exercise definitely helped to create a relaxed atmosphere, and we were ready to move on. In total, the first stage took only 15 minutes.

2. Gather Information

2.1. Timeline

The timeline tends to be my de facto way of gathering information about what has happened. This time I planned to give the team 10 minutes to write down 5-10 events or things that each of them found meaningful and that had an impact on their morale or attitude towards work (each on their own, so everyone wrote their own notes). After that I would ask each one to step forward and quickly explain each note before putting it on the timeline I would sketch on the whiteboard. Apart from the time axis, the timeline had a morale axis, meaning that events considered positive were placed higher and negative events lower.

How did it go? (time spent: 40 minutes)

In general this seems to be an instant hit. Everyone always has plenty to say, and this is a great way to make everyone participate. Even the guy who had been with the team for only 2 weeks had something to say. Some even had way more than 10 notes, and as we seemed to have plenty of time, I let them put them all up.

(break, 15 min)

2.2. Identify patterns

Next I asked them to form small teams of 2-3 people and take a closer look at the timeline to identify patterns and themes that emerge. Then we would together list these on a separate wall, and group and merge them into one combined list of higher-abstraction-level themes and topics to talk about. When it comes to timekeeping, this is where I dropped the ball: it took a lot more time than anticipated.

How did it go? (time spent: 1h 00min)

In general the small teams got underway fast and started to pick out themes and patterns. Then, when we merged the lists from each team, the conversation really started booming. There seemed to be endless things to discuss, and pretty much everyone was participating.

2.3. Point voting for top 3

After all the topics were discussed, we needed to pick the top three for further analysis. Everyone got 3 votes, and the three topics with the most votes were picked.

How did it go? (time spent: 10min)

Well, it's simple enough and it got the job done.

3. Generate Insight

3.1. Why-Map

It's a combination of the 5 Whys and a mind map. The idea is to create 3 teams (one per topic) and have them analyze the reasons leading to the current situation regarding the topic. The reasons are many and they are not linear, so what you end up with is a mind-map-like graph of reasons. I still recommend following each initial path down to at least 5 whys.

How did it go? (time spent: 10min)

4. Planning Future Actions

4.1. The perfect world

This is inspired by the Toyota Kata, and the idea is to think about how things would look if they were perfect. This exercise also concentrates on the topics chosen in the earlier exercise. I find it very valuable to think about where you want to go before trying to come up with steps to get there.

How did it go?

Skipped due to lack of time.

4.2. Planning game (planned duration: 20 minutes)

Ask the teams to plan 2-3 concrete steps to get slightly closer to the perfect situation from the previous exercise. At the end, present the tasks to the team and take responsibility for them.

How did it go? (time spent: 10min)

At this point we were really pressed for time, so I had to simplify and push the time limit. In the end we did have action points for every team, and some pretty good ones too.

5. Closing

5.1. Feedback

I planned to write 3 questions on a whiteboard and then ask everyone to write their answers on a separate note. Here are the questions:

  1. What did you like best in this retro?
  2. What did you dislike?
  3. On a scale from 1 to 5 (1 great – 5 horrible), how bad a waste of time was it?

How did it go? (time spent: 5min)

Most seemed to want to give their feedback orally and publicly, saying mostly positive things. A few actually gave me their feedback on paper. Mostly people seemed to like it, and the criticism concentrated on the lack of time for the planning and the poor time management.

Self-reflection

The criticism over time management was spot on; that always seems to be an issue for me. Somehow I always get excited when the team really starts to talk about things, and it's really hard for me to stop it, especially in a case like this where I felt it was the first time the team had properly talked about non-technical matters. Still, it would make sense to reserve proper time for planning the actions so they can be validated and reviewed properly.

Why Scala?


Recently we started a new project, and I'm happy to say we had quite a lot of freedom to choose the tech stack we wanted to implement it with.

Technically the project did not seem too difficult: basically just aggregating data from a few different web service APIs. So it looked like a good chance to take a small risk and try something new. We decided to go with Scala and Play Framework, and here is why:

Reactive Model

I had previously done some smaller projects using node.js and event-driven programming. I definitely think that "reactive" is the way to go, and it makes sense to learn to do it properly. The thing I was missing was a proper type system, which leads to…

Static Typing

Scala's type system is extremely powerful, and type inference allows for compact and concise code. Read more here:
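For a quick taste, here is a small made-up example of how far inference carries you:

val numbers = List(1, 2, 3, 4)               // inferred as List[Int]
val doubled = numbers.map(_ * 2)             // still List[Int], no annotations needed
val byParity = numbers.groupBy(_ % 2 == 0)   // inferred as Map[Boolean, List[Int]]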

Functional

After (too) many years of mostly Java development, it was definitely time for something more powerful. Scala is pretty much as functional as a language can be. Some purists may argue that it's not purely functional like Haskell, but in real-world situations Scala is as functional as they come.

Specifically, pattern matching deserves a mention as one of my favourite features. If you manage to specify most of your data model in case classes, life gets a lot easier.
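To illustrate, a minimal sketch (the Shape hierarchy is invented for the example):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

// the trait is sealed, so the compiler warns if a case is missing
def area(shape: Shape): Double = shape match {
  case Circle(r)       => math.Pi * r * r
  case Rectangle(w, h) => w * h
}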

Cake Pattern

The cake pattern seems to be the go-to way of wiring Scala apps together. It basically allows you to do modular design and dependency injection without using any library. It does include a bit of boilerplate, but I think the advantages of statically typed, compile-time-checked dependency injection outweigh using any separate library, even at the price of a little boilerplate. Read more about the cake pattern here:
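To give an idea of its shape, here is a minimal sketch of the cake pattern (the component names are invented for illustration):

trait UserRepositoryComponent {
  val userRepository: UserRepository
  trait UserRepository { def find(id: String): Option[String] }
}

trait UserServiceComponent { this: UserRepositoryComponent => // self-type declares the dependency
  val userService: UserService
  class UserService { def userName(id: String): Option[String] = userRepository.find(id) }
}

// the wiring is checked at compile time: Registry won't compile
// unless every declared dependency is actually provided
object Registry extends UserServiceComponent with UserRepositoryComponent {
  val userRepository = new UserRepository { def find(id: String) = Some(s"user-$id") }
  val userService = new UserService
}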

Java Interoperability

If Java has one strength, it's the plenitude of well-tested libraries. Using Java libraries from Scala is trivially easy.
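For example, calling a plain JDK class like java.time works without any ceremony:

import java.time.{LocalDate, Period}

val release = LocalDate.of(2004, 1, 20)                        // constructing a Java object from Scala
val years = Period.between(release, LocalDate.now()).getYears  // chaining Java method calls, no wrappers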

Maturity

Scala recently turned 10 years old, and the language is definitely mature enough. It still evolves, but the latest stable releases are worthy of their name: stable.

Play Framework has reached version 2.3.7, and the accompanying Activator makes starting projects very easy. Activator has a pretty decent template mechanism, and you get a bunch of templates to choose from when you start a new project.

sbt, the Scala build tool, has evolved like Scala. It's regularly updated and has a working plugin system. It comes with a nice REPL. It might not look fancy, but it gets the job done.

When it comes to IDEs you have basically two fine choices: the Eclipse-based Scala IDE and IntelliJ. I personally found IntelliJ's Scala and Play plugins to work better and eventually settled on that. The only downside is that the Play plugin requires the registered (paid) version.

Performance

Scala compiles to Java bytecode, so the performance is just as good. Static typing also allows the compiler to optimize better. Just have a quick look at these benchmarks: http://benchmarksgame.alioth.debian.org/u32/compare.php?lang=scala&lang2=clojure
I chose to compare Scala to another modern JVM language with dynamic typing. Of course this is just a small sample, but Scala is faster across the board. It's good to keep in mind that performance should usually be one of the last criteria when selecting a language, but it's nice to know that when push comes to shove, Scala will deliver. There is a reason why internet giants like Twitter and LinkedIn chose Scala.

Summary

There you have it: our reasoning for choosing Scala. About 4 months into the project, it still looks like a good choice. Don't get me wrong, it has not been a walk in the park and we've had some difficulties and problems, but that's the topic of an upcoming post.

JSON values to typed ids in Scala & Play

I recently published a post about how to deal with JSON objects using Scala's case classes and Play Framework. To keep up with the theme, here is another post about the same topic, but this time specifically about types.

The JSON format does have types, but they are not explicit in the actual content, so how values are mapped to data types at the receiving end depends a lot on the developer's first look at the incoming data. The majority of content tends to be in String format, and String seems like a safe choice: after all, you can fairly safely represent a number as a String as long as you don't process the data in any meaningful way. You would not be so lucky doing it the other way around.

This logic seems pretty valid, especially if you are not certain about the format of the data. Maybe the value just happened to be a number this time, but it might include a letter next time, and then using anything but String would result in a runtime exception, and we definitely don't want that. Going all-in with String does have some unfortunate side effects, though, and some problems might creep into your code.

def doIt(someId: String, someOtherId: String, foo: String, bar: String)

It's really easy to mix up the parameter order when you have methods like this, and what's the point of having static typing if you deal mostly with Strings anyway? In the worst case it will "kind of" work, but the results are wrong. Moreover, code like this is a pain to refactor.

So instead of a mess like this, wouldn't it be nice to deal with properly typed values instead? (From here on I concentrate on id values; if you need to pass around many other values, you probably have other design flaws as well.)

def doIt(someId: SomeId, someOtherId: SomeOtherId, foo: String, bar: String)

Now we have also regained the ability to trust the developer's best friend – the compiler. Of course there is nothing new or fancy about wrapping values in classes, but what turned out to be tricky was maintaining the handy JSON parsing that comes with Play Framework and case classes without too much boilerplate. So, how do we actually achieve this?

First solution: Implicit conversion to typed id classes

First of all, we implement a base trait representing any typed id case class, which we will declare later on.

trait BaseId[V] { val value: V }

Based on that basic trait we can then implement another trait for every value type we want to support.

trait StringBaseId extends BaseId[String]
trait NumberBaseId extends BaseId[BigDecimal]

What we now need is an implicit conversion from the JSON primitive type to an instance of a typed id implementation. We do this using an implicit class for every primitive type we support. For example, String-based ids are handled like this:

import play.api.libs.json._

implicit class StringTypedIdFormat[I <: BaseId[String]](factory: Factory[String, I])
    extends Format[I] {
  def reads(json: JsValue): JsResult[I] = json match {
    case JsString(value) => JsSuccess(factory(value))
    case _ => JsError(s"Unexpected JSON value $json")
  }
  def writes(id: I): JsValue = JsString(id.value)
}

The provided factory is used to instantiate a concrete implementation from a String. The Factory type is declared as follows:

type Factory[V, I <: BaseId[V]] = V => I
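The NumberTypedIdFormat used further below is not shown in the original snippet, but presumably it mirrors the String version; a sketch:

implicit class NumberTypedIdFormat[I <: BaseId[BigDecimal]](factory: Factory[BigDecimal, I])
    extends Format[I] {
  def reads(json: JsValue): JsResult[I] = json match {
    case JsNumber(value) => JsSuccess(factory(value)) // JsNumber carries a BigDecimal
    case _ => JsError(s"Unexpected JSON value $json")
  }
  def writes(id: I): JsValue = JsNumber(id.value)
}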

In the next step we declare concrete id classes for each id type we need. Staying consistent with the requested method signature of the earlier example, we write:

case class SomeId(value: String) extends StringBaseId
case class SomeOtherId(value: BigDecimal) extends NumberBaseId

Both case classes extend their corresponding base traits. The case classes representing the JSON objects would look like this:

case class SomeObject(id: SomeId, name: String)
case class SomeOtherObject(id: SomeOtherId, name: String, value: BigDecimal)

To get an implicit conversion between JSON and those two case classes, we must provide implicit reads and writes functions, or JSON combinator formats (https://www.playframework.com/documentation/2.3.x/ScalaJsonCombinators):

implicit val someObjectFormat: Format[SomeObject] = 
  Json.format[SomeObject]
implicit val someOtherObjectFormat: Format[SomeOtherObject] = 
  Json.format[SomeOtherObject]

These formats wouldn't work yet, because we still need an implicit conversion between JSON and the typed id case classes. We can finally provide those formats using the helper classes declared previously. For the two ids we declare:

implicit val someIdFormat: Format[SomeId] = 
  new StringTypedIdFormat[SomeId](SomeId.apply _)
implicit val someOtherIdFormat: Format[SomeOtherId] = 
  new NumberTypedIdFormat[SomeOtherId](SomeOtherId.apply _)

The default apply method of an id case class can be used directly as the declared factory method to generate a concrete instance of the case class. We can now implicitly convert between JSON primitive id values and our typed id case classes in Scala.

A simple test to demonstrate the conversion:

val someObjectAsJson: JsValue = Json.parse("""
  {
    "id":"111",
    "name": "someName"
  }
""")

"Parsing generic id object" should {
   "SomeId will be parsed correctly" in {
     val test = someObjectAsJson.as[SomeObject]
     test.id === SomeId("111")
     test.name === "someName"
   }
}

We can further simplify the id format declaration by adding an extension to Play's Json object:

object TypedId {

  //implicit conversion to extended json object
  implicit def fromJson(json: Json.type) = TypedId

  //extended format functions; the DummyImplicit keeps the two overloads
  //from having the same signature after type erasure
  def idformat[I <: StringBaseId](fact: Factory[String, I]) = 
    new StringTypedIdFormat[I](fact)
  def idformat[I <: NumberBaseId](fact: Factory[BigDecimal, I])(implicit d: DummyImplicit) = 
    new NumberTypedIdFormat[I](fact)
}

Now we can replace the format declaration with the following simpler version:

implicit val someIdFormat: Format[SomeId] = 
  Json.idformat[SomeId](SomeId.apply _)
implicit val someOtherIdFormat: Format[SomeOtherId] = 
  Json.idformat[SomeOtherId](SomeOtherId.apply _)

We still need to pass the factory method because, due to type erasure, we can't instantiate the type parameter at runtime.

We are not completely happy with this solution, because we still have to declare a concrete case class as well as an implicit format for every id class we create. We are looking for a more generic way to declare such ids, which could look something like this:

def doIt(someId: StringId[SomeObject], someOtherId: NumberId[SomeOtherObject], foo: String, bar: String)

By the way: all the base implementations can be found on GitHub.

Mike Toggweiler, a partner @ Tegonal co-authored this post.

Parsing JSON with over 22 fields into case classes

We recently started to develop a new product using Play Framework on the back end and AngularJS on the client side. The back end is rather light and mainly consists of aggregating data from different web services. We do have a local database, and for that we chose MongoDB. So what we have is many sources of data, and pretty much all of them provide data in JSON format. Needless to say, JSON parsing has to work flawlessly.

For the most part that worked great. With Play you can create a case class with all the same fields as the corresponding JSON, and then you just create a format for it in the companion object. This is easy and elegant and works great… until…

…until you notice that one of the web services you are calling returns more than 22 fields, which is a problem because Scala does not allow more than 22 fields in a case class. Normally that limit is fine, and exceeding it is usually a red flag for poor design, but sometimes the web services you use simply have more than 22 fields in their response objects.

case class FooBar(
   field01: String,
   field02: String,
   field03: String,
// ... many fields ...
   field22: String,
   field23: String)

// compilation errors

object FooBar{
   implicit val foobarFormat: Format[FooBar] = Json.format[FooBar]
}

Luckily there is a pretty neat way around this, and it might even improve your design. You can use nested case classes; all you need to do is implement a wrapper writer and reader for the top-level class. But first, create case classes that represent subsets of the fields of the response.

case class Foo(
   field01: String,
   field02: String,
// ... many fields ...
   field09: String,
   field10: String)

object Foo{
   implicit val fooFormat: Format[Foo] = Json.format[Foo]
}

case class Bar(
   field11: String,
   field12: String,
// ... many fields ...
   field21: String,
   field22: String,
   field23: String)

object Bar{
   implicit val barFormat: Format[Bar] = Json.format[Bar]
}

So now we have smaller classes that contain subsets of the original fields, and the only thing missing is a "wrapper" class that represents the complete object. This is rather simple, and in this case it has two fields: the two classes I just defined.

case class FooBar(
   foo: Foo,
   bar: Bar)

object FooBar {
   // requires play.api.libs.json._ and play.api.libs.functional.syntax._ in scope
   implicit val foobarReads: Reads[FooBar] = (
      (JsPath).read[Foo] and
      (JsPath).read[Bar])(FooBar.apply _)

   implicit val foobarWrites: Writes[FooBar] = (
      (JsPath).write[Foo] and
      (JsPath).write[Bar])(unlift(FooBar.unapply))
}

Now in Scala you can access the fields quite neatly using dot notation, but the JSON is serialized back to the original flat format. This parsing can easily be tested.

val foobarJson: JsValue = Json.parse("""
  { "field01":"value01", 
    "field02":"value02", 
    ...
    "field22":"value22", 
    "field23":"value23"}
  """)

"Parsing json to nested case classes" should {
  "work just fine" in {
    val foobar = foobarJson.as[FooBar]
    foobar.foo.field01 === "value01"
// ...
    foobar.bar.field23 === "value23"
  }
}

Power to Powershell

The somewhat "new" and fancy PowerShell does seem to have some rather nice features. This is not to say it's in any way better than a real unix shell, but you can get some pretty neat stuff done with it in a rather simple manner. One of the old problems I've had (other than being stuck on Windows) is that whenever a new version of Java comes out, I need to juggle between different Java versions depending on which project I am working on.

There are 2 environment variables you need to change to switch the current Java version on the command line:

  1. JAVA_HOME
  2. Path

The normal way to add the Java commands to the path is via the JAVA_HOME environment variable (i.e. JAVA_HOME\bin). The problem is that the Path is resolved when you start PowerShell, replacing all the environment variables with their values, so changing JAVA_HOME alone is not enough. You also need to update the Path, and updating both manually is quite tedious.

However, PowerShell provides a way to define functions in your profile, and they are a perfect way to manage Java versions.

function java8 {
  $env:JAVA_HOME="C:\Program Files\Java\jdk1.8.0"
  $env:Path=$env:JAVA_HOME + "\bin;" + $env:Path
}

function java7 {
  $env:JAVA_HOME="C:\Program Files\Java\jdk1.7.0_25"
  $env:Path=$env:JAVA_HOME + "\bin;" + $env:Path
}

This is certainly not perfect: if you change the Java version many times, the Path will get quite long, but I don't consider that much of a problem. I usually fire up a new instance anyway, and then the Path is reverted back to the original.

Emergent Leadership @ XP 2014


 

Some time ago I got confirmation both for my conference trip to one of the most important agile conferences on the old continent – XP2014 – and that my suggestion for a lightning talk was accepted. I plan to do a quick introduction to Emergent Leadership, which I blogged about in my previous post.

A lightning talk should be perfect for this, as what I want is to stir up a bit of conversation and question static team structures and roles. Agile is not what you do, it's what you are.

It does not hurt that the conference is hosted in one of the most historically intriguing cities in the world.

So what I still need to do is clarify my thoughts on how I want to present the topic. 5 minutes is not much, but it can be plenty if well used.

http://www.xp2014.org/

Emergent Leadership

Managing and leading are two very different concepts, but you often see terms like "manager" and "leader" used interchangeably. I personally define them very differently, and in my opinion there is a very clear distinction.

In short, "managers" are appointed and leaders emerge.

Managing refers to managing conditions, and good managers create conditions where leadership can naturally emerge. Some people are natural leaders, and they will take the lead unless their leadership skills are suppressed by management.

Leaders are very context dependent. In a software team, different people can emerge as leaders during different development phases, and even when designing different parts of the system it may well be that the people who best understand the requirements and the situation naturally take the lead.

A good manager should understand when to cede control to someone better suited to lead. Arbitrarily holding on to power will only interfere with the team's ability to perform. Ceding control requires confidence and trust in the team.

Emergent leadership is a natural phenomenon. Take a migrating flock of birds, for instance. A wedge of cranes has a leader: somebody has to fly at the point. The leader position, however, rotates naturally.


The form of the wedge is fairly stable, but some other flocks change their form dramatically, and peculiar forms and patterns can emerge. "When birds fly in flocks, they often arrange themselves in specific shapes or formations. Those formations take advantage of the changing wind patterns based on the number of birds in the flock and how each bird's wings create different currents. This allows flying birds to use the surrounding air in the most energy efficient way." (http://birding.about.com/od/birdbehavior/a/Why-Birds-Flock.htm) Should an agile team assume that its form should stay constant?

Leadership implies followers. When leadership is not based on authority, as it is when a manager tells the team to do something, a good natural measure is whether the actions are agreed upon. If you have a good idea and people are willing to follow you because they agree with you, that makes you a leader.

Emergent leadership is an ongoing self-organization activity by the team. Self-organization is one of the fundamental agile principles, but it's not always understood as an ongoing process. Emergent leadership is a natural phenomenon in a truly self-organizing team. An agile team should be more like a flock of birds, prepared to change form to best benefit from the "currents" that each team member's skills create at any given moment.