Exploring Delegation in Kotlin

I’m a huge fan of interfaces in Java and also of composition over inheritance. Inheritance is magic wiring with tight coupling that creates a lot of friction when evolving a code base. I’ve written about interfaces several times before, for example here or here. Looking at Kotlin, I wanted to see what I could do with interfaces and composition.

Take this example:
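
Something along these lines (a sketch, the names are illustrative):

```kotlin
// Sketch: a small interface for the name part, implemented directly by Person
interface HasName {
    val name: String
}

class Person(override val name: String) : HasName
```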

Why would one introduce a HasName interface in this case? It reduces dependencies and coupling. This makes reasoning about code easier and speeds up compilation, especially incremental compilation.

How would we use the HasName interface? A function that checks for a good name could look like this:
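
A sketch of such a function (hasGoodName and the rule it checks are illustrative):

```kotlin
// Sketch: the function takes the whole Person, although it only needs the name
fun hasGoodName(person: Person): Boolean =
    person.name.isNotBlank() && person.name.length > 2
```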

Now the function depends on the whole Person, not just the name part. The function cannot operate on many things, only on persons. What about other things with a name, like dogs? Rewriting the code to
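
```kotlin
// Sketch: depend only on the HasName interface instead of the whole Person
fun hasGoodName(named: HasName): Boolean =
    named.name.isNotBlank() && named.name.length > 2
```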

makes the function more reusable.

Inside our Person class we have the code for the HasName functionality. It would be nicer to be able to reuse the functionality from somewhere else.

In Kotlin we can delegate Interfaces to objects:
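
A sketch of what this can look like, with NameMixin as an illustrative delegate:

```kotlin
// The mixin implements HasName; Person delegates the interface to it
class NameMixin(override val name: String) : HasName

class Person(hasName: HasName) : HasName by hasName

// The caller has to know about (and create) the NameMixin
val person = Person(NameMixin("Stephan"))
```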

This looks a little unnatural to me, as the user of the Person class needs to know about the NameMixin. Let’s see if we can do better:
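
```kotlin
// Sketch: create the mixin inside the delegation clause instead
class Person(name: String) : HasName by NameMixin(name)

val person = Person("Stephan")   // no NameMixin visible to the caller
```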

This looks cleaner, as the consumer of Person does not need to know about NameMixin.

Kotlin can also use data classes (Thanks to Christian Helmbold for pointing this out).
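
For example, the mixin itself can be a data class (a sketch, not necessarily the exact variant meant here):

```kotlin
// A data class mixin gets equals/hashCode/toString/copy for free
data class NameMixin(override val name: String) : HasName

class Person(name: String) : HasName by NameMixin(name)
```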

If we want to have more control, we can use a Factory inside Person.
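
A sketch of how such a factory might look; the exact shape is an assumption:

```kotlin
// The companion object (here named Name) controls how the mixin is created
class Person private constructor(hasName: HasName) : HasName by hasName {
    companion object Name {
        // reuses the NameMixin from the earlier sketch, with a bit of extra control
        fun create(name: String): Person = Person(NameMixin(name.trim()))
    }
}
```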

The name of the companion object, in this case Name, is optional but helps to structure the factory methods.

The mixin can be accessed with this:
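
For example (sketch):

```kotlin
val person = Person.create("Stephan")
println(person.name)                   // "Stephan", served by the mixin behind the scenes

val other = Person.Name.create("Anna") // the factory can also be reached via the companion's name
```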

Using a companion object with a factory method is better, because it gives us more control over the creation of the mixin. But the control is still not optimal. I wish we had something like
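
```kotlin
// Purely hypothetical -- NOT valid Kotlin, just the kind of construct I mean:
// name the delegate so the class body can still reach it and adjust its behaviour
class Person(name: String) : HasName by mixin(NameMixin(name)) {
    override val name: String
        get() = mixin.name.trim()   // wished-for handle to the delegate
}
```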

where I have access to name and more control over it. It would also be nice to have a way to access other Mixins from a Mixin. But overall, some nice functionality in Kotlin.

CEO to CTO: What is your RewriteRatio?

I’ve been thinking a lot about the interaction of CEOs and CTOs over the last years. On that topic I have given talks, educated CEOs and held seminars. What type of interactions do CEOs have with technology? What are KPIs that make sense? What are levers for the CEO? What questions should a CEO ask? What should CEOs know about technology?

After many years of technology success I still have the feeling that for many CEOs technology is a black box that either works or explodes, with nothing in between. Do I get a good return on investment from IT? Should I have more developers? When talking to CEOs, many tell me they do not feel in control when it comes to technology.

Recently, after talking to Jens Schumann from OpenKnowledge about the life cycle of frameworks, I had an idea for a lever and a possible KPI around technology: the ratio of the effort of migrating the current system to a new technology vs. rewriting the system from scratch.

Say the ratio is 50%: this means migrating the system to a new technology takes 50% of the effort of writing it from scratch. 150% means migrating the system is more expensive than writing it from scratch. I would assume many systems are between 80% and 120%, with the majority at 100%. If no guidance is given, most code is written in a way that ties it tightly into frameworks.

The RewriteRatio obviously depends on the technology the system should migrate to, e.g. migrating from Scala with Lift to Scala with Play is easier than migrating from Ruby with Rails to JavaScript with Node. If you switch the programming language, it is probably always the same as writing from scratch, because essentially that is what you do.

Why should a CEO ask this question? Systems come to the end of their life cycle, or parts of them do. Web frameworks are no longer supported; in JavaScript we have seen Backbone, Knockout, Angular, Ember and React in rapid succession, with older frameworks massively losing momentum and community. So this is always a hidden risk on your books, potentially in the millions. Rewrites also have a huge impact on the delivery of new features; often they block the delivery of new features for months or even years. During some company phases rewrites are no problem and even planned for, e.g. during a first-to-market phase financed with external money. During other phases rewrite costs can be painful, e.g. when there is no longer hyper growth and profit margins are thinner. The lower these costs are, the better.

If not asked, CEOs often assume there is no need for a complete rewrite but only upgrades or migrations. Technology people often think everyone knows that we need to rewrite sooner or later. Or they just assume they’ll leave the company before that date. So often code is written in a way where migration is not easily possible.

The RewriteRatio can also be used as a communication and alignment device between CEO and CTO, e.g. the goal could be to maintain a 50% RewriteRatio. The CTO then manages according to this corridor. Developers know how much effort to put into abstracting away technologies or packaging business logic behind anti-corruption layers, or whether fast and furious is okay with the CEO (a 100% RewriteRatio).

RewriteRatio is just one lever or KPI which helps CEOs manage IT. I will explore more in upcoming posts.

A Little Guide on Using Futures for Web Developers

Disclaimer: This guide uses Scala to illustrate concepts but aims to be useful for other languages with Futures. The guide views Futures as a concurrency API independent of the underlying concurrency implementation. It aims for understanding, not for absolutely correct usage of nomenclature. The guide is based on my limited understanding of the topic, but I hope it is still useful to others.

Why – Or Better Web Performance by Using Futures

Performance of web applications is important to users. A web site that is snappy will engage users much more. In frontend controllers you often need to access several backend services to display information. These backend service calls take some time to get the data from the backend servers. Often they are made one after the other, so the call times add up.

Suppose we have a web controller that accesses three backend services to load a user, get new messages for the user and get new job offers. The code would call the three backend services and then render the page:
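
A sketch of this with illustrative types, services and timings (User, Message, Offer, the service objects and render are all assumptions):

```scala
case class User(id: Long, email: String)
case class Message(text: String)
case class Offer(title: String)

object userService    { def userByEmail(email: String): User        = User(1L, email) }       // ~200ms
object messageService { def newMessages(userId: Long): Seq[Message] = Seq(Message("hi")) }     // ~100ms
object offerService   { def newOffers(userId: Long): Seq[Offer]     = Seq(Offer("new job")) }  // ~100ms

def render(user: User, messages: Seq[Message], offers: Seq[Offer]): String =
  s"${user.email}: ${messages.size} messages, ${offers.size} offers"

val email  = "stephan@example.com"
val userId = 1L   // assumed to be known already, e.g. from the session

// One call after the other: the call times add up
val user     = userService.userByEmail(email)
val messages = messageService.newMessages(userId)
val offers   = offerService.newOffers(userId)
val page     = render(user, messages, offers)
```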

All three calls need to be executed to render the HTML for the page. With the timings of the three calls, the rendering will take at least 400ms. It would be nice to execute all three in parallel to speed up page rendering. To achieve this we can modify our service calls to return Futures.
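
A sketch of the same illustrative services, now returning Futures:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object userService    { def userByEmail(email: String): Future[User]        = Future(User(1L, email)) }
object messageService { def newMessages(userId: Long): Future[Seq[Message]] = Future(Seq(Message("hi"))) }
object offerService   { def newOffers(userId: Long): Future[Seq[Offer]]     = Future(Seq(Offer("new job"))) }

// All three calls start immediately and run concurrently, so the page is ready
// after the slowest call (~200ms) instead of the sum of all calls (~400ms)
val userF     = userService.userByEmail(email)
val messagesF = messageService.newMessages(userId)
val offersF   = offerService.newOffers(userId)
```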

Futures in Scala are executed on a different thread from a thread pool. Futures are boxes or wrappers that represent the parallel execution of a calculation and the future value of that calculation. After the Future has finished, the value is available. With Futures in place the code takes only 200ms to execute instead of 400ms as in the first version. This leads to faster response times from our website.

How do we work with the value inside a Future? Suppose we want the email address of the User. We could wait until the execution of the service call is finished and then get the value, but this would diminish the value of our parallel execution. Better: we can work with the value in the Future!
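
For example, with map (using the illustrative userF from above):

```scala
// map transforms the value inside the Future once it completes, without blocking
val emailF: Future[String] = userF.map(user => user.email)
```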

This way the mapping function is called when the future is completed but our code still runs in parallel.

Getting the Value From a Future

When is a Future executed and the value rendered in a web framework? If we think of the Future as a box, at some point someone needs to open the box to get the value. We can open the box and get the value of all combined Futures with Await. Await also takes a timeout value:
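
A sketch (emailF is the illustrative Future from above):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._

// Blocks the current thread until the Future completes, or fails after the timeout
val userEmail: String = Await.result(emailF, 1.second)
```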

Await waits for the Future to return, in this case with a maximum waiting time of 1 second. After this we can hand the email to our web templating for rendering. We want to open the box as late as possible, as opening a box blocks a thread, which we want to prevent.

A Web Framework that Supports Futures

So why open the box at all? Better yet, use a web framework that can handle asynchronicity natively, like the Play Framework.
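
A sketch of a controller action in the (older) Play 2.x style; the controller name and response are assumptions:

```scala
import play.api.mvc._

object UserController extends Controller {
  // Action.async takes a Future[Result]; Play completes the HTTP request
  // when the Future completes, without blocking a request thread in our code
  def profile(email: String) = Action.async {
    userService.userByEmail(email).map { user =>
      Ok(s"Hello ${user.email}")
    }
  }
}
```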

Play can directly work with the Future and return the result to the browser as soon as the Future is completed, while our own code finishes and frees the request thread.

Combining Futures into new Futures

We do not want to use Await for every Future or hand every Future to the async web framework. We want to combine Futures into one Future and work with this combined Future. How do you combine the different futures from the backend calls into one result? We can use map as above and then nest service calls.
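
With nested map calls this looks roughly like this, and the type already shows the problem:

```scala
// Nested map calls: the result type piles up as Future[Future[Future[String]]]
val nested: Future[Future[Future[String]]] =
  userF.map { user =>
    messagesF.map { messages =>
      offersF.map { offers =>
        render(user, messages, offers)
      }
    }
  }
```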

Ordinary map calls do not work, as we get a deeply nested Future. So for the outer calls we use flatMap, which flattens a Future[Future[A]] into a Future[A]:
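
Roughly, with the same illustrative values:

```scala
// flatMap on the outer Futures flattens the nesting into a single Future
val page: Future[String] =
  userF.flatMap { user =>
    messagesF.flatMap { messages =>
      offersF.map { offers =>
        render(user, messages, offers)
      }
    }
  }
```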

With many service calls this becomes unreadable though.

Serial Execution

Scala has a shortcut for nested flatMap calls in the form of for comprehensions. Scala for comprehensions – which are syntactic sugar for flatMap, map and withFilter – make the code more readable. The yield block is executed when all Futures have returned.

Example: Serial execution
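
A sketch, using the illustrative services from above:

```scala
// The Futures are created inside the for comprehension, so each service call
// only starts after the previous one has completed -- serial execution
val page: Future[String] = for {
  user     <- userService.userByEmail(email)
  messages <- messageService.newMessages(user.id)
  offers   <- offerService.newOffers(user.id)
} yield render(user, messages, offers)
```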

The for comprehension is desugared into a chain of flatMaps (see above), so the methods are called one after the other and therefore the Futures are created one after the other. This also means the Futures are not executed in parallel.

Sometimes we want to execute Futures in serial one after the other but stop after the first failure. Michael Pollmeier has an example:
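
In the spirit of his example (a sketch, not his exact code; serialiseFutures is an illustrative name):

```scala
// Run one Future per item strictly one after the other and stop at the first failure
val users: Future[Seq[User]] =
  serialiseFutures(Seq("a@example.com", "b@example.com", "c@example.com")) { email =>
    userService.userByEmail(email)
  }
```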

which can be implemented by:
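
One possible implementation (a sketch):

```scala
// Fold over the items; each next Future is only created inside flatMap,
// so it starts after the previous one succeeded (a failure short-circuits the chain)
def serialiseFutures[A, B](items: Seq[A])(fn: A => Future[B]): Future[Seq[B]] =
  items.foldLeft(Future.successful(Seq.empty[B])) { (acc, item) =>
    for {
      results <- acc
      result  <- fn(item)
    } yield results :+ result
  }
```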

Parallel Execution

For parallel execution of the futures we need to create them before the for comprehension.

Example: Parallel execution
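
A sketch with the same illustrative services:

```scala
// The Futures are created before the for comprehension,
// so all three calls start immediately and run concurrently
val userF     = userService.userByEmail(email)
val messagesF = messageService.newMessages(userId)
val offersF   = offerService.newOffers(userId)

val page: Future[String] = for {
  user     <- userF
  messages <- messagesF
  offers   <- offersF
} yield render(user, messages, offers)
```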

Sidenote: If you’re more into FP you can use Applicatives to combine Future results into a tuple.
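
For example with Scalaz (a sketch; the imports assume Scalaz 7):

```scala
import scalaz.syntax.apply._       // provides |@|
import scalaz.std.scalaFuture._    // Applicative/Monad instance for scala.concurrent.Future

// Combine the independent Futures applicatively instead of via flatMap
val page: Future[String] =
  (userF |@| messagesF |@| offersF) { (user, messages, offers) =>
    render(user, messages, offers)
  }
```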

Working with dependent service calls

What happens if two service calls depend on each other? For example the messages call depends on the user call. With for comprehensions this is easy to solve as each line can depend on former lines.
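
A sketch with the illustrative services from above:

```scala
val offersF = offerService.newOffers(userId)         // independent, starts right away

val page: Future[String] = for {
  user     <- userService.userByEmail(email)         // starts first
  messages <- messageService.newMessages(user.id)    // needs the user, so runs after it
  offers   <- offersF                                // has been running in parallel all along
} yield render(user, messages, offers)
```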

Now userByEmail and newMessages run in serial, as the second depends on the first, while the call to newOffers runs in parallel, as we create the Future before the for comprehension and therefore it starts running earlier.

The nice thing about Futures is how composable they are. For cleaner code we can create a method userWithMessages and reuse it in our final for comprehension. This way the code is easier to understand, parts are reusable, and serial and parallel execution are easier to see:
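
A sketch of that:

```scala
// Reusable combination of the two dependent calls
def userWithMessages(email: String): Future[(User, Seq[Message])] =
  for {
    user     <- userService.userByEmail(email)
    messages <- messageService.newMessages(user.id)
  } yield (user, messages)

val offersF = offerService.newOffers(userId)   // parallel: created before the for comprehension

val page: Future[String] = for {
  (user, messages) <- userWithMessages(email)  // the serial pair, started as the first generator
  offers           <- offersF
} yield render(user, messages, offers)
```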

Using the method call userWithMessages directly in the for comprehension works here, because it is the first and only such call. With several calls like above, use
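
```scala
// (illustrative) create the combined Future up front, next to the other Futures
val userWithMessagesF = userWithMessages(email)
```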

to execute in parallel before the for comprehension block.

Turning a Sequence of Futures into a Future of Sequence

If we want to get several users, and our API does not support usersByEmail(Seq[String]) to get many users with one call, we need to call the service once per user.
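
A sketch (emails is an assumed list of addresses):

```scala
val emails = Seq("a@example.com", "b@example.com", "c@example.com")

// One service call per email: a Seq[Future[User]]
val userFutures: Seq[Future[User]] = emails.map(email => userService.userByEmail(email))
```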

Here the usage of Futures has major benefits, as it dramatically speeds up execution. Suppose we have 10 calls of 100ms each: the serial version will take 10x100ms (1 second), whereas the parallel version from above will take around 100ms.

How do we compose the result of Seq[Future[A]] with other futures? We need to turn this Sequence of Futures into a Future of Sequence. There is a method for this:
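
In the standard library this is Future.sequence:

```scala
// Future.sequence turns the Seq[Future[User]] into a Future[Seq[User]]
val usersF: Future[Seq[User]] = Future.sequence(userFutures)
```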

The future returns when all futures have returned.

Using Future.traverse

Another way is to use Future.traverse directly:
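
A sketch with the same illustrative emails:

```scala
// traverse applies the Future-returning function to each element
// and collects the results into a single Future[Seq[User]] in one step
val usersF: Future[Seq[User]] =
  Future.traverse(emails)(email => userService.userByEmail(email))
```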

Composing Futures that contain Options

Often service calls or database calls return Option[A] to account for the possibility that no value exists, e.g. in the database. Suppose our user service returns Future[Option[User]] and we want to work with the user. Sadly this does not work:
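
A sketch of the failing attempt (assuming userByEmail now returns Future[Option[User]]):

```scala
// Assume: def userByEmail(email: String): Future[Option[User]]

// Does NOT compile: the second generator is an Option, not a Future
val emailF = for {
  userOpt <- userService.userByEmail(email)   // Future[Option[User]]
  user    <- userOpt                          // Option[User] -- a different container
} yield user.email
```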

This does not work because ‘for’ is syntactic sugar for flatMap, meaning the results are flattened, e.g. List[List[A]] is flattened into List[A]. But how do we flatten Future[Option[A]]? So composing Futures that return Options is a little more difficult. It also doesn’t work because we wrap a container (Option) into another container (Future) but want to work on the inner value with map and flatMap.

Using a FutureOption Class

One solution for keeping the Option is to write a FutureOption class that combines a Future and an Option. An example can be found on the edofic blog:
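
A sketch along those lines (not the verbatim code from that post):

```scala
// Wraps Future[Option[A]] so that map/flatMap work on the innermost value
case class FutureOption[A](contents: Future[Option[A]]) {

  def map[B](fn: A => B): FutureOption[B] =
    FutureOption(contents.map(_.map(fn)))

  def flatMap[B](fn: A => FutureOption[B]): FutureOption[B] =
    FutureOption {
      contents.flatMap {
        case Some(value) => fn(value).contents
        case None        => Future.successful(None)
      }
    }
}
```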

In our example from above it could be used like this:
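
For example (sketch):

```scala
// map/flatMap now operate on the User inside Future[Option[User]]
val emailFO: FutureOption[String] =
  for {
    user <- FutureOption(userService.userByEmail(email))
  } yield user.email

val emailF: Future[Option[String]] = emailFO.contents
```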

Using OptionT

As this problem often arises in functional code working with containers, it has been solved before with transformers. These allow stacking containers and transforming the effects of two containers (Option having the optional effect and Future the future effect) into a new container that combines these effects. Just the thing we need.

Luckily Scalaz has a generic transformer for Option called OptionT that can combine the effect of Option with another container. Combining also means that our flatMap and map methods work with the innermost value.
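
A sketch with Scalaz 7 (the imports are the part most likely to differ between versions):

```scala
import scalaz.OptionT
import scalaz.std.scalaFuture._    // Monad instance for scala.concurrent.Future

// OptionT[Future, A] stacks the two containers; map/flatMap work on the inner value
val emailF: Future[Option[String]] =
  (for {
    user <- OptionT(userService.userByEmail(email))   // Future[Option[User]] lifted into OptionT
  } yield user.email).run
```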

This way we can compose calls that return Future[Option[A]].

Combining Option[A] with Seq[A]

Combining this with our messages service call that returns Seq[Message] gets a little more complex, but can be done.
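
A sketch (newMessages is the illustrative Future[Seq[Message]] from the services above):

```scala
import scalaz.OptionT
import scalaz.std.scalaFuture._
import scalaz.syntax.monad._       // provides liftM

val newMessages: Future[Seq[Message]] = messageService.newMessages(userId)

val result: Future[Option[(User, Seq[Message])]] =
  (for {
    user     <- OptionT(userService.userByEmail(email))   // OptionT[Future, User]
    messages <- newMessages.liftM[OptionT]                // lift Future[Seq[Message]] into the stack
  } yield (user, messages)).run
```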

The interesting line is messages <- newMessages.liftM[OptionT], where newMessages is a Future[Seq[Message]]. To get this working with Future[Option[A]] we can either transform it by hand or use liftM. liftM automatically “lifts” the value into the right container, OptionT[Future, Seq[Message]] in this case.

Combining Iterating over Lists with Futures

Sometimes you want to work with futures and iterate over Future[List[A]]. This can be achieved with nested for comprehensions:
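
A sketch (usersF is an illustrative Future[Seq[User]]):

```scala
// The outer for comprehension works inside the Future,
// the nested one iterates over the sequence it contains
val emailsF: Future[Seq[String]] =
  for {
    users <- usersF
  } yield for {
    user <- users
  } yield user.email
```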

Error Handling

For Futures and Futures with Options, all the error handling strategies that I wrote about in “Some thoughts on Error Handling” still apply.

But there are more. For combining Try with Future, one can convert the Try to a Future. It makes more sense to flatten Future[Try[A]] than to flatten Future[Option[A]]: Try and Future have similar semantics – Success and Failure – whereas Future and Option differ in their semantics.
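
For example with Future.fromTry:

```scala
import scala.util.Try

// A Try carries Success/Failure, just like a Future, so it converts directly
val parsed: Try[Int]     = Try("42".toInt)       // a bad string would give a Failure
val parsedF: Future[Int] = Future.fromTry(parsed)
```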

Besides this, Future in Scala has many ways to handle error conditions with fallbackTo, recover and recoverWith, which to me are preferable to the failure callbacks that also exist.
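
A sketch of the three (backupService and guestUser are assumptions, and userByEmail here is the plain Future[User] version):

```scala
import java.io.IOException
import java.util.concurrent.TimeoutException

val guestUser = User(0L, "guest@example.com")
object backupService { def userByEmail(email: String): Future[User] = Future.successful(guestUser) }

val robustUserF: Future[User] =
  userService.userByEmail(email)
    .recover     { case _: TimeoutException => guestUser }                        // replace a failure with a value
    .recoverWith { case _: IOException      => backupService.userByEmail(email) } // replace a failure with another Future
    .fallbackTo(Future.successful(guestUser))                                     // last resort if the Future failed
```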

Futures can make your frontend code faster and your customers happier. I hope this small guide helped you understand Futures and how to handle them.

If you have ideas about handling Futures or general feedback, reply to @codemonkeyism


  • The article uses container or box for describing Future and Option. The term that is usually used is Monad. Often this term confuses people so I have used the simpler box and container instead.

  • The article assumes creating Futures creates parallel running code. This is not always the case and depends on the number of available cores and the thread pool or concurrency implementation underlying the Futures. For production deployment and performance you need to read about thread pools and how to tune them to your application and hardware. Usually an application has different thread pools e.g. for short running code, long running code or blocking (e.g. IO) code.

  • The article assumes that code in a Future is not blocking. If you use blocking code in Futures, e.g. IO with blocking database drivers, this has an impact on your performance.

Stephan the codemonkey has been writing code for 35 years, was CTO at several companies and currently helps startups with their technical challenges.