Functional and Reactive Domain Modeling

When we introduced Scala at Hello Heart, one of our greatest challenges was that no one on the team had any functional programming experience. We grappled with functional concepts and how they fit together, and ended up with an object-oriented/semi-functional style of code that can’t be considered even remotely good in either paradigm.

One of my realisations at the time was that as developers, we do most of our learning by reading and imitating other people’s code. We don’t always notice it happening, because reading and imitating is what most of us do for a living at the beginning of our careers, but when you are required, as a lead developer, to introduce a new paradigm you have no experience with, the lack of that kind of experience becomes apparent.

Another thing I was lacking at the time was an understanding of common functional programming idioms and design patterns. Despite the community’s distaste for GoF, I still appreciate the way it shaped my object-oriented thinking: it gave me an understanding of how the different object-oriented concepts fit together and how design patterns can be composed to build software. I spent quite some time looking for a resource that would give me a similar experience with functional concepts.

I am now reading the book Functional and Reactive Domain Modeling, and it gives me exactly that. The book presents one functional design approach from the ground up and explains the different functional concepts that come into play along the way. While I’m not sure this is an approach I would like to adopt as is, it gives me great insight into the thought process of one functional system designer.

In other words, sh!t functional programmers say is starting to make sense to me.

Small Note About Concurrency

I’m reading Scala in Depth. In section 2.3.2, on concurrency, the author gives the following example of a thread-safe “index service” (trait definition removed for brevity):

import scala.collection.immutable.{HashMap => ImmutableHashMap} // alias for the standard immutable HashMap

class ImmutableService[Key, Value] {
  // The index is an immutable map; each insert swaps in a new map.
  var currentIndex = new ImmutableHashMap[Key, Value]
  // Reads are not synchronised: they simply dereference currentIndex.
  def lookUp(k: Key): Option[Value] = currentIndex.get(k)
  // Writers are serialised; each insert publishes a fresh immutable map.
  def insert(k: Key, v: Value): Unit = synchronized {
    currentIndex = currentIndex + ((k, v))
  }
}

The author shows that this implementation is much faster than one that holds a val reference to a mutable map and synchronises both the lookup and the insert operations.
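For comparison, here is a minimal sketch of what such a fully synchronised counterpart might look like (the book’s actual listing may differ; the class name is mine):

import scala.collection.mutable

class MutableService[Key, Value] {
  // One mutable map shared by all threads; every access must take the lock.
  val currentIndex = new mutable.HashMap[Key, Value]
  def lookUp(k: Key): Option[Value] = synchronized { currentIndex.get(k) }
  def insert(k: Key, v: Value): Unit = synchronized {
    currentIndex.put(k, v)
  }
}

Here readers and writers all contend on the same lock, so lookups block while an insert is in progress, which is exactly where the immutable version gains its speed.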

When I was reading this code, I wondered whether this implementation is indeed safe. After all, lookUp reads the var without any synchronisation, so the lock taken in insert means nothing to the reading threads. Do we have a guarantee that reading and writing the reference is atomic?
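To make the question concrete, lookUp effectively reads the reference once into a local and then queries that immutable snapshot, so the only racy step is that single unsynchronised read of the var (the desugared version below is my own, not from the book):

class ImmutableServiceDesugared[Key, Value] {
  var currentIndex = Map.empty[Key, Value]
  // Equivalent to the lookUp above, with the reference read made explicit.
  def lookUp(k: Key): Option[Value] = {
    val snapshot = currentIndex // the single unsynchronised read of the var
    snapshot.get(k)             // queries an immutable map that can no longer change
  }
}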

The Java Language Specification answered my question, in §17.7 (“Non-Atomic Treatment of double and long”):

For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.

Writes and reads of volatile long and double values are always atomic.

Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.

Note that, in general, writing a 64-bit value (a non-volatile long or double) on the JVM may not be atomic, but reference writes specifically must be atomic as per the JLS, even when references are implemented as 64-bit values.
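As a toy illustration of the first quoted paragraph (my own sketch, not from either book): a writer flips a non-volatile Long between two bit patterns while a reader checks whether it ever observes a value that is neither. On a JVM that splits 64-bit writes the reader can see a torn value, whereas a var holding a reference can never be observed in such a half-written state.

object TornLongDemo extends App {
  var value: Long = 0L // deliberately not @volatile

  val writer = new Thread(() => {
    while (true) {
      value = 0L  // bit pattern 0x0000000000000000
      value = -1L // bit pattern 0xFFFFFFFFFFFFFFFF
    }
  })
  writer.setDaemon(true)
  writer.start()

  // Most 64-bit JVMs happen to write longs atomically, so this may never
  // print; the JLS merely permits tearing for non-volatile long/double.
  for (_ <- 1 to 10000000) {
    val v = value
    if (v != 0L && v != -1L) println(f"torn read: 0x$v%016x")
  }
}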

This is a rather delicate point: in the general case, if reference assignments were not guaranteed to be atomic (as is sometimes the case in other languages), this example would not be correct, since a reading thread might observe a torn, partially written reference.

Since this is such a delicate point, I think it’s worth mentioning explicitly in the book’s text.