Functional and Reactive Domain Modeling

When we introduced Scala into Hello Heart, one of our greatest challenges was not having anyone on the team with any kind of functional programming experience. We grappled with functional concepts and how they fit together, and ended up with a hybrid object-oriented/semi-functional style of code that can’t be considered even remotely good in either paradigm.

One of my realisations at the time was that as developers, we do most of our learning by reading and imitating other people’s code. We don’t always realise it when it happens, because reading and imitating is what most of us do for a living at the beginning of our careers. But when you are required, as lead developer, to introduce a new paradigm you have no experience with, the lack of that kind of experience becomes apparent.

Another thing I was lacking at the time was some understanding of common functional programming idioms and design patterns. Despite the community’s distaste for GoF, I still appreciate the way it shaped my object-oriented thinking and the understanding it gave me of how the different object-oriented concepts fit together, and how design patterns can be composed to build software. I spent quite some time looking for a resource that would give me a similar experience with functional concepts.

I am now reading the book Functional and Reactive Domain Modeling, and it gives me exactly that. The book presents one functional design approach from the ground up and explains the different functional concepts that come into play on the way. While I’m not sure this is an approach I would like to adopt as is, it gives me great insight into the thought process of one functional system designer.

In other words, sh!t functional programmers say is starting to make sense to me.


Programming Warm Up

In software development, productivity is all about getting into the Zone. When you are in the Zone, you are one with the code and writing software is as easy as typing. When you are not, a whole day can pass without doing much work, because thinking is just so hard.

One of the things I do to get myself into the zone is a little “warm-up” session before I start to code features that are hard for me to approach. I choose small problems that I particularly enjoy as warm-ups. Little treats, you might say. For example, I may start programming a feature by refactoring some remotely relevant code, commenting on the legacy code around it, or focusing on a small part of the feature that seems interesting.

To the outside viewer (a.k.a. my boss back when I was a developer) it may appear as though I’m wasting time on things that are not the core of my work (“We have no time to invest in code refactoring right now, we should ship this feature”), but I argue that this time is not spent but invested: it makes development faster, and the time it takes to develop the feature afterwards is much shorter.

Think about it: in many other areas of our lives, the concept of a warm-up is very prevalent. Athletes warm up before exercise to make it more effective and less dangerous. Singers warm up their vocal cords and musicians warm up their fingers before a show, and when I took an animation course, we were taught to start every day with a few minutes of quick-sketching to get our creative juices flowing.

Why should programming be different?

Concentration and focus are a problem of acceleration: when you need to go from 0 to 100 km/h, you want the car that gives you the largest boost in the shortest time. That is what programming warm-up is all about.

Some Scala Randomness

I thought it’d be a fun exercise to implement a ?: operator for Scala, like the one Java has:

boolValue ? ifTrue : ifFalse

I turned it into a ?! operator, though, because Scala didn’t like me trying to define a method called ‘:’.

implicit class FancyBoolean(val b: Boolean) {

	// both branches are passed by name (=> T), so only the selected one is ever evaluated
	def ?[T](ifTrue: => T): TrueBranch[T] =
		if (b) {
			new TrueBranch[T] {
				override def ![A >: T, F <: A](ifFalse: => F): A = ifTrue
			}
		} else {
			new TrueBranch[T] {
				override def ![A >: T, F <: A](ifFalse: => F): A = ifFalse
			}
		}

	// intermediate result of `?`: `!` supplies the else branch and widens the
	// result to a common supertype A of both branch types
	trait TrueBranch[T] {
		def ![A >: T, F <: A](ifFalse: => F): A
	}
}

Let’s see how this stuff behaves:

scala> true ? 1 ! 2
res2: Int = 1

scala> false ? 1 ! 2
res3: Int = 2

scala> false ? 1 ! "Hi!"
res4: Any = Hi!

scala> true ? println("Evaluated true!") ! println("Evaluated false!")
Evaluated true!

Nice :)

Small Note About Concurrency

I’m reading Scala in Depth. In Section 2.3.2 on concurrency, the author gives the following example of a thread-safe “index service” (trait definition removed for brevity):

// ImmutableHashMap is presumably an alias for scala.collection.immutable.HashMap
class ImmutableService[Key, Value] {
  var currentIndex = new ImmutableHashMap[Key, Value]
  // reads just dereference the current (immutable) map, with no locking
  def lookUp(k: Key): Option[Value] = currentIndex.get(k)
  // writes build a new map and swap the var under a lock
  def insert(k: Key, v: Value): Unit = synchronized {
    currentIndex = currentIndex + ((k, v))
  }
}

The author shows that this implementation is much faster than one that keeps a val reference to a mutable map and synchronises both the lookup and the insert operations.
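For comparison, here is a minimal sketch of what that slower variant might look like (my own reconstruction, not the book’s exact code): since the map itself is mutated in place, lookups must take the same lock as inserts, so readers and writers contend with each other.

import scala.collection.mutable

// hypothetical reconstruction of the synchronised-mutable-map variant
class MutableService[Key, Value] {
  val currentIndex = mutable.HashMap.empty[Key, Value]
  // both reads and writes must hold the lock, because the map is mutated in place
  def lookUp(k: Key): Option[Value] = synchronized { currentIndex.get(k) }
  def insert(k: Key, v: Value): Unit = synchronized { currentIndex.put(k, v) }
}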

When I was reading this code, I was wondering whether the immutable version is indeed safe. After all, lookUp reads the var without any synchronisation. Do we have a guarantee that reads and writes of the reference are atomic?

The Java Language Specification answered my question:

For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.

Writes and reads of volatile long and double values are always atomic.

Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.

Note that in general, updating 64-bit chunks of memory in the JVM may not be atomic, but reference updates specifically must be atomic as per the JLS.

This is a rather delicate point: in the general case, if reference assignments were not guaranteed to be atomic (as is the case in some other languages), this example would not be correct, since reading threads might see a torn, half-written reference.

Since this is such a delicate point, I think it’s worth mentioning explicitly in the book’s text.

Understanding the Y Combinator

I am now reading Types and Programming Languages, and I’ve been trying to wrap my head around the Y combinator and why it works. This is how I explained it to myself:

We want to define a recursive function, which means the function should be able to refer to itself. Unfortunately, Lambda calculus does not give us this option, so we define a function of the form:

g = \lambda func.\ \lambda param.\ \text{return something or } func(param')

Where param' is some reduced version of param. In order for this thing to be a recursion, we need to find a parameter func for which:

func = g(func)

We notice that func is a fixed point of g by definition, hence the name fixed point combinators. We also notice that it must be a function of g (otherwise it’s a constant and we can show trivially that it doesn’t work, duh), so we can write:

Y(g) = func = g(func) = g(Y(g))
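To make this less abstract, here is a concrete illustration of my own (not from the book): take g to be an “open” version of factorial,

g = \lambda f.\ \lambda n.\ \text{if } n = 0 \text{ then } 1 \text{ else } n \cdot f(n - 1)

Its fixed point func = Y(g) is the factorial function itself: plugging factorial in for f gives back a function that behaves exactly like factorial, which is the equation above.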

Cool. Now we are looking for term_0 such that:

Y = \lambda f.\underbrace{\text{ } f(term_0) \text{ }}_{term_0}

Hmmm… OK. Let’s try to name term_0 and pass it as a parameter to f. Will that work?

Y = \lambda f.\underbrace{(\lambda x. f(x))}_{term_0} term_0 = \lambda f.\underbrace{(\lambda x. f(x))}_{term_0}\underbrace{(\lambda x. f(x))}_{term_0}

This sucks. After our change we no longer need to pass term_0 to f, but term_1:

Y = \lambda f.\underbrace{(\lambda x. f(term_1))(\lambda x. f(term_1))}_{term_1}

Fortunately, we can easily express term_1 using our bound variable x as (x x)! So if we write:

Y = \lambda f.(\lambda x. f(x x))(\lambda x. f(x x))

We win!
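As a quick sanity check (my own addition, still ignoring the choice of reduction strategy), apply Y to g and beta-reduce twice:

Y(g) \rightarrow (\lambda x. g(x x))(\lambda x. g(x x)) \rightarrow g((\lambda x. g(x x))(\lambda x. g(x x)))

The argument handed to g is exactly the term Y(g) reduced to in the first step.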

We can see that Y(g), with Y as we defined it, is a fixed point of the function g, just like we wanted.
Great success :)

I ignored reduction rules for the sake of simplicity, but it’s interesting to note that the book presents another fixed-point combinator, which should be used under different reduction rules (call-by-value, where the plain Y combinator would diverge):

fix = \lambda f. (\lambda x. f (\lambda y. x x y))(\lambda x. f (\lambda y. x x y))

I wonder how many fixed points g has under each set of reduction rules.