One of the nicer things about being the CTO of a growing startup is experiencing firsthand all the things the software engineering literature you’ve read over the years talks about.

Hello Heart started as a B2C company. In our early days we built a product – blood pressure tracking apps – and sold these apps to our users. As we grew, we became a B2B2C company – we started selling our apps to employers looking for health benefits for their employees.

With this change, we had to shift our thinking in so many ways. Suddenly, marketing and sales became completely different, as did business development. The new features we had to implement changed along with them: we were required to implement more and more features that weren’t about the individual experience, but about the employees’ relationship with their employer. Are these employees eligible for Hello Heart? To which employer do they belong? Integrations with other benefit programs. That kind of stuff.

The most interesting thing for me was the way our thinking about the product changed as we started growing. Our “product” is no longer an application that we sell. It’s a service – complete with an enrollment program and dashboards and operations and all the magic that goes into Hello Heart (did I write that? Yes, I did). Our software had to change as well in order to accommodate that. We started writing code to manage clients and enrollments and shipping centers.

Recognizing this change in the way we need to think took us a while, and it has been fascinating to watch. For me, the key was noticing that we do a lot of things in a really hard way, and thinking – hey! Shouldn’t we have software to do that for us? It’s ok to do things in stupid and hard ways when you are a small startup that can’t afford to build all the software you need and are just trying to figure out if your business works. But when you find out it works and start operating as a business, this approach leaves you with a gap between what your business needs and what your technology provides, and your next challenge is to align the two.

We have been doing that for a while now, and it is exciting and fun. I feel like I live in a fairy tale case study from a software engineering book.

## Functional and Reactive Domain Modeling

When we introduced Scala into Hello Heart, one of our greatest challenges was not having anyone on the team with any kind of functional programming experience. We grappled with functional concepts and how they fit together, and ended up with object-oriented/semi-functional code that can’t be considered even remotely good in either paradigm.

One of my realisations at the time was that as developers, we do most of our learning by reading and imitating other people’s code. We don’t always realise it when it happens, because reading and imitating is what most of us do for a living at the beginning of our careers, but when you are required, as a lead developer, to introduce a new paradigm you have no experience with, the lack of that kind of experience becomes apparent.

Another thing I was lacking at the time was an understanding of common functional programming idioms and design patterns. Despite the community’s distaste for GoF, I still appreciate the way it shaped my object-oriented thinking and the understanding it gave me of how the different object-oriented concepts fit together, and how design patterns can be composed to build software. I spent quite some time looking for a resource that would give me a similar experience with functional concepts.

I am now reading the book Functional and Reactive Domain Modeling, and it gives me exactly that. The book presents one functional design approach from the ground up and explains the different functional concepts that come into play on the way. While I’m not sure this is an approach I would like to adopt as is, it gives me great insight into the thought process of one functional system designer.

In other words, sh!t functional programmers say is starting to make sense to me.
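To give a flavor of the style the book teaches, here is a tiny sketch of my own (the account domain echoes the book’s running example, but this code is mine, not taken from the text): states are modeled as a sealed hierarchy of immutable values, and failures are returned as values instead of thrown as exceptions.

```scala
// My own minimal sketch of functional domain modeling (not from the book):
// a closed set of states as an algebraic data type…
sealed trait Account
case class Active(id: String, balance: BigDecimal) extends Account
case class Closed(id: String) extends Account

// …and a pure operation that returns a new value, reporting failure in the
// type instead of throwing.
def debit(a: Account, amount: BigDecimal): Either[String, Account] = a match {
  case Active(id, b) if b >= amount => Right(Active(id, b - amount))
  case Active(_, _)                 => Left("insufficient funds")
  case Closed(_)                    => Left("account closed")
}
```

Nothing here mutates: every operation takes a value and returns a value, which is exactly the mindset shift that was hard to pick up without examples to imitate.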

## Programming Warm Up

In software development, productivity is all about getting into the Zone. When you are in the Zone, you are one with the code, and writing software is as easy as typing. When you are not, a whole day can pass without much getting done, because thinking is just so hard.

One of the things I do to get myself into the Zone is hold little “warm up” sessions before I start to code features that are hard for me to approach. I choose small problems that I particularly enjoy as warm ups. Little treats, you might say. For example, I may start programming a feature by refactoring some remotely relevant code, commenting on legacy code around it, or focusing on a small part of the feature that seems interesting.

To the outside viewer (a.k.a. my boss when I was a developer) it may appear as though I’m wasting time on things that are not the core of my work – “We have no time to invest in code refactoring right now, we should ship this feature” – but I argue that this time is not spent but invested in making development faster, and that the feature takes much less time to develop afterwards.

Think about it – in many other areas of our lives, the concept of a warm up is very prevalent. Athletes warm up before exercise to make it more effective and less dangerous. Singers warm up their vocal cords and musicians warm up their fingers before a show, and when I took an animation course, we were taught to start every day with a several-minute quick-sketching session to get our creative juices flowing.

Why should programming be different?

Concentration and focus are a problem of acceleration: when you try to go from 0 to 100 km/h, you want the car that gives you the largest boost in the shortest time. That is what programming warm up is all about.

## Some Scala Randomness

I thought it’d be a fun exercise to implement a ?: operator for Scala, like the one Java has:

```java
boolValue ? ifTrue : ifFalse
```


I turned it into a ?! operator, though, because Scala didn’t like me trying to define a method called ‘:’.

```scala
implicit class FancyBoolean(val b: Boolean) {

  def ?[T](ifTrue: => T) = if (b) {
    new TrueBranch[T] {
      override def ![A >: T, F <: A](ifFalse: => F): A = ifTrue
    }
  } else {
    new TrueBranch[T] {
      override def ![A >: T, F <: A](ifFalse: => F): A = ifFalse
    }
  }

  trait TrueBranch[T] {
    def ![A >: T, F <: A](ifFalse: => F): A
  }
}
```


Let’s see how this stuff behaves:

```scala
scala> true ? 1 ! 2
res2: Int = 1

scala> false ? 1 ! 2
res3: Int = 2

scala> false ? 1 ! "Hi!"
res4: Any = Hi!

scala> true ? println("Evaluated true!") ! println("Evaluated false!")
Evaluated true!
```


Nice :)

I’m reading Scala in Depth. In Section 2.3.2 on concurrency, the author gives the following example of a thread-safe “index service” (trait definition removed for brevity):

```scala
class ImmutableService[Key, Value] {
  var currentIndex = new ImmutableHashMap[Key, Value]
  def lookUp(k: Key): Option[Value] = currentIndex.get(k)
  def insert(k: Key, v: Value): Unit = synchronized {
    currentIndex = currentIndex + ((k, v))
  }
}
```


The author shows that this implementation is much faster than one that uses a val reference to a mutable map and synchronises both the lookup and insert operations.

When I was reading this code, I was wondering whether this implementation is indeed safe. After all, the assignment operation to the var itself is not synchronised – do we have a guarantee that it is atomic?

The Java Language Specification answered my question:

> For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.
>
> Writes and reads of volatile long and double values are always atomic.
>
> Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.

Note that in general, updating 64-bit chunks of memory in the JVM may not be atomic, but reference updates specifically must be atomic as per the JLS.

This is a rather delicate point – in the general case, if reference assignments were not guaranteed to be atomic (as is sometimes the case in other languages), this example would not be correct since the reading threads may see a partially initialised reference.

Since this is such a delicate point, I think it’s worth mentioning explicitly in the book’s text.
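Atomicity of the reference write is only half the story, by the way: without `@volatile` (or a synchronized read), the memory model does not promise that a reading thread ever observes the *latest* reference, only an un-torn one. Here is my own variant of the service (not from the book) that adds that visibility guarantee:

```scala
// My own variant (not from Scala in Depth): @volatile establishes a
// happens-before edge, so readers are guaranteed to see the most recently
// published map, not just a non-torn reference.
class VolatileService[Key, Value] {
  @volatile private var currentIndex = Map.empty[Key, Value]

  // Unsynchronized read: safe because the map is immutable and the
  // reference write is atomic and volatile.
  def lookUp(k: Key): Option[Value] = currentIndex.get(k)

  // Writers still synchronize so that concurrent inserts don't lose updates
  // in the read-modify-write of the var.
  def insert(k: Key, v: Value): Unit = synchronized {
    currentIndex = currentIndex + ((k, v))
  }
}
```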

## Understanding the Y Combinator

I am now reading Types and Programming Languages, and I’ve been trying to wrap my head around the Y combinator and why it works. This is how I explained it to myself:

We want to define a recursive function, which means the function should be able to refer to itself. Unfortunately, Lambda calculus does not give us this option, so we define a function of the form:

$g = \lambda func. \lambda param.\space\text{return something or }func(param')$

Where $param'$ is some reduced version of $param$. In order for this thing to be a recursion, we need to find a parameter $func$ for which:

$func = g(func)$

We notice that $func$ is a fixed point of $g$ by definition, hence the name fixed point combinators. We also notice that it must be a function of $g$ (otherwise it’s a constant and we can show trivially that it doesn’t work, duh), so we can write:

$Y(g) = func = g(func) = g(Y(g))$

Cool. Now we are looking for $term_0$ such that:

$Y = \lambda f.\underbrace{\text{ } f(term_0) \text{ }}_{term_0}$

Hmmm… Ok. Let’s try to name $term_0$ and pass it as a parameter to $f$ – will that work?

$Y = \lambda f.\underbrace{(\lambda x. f(x))}_{term_0} term_0 =\lambda f.\underbrace{(\lambda x. f(x))}_{term_0}\underbrace{(\lambda x. f(x))}_{term_0}$

This sucks. After our change we no longer need to pass $term_0$ to $f$, but $term_1$ :

$Y = \lambda f.\underbrace{(\lambda x. f(term_1))(\lambda x. f(term_1))}_{term_1}$

Fortunately, we can easily express $term_1$ using our bound variable $x$ as $(x x)$! So if we write:

$Y = \lambda f.(\lambda x. f(x x))(\lambda x. f(x x))$

We win!

We can see that $Y(g)$, with $Y$ defined this way, is a fixed point of the function $g$, just like we wanted.
Great success :)
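Spelling out one reduction step shows this directly – applying the first copy of $\lambda x. f(x x)$ to the second substitutes the self-application right back in:

$Y(g) = (\lambda x. g(x x))(\lambda x. g(x x)) = g((\lambda x. g(x x))(\lambda x. g(x x))) = g(Y(g))$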

I ignored reduction rules for the sake of simplicity, but it’s interesting to note that the book talks about another type of fixed point combinator, which should be used under different reduction rules:

$fix = \lambda f. (\lambda x. f (\lambda y. x x y))(\lambda x. f (\lambda y. x x y))$

I wonder how many fixed points $g$ has under each set of reduction rules.
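Translating that call-by-value $fix$ into Scala makes it concrete. This is my own sketch, not from the book, and it cheats a little: `fix` refers to itself using Scala’s native recursion instead of self-application, but the eta-expansion `x => f(fix(f))(x)` plays the same delaying role as the $\lambda y. x x y$ wrapper above.

```scala
// A sketch (mine, not from the book) of a call-by-value fixed-point
// combinator in Scala. The eta-expansion delays the recursive call, exactly
// like the extra lambda in the call-by-value fix term – without it, fix(f)
// would diverge before f is ever applied.
def fix[A, B](f: (A => B) => (A => B)): A => B =
  x => f(fix(f))(x)

// Factorial written without referring to itself: the "self" parameter is
// supplied by fix.
val factGen: (Int => Int) => (Int => Int) =
  self => n => if (n <= 1) 1 else n * self(n - 1)

val factorial: Int => Int = fix(factGen)
```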

## Please Don’t Use the Word Voodoo

Not many things irk me as much as the term Voodoo when it’s applied to software engineering.

“I found a piece of code that solves this bug, but it’s completely Voodoo”

“Working with this library is like doing Voodoo”

Don’t. Say. That. It makes the problem you are working on seem as though some unknown force makes it impossible to solve by logical and analytical means. It makes it sound like the problem cannot be solved at all. It makes you sound powerless against it. It makes you sound like you gave up.

It makes you sound unprofessional.

There are no ghosts in your computer. Everything has a logical explanation – you just haven’t found it yet. And whenever you use the term Voodoo, you can replace it with the words “I don’t understand” and get a perfectly valid sentence that invites further analytical investigation, questioning, and rational decision making.

“I found a piece of code that solves this bug but I don’t understand why it works”

“I don’t understand the library I’m working with”

From here you can continue to ask questions and make informed decisions. You can always invest more time in understanding the piece of technology you are dealing with. But you can also ask how much time it is going to require, what is the risk if you don’t, and if it is worth your time.

Please don’t use the term Voodoo when you talk about software engineering. We are all professionals here.

## Tesseract OCR + Open CV 2 on iOS

Disclaimer: This is a quick and messy post that I wrote just so that I will not forget what I did. I am not sure that this is the best way to do things and there may be redundant or missing steps. I will clean this up if I need to do it again. If you happen to go through this procedure and find mistakes, drop me a note and I will fix the post.

### The problem

- I needed to add this build of Tesseract OCR to my iOS project, which uses Open CV 2.
- Following the installation procedure in the link caused the following error:
- I tried solving it using this stack overflow solution, but it caused a circular dependency issue – every time I removed and added some framework, I got the following error for another framework:

```
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

### What worked

- Added the Tesseract OCR iOS project from the link into my workspace
- Added another build target for the Tesseract project, of type: Cocoa Touch Static Library
- For my new target:
  - Under build phases, compile sources – added Tesseract.mm (add any file that needs compiling, really)
  - Under build phases, copy files – added Tesseract.h