Variance expression in functional programming

This time we’re going to explore how functional programs express variance. We’ll consider the same case as we did for TDA – getting a stored value from a Map, and storing a new value in a Map.

Retrieval

lookup :: Ord k => k -> Map k a -> Maybe a

In Haskell, the variance is expressed in the return type.

data Maybe a = Nothing
             | Just a

We might express this as follows:

variance ⊂ return type
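
By way of illustration, the caller has to confront both possibilities at the point of use. A minimal sketch (describe is a hypothetical helper, not part of any library; it assumes the containers package):

import qualified Data.Map as Map

describe :: String -> Map.Map String Int -> String
describe k m = case Map.lookup k m of
  Nothing -> "no value for " ++ k
  Just v  -> k ++ " maps to " ++ show v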

Storage

We want to put a new value into the map.

Once again, the entropy of this function is two; see the previous variance post for further discussion.

Here’s the comprehensive version:

insertLookupWithKey :: Ord k => (k -> a -> a -> a) -> k -> a -> Map k a -> (Maybe a, Map k a)

This function actually allows the caller to do more than our original requirement, so a little explanation of that type signature is perhaps necessary.

(k -> a -> a -> a)

The caller passes in this function, which takes the key, the new value and the old value and produces the value that will be stored (most implementations will simply return the new value, rather than performing a calculation, mind).

Data.Map will only call this function if a value is already present – this information is present in the last part of the type signature:

k -> a -> Map k a -> (Maybe a, Map k a)

This is the part we’re going to focus on. Here we pass the key, new value, and the map to operate on. Note that the return type contains both the possibility of an old value (the presence of which tells us our value transforming function was called) and the new map.
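
A quick usage sketch to make that concrete; the combining function here simply keeps the new value:

import qualified Data.Map as Map

example :: (Maybe Int, Map.Map String Int)
example = Map.insertLookupWithKey (\_key new _old -> new) "a" 2
            (Map.fromList [("a", 1), ("b", 5)])
-- (Just 1, fromList [("a",2),("b",5)]): the old value comes back
-- alongside the updated map; had "a" been absent, we'd get Nothing.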

Diversion one: encapsulation in FP

It’s starting to look as though functional programs eschew encapsulation in favour of type signature totality (definitely a thing). This is not true.

  • What if Maybe didn’t export its two constructors?

I fear the answer for some may be “huh?”. So, here’s a brief explanation of how this might work.

Haskell makes it relatively simple to obscure your implementation details. At the moment, we know that:

data Maybe a = Just a
             | Nothing

In fact though, we only get to know this because the original author allowed those constructors out to play. It would be just as valid to export two functions that do the same job while keeping the constructors hidden, like so:

module Maybe (Maybe,just,nothing) where

data Maybe a = Just a
             | Nothing

just :: a -> Maybe a
just = Just
nothing :: Maybe a
nothing = Nothing

instance Functor Maybe where
  fmap f (Just a) = Just $ f a
  fmap _ Nothing  = Nothing

Now, at the point of invocation, we can’t pattern match any more. We have to use the fact that Maybe is an instance of Functor in order to do anything with the value inside; there’s no way at all for us to get it out.

Example: reversing the value, if it exists.

fmap reverse $ lookup "key" map

So it looks like the core of the FP world, Functors and Monads, is very interested in not letting us execute a function on a value directly: you pass the function you would like executed to the enclosing context, and it can choose (based on the value inside) how to execute it.

In Haskell in particular, syntactic sugar (do, <-, etc.) is provided to make it appear to the eye that this is not the case. That’s only for humans, though; in reality the value never escapes.
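
For instance, a do block over IO is only sugar for handing each value on to the next function; a minimal sketch:

-- The sugared version...
greet :: IO ()
greet = do
  name <- getLine
  putStrLn ("hello " ++ name)

-- ...desugars to this: name is only ever handed to the next function,
-- it never escapes the IO context.
greet' :: IO ()
greet' = getLine >>= \name -> putStrLn ("hello " ++ name)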

Uh, so what now?

We used the term “return type” on the assumption that it is the last thing we see in the type signature of a function. This is either almost true or almost completely false.

Tony Morris tells us that all Haskell functions take one argument, and, once you’ve read that link, I hope you’ll agree he is right.

This means that for a function which appears to take n arguments there are in fact n - 1 return types. Or, perhaps, it has one return type, and it’s everything we see after the first ->.
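
A small illustration, using a throwaway add function:

-- -> associates to the right, so these two signatures name the same type.
add :: Int -> Int -> Int
add x y = x + y

add' :: Int -> (Int -> Int)
add' = add

-- Applying one argument yields a function; the "return type" of add 1
-- is Int -> Int.
increment :: Int -> Int
increment = add 1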

We’ll stick with:

variance ⊂ return type

Totalitarianism, dogma and consequences

That’s enough pretentious subheadings -Ed

Just as we distilled OOP into a very dogmatic TDA, we’ve taken a very dogmatic view of what FP is. Effects are delayed until the last possible moment, and mutable state is excised.

Oddly, unlike in the TDA case, our chosen implementation language, Haskell, is very keen on enforcing this dogma. Attempting to perform calculations or effects outside of a function’s type signature is* a compile error.

From here on out, we’ll assume dogmatic FP === FP and that non-dogmatic usages of FP languages are charlatanic. This isn’t necessarily true, but it will save us a few words here and there.

(* It almost always is. There is always Debug.trace.)

Diversion 2: A mirror image

Cast your mind back to our purified TDA map insert. What would it look like if we translated it to Haskell?

put :: (Ord k) => Map k a -> k -> a -> (a -> IO()) -> IO() -> IO()

You can think of a -> IO() as being a bit like Consumer and IO() as being like Runnable. Haskell’s lazy predisposition allows us to pass around putStrLn "nothing here" as an expression without executing it.
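
A tiny sketch of that last point: an IO() is just a value until something sequences it.

missing :: IO ()
missing = putStrLn "nothing here"

main :: IO ()
main = do
  let _held = missing   -- passed around as a value; nothing is printed
  missing               -- printed only because it is sequenced here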

Let’s rename put, and fiddle with its argument order:

tda_insert :: (Ord k) => k -> a -> Map k a -> (a -> IO()) -> IO() -> IO()

Let’s also consider the core of the FP signature:

fp_insert :: (Ord k) => k -> a -> Map k a -> (Maybe a, Map k a)

Now, recall our stricter, non-pattern-matching Maybe implementation from earlier. This told us that Maybe is actually a context for executing functions that take an a. It also provides the following:

maybe :: b -> (a -> b) -> Maybe a -> b

If we just plug b = IO() into that, we get:

maybe :: IO() -> (a -> IO()) -> Maybe a -> IO()

and with some further shuffling, it becomes this:

maybe_ish :: Maybe a -> IO() -> (a -> IO()) -> IO()
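
Concretely, that shuffled version is nothing more than a reordering of the Prelude’s maybe (maybe_ish is this post’s name for it, not a library function):

maybe_ish :: Maybe a -> IO() -> (a -> IO()) -> IO()
maybe_ish found onAbsent onPresent = maybe onAbsent onPresent found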

Now, if we elide the Map k a from fp_insert’s return type (thus making the state change implicit)

fp_insert_ish :: (Ord k) => k -> a -> Map k a -> Maybe a

…and now, we almost have

tda_insert = maybe_ish . fp_insert_ish

This is very interesting! It feels like we’ve come across a little piece of symmetry between the two approaches.

One way to think of this, perhaps, is that our TDA insert has inlined Maybe . It acts both as an associative container and as a context for function execution. The only difference (mind out though, it’s a big one) is that fp_insert is far stricter about what those functions are allowed to do.

We perhaps shouldn’t be surprised at this; what we’ve really found is an artifact of tight encapsulation, which we introduced by definition in both cases.

Why compose isn’t quite working

fp_insert_ish, as far as (.) is concerned, has type

fp_insert_ish :: (Ord k) => k -> (a -> Map k a -> Maybe a)

and maybe_ish really wants a Maybe a to start with. One can bind enough of fp_insert_ish’s parameters to get this to work; a rough sketch of that wiring is below.
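
This is hand-rolled rather than anything from a library: fp_insert_ish, maybe_ish and tda_insert are just this post’s names, built from Data.Map and the Prelude’s maybe.

import qualified Data.Map as Map
import Data.Map (Map)

maybe_ish :: Maybe a -> IO() -> (a -> IO()) -> IO()
maybe_ish found onAbsent onPresent = maybe onAbsent onPresent found

-- fp_insert with the returned map thrown away (the state change the
-- TDA version makes implicit)
fp_insert_ish :: (Ord k) => k -> a -> Map k a -> Maybe a
fp_insert_ish k v m = fst (Map.insertLookupWithKey (\_ new _ -> new) k v m)

-- Not quite (.), but the same shape: bind the first three parameters,
-- then hand the resulting Maybe a over to maybe_ish.
tda_insert :: (Ord k) => k -> a -> Map k a -> (a -> IO()) -> IO() -> IO()
tda_insert k v m onPresent onAbsent = maybe_ish (fp_insert_ish k v m) onAbsent onPresent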

Having found some symmetry (admittedly with a little bit of hand waving), let’s consider a less friendly asymmetry.

Interactions between FP and OOP

A lot of hot air has been generated about the exciting possibility of combining the powers of OOP and FP approaches, as if they are some sort of transformer robots that get an attack bonus, amazing music and lens flares whenever they touch.

It’s not particularly clear whether this is true, and we won’t try to answer that question here. All we can do is look at TDA’s interaction with FP.

  • Can FP code call TDA code?

No

The TDA approach deliberately obscures some variance in its type signatures. You’d have to do some serious gymnastics to express the same thing in Haskell, and, once the core is infected with mutability, the IO() virus spreads inexorably outwards.

  • Can TDA code call FP code?

Just about.

One can start to preach the church of return type in the core of an application and gently push it outwards: TDA-style callers can update a mutable reference to the newly returned FP core, then send out the also-returned events.

This will cause some dissonance, but, and this is the very important point, only at changeover points. Purity, unlike mutability, isn’t upwardly mobile.
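
A hedged sketch of what such a changeover point might look like (tdaPut and the IORef wrapper are purely illustrative, not from any particular library):

import Data.IORef
import qualified Data.Map as Map
import Data.Map (Map)

-- A TDA-style shell around the pure core: callers tell it to insert,
-- it updates a mutable reference and sends out the returned "event".
tdaPut :: (Ord k) => IORef (Map k a) -> k -> a -> (a -> IO()) -> IO() -> IO()
tdaPut ref k v onOld onAbsent = do
  m <- readIORef ref
  let (old, m') = Map.insertLookupWithKey (\_ new _ -> new) k v m  -- pure FP core
  writeIORef ref m'              -- the mutation stays at this boundary
  maybe onAbsent onOld old       -- dispatch the also-returned event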

Diversion 3: Not today…

Whether this has any bearing on OOP / FP interweaving is left temporarily as an exercise to the reader.

Serendipitously, Erik Meijer just wrote something excellent on the wider question of hybrid OOP/FP approaches.

Conclusion

We found a tiny Maybe hidden in our TDA code; a symmetry due to the encapsulation shared between our two approaches.

One wonders whether TDAing the living daylights out of other data structures would uncover any other fundamental types.

We found also one asymmetry between TDA and FP; implicit mutation simply isn’t permitted in FP, so we can nest the approaches in one direction only.

Extremely satisfying summary

Once we start to mutate, anyone above must mutate with us.

So: Pick the point of no return very carefully.

Next time

A selection from:

  • Do our truths about TDA tell us anything about OOP?
  • Does our TDA world bring us any new problems?
  • Does our TDA world bring us any benefits?
  • Will we ever get to talking about the Actor model?

Post script

There may be a small hiatus over the next couple of months while I develop related material into a workshop for 2014’s SPA Conference (this session is mine).