Two types of people - complexifiers and simplifiers

It is a well known truth, beyond any dispute, that the world can be divided into two types of people: those who believe that the world can be divided into two types of people, and those who don’t.

Whilst I don’t particularly like dividing people up into groups, I have found, over many years, that modellers can be divided into two camps - the complexifiers and the simplifiers.

The complexifiers love complexity. They see complexity as a good thing - often as a mark of quality. They will sometimes pursue complexity as an end in itself according to the principle that anything that isn’t complex should be made complex if it can be made complex. Complexifiers in modelling will use obscure modelling language features just because they are there. Complexifiers are often seen from the outside as being high value, intelligent, and indispensable to a project. This is because no one else can understand what they have done.

On the other hand, simplifiers (and I include myself in this camp) hate complexity. They will seek to remove complexity at every turn. If something is complex, they will try to make it simpler. Worst case, they might simplify too much and “throw the baby out with the bath water”. Simplifiers are often seen from the outside as being low value and not too bright. After all, their work products are so easy to understand - even obvious. Simplifiers limit themselves to the modelling language features that they actually need, and hate obscure features that few understand. They are often seen as dispensable to a project because, after all, if it is so simple, then surely anyone can do it.

I think you can see where I am going with all this. Who do you want on your project? Someone who makes things seem complex and difficult, or someone who makes things seem simple and easy?

Michael Jackson (no, not that Michael Jackson - THE Michael Jackson, the father of Jackson structured programming) tells an interesting story about visiting a project that I shall summarise here because I can’t find the actual reference. In fact, it is a koan of sorts, provided you use the term very loosely:

“Jackson went to visit a project in action. The project manager introduced the team and said, ‘This is Bob, and Bob is a great programmer. Every project we give Bob proves to be really difficult and complex, but Bob always provides a solution. We don’t know what we would do without Bob, because no one else understands the really complex work he is doing.’ The manager went on to introduce Alice. ‘And this is Alice. We’re not sure how good Alice is yet, because every problem we have given her has turned out to be easy. She still has to prove herself.’”

Wow! That just about sums it up.

When Ila and I write a book, the reaction we are looking for from our readers is, “Well - that was easy - just common sense really”. We try to get this reaction to even very abstruse, complex and subtle ideas. It is a fun game, and we don’t always succeed, but we always try as hard as we can. We do this because we know that when the reader has that reaction, the information we are trying to impart has been completely subsumed into the reader’s cognitive map of reality. It has become “obvious”, it has become “common sense”, it has become “just the way the world works”.

Normally, extending someone’s map of reality involves them going through an uncomfortable state of cognitive dissonance, where the new ideas in the extended map clash with the old ideas in the old map. But with a lot of hard work, much of this dissonance can be finessed away. This is what we try to do.

A big part of this is striving to simplify - to make things as simple as possible but no simpler. Another big part of this is presenting the information in an appropriate form (we discuss this in depth in Secrets) - using structured text, pictures and models as appropriate. This takes a lot of effort, but, in our view, it is worth it.

Self publishing “Secrets of Analysis”

After many years, Ila and I have finally published “Secrets of Analysis”. In the end, we decided to go the self publishing route, and publish the book (at least initially) on the Apple iBooks Store for iPad. In fact, the book is designed specifically to be read on an iPad (although it also works just fine on a Mac), and it looks great!

The whole process of publishing this book has left us quite disillusioned with conventional publishers. We originally had a contract with Wiley, and we delivered a first draft on time. Then they asked for a very significant change that pushed publication back by a year. When we delivered the text, our commissioning editor told us that Wiley had undergone a reorganisation and was no longer interested in publishing this sort of book. Telling us sooner, rather than just waiting until we submitted, seemed to be entirely beyond him. So we wrote for an extra year, for at least 6 months of which, completely unbeknown to us, we had no contract. Ironically, Ila and I ultimately decided that the change was detrimental, so a lot of that work went into the wastebasket.

We tried a few other publishers, but because the book is very difficult to categorise, none would publish it despite the fact that several thought it was important. Finding a commissioning editor with vision and imagination is very hard when the general consensus is that the professional publishing market is moribund. 

Because we tend to write the books we need, or that we want to read, we tend to finish a text, and then try to find a publisher for it. Publishers simply don’t want to work like this. They want an idea, a proposal, a sample chapter, and so on. You would think that a finished text that just needed a review, some editorial changes and polishing would be a gift to them, but they are not interested. It just doesn’t fit into their workflows and they are not agile enough to accommodate a different approach.

So here we are, publishing it ourselves on iBooks. In many ways, this is actually the optimum outcome for us, because we keep control of all of our intellectual property, and because we can publish a book that is exactly as radical and iconoclastic as we desire, without any “watering down” by an editor who (often but not always) knows very little of what we are actually trying to achieve.

We have had quite a lot of success with self publishing with “Introduction to BPMN2”, which is (as far as we know) the first interactive modelling book, and with “Interactive Computational Geometry”, which is live Mathematica (sorry - I should really say, “Wolfram Language”) code published as a CDF (Computational Document Format).

Publishing isn’t a problem for us. Getting paid for our work is. Secrets took several man-years to write, and we put everything we know (well - almost everything) into it. We are unlikely to ever get any reasonable recompense for this work even at minimum wage levels. Do we care? Yes - we care a bit - because we both believe that if authors like us can’t get a reasonable return, then fewer and fewer books like “Secrets” will ever be published. For us, the time consuming business of earning a living means that we will probably never be able to write another Secrets.

The big problem Ila and I face is marketing. Neither of us has any training in, or inclination towards, this darkest of dark arts. As such we sell very few copies of our self published books. To give you some idea, our “Introduction to BPMN2” training course has had well over 100,000 views on Slideshare (putting us in Slideshare’s top 5%), but the book itself has only sold in the hundreds. Most of our books have associated training courses, and we give these away free to over a hundred universities world-wide as our way of contributing something back to the software development community. These courses have driven sales of our print books very well, but of our eBooks - not so much. We’re not sure why this is.

Nowadays, everyone wants “free”, and the thought of actually paying someone for the hard work they have put into a book seems very far from many people’s thoughts (our readers excepted). This is a shame.

Reducing complexity in UML tool development

Perhaps the biggest challenge with UML is its sheer size and complexity. UML is complex in many obvious ways: conceptually, syntactically and semantically. However, it is also complex in a less obvious way - it is complex for tool developers to implement. It is this form of complexity that I want to talk about here because I think that at least some of it can be reduced.

Think about what a UML tool has to do in order to simply render a UML class model:

  • It reads in a textual representation of the model. This should be in XMI (XML Metadata Interchange) format, but it is often in some proprietary format.
  • It has to convert this into an abstract syntax tree (AST) which is implemented in whatever programming language the tool is written in.
  • It has to walk the tree and render the graphical elements.

When it saves the model to disk, it has to do the first two steps of this in reverse.

So there's quite a lot of complexity here, but much of it (I suggest) is unnecessary and may be removed by using the appropriate tools. As a starting point, let's assume that the UML tool is written in Java. Think about the transformation:

XMI -> Java

Wouldn't it be better if we could get rid of this transformation entirely? Conceptually it is quite unnecessary. The XMI represents the AST of the UML class diagram reasonably well. It is purely a matter of pragmatics that we have to convert that XML AST to Java, a language that has no built-in support for ASTs. This, of course, is another problem. Our target language has to implement some sort of framework to represent the AST of the UML class diagram. This is another level of complexity which, as we will see, is entirely unnecessary, provided the right development tools are used.

If instead of using XML and Java, we were to use a language that has built-in support for ASTs, then this whole area of complexity would simply vanish. Such a language is Lisp. In particular, Clojure is a modern Lisp that runs on the JVM and allows access to all of the Java libraries. This would seem to be the optimal choice for developing a UML modelling tool. The big advantages of Clojure over Java for this task are:

  • Clojure is homoiconic - Lisp code is Lisp data. This makes metaprogramming a breeze. It means that the AST and the code that manipulates it can be in precisely the same language (Lisp). 
  • Clojure (Lisp) syntax actually is an AST - no transformations or class libraries are necessary. 
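To see what homoiconicity buys you, here is a deliberately tiny sketch. It is in Python, with nested lists standing in for S-expressions, so it is only an analogue of the Lisp behaviour described above; the evaluator and the operator names are invented for this illustration.

```python
# Sketch: "code is data", approximated with nested Python lists standing
# in for S-expressions. The same structure is simultaneously a program
# you can run and a tree you can walk or rewrite.

def evaluate(expr, env):
    if isinstance(expr, list):
        op, *args = expr
        return env[op](*[evaluate(a, env) for a in args])
    return expr  # a leaf: just a value

env = {"+": lambda *xs: sum(xs), "*": lambda a, b: a * b}

program = ["+", 1, ["*", 2, 3]]  # the AST *is* the program
print(evaluate(program, env))    # -> 7

# Because the program is plain data, "metaprogramming" is just list surgery:
program[0] = "*"
print(evaluate(program, env))    # -> 6
```

In real Lisp no evaluator needs to be written, of course - the language itself plays that role - which is exactly the point being made above.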

So a compelling vision for a modern UML tool would be:

  • Textual representation implemented as Clojure code which is already an AST.
  • Graphical rendering - Clojure leveraging the Java Graph libraries.

Essentially, everything is Lisp! XMI can trivially be emitted from an AST in Lisp simply by walking the tree. Similarly (but slightly harder), the Lisp AST may be constructed from XMI if needed. This last step is slightly harder because of all of the cruft that XML adds to the AST representation.

Think about this further. UML has the Object Constraint Language that allows constraints to be stated on UML models. Because the AST is Lisp, and Lisp code is Lisp data, implementing a constraint language on a Lispy representation of UML is trivial. No new language is required. The same is true for UML transformation and action languages. All can be implemented as (at worst) a very thin layer on top of an underlying Lisp substrate.
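To make "trivial" concrete, here is a hedged sketch in Python, with dicts standing in for the Clojure maps shown later; the constructors and the constraint rule are invented for this illustration. A constraint is just an ordinary function applied at every node of the tree - no new language required.

```python
# Sketch: when the AST is plain data, a "constraint" is just an ordinary
# function over that data. Nested dicts stand in for Clojure maps; the
# constructors and the constraint below are illustrative, not standard UML.

def klass(name, *elements):
    return {"metaclass": "class", "name": name, "elements": list(elements)}

def attribute(name, type_):
    return {"metaclass": "attribute", "name": name, "type": type_}

def check(node, constraint):
    # Walk the tree, collecting the names of nodes violating the constraint.
    violations = [] if constraint(node) else [node["name"]]
    for child in node.get("elements", []):
        violations += check(child, constraint)
    return violations

# A toy constraint: every class must have at least one attribute.
def has_attributes(node):
    if node["metaclass"] != "class":
        return True
    return any(e["metaclass"] == "attribute" for e in node["elements"])

c1 = klass("C1")
c2 = klass("C2", attribute("a1", "int"))
model = {"metaclass": "model", "name": "M1", "elements": [c1, c2]}

print(check(model, has_attributes))  # -> ['C1']
```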

Here is a simple example of a "Lispy" representation of a simple UML class model:

(def m1 (model "M1"
          (package "DataTypes"
            (datatype "int")
            (datatype "string"))
          (package "P1"
            (package "P2"
              (klass "C2"
                (attribute "a1" "DataTypes::int")
                (attribute "a2" "DataTypes::string")
                (attribute "a3" "P1::P3::C1")))
            (package "P3"
              (klass "C1"
                (operation "op1"))))))


This representation is just Lisp (Clojure) code. It may be executed to create an in-memory AST. It may be saved to disk in this format, then read back in and executed. It is inherently human readable.
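A sketch of that save-and-restore round trip, in Python, using repr and ast.literal_eval as stand-ins for Clojure's pr-str and read-string; the model literal here is illustrative.

```python
# Sketch: because the model is plain data, "saving" and "loading" can be
# as simple as printing a literal and reading it back in.
import ast

m = {"metaclass": "class", "name": "C1",
     "elements": [{"metaclass": "operation", "name": "op1"}]}

text = repr(m)                     # serialise: the text *is* the structure
restored = ast.literal_eval(text)  # deserialise by reading it back

print(restored == m)  # -> True
```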

It's not magic - model, package, klass etc. are all functions. Calling these functions generates the AST, which is also Lisp code. Each function looks like this:

(defn klass [name & params]
  {:metaclass :class :name name :id (gensym) :visibility "public" :elements (set params)})

Yes - that's right - just a single expression that returns a map with a nested set of child nodes.

The generated AST looks like this:

{:metaclass :model, :name "M1", :id G__758, :elements
 #{{:metaclass :package, :name "P1", :id G__757, :visibility "public", :elements
    #{{:metaclass :package, :name "P3", :id G__756, :visibility "public", :elements
       #{{:metaclass :class, :name "C1", :id G__755, :visibility "public", :elements
          #{{:metaclass :operation, :id G__754, :name "op1", :visibility "public", :elements #{}}}}}}
      {:metaclass :package, :name "P2", :id G__753, :visibility "public", :elements
       #{{:metaclass :class, :name "C2", :id G__752, :visibility "public", :elements
          #{{:metaclass :attribute, :name "a3", :type "P1::P3::C1", :id G__751, :multiplicity "1", :visibility "public"}
            {:metaclass :attribute, :name "a1", :type "DataTypes::int", :id G__749, :multiplicity "1", :visibility "public"}
            {:metaclass :attribute, :name "a2", :type "DataTypes::string", :id G__750, :multiplicity "1", :visibility "public"}}}}}}}
   {:metaclass :package, :name "DataTypes", :id G__748, :visibility "public", :elements
    #{{:metaclass :datatype, :name "string", :id G__747}
      {:metaclass :datatype, :name "int", :id G__746}}}}}

As you can see, there is a bit more complexity in the AST, but not really very much. In particular, there are no frameworks, no design patterns (Visitor etc.), just pure Lisp. The AST is constructed from maps - it is very simple and flexible, and there is no need for an OO representation at all. The transformation from the Lispy UML to AST is trivial.

Similarly, rendering the AST as XMI is trivial - just walk the tree and call the appropriate rendering function for each node. We use a polymorphic function called emit-xmi that is dispatched on the metaclass of the node. These functions look like this:

(defmethod emit-xmi :class [params]
  (prxml [:ownedMember {:isAbstract "false" :isLeaf "false" :name (params :name)
                        :visibility "public" :xmi:id (params :id) :xmi:type "uml:Class"}
          (map emit-xmi (params :elements))]))

The function uses the Clojure prxml library to emit the XMI. Again, note how simple and direct this is. No Visitors or any of the rest of the Java cruft.

Because the AST is just Lisp, rather than (Java + some framework + some design patterns), constraint, transformation, human readable and action languages are also all just Lisp.
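For example, a model transformation reduces to a recursive rewrite of the tree. Here is a minimal Python sketch, with dicts in place of Clojure maps; transform and rename_class are invented names for this illustration.

```python
# Sketch: when the AST is plain data, a "transformation" is just a
# recursive function that returns a rewritten copy of the tree.

def transform(node, f):
    node = f(dict(node))  # rewrite a shallow copy of this node
    if "elements" in node:
        node["elements"] = [transform(c, f) for c in node["elements"]]
    return node

def rename_class(old, new):
    # A rewrite rule: rename a class, leave every other node alone.
    def rewrite(node):
        if node.get("metaclass") == "class" and node["name"] == old:
            node["name"] = new
        return node
    return rewrite

tree = {"metaclass": "package", "name": "P1", "elements": [
    {"metaclass": "class", "name": "C1", "elements": []}]}

out = transform(tree, rename_class("C1", "Customer"))
print(out["elements"][0]["name"])   # -> Customer
print(tree["elements"][0]["name"])  # -> C1 (the original is untouched)
```

Because the function returns a fresh tree rather than mutating in place, the original model survives the transformation - the same functional style the Lisp version gets for free.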

So perhaps we have been missing a trick. Choosing the right tool for the job makes all the difference.

© Clear View Training 2012