Myths that give model-driven development a bad name

It seems that people who resist the idea of model-driven development (MDD) do so because they believe no tool can have the level of insight a programmer can. They are totally right about that last part. But that is far from being the point of MDD anyway. However, I think that unfortunate misconception is one of the main reasons MDD hasn’t caught on yet. Because of that, I thought it would be productive to explore this and other myths that give MDD a bad name.

Model-driven development myths

Model-driven development makes programmers redundant. MDD helps with the boring, repetitive work, leaving more time for programmers to focus on the intellectually challenging aspects. Programmers are still needed to model a solution, albeit using a more appropriate level of abstraction. And programmers are still needed to encode implementation strategies in the form of reusable code generation templates or model-driven runtime engines.
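To make that last point concrete, here is a minimal sketch of an implementation strategy encoded as a reusable generation template, written in plain Java. All names here are hypothetical, invented for illustration; a real MDD tool would work against a full metamodel (e.g. UML) rather than strings.

```java
import java.util.List;

// A toy "model": just a class name and its attribute names.
// A real tool would consume a proper model, not bare strings.
public class EntityTemplate {
    // Reusable implementation strategy: render any modeled class
    // as a Java source file with private fields and accessors.
    public static String generate(String className, List<String> attributes) {
        StringBuilder out = new StringBuilder();
        out.append("public class ").append(className).append(" {\n");
        for (String attr : attributes) {
            out.append("    private String ").append(attr).append(";\n");
        }
        for (String attr : attributes) {
            String cap = Character.toUpperCase(attr.charAt(0)) + attr.substring(1);
            out.append("    public String get").append(cap).append("() { return ")
               .append(attr).append("; }\n");
        }
        out.append("}\n");
        return out.toString();
    }
}
```

Note how the template captures only the implementation strategy (Java fields plus accessors), while the model side contributes only problem-domain names — either half can be swapped independently.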

Model-driven development enables business analysts to develop software (a variation of the previous myth). The realm of business analysts is the problem space. They usually don’t have the skills required to devise a solution in software. Tools cannot bridge that gap. Unless the mapping between the problem space and the solution space is really trivial (but then you wouldn’t want to do that kind of trivial job anyway, right?).

Model-driven development generates an initial version of the code that can be manually maintained from there on. That is not model-driven, it is model-started at most. Most of the benefits of MDD are missed unless models truly drive development.

Model-driven development involves round-trip engineering. In MDD, models are king: models are the source, and 3GL source code is the equivalent of object code. The nice abstractions from the model level map to several different implementation artifacts, each capturing some specific aspect of the original abstraction combined with implementation-related aspects. That mapping is not without loss of information, so it is usually not reversible in a practical way, even less so if the codebase is manually maintained (and thus inherently inconsistent/ill-formed). More on this in this older post; pay attention to the comments as well.

Model-driven development is an all-or-nothing proposition. You use MDD where it is beneficial, combining it with manually developed artifacts and components where appropriate. But avoid mixing manually written code with automatically generated code in the same artifact.

What is your opinion? Do you agree these are myths? Any other myths that give MDD a bad name that you have seen thrown around?



Interview at

Last December I had the pleasure of being interviewed by Jordi Cabot, the maintainer of a web site on all things model-driven. We talked mostly about the TextUML Toolkit project, but Jordi also asked about my opinions on more general subjects, such as modeling notations, textual modeling frameworks, DSLs, UML and trends in modeling.

Jordi has recently made a transcription of the interview available on his web site. Give it a read and feel free to leave a comment; I am very keen on discussing any of the topics covered.


Model-driven prototyping presentation @ VIJUG

Last week I did a short presentation on “Model-driven prototyping” for the Vancouver Island Java User Group (VIJUG). It was lots of fun, with good participation from the group. I also showed a quick demo of AlphaSimple, our upcoming service for model-driven prototyping, which seemed to be well received.

For the benefit of those who weren’t there, here is a web version of that presentation, with notes showing on the slides (click here for a full-screen view).

Comments are very welcome. I would be very happy to discuss the approach with anyone interested.


Model-driven prototyping with AlphaSimple

It’s been a while since the last post, but I have a good excuse. I have been working on a new MDD product named AlphaSimple.

AlphaSimple is our upcoming web-based service that renders functional prototypes straight from rich domain models. The goal is to bridge the gap between design and requirement analysis, creating a fast feedback loop between those two activities. The result is much more precise and complete requirements early in the project lifecycle, and a sound design model that not only will be a breeze to implement, but that even your customers will understand.

We are craving feedback. If you want to take an alpha version for a spin, sign up at




On code being model – maybe not what you think

I have heard the mantra ‘code is model’ several times. Even though I always thought I got the idea of what it meant, only now did I decide to do some research to find out where it came from. It turns out that it originated from a blog post that MS’ Harry Pierson wrote back in 2005. It is a very good read, insightful, and to the point.

The idea that gave title to Harry’s post is that whenever we use a simpler representation to build something that is more complex and detailed than we want to care about, we are creating models. 3GL source code is a model for object code. Byte code is a model for actual CPU-specific executable code. Hence code is model.

He then goes on to ask: if we have been successfully reaping the benefits of increased levels of abstraction by using 3GLs for decades now, what prevents us from taking the next step and using even higher-level (modeling) languages? He makes several good points that are at the very foundations of true model-driven development:

  • “models must be precise” – models must be amenable to automatic transformation. Models that cannot be transformed into running code are “useless as development artifacts”. If you like them for conceiving or communicating ideas, that is fine, but those belong to a totally different category, one that plays a very marginal role in software development and has nothing to do with model-driven development. Models created using the TextUML Toolkit are necessarily precise, and can include behavior in addition to structure.
  • “models must be intrinsic to the development process” – models need to be “first class citizens of the development process” or they will become irrelevant. That means: everything that makes sense to be modeled is modeled, and running code is generated from models without further manual elaboration, i.e., no manually changing generated code and taking it from there. As a rule, you should refrain from reading generated code, or limit yourself to reading its API, unless you are investigating a code generation bug. There is nothing really interesting to see there – that is the very reason why you wanted to generate it in the first place. Build, read, and evolve your models! Generated code is object code.
  • “models aren’t always graphical” – of course not. I have written about that before here. The TextUML Toolkit is only one of many initiatives that promote textual notations for modeling (and I mean modeling, not diagramming – see next point).
  • “explicitly call out models vs. views” – in other words, always keep in mind that diagrams != models. Models are the real thing; diagrams are just views into them. Models can admit an infinite number of notations, be they graphical, textual, tabular, etc. Models don’t need notations. We (and tools) do. Unfortunately, most people don’t really get this.

The funny thing is that, most of the time when I read someone citing Harry’s mantra, it is misused.

One misinterpretation of the “code is model” mantra is that we don’t need higher-level modeling languages, as current 3GLs are enough to “model” an application. The fact is: 3GLs do not provide an appropriate level of abstraction for most kinds of applications. For example, for enterprise applications, 4GLs are usually more appropriate than 3GLs. Java (EE) or C# are horrible choices – witness the profusion of frameworks needed to make them workable as languages for enterprise software – as they are much better suited to writing system software.

Another unfortunate conclusion people often extrapolate from the mantra is that if code is model, model is code, and thus it should always be possible to translate between them in both directions (round-trip engineering, or RTE). Round-trip engineering goes against the very essence of model-driven development, as source code often loses important information that can only exist in higher-level models. The only reason people need RTE is that they use models to start a design and generate code, but then switch to evolving and maintaining the application by directly manipulating the generated code. That is a big no-no in true model-driven development – it implies models are not precise or complete enough for full code generation.
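To illustrate the information loss, consider (as a sketch with hypothetical names, not the output of any particular tool) how a modeled association might come out in generated Java:

```java
import java.util.ArrayList;
import java.util.List;

// In the model: "an Order contains 1..* LineItems" -- a composition
// with an explicit multiplicity constraint on a named association.
class LineItem { }

class Order {
    // In the generated 3GL code, all of that collapses into a plain field.
    // The lower bound (1), the aggregation kind, and the fact that this was
    // an association at all (rather than just an attribute) are gone -- a
    // reverse engineering tool sees only "a list of LineItem".
    private List<LineItem> items = new ArrayList<>();

    List<LineItem> getItems() { return items; }
}
```

Going from model to code is mechanical; going back would require guessing which of several possible model elements this field once was.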

So, what is your view? How do you interpret the “code is model” mantra?


Slashdot: Is Open Source Software a Race To Zero?

Great discussion over @ Slashdot: Is Open Source Software a Race To Zero?

I really think the open source approach has lots of benefits, for the software itself and all parties involved. However, I would say it will probably take a decade before sound business models based on open source are really understood and start to become mainstream.

At this point in time, like most people, I still think it is considerably harder/trickier to make money developing software as open source than it is with closed source. At least for small companies. A few reasons:

  • reduced barrier to entry for new competitors, as they can easily leverage the fruits of your hard work. Even more so if you choose a more liberal license such as BSD, EPL or Apache (JBoss and MySQL use the GPL, for instance).
  • lower profit margins, if you decide to adopt a services-based business model instead of one based on selling product licenses, which is a common approach.
  • the overhead of maintaining the open source software while developing the closed source extensions or providing the related services, the very activities that will actually make money, could be unbearable.

The TextUML Toolkit has been open source (EPL) since release 1.1. The decision to make the TextUML Toolkit open source was based on the fact that I (a.k.a. Abstratt Technologies) never intended to make any money directly off of it, but wanted to attract external contributions and maybe gain some visibility for other future offerings. I wouldn’t have done it, however, if I had any plans of selling the TextUML Toolkit as a product on its own.

Well, I am interested in your thoughts. Do you know of cases of small companies making good money from developing and selling open source software (using liberal licenses such as the EPL)?


What can UML do for you?

Do you know what UML can do for you? I mean, did you know that UML models can actually do things?

One of the least known features of UML is that you can model detailed imperative behavior. The UML “instruction set” can do things like:

  • create and destroy objects
  • create and destroy links (associations) between objects
  • read and write attributes and local variables
  • invoke operations and functions
  • throw and catch exceptions
  • conditional statements
  • loops

That is quite amazing, isn’t it? And all that while still preserving a high level of abstraction. Such capability is generally referred to as ‘action semantics’. Action semantics provides the basic framework for executability in UML and has been there for quite a while now. It was originally added to the spec, first as a patch, in UML 1.5 (2003), and then more seamlessly integrated into UML 2.0 and subsequent spec releases.
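For readers more at home in a 3GL, the “instruction set” above corresponds roughly to ordinary imperative constructs. Here is a hypothetical sketch of the same kinds of actions, written as Java for familiarity (UML action semantics expresses these at the model level, independently of any particular 3GL):

```java
import java.util.ArrayList;
import java.util.List;

class Account {
    double balance;
    List<Account> linked = new ArrayList<>();  // links to other objects
}

public class ActionKinds {
    public static double demo() {
        Account a = new Account();            // create an object
        Account b = new Account();
        a.linked.add(b);                      // create a link between objects
        a.balance = 100.0;                    // write an attribute
        double local = a.balance;             // read into a local variable
        if (local > 50) {                     // conditional
            for (int i = 0; i < 3; i++) {     // loop
                local = deposit(local, 1.0);  // invoke an operation
            }
        }
        try {                                 // throw and catch exceptions
            throw new IllegalStateException();
        } catch (IllegalStateException e) {
            // handled
        }
        a.linked.remove(b);                   // destroy the link
        return local;
    }

    static double deposit(double balance, double amount) {
        return balance + amount;
    }
}
```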

Action Semantics and TextUML

An even better-kept secret is that the TextUML notation supports UML action semantics, and thus the creation of fully executable UML models. This support is not yet shipped as part of the TextUML Toolkit, but will be in the next release. Meanwhile, if you want to give it a try or take a closer look, you will have to grab the source from the SVN repository.

I plan to go into more details in the near future, but just to whet your appetite, here is one example of an executable UML model described in the TextUML notation:

package hello;

apply base_profile;
import base;

class HelloWorld
    static operation hello();
    begin
        Console#writeln("Hello, World");
    end;
end;

Cool, isn’t it? If you had a UML runtime, this model could be executed even before you made a decision about what platform to target. Also, if your code generator were action-semantics aware, you could trigger code generation for the target platform(s) of choice, with the key difference that you could achieve (or get very close to) full code generation, as the model now also describes behavior. No more of that monkey business of having to edit the generated code and manually fill in all those /* IMPLEMENT ME! */ methods.

Do you think this has value? Would you want to work with a tool that supported that? I am really keen on knowing your opinion.


OMG issues RFP: concrete syntax for UML action semantics

This is actually old news for many people, but recently I learned (by pure chance) that the OMG issued an RFP for a “Concrete Syntax for a UML Action Language”. Letters of intent are due on December 8th. Submissions, one year later. OMG members only need apply (Aww…). I wonder if anyone in the Eclipse Modeling project is involved in submitting a proposal. Anyone?

Soapbox: since version 1.5, UML has had support for algorithmic behavior specification, commonly called action semantics. It is still hardly used, and most people who consider themselves well versed in UML have never noticed it. Some people believe that the lack of an official concrete syntax for action semantics is a barrier to adoption. You see, the OMG defined the semantics and an abstract syntax, but left concrete syntax as an exercise for tool vendors.

As I wrote here before, I don’t really see the value in the OMG godfathering one concrete syntax over all others. Of course, we are talking here about a syntax for human beings, not for tools. Tools certainly don’t need a human-readable concrete syntax, a standard binary or XML format will do. The problem is: we all have our own preferences for what makes a good syntax, and there is no single syntax that will make everybody happy, so we are bound to have multiple concrete syntaxes anyway. We all like interoperability between tools, but when it comes to sugar, we like choice.

But maybe I am wrong. Maybe an OMG-blessed C-like concrete syntax for UML is all that is missing for Executable UML to become mainstream in the software development community. Go figure, we are an amusing bunch. Personally, I don’t care that much. I have been a Java developer for around 12 years now, so I can certainly stand another C-like syntax. We are not talking about a language anyway, just a syntax for an existing language, and syntax, a bit like UI, is inherently disposable: if you take it away, the real stuff is still there.

One clear positive outcome of the RFP is that submitters must provide, along with the proposal for a concrete syntax, any changes to fUML* that would be required to support such an action language. That will probably help close some gaps in the UML specification that make it hard to execute if you are stuck with the standard.

There are many other interesting bits in the proposal, but I will leave a more detailed analysis to a future post.

* the Executable UML Foundation Submission says: “Foundational UML Subset (fUML) is that subset of UML required to write ‘programs’ in UML”


Diagrams != Models

I often see the TextUML Toolkit being compared to tools that produce UML diagrams from textual descriptions. Examples of tools like that are ModSL, MetaUML and UMLGraph. But that doesn’t really make sense, and here is why: these tools produce diagrams from text. The TextUML Toolkit produces models from text. But what is the difference between models and diagrams?

According to Wikipedia:

  • A diagram is a 2D geometric symbolic representation of information according to some visualization technique.
  • A model is a pattern, plan, representation (especially in miniature), or description designed to show the main object or workings of an object, system, or concept.

Note that even though both terms are defined around the word “representation”, the term “diagram” implies graphical visualization, whereas the term “model” admits any kind of media, basically because models have no concrete form per se.

Now, please, if you are not convinced yet, read aloud 5 times: MODELS ARE NOT DIAGRAMS!

If that didn’t work, well, maybe the facts below will help:

  • models, not diagrams, are the subject matter of model-driven development.
  • models, not diagrams, can be validated.
  • models, not diagrams, can serve as input to code generation.
  • models, not diagrams, can be automatically generated from reverse engineering source code.
  • models, not diagrams, can be executed.

Even though diagrams are commonly used for representing models, they are not the only way, and often not the most appropriate one (yes, I am talking again about the virtues of textual notations, but I won’t repeat myself).

P.S. Maybe what adds to the confusion is the fact that the TextUML Toolkit has an optional feature that generates class diagrams automatically from the UML models created with it. However, that is just an optional, loosely integrated feature, and it is definitely not the Toolkit’s main purpose. Weirdly enough, from my observation, that feature is the main reason most people become interested in the TextUML Toolkit. Well, go figure.


When UML meets Slashdot

There was a recent thread about UML on Slashdot, as a reaction to this blog post. The headline: “Is UML Really Dead, Or Only Cataleptic?”. Many posters are clearly bitter towards UML. There seems to be a strong preference for using UML as a communication tool as opposed to a basis for partial or full code generation (see post on UML modes). Many also complain that the graphical notation is cumbersome and that it hurts productivity (+1!). A few actually like UML; however, it is said that the UML specification is vast and complex and that you should pick the parts of UML that make sense for your case/goals (+1 here too).

Lots of interesting points, but the most negative posts are just misinformed. But that is Slashdot, what should I expect? Java and Eclipse generally get a poor reception on Slashdot, so I guess UML is in good company.

Of course, while I disagree with those who ditch UML because it was not properly employed in some project they worked on, I strongly agree with the complaints about UML (graphical) diagrams being cumbersome and hard to deal with, as I have written here before. But that is not a problem with UML per se, but with the fact that most still see it as a graphical language.

UML certainly has its share of problems (design by committee, no reference implementation), but I strongly believe it is very useful and that there isn’t anything out there (I am talking to you, home grown DSLs) that can replace it as the lingua franca for model-driven development.


On code and diagrams

TextUML is a textual notation for UML. The TextUML Toolkit is an Eclipse-based IDE-like tool for creating UML models using the TextUML notation.

Other tools follow the same approach. Emfatic (now an EMFT subproject) has been doing the same for EMF Ecore for a long time; the TMF project aims to be for textual modeling what GMF is for graphical modeling, and will be based on GMT‘s TCS and xText components.

Still, people are often puzzled when I explain what the TextUML Toolkit is. A common question is: “if I am going to write code (sic), why do I need UML anyway?“.

Dean Wampler from Object Mentor wrote on his blog a while ago a post entitled “Why we write code and don’t just draw diagrams” (which is now gone, though it may still be available via a web archive). It is a short post, but he presents very good points on why a graphical notation is usually not sufficient and is bound to be less productive than a textual one when it comes to modeling details. For instance, on the saying “a picture is worth a thousand words”, Dean wrote:

What that phrase really means is that we get the ‘gist’ or the ‘gestalt’ of a situation when we look at a picture, but nothing expresses the intricate details like text, the 1000 words. Since computers are literal-minded and don’t ‘do gist’, they require those details spelled out explicitly.

Couldn’t have said it better.

I strongly advise you to read the original post in its entirety, but I will leave you with another pearl from Dean’s post (emphasis is mine):

I came to this realization a few years ago when I worked for a Well Known Company developing UML-based tools for Java developers. The tool’s UI could have been more efficient, but there was no way to beat the speed of typing text.

Enough said.


Textual notations and UML compliance

One common misconception is that UML is a graphical language and that any tools adopting alternative notations (such as a textual one) are inherently non-compliant. That couldn’t be further from the truth. Read on to understand.

The fact is that the UML specification clearly separates abstract syntax (the kinds of elements and how they relate) and semantics (what they mean) from concrete syntax (what they look like), and states that there are two types of compliance:

  • abstract syntax compliance
  • concrete syntax compliance

Concrete syntax compliance means that users can continue to use a notation they are familiar with across different tools. This is important when UML is used as a communication tool in a team environment. Architects, designers, programmers and even many business analysts speak the same language.

Abstract syntax compliance means that users can move models across different tools, even if they use different notations. This is essential when UML is used as a basis for model-driven development. You might want to use tool A for creating the model, tool B for validating the model, tool C for somehow transforming/enhancing the model and tool D for generating code from it (a common form of MDD tool chain).

The TextUML Toolkit uses a textual notation that strictly exposes the UML semantics, but it is not compliant with the language’s concrete syntax by any means. On the other hand, the Toolkit uses Eclipse UML2’s XMI flavor for persisting models and thus is fully compliant regarding the abstract syntax. That is consistent with the vision for the product: a tool from developers, for developers who want to build software in a more sane way: the model-driven way. Developers can create models using a notation they are more productive with. Models can then be used as input for code generation using many of the tools available in the market. If non-developers frown upon a textual modeling notation, they will always have the option of using their favorite graphical-notation-based tools for reading the models. I mean, if their tool of choice is abstract syntax compliant as well.


Full code generation from UML class, state and activity diagrams

2011-03-10 – UPDATE: Interested in UML/TextUML and full code generation? You can now do that online using AlphaSimple

UML has become the lingua franca for modeling applications using the object-oriented paradigm. People use UML in many different ways (see the post on UML modes), ranging from as a communication tool to as a full fledged programming language that supports full code generation. This last way of using UML should puzzle most readers – how can UML models lead to full code generation?

UML has two diagrams that are used for behavior specification: the activity diagram and the state diagram. These two diagrams (one more than the other, depending on the nature of the subsystem being modeled), plus the class diagram (for modeling the structural aspects of the object model), provide the framework that supports the design of complex applications in a way that is fully complete (and thus allows 100% code generation) while still implementation independent (see earlier post on platform independence). All the other diagrams (use case, sequence, collaboration) are interesting for gathering requirements, but are useless for modeling a solution that can be automatically transformed into a running application, and thus we will ignore them here.

Specifying structure with the Class Diagram

The class diagram is the best understood of all diagrams in UML. You can model all structural aspects of your object model in the form of classes, attributes, operations and relationships between classes. This specification of structural aspects can then be used for generating (boilerplate) code, database schemas, configuration files and so on and so forth. This is great already, as that is most of the work. But without including behavioral aspects, it is impossible to do full code generation solely from the class diagram; you are forced to fill the empty methods with handwritten code (unfortunately, this is how most vendors expect you to do model-driven development). Still, the class diagram is a fundamental one, in that it provides a base framework the other diagrams can build upon.
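As a hypothetical illustration of that limitation, here is the kind of Java a class-diagram-only generator can emit (the class, attribute and operation names are invented for this example): the structure is all there, but operation bodies come out as empty stubs.

```java
// What a class-diagram-only generator could emit for a modeled class
// "Customer" with attribute "name" and operation "placeOrder":
public class Customer {
    private String name;                      // from the class diagram

    public String getName() { return name; }  // boilerplate accessor
    public void setName(String name) { this.name = name; }

    public void placeOrder() {
        // IMPLEMENT ME! -- the class diagram says nothing about behavior,
        // so without a behavioral model this body must be hand-written.
        throw new UnsupportedOperationException();
    }
}
```

The structural bulk of the code is generated; the interesting part, the behavior, is exactly what is missing.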

Specifying dynamics with the State Diagram

The UML state diagram (derived from David Harel’s state charts) allows for a full design of the dynamic aspects of a system. One can model complex state machines using the state diagram, always in the context of a class described in the class diagram. Many mainstream applications do not have any interesting dynamics though, so in those cases the state diagram has limited value. However, in applications for certain industries (such as robotics, telecom and automotive) it is the most important diagram.
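As a sketch (hand-simplified, with hypothetical names) of what a generator could derive from a trivial two-state machine, say a phone line with states Idle and Busy:

```java
// Generated-style code for a state machine with states Idle and Busy and
// transitions dial (Idle -> Busy) and hangUp (Busy -> Idle).
public class PhoneLine {
    public enum State { IDLE, BUSY }

    private State state = State.IDLE;  // initial state from the model

    public State getState() { return state; }

    public void dial() {
        if (state == State.IDLE) {
            state = State.BUSY;   // transition fires
        }                         // otherwise the event is ignored
    }

    public void hangUp() {
        if (state == State.BUSY) {
            state = State.IDLE;
        }
    }
}
```

A real state machine model would also carry entry/exit behaviors, guards and transition effects; the effect of a transition is itself specified with activities.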

Specifying business logic with the Activity Diagram

The most underrated of the UML diagrams, the activity diagram has a key role: it is the only one to allow the modeler to specify behavior in a precise way. The Activity Diagram provides elements (such as actions, pins, data and control flows, signals) that allow specifying the meaning of a behavioral element (such as the body of an operation from the class diagram, or the effect of a state transition from the state diagram).

But no matter how important the UML activity diagram is, it has one strong limitation: it demands too much detailed information in order to be fully defined (and thus actually useful in the context of code generation). Any simple logic that could be written in a few lines in, say, Java, requires a graph with so many nodes that it is virtually impractical to use it with the graphical notation, severely hampering its more widespread adoption.

Have I just suggested that activity diagrams are useless for any serious usage? No! It is just the case that the graphical notation is too cumbersome, and it is not just a problem with the specific choice of symbols – there will never be any graphical representation that can be as expressive and concise for specifying behavior as your programming language of choice (even if your favorite language is as verbose as COBOL). So it is a matter of representation: a textual notation is much more appropriate than a graphical one. The activity diagram itself is fine, thanks.

So what is the textual notation for the activity diagram? There is none. I mean, not one defined by the OMG. Many companies have defined their own action languages (such as Pathfinder AL, Bridgepoint OAL, Kennedy Carter’s ASL) with compilers that provide a textual front-end for the UML activity diagram. TextUML itself has a bigger cousin (currently a work in progress) that allows specifying UML activities in a way that is familiar to any programmer. Want to see what an action language looks like? Expect a new post on the subject (including our very own action language) here soon.


Model-driven development improves reuse

One notable benefit of model-driven development that is often underrated is improved reuse. This is a direct consequence of appropriate separation of concerns promoted by this development approach. The more intertwined concerns are when addressed in a piece of code, the harder it is to reuse that piece of code. The reason is simple: whenever you tie a solution for one concern to a solution for another concern, you are in trouble: you cannot reuse that piece of code where only one of the concerns is relevant, or the solution for one of the concerns is not appropriate (even if the solution for the other concern is).

Model-driven development promotes an approach where problem-domain concerns are addressed separately from implementation concerns. That means artifacts dealing with problem-domain concerns are free from any specifics on target platforms, and also that artifacts addressing implementation-related concerns are totally unaware of any problem domain knowledge.

That is fantastic, and the reason is twofold:

  1. it makes it viable to build a repository of platform-independent problem-domain specific components, likely created by people that are experts in their domain, that can be reused on different target platforms;
  2. it allows implementation specialists to code their technology-specific implementation strategies as standalone artifacts (i.e. templates), which can then be shared and reused in applications for the most varied problem domains.

The software industry has been looking for a long time for a way of encapsulating knowledge about specific problem domains in the form of platform-independent software components. Model-driven development with true executable models enables that dream.

For many decades, valuable business logic has been imprisoned into obsolescence-prone implementation-specific artifacts. We are working hard on a product that will help stop this insanity and finally make the dream of truly reusable component repositories a reality.


Platform independence in MDA

One key aspect of MDA is platform independence. However, even some of the brightest people in our industry misunderstand what platform independence means in MDA.

Platform independence has a different meaning in MDA than it has, for instance, in Java. Java promotes platform independence by providing a common environment that insulates the application from platform details such as the instruction set and system APIs (for instance, for memory allocation, file system manipulation, networking, GUI, threading, etc.). The application still has to address all these concerns, but it does so through APIs and mechanisms that work the same way regardless of the actual underlying platform, and thus can run on any platform the Java environment is available for. In other words, the Java environment is the platform.

MDA promotes platform independence by adopting a design-centric approach. Models are free of implementation-related concerns and thus are inherently platform independent: a single design can be reused to build the same system for multiple target platforms. The implementation details are taken care of by target-platform-specific templates. The templates are applied to the user models, generating concrete platform-specific artifacts (running code, documentation, database schemas, configuration files). Unlike Java (even if Martin Fowler says so), MDA does not promote yet another platform. What it does is promote a clear separation between problem domain concerns and implementation concerns (as covered before here in the inaugural series entitled “Where we are coming from“).
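To make the contrast concrete, here is a minimal, purely illustrative sketch of that pipeline (the model format and templates are invented for this example): one platform-independent model is fed to several target-specific templates, each producing a different kind of artifact.

```python
# A single platform-independent model of an entity.
model = {"name": "Order", "fields": ["id", "total"]}

# Each template captures one implementation concern; the model knows
# nothing about any of them.
templates = {
    "Order.java":     lambda m: f"public class {m['name']} {{ /* {', '.join(m['fields'])} */ }}",
    "schema.sql":     lambda m: f"CREATE TABLE {m['name']} ({', '.join(m['fields'])});",
    "app.properties": lambda m: f"entity={m['name']}",
}

# Applying the templates yields the concrete platform-specific artifacts.
artifacts = {path: template(model) for path, template in templates.items()}
for path, text in artifacts.items():
    print(path, "->", text)
```

Retargeting the system means swapping the template set; the model itself never changes.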

The benefits of this separation are many: from unprecedented levels of reuse to better opportunities for work specialization. I plan to cover these benefits in detail in future posts.



UML modes and tools

On his bliki, Martin Fowler eloquently describes the three different modes in which UML can be used: UML as sketch, UML as blueprint and UML as programming language. Let’s revisit the different modes from the perspective of tools for the job.

UML as sketch

Description: In this mode, UML is essentially a tool for conceiving and communicating ideas between team members. As such, there is no need for completeness or validity of the object models, and actually any information not essential for communicating the idea at hand is intentionally omitted for conciseness and clarity. UML as sketch is increasingly popular, even at shops where model-driven development is considered an abomination.

Tools for the job: developers don’t need special tools for using UML as a sketching tool – pen and paper or a whiteboard are great. The only drawback appears when you want to archive a drawing or send it to others by email. In that case, Visio (or any other generic diagram editor) is a common choice. Another lesser-known option is Whiteboard Photo, which allows you to take a snapshot of your hand drawings (on a whiteboard or on paper) and have them automatically translated into clean, great-looking 2-D charts.

UML as blueprint

Description: in this mode, the focus is on using UML models as an input to the implementation phase, so the models need to be valid and complete from the point of view of structure. Behavioral aspects are described in textual form (such as in use case descriptions) or by means of diagrams depicting scenarios (such as sequence or collaboration diagrams), and are inherently informal and incomplete. The models can be fed into code generation, but the generated code has to be enhanced by hand to cover the behavioral aspects.

Tools for the job: generic tools won’t cut it here – you will need diagram editing tools that support all (or most) UML diagram types and that (if you are taking the code generation route) persist your models in a format your code generation tools can understand. Most UML tools out there support (or intend to support) this.

The TextUML Toolkit we provide supports this mode too. Your class diagrams are fully verified, and are persisted using a standard format.
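The division of labor in blueprint mode can be sketched with the well-known "generation gap" idiom (the class names below are hypothetical): the structural part is regenerated from the model, while the behavior the diagrams left informal lives in a hand-written subclass that survives regeneration.

```python
class InvoiceBase:
    """Generated from the class diagram: structure only, never edited by hand."""
    def __init__(self):
        self.lines = []          # attribute taken from the model
    def total(self):             # operation signature taken from the model
        raise NotImplementedError("behavior must be written by hand")

class Invoice(InvoiceBase):
    """Hand-written: fills in the behavior the blueprint could not capture."""
    def total(self):
        return sum(amount for amount in self.lines)

inv = Invoice()
inv.lines = [10, 32]
print(inv.total())  # → 42
```

Regenerating `InvoiceBase` after a model change never clobbers the hand-written `Invoice` – which is exactly the manual-enhancement step (and maintenance burden) this mode implies.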

UML as programming language

Description: In this mode, UML models must be complete both structurally and behaviorally as they must be simulatable and serve as input to code generation (by the way, Fowler really misses the point when he argues that a graphical language such as UML is not appropriate for fine grained behavior specification – who said UML is graphical anyway?).

Tools for the job: only a few tools currently on the market support this mode; they tend to target embedded software development, and I bet you will not find out how much they cost on their websites unless you call the vendors.

Corcova Libra will support this mode, and when it becomes available it will sport a reasonably fair price, visible right on our web site. Also, Libra will aim at mainstream business application developers instead of focusing on specific vertical markets.


UML as sketch is cool and useful, but from the point of view of software engineering (our focus here) it is meaningless.

UML as blueprint is increasingly practiced in shops where (partial) code generation has been adopted. It has benefits over writing the entire code by hand, but it still requires all the interesting code to be written manually, and there are a lot of issues with that.

Finally, UML as programming language (in other words, full-blown model-driven development) is the most interesting of the three modes, even if there is a lot of skepticism and misinformation surrounding it. I will talk about that in an upcoming post.



Where we are coming from – Part III

(This is the third and last installment in this series. If you haven’t done it yet, read the first and second installments before you proceed)

Let’s start by recalling the main points we wanted to make in the previous posts. In summary: we should strive to address concerns as independently as possible (points #1 and #2), and, depending on the dimension a concern lives in (point #3), we should deploy the languages and skills most appropriate for the job (points #4 and #5). The benefits are countless across many areas such as productivity, handling of changes in requirements, reuse, and work specialization. But how to achieve that?

Libra comes to the rescue

Corcova Libra (codename) is our yet-to-be-released tool that supports our vision of how software development should be done. Libra artifacts are models and templates. Problem domain concerns are addressed in a platform-independent way in the form of executable UML models. Implementation technology concerns are addressed in templates written in one of the supported template languages.

Since the platform-independent models are fully complete (from the point of view of business requirements), they can be tried, tested and debugged even before a target platform is chosen. And when the implementation-aware templates are applied to the platform-independent models, the result is platform-specific code that fully addresses all concerns, be they in the problem domain or the implementation technology dimension.
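As a rough, purely illustrative sketch of what "testable before a target platform is chosen" can mean (the model encoding and tiny action language below are invented for this example – they are not Libra's): when behavior is part of the model, a small interpreter can execute it, so assertions can run with no generated code at all.

```python
# An "executable model": structure plus behavior, expressed as data.
model = {
    "entity": "Counter",
    "attributes": {"count": 0},
    "operations": {"increment": [("add", "count", 1)]},
}

def run(model, state, op):
    """Minimal interpreter for the toy action language above."""
    for action, attr, value in model["operations"][op]:
        if action == "add":
            state[attr] = state[attr] + value
    return state

# Exercise the model directly – no target platform involved yet.
state = dict(model["attributes"])
run(model, state, "increment")
run(model, state, "increment")
print(state)  # → {'count': 2}
```

Only once the model behaves correctly are the templates applied to produce platform-specific code.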

Libra is not the first product to follow this approach, but is the first that aims to bring the benefits of executable models and full code generation to the masses of application developers.

This is the last post of a series that aimed at explaining what problem we set out to fix. We hope you enjoyed it. We sure were vague on technical details, which we plan to unveil as we release the first publicly available version. If you want to know about any news on the development of Libra, this is the place to watch. If you don’t like to use feed readers, you can also send us an e-mail, and we will let you know whenever a new post is available (we promise not to spam you, and you can opt-out at any time).

UPDATE (2013-08-22): Much of the code and ideas originally in the project codenamed Corcova/Libra are part of Cloudfier today.


Where we are coming from – Part II

(This is the second installment in a series of posts that explains what we think is wrong with the current state of affairs in the mainstream business application development industry, and how we plan to fix it. If you haven’t done it yet, read the first post first)

We finished the first post of this series by stating it was essential to acknowledge the fact that concerns belong to one of two primary dimensions, namely the problem domain dimension or the implementation technology dimension.

The reason is that each dimension imposes a set of common characteristics shared by all the concerns it is home to. That suggests we need different tools when addressing concerns in different dimensions.

Point #4: when addressing a concern, we should use tools that are appropriate for concerns in its dimension.

But is that true? And what are tools appropriate for concerns in the problem domain dimension, or for concerns in the technology dimension? To answer these questions, we need to look deeper to understand the differences between the two primary dimensions.

  • change rate: concerns in the problem domain dimension change as often as the business requirements change, and that depends on the application and the problem domain. Concerns in the implementation technology dimension change only when a new target platform is to be supported, or a new kind of implementation artifact must be created, or a new implementation strategy (for the technology in use) is chosen. Comparatively, concerns in this dimension are more stable than concerns in the problem domain dimension.
  • abstraction level: since concerns in the problem domain dimension can be addressed in a way that is completely independent from implementation details, they can be dealt with at a high abstraction level. Conversely, concerns in the implementation technology dimension are closely tied to the implementation platform and thus have a lower abstraction level. Trying to address concerns in those two dimensions at the same abstraction level (for instance, by using the same programming language) is less than optimal, and it is bound to favor one dimension at the expense of the other (think of an accounting application in C or an operating system in COBOL).
  • reusability: artifacts created to address concerns in the problem domain dimension are reusable across target platforms and implementation strategies. Artifacts addressing implementation-related concerns are reusable across problem domains. Thus, the languages and methods for addressing concerns in the problem domain dimension should be technology obsolescence-proof, whereas languages and methods for addressing implementation technology concerns are free to harness the power of the technologies they are related to, while remaining agnostic to problem domains.
  • skills required: developers addressing implementation technology concerns must be deeply familiar with the target platform and must know the best practices for the particular set of technologies chosen; no knowledge of the problem domain is required. On the other hand, people addressing concerns in the problem domain dimension must have good analysis skills and understand the problem domain they are working on – little to no knowledge of the implementation platform is required. They are not exempt, though, from needing good object-oriented design skills and decent proficiency in specifying algorithms using imperative programming. It is seldom the case that a person who is knowledgeable about the problem domain is also an expert at some target platform, so being able to address concerns in different dimensions in a compartmentalized way opens great opportunities for work specialization.
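A small sketch of the reusability bullet above (the names and the rule are hypothetical): the domain rule knows nothing about storage, and the storage helper knows nothing about the domain, so each is reusable along the other's axis.

```python
import json

def overdraft_allowed(balance, requested):
    """Problem-domain artifact: a pure business rule, free of any platform.
    Reusable whether the system targets JDBC, files, or anything else."""
    return requested <= balance + 500

def to_record(obj):
    """Implementation artifact: a generic persistence mapping, free of any
    domain knowledge. Reusable for accounting, logistics, anything."""
    return json.dumps(obj, sort_keys=True)

print(overdraft_allowed(100, 400))   # domain logic, any platform
print(to_record({"balance": 100}))   # persistence, any domain
```

Different people, with different skills, can own each function without ever reading the other.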

Point #5: Concerns in the problem domain and implementation technology dimensions differ in many fundamental aspects, and thus you need different languages, methods and skills to address them.

It should be needless to say that this is the model of software development in which we believe. Languages and methods with the appropriate abstraction level, for higher expressiveness and power. Problem-domain artifacts that are obsolescence-proof. Reuse nirvana, by completely separating concerns across dimensions. Optimal productivity and job satisfaction, through specialization of work across concern dimensions.

All this will be possible only if you can truly address problem domain and implementation related concerns completely independently from one another.

We are building the tools that will allow you to do that. Want to know how? Stay tuned for the next installment in this series of posts. Want to see it with your own eyes and help us make a great product? Send us an e-mail, introduce yourself and tell us how you would like to help.


Where we are coming from – Part I

(This is the first installment in a series of posts that will explain what we think is wrong with the current state of affairs in the mainstream business application development industry, and how we plan to fix it)

One term that often appears in a conversation between two developers deciding how to better write a piece of code is “separation of concerns”. But what is a concern and why is separating concerns so important? And even more, why is it being discussed here?

A concern is any basic responsibility a software system has to address. A software system has to deal with many different concerns as it bears many different responsibilities. Basically, a system is made of a collection of composition units or modules (functions, classes, methods, components etc) created to collectively address all of the different concerns imposed by requirements.

But every concern leaves its imprint on the system being developed, in the form of changes or additions to the code. The more places a concern leaves its imprint on, the harder it is to adapt the system due to a change in requirements. The reason is twofold: if the code dealing with a given concern is scattered throughout multiple modules, it is hard to figure out exactly what different places in the source code need to be changed; and if a typical module handles many different concerns at once, it is hard to tell apart code that deals with one concern from code that deals with another.

Point #1: the more independently concerns are dealt with, the easier it is to evolve the code when requirements change.

Ideally, every single concern would map to a single composition unit. If a concern ceases to exist, you delete the corresponding composition unit. If a new concern needs to be taken into account, you just create a new composition unit to deal with it, and no other part of the system is affected. But life is not that pretty. In practice, there is a good deal of interaction between different concerns. So even though, maintenance-wise, it is better that different concerns are dealt with as independently as possible, it is still required, for the system to run correctly, that some level of coordination take place between the parts of the code addressing different concerns.
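Here is a toy example of that ideal (the auditing concern and function names are invented for illustration): the concern lives in exactly one composition unit, so adding or removing it touches one place instead of every business function.

```python
audit_log = []

def audited(fn):
    """The single home of the auditing concern: delete this decorator (and
    its @-markers) and the concern is gone; no business code changes."""
    def wrapper(*args):
        audit_log.append(fn.__name__)
        return fn(*args)
    return wrapper

@audited
def open_account(owner):
    return {"owner": owner, "balance": 0}

@audited
def deposit(account, amount):
    account["balance"] += amount
    return account

acct = deposit(open_account("ada"), 100)
print(acct["balance"], audit_log)  # → 100 ['open_account', 'deposit']
```

The `@audited` markers are the residue of the coordination the next point talks about: the concerns are separate, yet still hooked together at well-defined points.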

Point #2: however, some sort of coordination between code dealing with different concerns is often required.

OK, so far it has been pretty much all common sense, as up to now, we were just trying to level the playing field. But one thing that is not quite common sense yet is the notion that every concern inhabits one of two completely orthogonal dimensions.

One of the dimensions, the dominating one, is home to concerns related to requirements originating from the problem domain. Let’s call it the problem domain dimension. These concerns can be completely understood and dealt with regardless of the implementation language or target platform. The richer your problem domain is, the more concerns will inhabit the problem domain dimension.

The second dimension, which we like to call the implementation technology dimension, is also essential, but concerns in it have more of an additive nature. Concerns in this dimension originate from requirements related to the implementation space and bear no relationship whatsoever to concepts in the problem domain.

(Are there more dimensions other than the two discussed here? Probably. But pragmatically, we believe that a clear distinction between problem domain and implementation related concerns goes a long way and is the first step we must take.)

Point #3: concerns inhabit one of two dimensions, either the problem domain dimension or the implementation technology dimension.

One important benefit of acknowledging this is that we can now understand that a dimension imposes a common set of characteristics to all concerns inhabiting it. That has several implications, from a much higher degree of reusability to interesting opportunities for work specialization.

But let’s leave that to the next installment. Meanwhile, feel free to comment or ask any questions, just as usual.


The road ahead of us

Thanks for checking us out. We are busy working hard on a tool that will bring software development productivity to a whole new level. At some point this summer, we will release a full fledged model-driven development tool, with support for model execution and complete code generation.

Before that, as a preview, we plan to release one of its components as a standalone product. It is a UML authoring tool that presents a hybrid textual/graphical notation. Our hunch is that even hardcore hackers will want to use it.

Stay tuned!
