Model interpretation vs. code generation? Both.

Model interpretation vs. code generation? Two interesting posts on this topic appeared recently, both generating lively discussions. I am not going to try to define the two approaches or analyze the pros and cons of each, as those articles already do a good job at that. What I have to add is this: if you use model-driven development, even if you have decided on code generation to take an application to production, it still makes a lot of sense to adopt model interpretation during development time.

For one, model interpretation gives you the fastest possible turnaround when executing a model. If the model is valid, it is ready to run. Model interpretation allows you to:

  • play with your model as you go (for instance, using a dynamically generated UI, like AlphaSimple does)
  • run automated tests against it
  • debug it

All without having to generate code for some target platform, which often involves multiple transformation steps (generating source code, compiling source code to object code, linking with static libraries, regenerating the database schema, redeploying to the application server/emulator, etc.).
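To make the contrast concrete, here is a minimal sketch of the idea in Python. It does not reflect any specific tool's API (the model shape, `validate`, and `run` are all hypothetical names invented for illustration): a "model" is just data plus declarative rules, and an interpreter executes it directly, with no generate/compile/link/deploy pipeline in between.

```python
# Illustrative sketch only, not any real tool's mechanism: a toy model
# described as data, executed directly by a generic interpreter.

MODEL = {
    "entity": "Account",
    "attributes": {"balance": 0},
    "operations": {
        # each operation maps a state to a new state
        "deposit": lambda state, amount: {**state, "balance": state["balance"] + amount},
        "withdraw": lambda state, amount: {**state, "balance": state["balance"] - amount},
    },
    # domain rules stated in the model itself
    "invariants": [lambda state: state["balance"] >= 0],
}

def validate(model):
    """If the model is well-formed, it is ready to run."""
    return {"entity", "attributes", "operations", "invariants"} <= model.keys()

def run(model, op, state, *args):
    """Interpret one operation against the model, checking invariants."""
    new_state = model["operations"][op](state, *args)
    for invariant in model["invariants"]:
        if not invariant(new_state):
            # a failure here points at the model, not at generated code
            raise ValueError(f"invariant violated in model {model['entity']}")
    return new_state

if __name__ == "__main__":
    state = dict(MODEL["attributes"])
    assert validate(MODEL)
    state = run(MODEL, "deposit", state, 100)
    state = run(MODEL, "withdraw", state, 30)
    print(state["balance"])  # 70
```

Note how a failed invariant is reported directly in terms of the model element (`Account`), which previews the traceability point made below: there is no generated artifact to trace back through.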

But it is not just a matter of turnaround. It really makes a lot more sense:

  • you and other stakeholders can play with the model on day one. There is no need to commit to a specific target platform, or to develop or buy code generators, when all you want to validate is the model itself and whether it satisfies the requirements from the point of view of the domain. Heck, you might not even know your target platform yet!
  • failures during automated model testing expose problems that are clearly in the model, not in the code generation. There is no need to trace a failure from the generated artifact back to the model element that originated it, which is often hard (and is a common drawback raised against model-driven development);
  • debugging the model itself keeps the debugging context free of runtime information related to implementation concerns. Anyone debugging Java code in enterprise applications will relate to this: most of the frames on the execution stack belong to third-party middleware code for things such as remoting, security, concurrency, etc., making it really hard to find a stack frame with your own code.

Model-driven development is really all about separation of concerns, obviously with a strong focus on models. Forcing one to generate code all the way to the target platform before models can be tried, tested or debugged misses that important point. Not only is it inefficient in terms of turnaround, it also adds a lot of clutter that gets in the way of understanding the models.

In summary, regardless of what strategy you choose for building and deploying your application, I strongly believe model interpretation provides a much more natural and efficient way of developing the models themselves.

What are your thoughts?


4 thoughts on “Model interpretation vs. code generation? Both.”

  1. Ed Seidewitz

    August 9, 2010 at 3:21pm

    Rafael –

    Exactly right! The Shlaer-Mellor execution tools have been doing it this way for years: execute the model interpretively to debug it, then compile to the target environment for deployment, possibly to several target environments.

    Note that I didn’t actually use the term “code generation” above. This phrase implicitly assumes that the “code” is something different from your executable model, which in turn implies that, in the end, it is the “generated code” that is the most important thing, not the model. If, on the other hand, you think of Executable UML like other programming languages, then the issue is interpretation vs. compilation, not interpretation vs. code generation — and the answer is, again, “Both!”, just as you say in your post. (See also my presentation, particularly slides 7 and 8.)

    Of course, executing UML can be useful even when the goal is not programming. For example, a systems engineer with a model in SysML may want to execute, simulate and analyze the system model before committing it to the production engineers. In this case interpretation is a natural approach — though not necessarily the only one; one could still compile the model behind the scenes to improve performance (I believe IBM Rhapsody actually does this). (See also slide 9.)

    – Ed

  2. rafael.chaves

    August 9, 2010 at 11:04pm

    Thanks for the great comment, Ed, you are a walking encyclopedia on modeling tools. :)

    I suspect most people working on new model-driven development tools totally ignore or have little knowledge of those products, and that is a pity.

  3. Andriy Levytskyy

    October 14, 2010 at 11:29am

    I am late to the party, but nevertheless would like to comment that Rafael makes an excellent point. I think the modeling and simulation packages used in engineering are another good example supporting this point.

  4. Marco Brambilla

    November 5, 2010 at 3:01pm

    Hi Rafael. I agree that a mix of model interpretation and code generation can be a good compromise given the trade-offs we all know.
    Another option that could reduce the impact of code generation is to go towards a “lightweight” code generation approach. For instance, in our tool suite (WebRatio, an MDD tool supporting the DSLs BPMN for business processes and WebML for web application design) we generate the code only in terms of XML descriptors of generic Java components that we develop and deploy once and for all. This makes the “code” generation simpler (and, by the way, allows for quick lookup of the generated artifacts).

Comments are closed.