It seems that people who resist the idea of model-driven development (MDD) do so because they believe no tool can have the level of insight a programmer can. They are totally right about that last part. But that is far from being the point of MDD anyway. Unfortunately, I think that misconception is one of the main reasons MDD hasn’t caught on yet. Because of that, I thought it would be productive to explore this and other myths that give MDD a bad name.
Model-driven development myths
Model-driven development makes programmers redundant. MDD helps with the boring, repetitive work, leaving more time for programmers to focus on the intellectually challenging aspects. Programmers are still needed to model a solution, albeit using a more appropriate level of abstraction. And programmers are still needed to encode implementation strategies in the form of reusable code generation templates or model-driven runtime engines.
Model-driven development enables business analysts to develop software (a variation of the previous myth). The realm of business analysts is the problem space. They usually don’t have the skills required to devise a solution in software, and tools cannot bridge that gap – unless the mapping between the problem space and the solution space is really trivial (but then you wouldn’t want to do that kind of trivial job anyway, right?).
Model-driven development generates an initial version of the code that can be manually maintained from there on. That is not model-driven; it is model-started, at most. Most of the benefits of MDD are missed unless models truly drive development.
Model-driven development involves round-trip engineering. In MDD, models are king: the models are the source, and 3GL source code is the object code. The nice abstractions at the model level map to several different implementation artifacts, each capturing some specific aspect of the original abstraction combined with implementation-related concerns. That mapping is not without loss of information, so it is usually not reversible in any practical way – even less so if the codebase is manually maintained (and thus inherently inconsistent/ill-formed). More on this in this older post; pay attention to the comments as well.
Model-driven development is an all-or-nothing proposition. You use MDD where it is beneficial, combining it with manually developed artifacts and components where appropriate. But avoid mixing manually written code with automatically generated code in the same artifact.
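To illustrate what keeping the two in separate artifacts can look like, here is a minimal sketch in the spirit of the “generation gap” pattern – the class names and the generator are hypothetical; the point is only that regeneration overwrites one file and never touches the other:

    // InvoiceBase.java – produced by the (hypothetical) generator and
    // overwritten on every regeneration; never edited by hand.
    abstract class InvoiceBase {
        private double amount;

        public double getAmount() { return amount; }
        public void setAmount(double amount) { this.amount = amount; }
    }

    // Invoice.java – written and maintained by hand; the generator never touches it.
    class Invoice extends InvoiceBase {
        public double amountWithTax(double taxRate) {
            return getAmount() * (1 + taxRate);
        }
    }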
What is your opinion? Do you agree these are myths? Have you seen any other myths being thrown around that give MDD a bad name?
Rafael
Melski
February 6, 2010 at 2:03am
What gives MDD a bad name, at least to me, is that it is either incomplete or too complex, depending on whether you involve round-trip engineering. There are two sides to it.
If you include round-trip engineering, it (at this point in time) makes the models incomprehensible; they lose their purpose, which is to enable a high level of understanding without having to read code-level details.
If you don’t include round-trip engineering, it makes the models almost redundant. Yes, you do need some models, but not to the level that MDD requires. It seems like you’re in fact replicating your work, and thus all but the highest-level diagrams don’t get updated when people get short of time. That leaves only one artifact that is indeed the truth – the code.
In both cases developers end up wishing they just had code, instead of code plus either incomprehensible models (that are harder to read than code), or models which may or may not reflect reality.
So in short, I like having models that quickly give me a high-level representation of the application; beyond that, however, it has proven (so far) to be easier to read code than to read the diagrams.
That’s on the programmer side of life.
On the business side, models mean little to me. A feature-driven process tells me what to expect in terms of functionality and when – a clear winner over models, which to me are nothing but part of the process of making an application, not an acceptable deliverable.
As for your myths, it’s hard to say. To me, MDD ultimately must include round-trip engineering, though I understand it currently may not. The others I am happy to label as myths.
Axel
February 6, 2010 at 3:38am
http://www.slideshare.net/merks/the-unbearable-stupidity-of-modeling-presentation
Ed Merks
February 6, 2010 at 8:50am
With good generator technology, I don’t believe it’s necessary to separate hand-written and generated code into different artifacts:
http://ed-merks.blogspot.com/2008/10/hand-written-and-generated-code-never.html
We’ve been evolving models such as Ecore and GenModel for almost ten years now. We’ve modified the models and regenerated (adding generics to Ecore, for example, was a very significant change), we’ve modified the generated code to implement derived features, and we’ve evolved the general-purpose templates to improve the generated patterns (e.g., booleans and enums stored as flags). Those who say it can’t be done are simply wrong.
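For readers who haven’t seen how EMF mixes the two in one file, the mechanism is roughly the following (a simplified sketch with invented class and feature names): members carrying the @generated Javadoc tag are rewritten on regeneration, while changing the tag to @generated NOT tells EMF’s merger to preserve the hand-modified body.

    public class PersonImpl {
        private String firstName;
        private String lastName;

        /**
         * @generated
         */
        public String getFirstName() {
            return firstName;
        }

        /**
         * @generated
         */
        public String getLastName() {
            return lastName;
        }

        /**
         * Hand-modified derived feature; the "NOT" marker tells the merger
         * to leave this method alone when the model is regenerated.
         * @generated NOT
         */
        public String getFullName() {
            return getFirstName() + " " + getLastName();
        }
    }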
Dann Martens
February 6, 2010 at 11:06am
Since the EMF modeling zealots are force-feeding modeling into e4, I think the discussion of what Ecore and GenModel are and what they can do has become pretty irrelevant. EMF modeling implementations are specific and proprietary and are not part of any standard in the larger Java community. You’ll find more standards adhered to in the Semantic Technology community, which approaches modeling from an equally valid, yet more future-proof perspective.
As a result, Eclipse is on a track which is alienating a large part of its user base. I think it is safe to say that no one likes to be forced to use a technology because a mere few manage to pull some strings to ‘educate the misguided’.
EMF is an extremely over-complicated technology, and clearly not yet in a state to be forced on a larger public. I find comments on new EMF sub-project announcements, such as ‘hoping to read the English translation, soon’, particularly relevant and amusing.
Until then, I will be happy to see it evolve into something human-apprehensible. EMF belongs to an in-crowd. If that group wishes to gain some larger appeal and respect, I suggest they reconsider their attitude.
rafael.chaves
February 6, 2010 at 11:14pm
Melski, thanks for your comments.
I don’t agree that the absence of round-trip engineering makes models redundant. The *code* is redundant, as it can always be regenerated from the models.
“Yes you do need some models but not to the level that mdd requires”
“(…) diagrams don’t get updated when people get short of time.”
“That leaves only one artifact that is indeed the truth – the code.”
These sentences are very telling. They tell me you are not doing MDD; instead, you are using models for sketching, communicating and documenting designs. That is fine – even though it is not my cup of tea, many (most) people use models that way – but that (and the problem you describe) has nothing to do with MDD. I would go as far as saying that approach to modeling is what creates a lot of the misconceptions about MDD.
Finally, I would be very interested in understanding what you meant by “incomprehensible models (that are harder to read than code)”. What do “incomprehensible models” look like?
rafael.chaves
February 6, 2010 at 11:30pm
Ed, even if something is technically viable, that doesn’t necessarily make it a good idea. And even if it is successful in some specific context, that doesn’t mean it is recommended as a general rule. Supporting the generation of new types of textual artifacts is very simple; there are plenty of mature tools for that. However, the ability to regenerate code without destroying manual changes requires much more effort: for each type of artifact generated, one needs to devise a strategy and tools for merging a new version of the generated code with the existing contents of the artifact. I don’t believe that extra cost and added complexity are justifiable – what value are they creating anyway?
Another issue (a major one, in my opinion) is that the burden of telling generated code apart from manually written code now falls on developers. Speaking of code generation in EMF, I much prefer the approach used by UML2, where whenever there is a need for handwritten code, the generated code delegates to a separate class. I know that when the UML2 metamodel implementation provides some less trivial behavior, the code will be in the corresponding XYZOperations class. I can safely ignore the automatically generated implementations of the metaclasses (which I believe is most of the code). That clear and simple convention significantly reduces the effort of understanding the UML2 code base.
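A minimal sketch of that delegation convention, with invented names rather than the actual UML2 classes – the generated implementation stays purely mechanical and forwards any interesting behavior to a hand-maintained companion class:

    import java.time.LocalDate;

    // Illustrative names only – not the real UML2 code base.
    interface Order {
        LocalDate getDueDate();
        boolean isOverdue();
    }

    // Generated implementation: structural features plus thin delegating bodies.
    class OrderImpl implements Order {
        private LocalDate dueDate;

        public LocalDate getDueDate() { return dueDate; }
        public void setDueDate(LocalDate dueDate) { this.dueDate = dueDate; }

        // The generated body only delegates; it never needs manual edits.
        public boolean isOverdue() {
            return OrderOperations.isOverdue(this);
        }
    }

    // Hand-maintained companion class: the one place to look for real behavior.
    class OrderOperations {
        static boolean isOverdue(Order order) {
            return order.getDueDate() != null && order.getDueDate().isBefore(LocalDate.now());
        }
    }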
rafael.chaves
February 6, 2010 at 11:34pm
Dann, ranting aside, I think you raise a good point w.r.t. the added conceptual baggage required to adopt e4 if it is going to be heavily based on EMF. Maybe EMF should go through a similar refactoring in order to lower the barrier to adoption. But that discussion certainly does not belong here.
Ed Merks
February 7, 2010 at 7:24am
There’s nothing quite like name calling to help a technical discussion remain focused on concrete issues. Even if I had the power to force-feed anybody anything, I’d certainly not make a habit of it. The e4 folks chose to use EMF for their own reasons – good ones, in my opinion. Characterizing a general-purpose open source technology based on an OMG standard, i.e., EMOF, as “specific and proprietary” seems contradictory. The OMG has a rather large stack of standards based on MOF; a good many of those are implemented at Eclipse. In any case, I’m not sure any of us has a crystal ball to look into the future, so arguments about future proofing seem questionable at best.
EMF is just complicated enough to help solve complex problems, much like Java itself. Surely no one will argue that Java is simple! Furthermore, it’s clearly being used by a large and growing community; everything has to start somewhere. In any event, I’ll definitely avoid taking attitude advice from anyone who considers name calling an acceptable form of technical discourse.
I agree that it’s often, and even typically, very good to keep generated and hand-written code completely separate. The point of my blog is that this doesn’t always come without a price of its own, e.g., I explained the cost of the “four class pattern.” Note that Mint does a very nice job with filters to show only the hand-written code. A point to take away from that is that good tools can do an excellent job solving problems in innovative ways – one of the basic tenets of MDD. That being said, most of the open source tools at Eclipse are generally adequate at best…
Here’s something I’ve asserted many times: if you started from scratch and tried to solve all the problems that EMF has already solved, you’d end up with something isomorphic to EMF. I.e., it wouldn’t be simpler, just different.
Vlad Varnica
February 8, 2010 at 2:43am
Hi folks,
I personally recommend throwing EMF in the bin because this project has failed.
The best approach is to use Ecore with UML and just EMF as a back-office rules engine.
EMOF is basically the official OMG MOF, and it is really very, very powerful if used well.
BTW, great Super Bowl win for New Orleans yesterday.
Vlad,
MoWe
February 8, 2010 at 3:36am
@Dann: You mention standards related to semantic web technology. The vast majority of software developers and users are not able to describe which problem exactly you solve based on description logics (the W3C standard OWL DL), what the problem with OWL Full is, etc. Even more important: where are the real-world use cases? The problem is that those standards have been developed from a very academic point of view. Just look at the discussions about decidability. EMF, on the other hand, takes a very pragmatic approach.
Of course it’s no one-size-fits-all solution, but nobody ever claimed it was. It has helped a lot of people and companies increase their productivity and improve the quality of their software – believe it or not.
rafael.chaves
February 8, 2010 at 8:51am
Vlad, your comment was approved but it clearly is off-topic and unproductive. Please let’s stay on topic.
Johan den Haan
February 8, 2010 at 12:32pm
Hi Rafael,
Nice points. I agree with most of them. However, I think your first two points need more nuance. You’re right, programmers are still needed and business analysts cannot build everything with an MDD tool. But roles definitely do change when using MDD (see http://www.theenterprisearchitect.eu/archive/2009/02/04/roles-in-model-driven-engineering ). Programmers will move to working on generators and maybe some technical parts of an application; modeling an application can be done by less technical programmers or business engineers (or whatever we call them).
Sven Efftinge
February 8, 2010 at 11:17pm
@Rafael,
very nice post!
IMHO, MDA-related dogmatism also gives MDD a bad name. I mean this stupid PIM vs. PSM idea, the over-engineered and impractical QVT standard, and also (sorry for that ;-)) the idea of using UML as a programming front end (UML might be good for other things, like sketching diagrams on the whiteboard, though).
@Johan,
but how do those roles (generator developer, modeler) differ from people who build frameworks and libraries and people who use them?
Creating reusable abstractions is a key discipline in software development; does it really matter that much whether you create libraries, languages, generators or interpreters? In the end you’ll have to understand the problem to solve and find the right level of abstraction.
In general, I find it strange to see MDD as a (holistic) approach to software development. It’s just a tool and the whole process isn’t that much affected. Maybe besides that, usually my turnarounds suffer if I use too much code generation.
rafael.chaves
February 9, 2010 at 12:39am
@Johan – yes, MDD provides an opportunity for work specialization, and (my claim) higher work satisfaction: without MDD, domain-savvy developers need to deal with technical stuff they don’t (want to) understand, and more technical developers need to deal with domain-related stuff they would rather not think about.
rafael.chaves
February 9, 2010 at 12:58am
@Sven
I actually find the MDA model very useful as a conceptual/philosophical framework. I won’t argue against your opinion on the suitability of UML as an executable modeling language; I assume you know mine.
Re: roles – not fundamentally different, but I argue that MDD provides for much better separation of concerns and rationalization of the work, making the separation of responsibilities more clear/feasible.
Re: “Creating reusable abstractions is a key discipline in software development, does it really matter that much whether you create libraries, languages, generators or interpreters?”
Again, I think MDD/DSM provide more powerful mechanisms for separating abstractions. People write OO-like code in old-style C. Still, there are significant benefits to using a proper OO language (even if it is C++).
“It’s just a tool and the whole process isn’t that much affected”
Hey, the whole process has been the same since the time of the ENIAC.
“Maybe besides that usually my turnarounds suffer if I use too much code generation”
If you can execute your model, you don’t need to generate until you are happy with it.
Jeppe Cramon
February 20, 2010 at 3:54am
Great post, Rafael – you sum up nicely the points that I’ve been battling for over the last 4 years.
IMO people seem to have been burnt primarily by CASE tools, and by dogmatic MDA – which seems to end up in endless diagrams and no running system.
I like MDD’s pragmatic attitude – use models where they make sense – to raise the abstraction level (which means very little to many newcomers until they see real examples of it) and to automate by generating a lot of tedious, repetitive code. I totally agree with focusing on forward engineering and with the model being king.
I agree with those who complain that the learning curve is still too steep, and it doesn’t help that many MDD zealots are still too high on their UML-bashing horse. Many examples of MDD simply miss the point of helping adopters get on board. People have experience with UML – let them use it where UML actually works.
IMO the hardcore MDD experts are too far ahead of the crowd to have a chance of including them. I mean, EMF, GMF and e.g. OpenArchitectureWare are extremely cool and versatile – but they are still too complex and have a high learning curve before you’re productive. A lot of people, including me, have given up before reaching a point where we could actually get anything done.
We decided not to use EMF – not because we’re smarter than the guys who built EMF – but because we wanted something that was easier to use and therefore has a lower learning curve. It’s not as versatile as EMF, but so far we haven’t come across a project which wasn’t easily solvable using the current approach.
In order to spread MDD and attract new projects to use it, we instead tried to focus on where the major hurt is these days – from my perspective it still very much revolves around the domain model (be it internal or external).
This is where we have successfully helped customers adopt MDD by combining it with Domain-Driven Design (DDD). We simply model their domain model in UML (using class diagrams only) and perform straightforward generation of Java code with JPA/Hibernate annotations and specific code (or, for instance, WSDL and XML Schema for web service models).
The model is king and the code can be regenerated time and time again – with the option to allow custom code extensions either through code generator extensions, aspects, 3-level inheritance or some other approach (see http://www.slideshare.net/jeppec/building-a-lean-architecture-for-web-applications-using-domain-driven-design-model-driven-software-development).
With UML domain modeling the abstraction level can be raised a lot – e.g. by introducing stereotypes like “history” or “versioned”, which let you annotate your model to indicate an active history/temporal object pattern. Too many hardcore MDD folks may laugh at this as being too little or too simplistic – but for a lot of new MDD adopters it’s a lot. It’s something that gives an immediate reward – it’s easy to understand and evolve, and it doesn’t try to interfere with domain logic. Start here, gain confidence, and later you can introduce DSLs and other visual models based on something more fitting than what UML may have to offer.
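To make that concrete, here is a rough sketch of the kind of Java a class marked with a “versioned” stereotype might be generated into – the entity, its fields, and the mapping of the stereotype to an optimistic-locking @Version column are all invented for illustration, not the actual templates described above:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Version;

    // Hypothetical output for a UML class "Customer" carrying the <<versioned>> stereotype.
    @Entity
    public class Customer {

        @Id
        @GeneratedValue
        private Long id;

        // Emitted because of the <<versioned>> stereotype: optimistic-locking version column.
        @Version
        private Long version;

        private String name;

        public Long getId() { return id; }

        public String getName() { return name; }

        public void setName(String name) { this.name = name; }
    }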
Visual domain models are extremely good for overview and communication. We often sit down together with business experts, who join in on the modeling, because basic UML class diagrams, with a few stereotypes and perhaps a tagged value or two, are easy to comprehend and understand. No need to understand meta or meta-meta levels.
My 2 cents
/Jeppe
Jim Logan
November 30, 2011 at 1:29pm
@rafael: I love what you said: “without MDD, domain-savvy developers need to deal with technical stuff they don’t (want to) understand, and more technical developers need to deal with domain-related stuff they would rather not think about.”