What is the focus of analysis: problem or solution?

Is the purpose of an analysis model understanding the problem or proposing a solution? I have discussed this a few times with different people. This is how I used to see it:

  • Analysis deals with understanding the problem domain and requirements in detail
  • Design deals with actually addressing those (functional and non-functional) requirements
  • A detailed design model can be automatically transformed into a working implementation
  • An analysis model can’t, since in the general case it is not possible to automatically derive a solution from the statement of a problem.

Rumbaugh, Blaha et al., in “Object-oriented modeling and design” (one of the first OO modeling books), state that the purpose of analysis in OO is to model the real-world system so that it can be understood, and that the outcome of analysis is an understanding of the problem in preparation for design.

Jacobson, Booch and Rumbaugh (again, now with the other “two amigos”) in “The unified software development process” state that “an analysis model yields a more precise specification of the requirements than we have in the results from requirements capture” and “before one starts to design and implement, one should have a precise and detailed understanding of the requirements”.

OK, so I thought I was in good company there. However, while reading the excellent “Model-based development: applications”, to my great surprise I found H. S. Lahman clearly stating that, contrary to structured development, where the focus of analysis is problem analysis, in the object-oriented paradigm problem analysis is done during requirements elicitation. The goal of object-oriented analysis is instead to specify the solution in terms of the problem space, addressing functional requirements only, in a way that is independent of the actual computing environment. Lahman also states that the OOA model is the same as the platform-independent model (PIM) in MDA lingo, so it can actually be automatically translated into running code.

That is the first time I have seen this position defended by an expert. I am not familiar with the Shlaer-Mellor method, but I wouldn’t be surprised if it had a similar view of analysis, given that Lahman’s method is derived from Shlaer-Mellor. Incidentally, Mellor/Balcer’s “Executable UML: a foundation for model-driven architecture” is not in the least concerned with the software lifecycle; it briefly mentions use cases as a way of gathering requirements textually, and focuses heavily on solution modeling.

My suspicion is that for the Shlaer-Mellor/Executable UML camp, since models are fully executable, one can start solving the problem (in a way that is removed from the actual concrete implementation) from the very beginning, so there is nothing to be gained by strictly separating the problem from a high-level, problem-space-focused solution. Of course, other aspects of the solution, concerned with non-functional requirements or somehow tied to the target computing environment, are still left to be addressed during design.

And now I see how that all makes sense – I struggled myself with how to name what you are doing when you model a solution in AlphaSimple. We have been calling it design, based on the more traditional view of analysis vs. design – since AlphaSimple models specify a (highly abstract) solution, it couldn’t be analysis. But now I think I understand: for approaches based on executable modeling, the divide between understanding the problem and specifying a high-level solution is so narrow and so cheap to cross that both activities can and should be brought closer together, and the result of analysis in such approaches is indeed a model that is ready to be translated automatically into a running application (and that can be quickly validated by the customer).

But for everybody else (the vast majority of software development practitioners – executable modeling is still not well known and is seldom practiced) that is just not true, and the classical interpretation still applies: there is value in thoroughly understanding the requirements before building a solution, given that the turnaround between problem comprehension, solution building and validation is so damn expensive.

For those of you thinking that this smells of BigDesignUpFront, and that it is not an issue with agile or iterative approaches in general – I disagree, at least as far as typical iterative approaches go, where iterations need to comprise all or several phases of the software development life cycle before they can deliver results that non-technical stakeholders can validate. As such, they are still very wasteful (the use of the word agile feels like a bad joke to me).

Approaches based on executable modeling, on the other hand, greatly shrink the chasm between problem analysis, conceptual solution modeling and user acceptance, allowing for much more efficient and seamless collaboration between the problem domain expert and the solution expert. Iterations become so action-packed that they are hardly discernible. Instead of iterations taking weeks to allow for customer feedback, and a project taking months to cover all functional requirements, you may get a fully specified solution after locking a customer and a modeler in a boardroom for just a day, or maybe a week for bigger projects.

So, long story short: the answer to the question posed at the beginning of this post is both, but only if you are following an approach based on executable modeling.

What is your view? Do you agree with that? Are you an executable modeling believer or skeptic?

UPDATE: make sure to check the thread on Google+ as well.


15 thoughts on “What is the focus of analysis: problem or solution?”

  1. Ed Seidewitz

    July 30, 2011 at 3:17pm

    Rafael –

    You (and others who may have read my posts elsewhere) know that I am, of course, an executable modeling believer! :-)

    As such, let me say that I think you have characterized exceedingly well the concept of “analysis” in executable modeling approaches, and in Shlaer-Mellor heritage approaches in particular. For those who are interested, in addition to the books you mention, I would still highly recommend Sally Shlaer and Stephen Mellor’s original books Object-Oriented Systems Analysis and Object Lifecycles. The notation and terminology may be a bit dated, but they are still well worth the read.

    (Indeed, I first met Steve Mellor in 1993 because I enjoyed reading Object-Oriented Analysis so much that I tracked down his phone number and gave him a call to tell him how much I liked it!)

    – Ed

  2. Ed Seidewitz

    July 30, 2011 at 3:20pm

    Actually, I think it was earlier when I made the call to Steve — the OOA book came out in 1988 — but we didn’t actually meet until ’93. Time sure flies! It was definitely a work ahead of its time.

  3. Mike Finn

    July 31, 2011 at 3:43am

    Hi Rafael,

    I’m a Shlaer-Mellor believer, although my own particular version is much evolved.

    I concur with your view on Lahman’s book.

    You say “That is the first time I have seen this position defended by an expert”, which surprises me, since a core message of S-M/xUML over the last 25 years has been to keep the Design out of the Analysis and the Analysis out of the Design.

    I’m happy you used the word Translation instead of Transformation because they are not the same process.

    However, don’t get too carried away with it all – “you may get a fully specified solution after locking a customer and a modeler in a boardroom for just a day”.

    To do this you will need a Software Architecture implemented in a Model Compiler, and these are not easy and not cheap.


  4. rafael.chaves

    July 31, 2011 at 9:31am

    @Ed / @Mike – It is a shame, but I have had no exposure to Shlaer/Mellor whatsoever. That is why Lahman’s view was news to me. My OO modeling (self-)education started with Coad/Yourdon OOA/OOD, then Rumbaugh (OMT), then a bit of UML. My interest in Executable UML started when I learned about action semantics in UML 1.5 (back in 2003), and other than the UML specs, all I had got until very recently was Mellor/Balcer’s book. Lahman’s is just my second book on the subject.


    “To do this you will need a Software Architecture implemented in a Model Compiler and these are not easy and not cheap.”

    I’d like to help change that. I believe that for information management systems the same standard architectures are used by many, so there is a good opportunity for reusing and sharing the cost. Do you think an off-the-shelf model compiler is not feasible?

    On it not being easy, what do you see as the main difficulties?

  5. Stephen Mellor

    July 31, 2011 at 9:58am

    My thanks to Ed Seidewitz for pointing me to this blog AND the G+ comments to which I cannot add.

    I have learned, through years of teaching, that the hardest task is to get a student to “unlearn” certain ideas that inhibit the addition of new ideas to their tree of knowledge. For example, the title of this essay asserts that “analysis,” “problem,” and “solution” constitute some branches of the (software engineering) tree of knowledge, as held by the author.

    The author does a great job of grafting executable modeling onto the analysis/design/solution/refinement tree, but the very question assumes there is a meaningful (and useful!) answer to the question. There is not.

    Consider this statement in the context of executable modeling: “Of course, other aspects of the solution, concerned with non-functional requirements or somehow tied with the target computing environment, are still left to be addressed during design.” Note the assumption that there is a meaning of the word “design.”

    Now consider, as an example, a credit card company that requires that customers and merchants be billed and paid for each transaction. I would suggest that one non-functional requirement is that the customer not be inundated by bills (and that the merchant be paid as late as we can get away with :) Yet I would expect that to be addressed in the (executable) model of the credit-card subject matter with the addition of the concept of billing cycles, not treated as a “design detail.”

    Moreover, I would suggest that no amount of “refinement” will invent a database (or SOA or client-server or … ) Rather, these implementation technologies are invented separately, and they have their own “performance” “non-functional requirements.”

    Consequently, the final, central observation made by the author that “… to answer the question [Is the purpose of an analysis model understanding the problem or proposing a solution?], the answer is both…” is 100% correct, in my, executable modeling, view.

    This combination of understanding and proposing is applied recursively _at each layer in the system_. You can find a complete description in a long-forgotten paper at ooatool.com/docs/SMMethod96.pdf

  6. Stephen Mellor

    July 31, 2011 at 10:04am


    Model compilers are available for five-figure US$ amounts from (e.g.) Mentor Graphics, which, compared to half a year’s salary, is a steal.

    OlivaNova has a different model. They sell translations based on a high-water mark. That is, if you generate 10000 lines of code, you pay a certain amount. If you retranslate and produce 12000, you pay for the additional 2000 lines. Subsequent generations of less than 12000 lines are free. Until you go over that high-water mark.
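    [Editor’s note] The high-water-mark scheme described above can be sketched in a few lines of Python; the function name, signature and per-line rate are hypothetical, purely to illustrate the arithmetic, and do not describe OlivaNova’s actual billing system:

```python
# Hypothetical sketch of high-water-mark pricing (names and rate invented).
def charge(high_water: int, lines_generated: int, rate: float) -> tuple[float, int]:
    """Return (amount due, new high-water mark) for one translation run.

    Only lines beyond the highest count previously generated are billed;
    runs at or below the mark are free.
    """
    billable = max(0, lines_generated - high_water)
    return billable * rate, max(high_water, lines_generated)

# The example from the comment, at an arbitrary rate of $1 per line:
due1, mark = charge(0, 10_000, 1.0)     # first translation: 10000 lines billed
due2, mark = charge(mark, 12_000, 1.0)  # regeneration: only the extra 2000 billed
due3, mark = charge(mark, 11_000, 1.0)  # at or below the mark: free
```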

  7. rafael.chaves

    July 31, 2011 at 10:21am

    @Stephen Thanks for coming by and commenting!

    I agree there is nothing about your example that would make it unsuitable for being handled in the executable model, as it is clearly addressable in terms of the problem space. I often struggle with the concept of non-functional requirements: I tend to tie them to crosscutting concerns in the computing space, and for requirements like the one you suggested, I tend to come up with (or look for) functional requirements that would serve the same purpose (the NFR being the rationale for them).

  8. Andriy Levytskyy

    August 3, 2011 at 5:20am

    I would say the focus is the problem :) , but I agree with Stephen Mellor that the combination of understanding (the problem) and proposing (a solution) is applied recursively _at each layer in the system_ and in each development phase, be it analysis, design, etc.

    In the case of executable modeling, there is a difference between problem analysis and conceptual solution modeling. The result of the former is ontology models and an understanding of the problem – these are not executable. That result is the input for developing DSLs and a model interpreter. The latter drives a solution model specification in the executable DSL. When another problem needs to be solved in the same domain (hence domain analysis is not needed), first the problem needs to be understood (analysis) and then a conceptual solution modeled.

  9. rafael.chaves

    August 3, 2011 at 8:36am


    You may be right for DSL-based modeling, but for Shlaer/Mellor, Lahman’s MBD and Executable UML, I don’t think so (and that is what triggered this post). In those approaches, understanding the problem (to the extent requirements gathering allows) precedes OOA and is out of scope as a discrete activity. OOA starts right off the bat with *addressing* (solving) the customer problem – in terms of the problem space, but still a “solution”. Deepening and refining the understanding of the problem happens implicitly as the OOA model is built, much like a designer (or even a developer) who is given incomplete requirements also has to further their understanding of the problem (an issue that executable modeling helps mitigate) in order to complete their work (which is solution-centric).

  10. rafael.chaves

    August 3, 2011 at 8:46am


    To clarify what I mean by “problem” vs. “solution”:

    Problem: a need that a customer (an expert in the problem domain) has, to be addressed by a new system.

    Solution: an answer to that need, to be defined by a “developer” (an expert in software).

    I do see a lot of value in classifying activities on whether they help elaborate or further refine and clarify the problem statement, or whether they help address the problem with a (computer-based) solution.

    S/M’s (or MBD’s, or xUML’s) OOA seems to blur the distinction between problem and solution by covering both (deepening problem understanding, and building a partial solution that focuses on the problem space and ignores the computing space), but I still believe that does not change the fact that they are two different things.

  11. Scott Finnie

    September 22, 2011 at 2:27pm

    Coming late to the party, but one of my favourite topics…

    @rafael: like you, my introduction to OO modelling was through Rumbaugh et al. At the time, the proposal that “analysis = understand problem, design = specify solution” held intuitive appeal. However I always seemed to get caught up in unfruitful debate: there was always some element that someone maintained was ‘analysis’ whilst another was equally adamant it was ‘design’.

    And then I was introduced to Shlaer-Mellor. It bent my mind. It took a while for the neurons to reconfigure themselves – and unlearn several years of accepted wisdom.

    But the approach brought stunning clarity to the interminable debates of “analysis vs. design”, “problem vs. solution”, “high level vs. detail” and such like. In SM you’re either modelling a problem domain (“analysis”) or defining the bridges between domains (“design”).

    Precision pervades. So domain models are precise, executable, and thus amenable to highly automated translation.

    In fact I’ve stopped using the terms “analysis” and “design” because they’re so loaded with baggage. Instead I use “description” (modelling) and “translation”. I’ve used those for 10 years now and find them as useful now as I did then.

Comments are closed.