Is the purpose of an analysis model understanding the problem or proposing a solution? I have discussed this a few times with different people. This is how I used to see it:
- Analysis deals with understanding the problem domain and requirements in detail
- Design deals with actually addressing those (functional and non-functional) requirements
- A detailed design model can be automatically transformed into a working implementation
- An analysis model can’t, since in the general case it is not possible to automatically derive a solution from the statement of a problem.
Rumbaugh, Blaha et al., in “Object-oriented modeling and design” (one of the first OO modeling books), state that the purpose of analysis in OO is to model the real-world system so that it can be understood, and that the outcome of analysis is an understanding of the problem in preparation for design.
Jacobson, Booch and Rumbaugh (Rumbaugh again, now with the other two “amigos”), in “The unified software development process”, state that “an analysis model yields a more precise specification of the requirements than we have in the results from requirements capture” and that “before one starts to design and implement, one should have a precise and detailed understanding of the requirements”.
OK, so I thought I was in good company there. However, while reading the excellent “Model-based development: applications“, I was greatly surprised to find H. S. Lahman clearly stating the opposite: contrary to structured development, where the focus of analysis is problem analysis, in the object-oriented paradigm problem analysis is done during requirements elicitation, and the goal of object-oriented analysis is to specify the solution in terms of the problem space, addressing functional requirements only, in a way that is independent of the actual computing environment. Lahman also states that the OOA model is the same as the platform-independent model (PIM) in MDA lingo, so it can actually be translated automatically into running code.
That is the first time I have seen this position defended by an expert. I am not familiar with the Shlaer-Mellor method, but I wouldn’t be surprised if it took a similar view of analysis, given that Lahman’s method is derived from Shlaer-Mellor. Incidentally, Mellor/Balcer’s “Executable UML: a foundation for model-driven architecture” is not in the least concerned with the software lifecycle: it briefly mentions use cases as a way of gathering requirements textually, and focuses heavily on solution modeling.
My suspicion is that for the Shlaer-Mellor/Executable UML camp, since models are fully executable, one can start solving the problem (in a way that is removed from the actual concrete implementation) from the very beginning, so there is nothing to be gained by strictly separating the problem from a high-level, problem-space-focused solution. Of course, other aspects of the solution – those concerned with non-functional requirements or somehow tied to the target computing environment – are still left to be addressed during design.
And now I see how it all makes sense – I myself struggled with how to name what you are doing when you model a solution in AlphaSimple. We have been calling it design, based on the more traditional view of analysis vs. design – since AlphaSimple models specify a (highly abstract) solution, it couldn’t be analysis. But now I think I understand: for approaches based on executable modeling, the divide between understanding the problem and specifying a high-level solution is so narrow and so cheap to cross that both activities can and should be brought closer together, and the result of analysis is indeed a model that is ready to be translated automatically into a running application (and can be quickly validated by the customer).
But for everybody else (the vast majority of software development practitioners – executable modeling is still not well known and is seldom practiced), that is just not true, and the classical interpretation still applies: there is value in thoroughly understanding the requirements before building a solution, given that the turnaround between problem comprehension, solution building, and validation is so damn expensive.
For those of you thinking that this smells of BigDesignUpFront, and that it is not an issue with agile or iterative approaches in general – I disagree, at least as far as typical iterative approaches go, where iterations need to span all or several phases of the software development life cycle before they can deliver results that non-technical stakeholders can validate. As such, they are still very wasteful (the use of the word “agile” feels like a bad joke to me).
Approaches based on executable modeling, on the other hand, greatly shrink the chasm between problem analysis, conceptual solution modeling, and user acceptance, allowing for much more efficient and seamless collaboration between the problem domain expert and the solution expert. Iterations become so action-packed that they are hardly discernible. Instead of iterations taking weeks to allow for customer feedback, and a project taking months to cover all functional requirements, you may get a fully specified solution after locking a customer and a modeler in a boardroom for just a day – or maybe a week for bigger projects.
So, long story short: the answer to the question posed at the beginning of this post is both – but only if you are following an approach based on executable modeling.
What is your view? Do you agree with that? Are you an executable modeling believer or skeptic?
UPDATE: make sure to check the thread on Google+ as well.