TextUML Toolkit 1.4 is out!

Release 1.4 of the TextUML Toolkit is now available from the update sites, for both Eclipse 3.5+ and 3.4 (by the way, it is possible this will be the last major release targeting Eclipse 3.4, unless someone volunteers to generate and test builds for 3.4).

This is mostly a bug fix release. However, there were some minor notation changes to support stereotypes on dependencies, generalizations and interface realizations, hence the version change from 1.3.x to 1.4.x.

Thanks to Vladimir Sosnin, Attila Bak and fredlaviale for contributing code and/or bug reports/feature requests. Keep them coming!

As usual, you can find a summary of the changes in the TextUML Toolkit Features page. If you are using the Toolkit, make sure you upgrade soon. If not, what are you waiting for? Install it already!


TextUML Toolkit 1.3 is out!

The TextUML Toolkit 1.3 is now available, 4.5 months after 1.2. If you already got RC1 (1.3.0.20090614…), it is the same build, no need to upgrade again. Otherwise, just point the Eclipse update mechanism to:

http://abstratt.com/update/ – if you are using Eclipse Galileo, or

http://abstratt.com/update/3.4/ – if you are using Eclipse Ganymede.

Please see the feature page for a summary of the evolution of the TextUML Toolkit in terms of features across releases. Some highlights for this release are better integration with other UML tools and support for both Eclipse 3.4 and 3.5.

As usual, bug reports and feature requests are much appreciated. And if you need help, make sure to ask on the user forum.

The TextUML Toolkit Team


TextUML Toolkit 1.3RC1 is now available

The first release candidate towards version 1.3 of the TextUML Toolkit is now available. The feature overview page has been updated to reflect the changes in the 1.3 cycle.

Please give it a try and report any issues you might find. If no serious problems are found with this build, it will be promoted to 1.3 final later this week.

Note that the main update site supports Eclipse 3.5, but there is also an update site for Eclipse 3.4.  The features are the same, the only differences between the two sites are the dependencies (UML2 3.0 and EMF 2.5 on Eclipse 3.5, UML2 2.2 and EMF 2.4 on Eclipse 3.4).

The TextUML Toolkit team welcomes your feedback.


A month’s worth of news

Wow, it has been more than a month since my last post, but I have been busier than ever working on an upcoming MDD-related product (cannot say much now other than that you will hear more about it here first). However, in TextUML Toolkit-land things are looking pretty exciting.

More hands on deck

Vladimir Sosnin has joined the project and has been on a roll contributing many patches, not only bug fixes but design improvements and features too. It is amazing how quickly he has become very comfortable with the TextUML Toolkit code base (Vladimir is certainly a solid developer, but I’d like to think the quality of the code base helped him too). And if that was not good enough, Vladimir is also using the TextUML Toolkit in his daily work around model-driven development, so that should help ensure the Toolkit has the features its target audience actually needs.

New release in the making

1.3RC1 should be made available during the weekend. It has some shiny new language features, a bunch of bug fixes, better integration with other UML2 tools and support for both Galileo and Ganymede. By the way, shipping code that can handle different incompatible versions of Eclipse has become a little more challenging with P2, as you cannot ship a feature that has bundles that resolve in a mutually exclusive way – it seems you really need two different features for that.

Documentation moved to SourceForge

SourceForge now has support for MediaWiki, which is used to author and serve the documentation for the TextUML Toolkit. In the spirit of strengthening the message that the project should really be considered a community-based effort, I decided to migrate the documentation from the Abstratt web site over to SourceForge.

I guess that does it, at least for now.  Stay tuned.


On code being model – maybe not what you think

I have heard the mantra ‘code is model’ several times. Even though I always thought I got the idea of what it meant, only now did I decide to do some research to find out where it came from. It turns out it originated in a blog post that MS’ Harry Pierson wrote back in 2005. It is a very good read: insightful and to the point.

The idea that gave Harry’s post its title is that whenever we use a simpler representation to build something more complex and detailed than we want to care about, we are creating models. 3GL source code is a model for object code. Byte code is a model for actual CPU-specific executable code. Hence, code is model.

He then goes on to ask: if we have been successfully reaping the benefits of increased levels of abstraction by using 3GLs for decades now, what prevents us from taking the next step and using even higher-level (modeling) languages? He makes several good points that are at the very foundation of true model-driven development:

  • “models must be precise” – models must be amenable to automatic transformation. Models that cannot be transformed into running code are “useless as development artifacts”. If you like them for conceiving or communicating ideas, that is fine, but those belong to a totally different category, one that plays a very marginal role in software development and has nothing to do with model-driven development. Models created using the TextUML Toolkit are necessarily precise, and can include behavior in addition to structure.
  • “models must be intrinsic to the development process” – models need to be “first class citizens of the development process” or they will become irrelevant. That means everything that makes sense to be modeled is modeled, and running code is generated from models without further manual elaboration, i.e., no manually changing generated code and taking it from there. As a rule, you should refrain from reading generated code, or limit yourself to reading its API, unless you are investigating a code generation bug. There is nothing really interesting to see there – that is the very reason you wanted to generate it in the first place. Build, read, and evolve your models! Generated code is object code.
  • “models aren’t always graphical” – of course not. I have written about that before here. The TextUML Toolkit is only one of many initiatives that promote textual notations for modeling (and I mean modeling, not diagramming – see the next point).
  • “explicitly call out models vs. views” – in other words, always keep in mind that diagrams != models. Models are the real thing; diagrams are just views into them. Models can admit an infinite number of notations, be they graphical, textual, tabular, etc. Models don’t need notations. We (and tools) do. Unfortunately, most people don’t really get this.

The funny thing is that most of the time I read someone citing Harry’s mantra, it is misused.

One misinterpretation of the “code is model” mantra is that we don’t need higher-level modeling languages, as current 3GLs are enough to “model” an application. The fact is: 3GLs do not provide an appropriate level of abstraction for most kinds of applications. For example, for enterprise applications, 4GLs are usually more appropriate than 3GLs. Java (EE) or C# are horrible choices, witness the profusion of frameworks needed to make them workable as languages for enterprise software – they are much better suited to writing system software.

Another unfortunate conclusion people often extrapolate from the mantra is that if code is model, then model is code, and thus it should always be possible to translate between them in both directions (round-trip engineering). Round-trip engineering goes against the very essence of model-driven development, as source code often loses important information that can only exist in higher-level models. The only reason people need RTE is that they use models to start a design and generate code, but then switch to evolving and maintaining the application by directly manipulating the generated code. That is a big no-no in true model-driven development – it implies models are not precise or complete enough for full code generation.

So, what is your view? How do you interpret the “code is model” mantra?


New in 1.3 M1: better integration with diagramming tools

Even though it has always been possible to open models generated by the TextUML Toolkit in UML2-based diagramming tools, until 1.2 the Toolkit assigned new ids to each element every time a model was regenerated. That meant that any diagrams based on the model being regenerated would become invalid as any cross-references from the diagram to the model would be broken.

Starting with 1.3 M1 (test build available from the milestones update site), the TextUML Toolkit now integrates better with diagramming tools based on UML2. You can edit the source in TextUML and save it to trigger model generation, and any existing diagrams should remain valid.

Here you can see the TextUML Toolkit being used side-by-side with the UML2 Tools graphical editor:

Actually, I found out that UML2 Tools can be a great companion for the TextUML Toolkit because it seems to just do the right thing when it notices the underlying model has been changed externally: new elements show up, removed elements disappear, preserved elements preserve their layout. I tried doing the same with Papyrus 1.11 and unfortunately it does not seem to handle external updates properly. I haven’t tried with other UML2-based diagram editors, so at this point I am not sure which one represents the trend here.


Developing for Eclipse without OSGi

I have been doing some exploratory work around running the TextUML Toolkit‘s compiler as a standalone Java application. In other words, without OSGi. You Eclipse/OSGi heads out there might ask: why would anyone want to do that? The answer is: enhanced applicability. Severing ties with the Eclipse runtime/OSGi means the compiler would become usable in other contexts, and by more people. Think Ant or Maven-driven builds, or (the horror!) other IDEs.

The TextUML compiler’s dependencies

The TextUML compiler is not very different from any ordinary 3GL compiler. It reads a bunch of source files and spits out another bunch of object files. For reading and writing files, the TextUML compiler uses EFS, the Eclipse File System. For generating the object files, which are UML models, the compiler uses UML2 and EMF.

Luckily, all these components are expected to work in a standalone application, although some restrictions in functionality might apply. At first glance, one would expect that running the TextUML compiler without OSGi should be quite feasible. But, as I should have learned by now, it is never that simple…

Extension registry without OSGi

The TextUML compiler and some of its dependencies use the extension registry to receive contributions to their extension points or make contributions to other components’ extension points. Since Eclipse 3.2, the extension registry API provides support for creating registries without OSGi. Basically, you can add any extensions and extension points you want; all you need to do is provide the XML markup for them.

Problem: in a standalone Java application, it is the application’s responsibility to somehow find the plugin manifests and populate the extension registry.

Solution: I started with the example Paul Webster posted on the platform newsgroup almost two years ago. It basically starts from a file system location that it assumes to hold plug-ins, in both JAR and directory form. It will find plugin manifests (plugin.xml), recognize translation resources (plugin.properties), and even discover the bundle name by parsing the bundle manifest (MANIFEST.MF), and use that to populate the extension registry. Pretty cool, thanks PW. I just cleaned up the code a bit and enhanced it to also find plugin manifests using the application’s classloader. I also had to add support for setting the default extension registry, which is the one other components will get when invoking Platform.getExtensionRegistry() or RegistryFactory.getRegistry(). Finally, I had to implement basic support for creating executable extensions, which was not too hard given that I could assume the entire application is using the same (application) classloader.
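
For illustration, here is a minimal sketch of that idea: a standalone registry populated from plugin.xml files found on the application classpath. This is not the Toolkit’s actual loader (which does quite a bit more, as described above); it only shows the Equinox registry API involved.

import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;

import org.eclipse.core.runtime.ContributorFactorySimple;
import org.eclipse.core.runtime.IContributor;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.RegistryFactory;
import org.eclipse.core.runtime.spi.RegistryStrategy;

public class StandaloneRegistryLoader {
  public static IExtensionRegistry createRegistry() throws Exception {
    // a plain, non-cached, in-memory registry (no OSGi involved)
    IExtensionRegistry registry = RegistryFactory.createRegistry(new RegistryStrategy(null, null), null, null);
    // find every plugin.xml visible to the application classloader
    Enumeration<URL> manifests = StandaloneRegistryLoader.class.getClassLoader().getResources("plugin.xml");
    while (manifests.hasMoreElements()) {
      URL manifest = manifests.nextElement();
      IContributor contributor = ContributorFactorySimple.createContributor(manifest.toExternalForm());
      InputStream contents = manifest.openStream();
      try {
        // not persisted, no translation bundle, no access token
        registry.addContribution(contents, contributor, false, manifest.getPath(), null, null);
      } finally {
        contents.close();
      }
    }
    // the real loader also installs the registry as the default one (via
    // RegistryFactory.setDefaultRegistryProvider), so that code calling
    // Platform.getExtensionRegistry() or RegistryFactory.getRegistry() finds it
    return registry;
  }
}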

EFS without OSGi

Also since 3.2, the Eclipse File System does not require OSGi to operate; however, it does require a functional registry, as it uses an extension point to allow file system implementations to be plugged in. So, provided one manages to populate the default extension registry, EFS works fine in a standard Java app. No action required here (yay!).
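
As a tiny illustration (the file path is made up), once the registry above is in place this is all it takes to read a file through EFS from a plain Java program:

import java.io.InputStream;

import org.eclipse.core.filesystem.EFS;
import org.eclipse.core.filesystem.IFileStore;
import org.eclipse.core.runtime.Path;

public class StandaloneEFSExample {
  public static void main(String[] args) throws Exception {
    // the local file system implementation is itself contributed through an
    // extension point, hence the need for a populated default registry
    IFileStore store = EFS.getLocalFileSystem().getStore(new Path("/tmp/example.tuml"));
    System.out.println("exists? " + store.fetchInfo().exists());
    InputStream contents = store.openInputStream(EFS.NONE, null);
    try {
      // hand the stream over to the compiler front end...
    } finally {
      contents.close();
    }
  }
}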

EMF and UML2 without OSGi

EMF has long been advertised as being mostly functional when running in non-managed mode. However, when running in a standalone app, EMF does not process extensions to its extension points (such as resource factories, metamodel packages and URI mappings), so a standard Java application needs to explicitly initialize EMF, which is clearly suboptimal. That is because EMF only processes contributions to its extension points during bundle activation. There is no alternative way to trigger processing of the registry (see bug #271253), which sucks, given that the registry is there just waiting to be used. With no OSGi, there is no such thing as bundle activation, and thus EMF ends up ignoring the registry contributions altogether. This is also true for UML2 (not surprisingly, as they share many implementation traits), the main difference being that UML2’s extension points are much less used than EMF’s.

Problem: Ecore won’t process the extension registry when running in a standalone Java application.

Solution: Change Ecore so it allows clients to explicitly trigger processing of the contributions to Ecore’s extension points. My local hacky solution was to make Ecore’s extension registry parsers publicly visible so any client could invoke them.
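
For reference, this is roughly the manual initialization a standalone application has to do today, shown here for UML2. It is the usual standalone-EMF setup; a real setup would also register pathmap URI mappings for the UML standard libraries and profiles, if the models use them.

import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.uml2.uml.UMLPackage;
import org.eclipse.uml2.uml.resource.UMLResource;

public class StandaloneUML2Setup {
  public static ResourceSet createResourceSet() {
    ResourceSet resourceSet = new ResourceSetImpl();
    // registrations that bundle activation would otherwise perform for us
    resourceSet.getPackageRegistry().put(UMLPackage.eNS_URI, UMLPackage.eINSTANCE);
    resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put(UMLResource.FILE_EXTENSION, UMLResource.Factory.INSTANCE);
    return resourceSet;
  }
}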

‘platform:’ URLs without Equinox

Extensions to Ecore’s URI mapping extension point often translate to ‘platform:’ URLs, which are another common Eclipse-ism, and thus not supported in plain Java applications.

Problem: No support for ‘platform:’ URL schemes  in standalone mode.

Solution: I implemented a simple URL stream handler for the ‘platform’ protocol. The current implementation completely ignores the path component that determines which bundle the resource is in. That is clearly something that needs to be improved, but it should not be too hard.
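
Here is a rough sketch of what such a handler can look like (not the actual implementation): it simply ignores the bundle segment and resolves the remaining path against the application classloader. It can be installed via URL.setURLStreamHandlerFactory, returning this handler for the ‘platform’ protocol and null for everything else.

import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class PlatformURLHandler extends URLStreamHandler {
  @Override
  protected URLConnection openConnection(URL url) throws IOException {
    // e.g. platform:/plugin/some.bundle/profiles/default.uml
    String path = url.getPath();
    // segments: "", "plugin", "some.bundle", "profiles/default.uml"
    String[] segments = path.split("/", 4);
    String resource = segments.length == 4 ? segments[3] : path;
    URL resolved = getClass().getClassLoader().getResource(resource);
    if (resolved == null)
      throw new IOException("Cannot resolve " + url);
    return resolved.openConnection();
  }
}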

Conclusion

Well, it wasn’t really a walk in the park, but other than the issue that requires a change in EMF (bug #271253, which I am hoping can be addressed for Galileo), I am pretty happy with the results of this exploration.

I made all the code I wrote (including my version of Paul Webster’s registry loader) available on the TextUML Toolkit’s SVN repository on SourceForge. Feel free to use it and submit improvements. I wonder if this sort of stuff could find a home in Eclipse.org, as I think many people currently developing for non-OSGi targets would like to take advantage of some great OSGi-agnostic Eclipse APIs.


(not) Installing the TextUML Toolkit 1.2 on Eclipse 3.5 M6

I decided it was time to start using Eclipse 3.5 before it went RC, or else, if I find any blockers, there won’t be time left for them to get fixed before the next release this summer. I have been on the other side of the fence and know how frustrating it is when people only really start reporting bugs at the very end of the release cycle.

After I installed the 3.5 M6 SDK, I decided to install the TextUML Toolkit 1.2.

Nothing to install?

First surprise: no features would show up on the update site! WTH? Well, it turns out I had the Update UI set to show available features grouped by category, but the TextUML Toolkit update site does not use categories. Unchecking the option to group by category shows the four features on the TextUML Toolkit update site, as was usual in 3.4. Phew… I was just about to enter a bug report against the Eclipse update component (a.k.a. p2) when I found out that this unintuitive UI behavior had recently been addressed and the next milestone won’t show this problem any longer.

A roadblock

I managed to install the TextUML Toolkit, and that brought along EMF and UML2, which are required features. Since, at this time, the TextUML Toolkit does not specify version ranges for its dependencies, the latest versions of EMF and UML2 were automatically installed by Eclipse. That should be fine, I thought, as backward compatibility has always been one of the key mandates of Eclipse development. But this time around that was not the case: the TextUML builder was failing to compile a TextUML source file that made use of template parameters. It turns out the UML2 API around templates changed in version 3.0. Before (in UML2 2.2), a template parameter substitution could have multiple formal and actual parameters. Now it can have only one of each, and thus the TextUML Toolkit is (partially) broken on 3.5.

So much for core values, you might be thinking? Well, to be fair, this is a bit of a fringe case. The UML2 API is an Eclipse component, and as such should strive for backward compatibility too. However, it is also a high-fidelity rendition of the UML specification published by the OMG. In this case, the OMG fixed a bug (#9838) in the spec, and as is often the case with UML, that resulted in a change that was not backward compatible. If the OMG promotes changes that are not backward compatible, the UML2 team does not have much of a choice: in order to continue to closely implement the spec, it has to break the Eclipse guidelines.

And neither do I. For the TextUML Toolkit 1.2, I will have to issue a maintenance release (1.2.1) that will explicitly depend on UML2 2.2, thus avoiding the new version, and probably becoming incompatible with other UML modeling packages that only run on Galileo and UML2 3.0. For the next release of the TextUML Toolkit (1.3), I will have to adopt the new API and explicitly depend on UML2 3.0.

That is how life goes… for some reason, I kind of feel like a flea crawling on the back of an elephant.


SQL queries in UML

I strongly believe queries are an essential part of a domain model. As such, in our quest to have (UML) models that can fully (yet abstractly) describe object models for common enterprise applications, we cannot leave out first-class support for queries.

But how do you do queries in UML? The obvious answer seems to be OCL, but that is not the approach I am taking, as OCL and UML have serious interoperability/duplication issues. Instead, I took the middleweight extension approach.

First, we model a protocol for manipulating collections of objects (showing only a subset here):

class Collection specializes Basic
  operation includes(object : T) : Boolean;
  operation isEmpty() : Boolean;
  operation size() : Integer;
  operation exists(predicate : {(:T) : Boolean}) : Boolean;
  operation any(predicate : {(:T) : Boolean}) : T;
  operation select(filter : {(:T) : Boolean}) : T[*];
  operation collect(mapping : {(:T) : any}) : any[*];
  operation forEach(predicate : {(:T)});
  operation union(another : T[*]) : T[*];
  (...)
end;

That protocol is available against any collection of objects, which in UML can be obtained by navigating an association, reading an attribute, invoking an operation, or obtaining the extent of a class (remember Smalltalk’s allInstances): in short, anything where the resulting value has multiplicity greater than one.

Note most of the operations in the Collection protocol take blocks/closures as arguments. Closures are used in this context to define the filtering criterion for a select, or the mapping function for a collect.

For instance, to obtain all accounts that currently do not have sufficient funds, this method would do it:

static operation findNSFAccounts() : Account[*];
begin
    return Account extent.select(
        (a : Account) : Boolean {return a.balance < 0}
    );
end;

Note the starting collection is the extent of the Account class. That is very similar to what is done in query languages for object-oriented databases, such as OQL or JDOQL. We then filter the class extent down to only those accounts that have a negative balance, by passing a block to the select operation.

When mapping that behavior to SQL, we could end up with a query like this:

select _account_.* from Account _account_ where _account_.balance < 0

Another example: we want to obtain all customers with a balance above a given amount, let’s say, to send them a letter to thank them for their business. The following method specifies that logic:

static operation findBestCustomers(minBalance : Real) : Customer[*];
begin
    return (Account extent.select(
          (a : Account) : Boolean { return a.balance >= minBalance }
    ).collect(
          (a : Account) : Customer { return a->AccountOwner->owner }
    ) as Customer);
end;

Note that we start off with the extent of Account class, filter it down to the accounts with good balance using select, and then map from that collection to a collection with the respective account owners by traversing an association using collect.

If that was going to be mapped to SQL, one possible mapping would be:

select _customer_.* from Account _account_
    inner join Customer _customer_
        on  _account_._accountID_ = _customer_._customerID_
    where _account_.balance >= ?

Much of this can already be modeled if you try it out with the TextUML Toolkit 1.2. But, you might ask, once you model that, what can you do with UML models containing queries like the ones shown here?

Since the models are complete (include structure and behavior), you can:

  1. Execute them. Imagine writing automated tests against your models, or letting your customer play with them before you actually start working on the implementation.
  2. Generate complete code. The generated code will even include your custom queries, not only the basic ones (findAll, findByPK) that code generators can usually produce for you (see the hypothetical sketch below).
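
To make that last point concrete, here is a purely hypothetical sketch of what generated code for the findNSFAccounts query above could look like on a plain JDBC target; the Account class and its fromRow mapping are assumed, and no actual generator or target platform is implied.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class AccountQueries {
  // hypothetical generated counterpart of the findNSFAccounts operation
  public static List<Account> findNSFAccounts(Connection connection) throws SQLException {
    String sql = "select _account_.* from Account _account_ where _account_.balance < 0";
    List<Account> results = new ArrayList<Account>();
    PreparedStatement statement = connection.prepareStatement(sql);
    try {
      ResultSet rows = statement.executeQuery();
      while (rows.next())
        results.add(Account.fromRow(rows)); // Account.fromRow is assumed to exist
    } finally {
      statement.close();
    }
    return results;
  }
}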

If you would like to see tools that support that vision, keep watching this blog.

So, what is your opinion?
Do you see value in being able to specify queries in your models? Is this the right direction? What would you do differently?


TextUML Toolkit 1.2 is out!

The TextUML Toolkit 1.2 is now available, 5 months after 1.1, when it first became an open source project. If you already got RC2 (1.2.0.200902011417), it is the same build, no need to upgrade again. Otherwise, just point the Eclipse update mechanism to:

http://abstratt.com/update/

Please see the feature page for a summary of the evolution of the TextUML Toolkit in terms of features across releases.

As usual, bug reports and feature requests are dearly appreciated. And if you need help, make sure to ask on the forum.


TextUML Toolkit + Acceleo: generate code from UML models

2011-03-10 – UPDATE: Interested in UML/TextUML and full code generation? You can now do that online using AlphaSimple.

Acceleo is a cool open-source code generation tool that has great integration with the Eclipse IDE and EMF-based metamodels. The tool has a strong emphasis on simplicity and ease of use, which, for an open source development tool, is a real breath of fresh air.

The Acceleo website includes a repository of code generation modules, which are extensions to the tool that target generation for specific platforms/languages, such as .Net (C#/NHibernate), Java (Hibernate/Struts/Spring), Python, PHP, C, Zope, etc. Of course, you can also develop your own modules, using the Acceleo template editors with metamodel-driven content assist and live sample generation.

Most of the modules in the Acceleo repository take UML2-compatible models. That means you can use the TextUML Toolkit to quickly create your UML models using the TextUML textual notation, and then use Acceleo to generate code from them.

One of the most mature code generation modules in the Acceleo repository is the UML2 to Java EE module, which can generate code for Spring, Struts and Hibernate-based Java applications from UML2 models. A step-by-step guide would be too much for this post, but if you want to explore generating Java EE code from UML models you create using the TextUML Toolkit, here are some pointers:

Get the tools. First of all, using Eclipse 3.4, install all features from the following update sites:

  • TextUML Toolkit – http://abstratt.com/update/
  • Acceleo code generator – http://acceleo.org/update/
  • Acceleo code generation modules – http://acceleo.org/modules/update

Once the install is complete, allow Eclipse to restart for the new plug-ins to become available.

Create your UML models and profiles. Create a project for your UML models using the TextUML Toolkit. For an example of how to do that, see the TextUML Tutorial. You will need to define in your model project a profile declaring the stereotypes you will use to annotate your modeled classes to drive code generation (see how here).

Create your Acceleo generator project. The Acceleo website has plenty of documentation on how to create Acceleo code generation projects. Some pointers:

Note that this module has a bug: it is based on a profile with an invalid UML name, which won’t work with the TextUML Toolkit out of the box. Instead of using the profile shipped with that module, you can override the default profile and stereotypes using a properties file at the root of your generator project, as shown here, so as to direct it to your own profile and stereotypes. You can also find a more structured guide for using Acceleo and the TextUML Toolkit together here. Even though that tutorial uses a customized version of the JEE module, it can still be helpful.

As usual, if you need help generating code from UML models using the TextUML Toolkit and Acceleo, feel free to ask for help here, or in the user forum.


TextUML Toolkit 1.2 RC2 fixes stereotype extension rendering bug

A new RC build of the TextUML Toolkit is now available. It has one bug fix since RC1. Basically, when rendering stereotypes, the metaclasses extended by the stereotype were not being shown. Not really critical, but isolated and safe enough to be fixed now. Pending any last minute bug reports by the community, it is likely this build will be promoted to 1.2 release early this week.

Also, while testing, I noticed that the Pet Store example was triggering a known Graphviz issue with associations navigable at both ends, so I updated the petstore_models project so that the relationship between Title and Payment is navigable in only one direction (Title -> Payment). I recaptured the class diagram images shown on that page using the Image Viewer’s “Save to File” function instead of taking screen captures.

Just trigger a software update for the TextUML Toolkit to get RC2.


TextUML Toolkit 1.2 RC1 fixes Java 5 compatibility issue

Yikes, it happened again. A user reported that the TextUML Toolkit 1.2 RC0 build announced earlier this week wouldn’t run on Java 5 VMs (Mac users would be the most affected). This has just been fixed (one bundle was being compiled against Java 6) and a new release candidate is now available from the update site. If you too were seeing “java.lang.UnsupportedClassVersionError” running RC0, just go ahead and update to RC1.


TextUML Toolkit 1.2 RC0 / M3 is now available

The third milestone build and first release candidate of the TextUML Toolkit, the IDE for textual UML  modeling, is now available from the update site.

The feature page shows all the new features in this release.

As you can tell, most of the work in the 1.2 release cycle went towards extending the TextUML notation’s coverage of UML.

There were also several bug fixes, one of them being the first to be addressed by an external contributor (thanks, Massimiliano!).

If no serious issues are found with this milestone build, it will be promoted to 1.2 final by next weekend. So, if you are using a 1.1.* build, make sure you upgrade to a 1.2 release candidate build, start smoke testing your usual workflows, and report any bugs you find.


Closures in UML? Extending the metamodel with the TextUML Toolkit

UML is known to be a huge language, and that brings two problems: it is too complex, having way more features than most applications will ever need, and yet it can still be insufficient, as no single language will ever cover everybody’s needs.

In the article “Customizing UML: Which Technique is Right for You?”, James Bruck and Kenn Hussey (both from the UML2 team) do a great job of covering the several options for extending (or restricting) UML (James also gave a related presentation at last year’s EclipseCon, together with Christian Damus, of Eclipse OCL fame). Cutting to the chase, these are the options they identify:

  • using keywords (featherweight extensions)
  • using profiles, stereotypes and properties/tagged values (lightweight extensions)
  • extending the metamodel by specializing the existing metaclasses (middleweight extensions)
  • using package merges to select the parts of UML you need (heavyweight extensions)

Each option has its own strengths and weaknesses, as you can see in the article/presentation referred to above. At this time, the TextUML notation supports two of those approaches: profiles and metamodel extensions.

Adding closures to UML

Even though profiles are the most popular (and recommended) mechanism for extending UML, they are not enough in some cases/applications. That has been the case in the TextUML Toolkit, for instance, when implementing closures in the TextUML action language (yes, the Toolkit eats its own dog food).

According to the Wikipedia entry, “a closure is a function that is evaluated in an environment containing one or more bound variables. When called, the function can access these variables.”

It really makes sense to (meta) model a closure as some kind of UML activity, which is basically a piece of behavior that can be fully specified in UML. Methods, for instance, are better modeled in UML as activities. Activities are composed of activity nodes and actions, which are similar to blocks of code and instructions, respectively.

The only thing that is missing in the standard Activity metaclass is the ability for a closure to have an activity node from another activity as its context, so it can access that context’s local variables. So here is a possible (meta)modeling of closures in UML using the TextUML syntax:

[Standard::Metamodel]
model meta;

apply Standard;

(*
  A closure is a special kind of activity that has another
  activity's activity node as context. A closure might
  reference variables declared in the context activity node.
*)
[Standard::Metaclass]
class Closure specializes uml::Activity
  (* The activity node that provides context to this closure. *)
  reference contextNode : uml::StructuredActivityNode;
end;

end.

Or, for those of you who prefer a class diagram (courtesy of the EclipseGraphviz integration):
Closure as a UML metamodel extension

Note that a model contributing language extensions must have the Standard::Metamodel stereotype applied, and each metaclass must be assigned the Standard::Metaclass stereotype.

Of course, there is no point in being able to metamodel closures if there is no way to refer to them. We need a kind of type that we can use to declare variables and parameters that can hold references to closures. That also means we need to be able to invoke/dereference a closure reference, and there is no support for referring to metamodel elements in the UML action language. That means we need more language extensions. But I will leave that to another post. My goal here was to show how easy it is to create a simple UML metamodel extension with the TextUML Toolkit.

What about you, have you ever needed to extend UML using a metamodel extension? What for?


Integrating the TextUML Toolkit with other modeling tools

No tool is an island. That is even more important when we are talking about highly focused single-purpose tools such as the TextUML Toolkit. As you probably know, the TextUML Toolkit is a tool for UML modeling using a textual notation, but that is about it. The TextUML Toolkit itself won’t help if you need features such as code generation, reverse engineering or graphical modeling/visualization. That might look like a negative thing to some, but it is by design. I am a strong believer in separation of concerns, component-based software and using the best tool for the job, and that is reflected in the design of the Toolkit.

This post will describe how to use models created by other UML2-compatible modeling tools from models you create using the TextUML Toolkit. I plan to cover other areas of integration in future posts.

Reading models created by other UML tools using the TextUML notation

If you have the TextUML Toolkit installed, one of the editors available for UML files is the TextUML Viewer. As the name implies, it is not really an editor, i.e., you cannot edit models with it, just visualize them using the TextUML notation. But it is a useful tool for reading UML models you don’t have the source for, i.e. models created by other UML2 modeling tools.

Using models created by other tools…

There are a few different ways of using models created by other UML modeling tools from your models created in TextUML.

…by co-location

In this method, you just drop the UML model you want to use at the root of your TextUML (i.e. MDD) project. All models at the root of the same MDD project are considered to be in the same repository, and thus you can make cross-model references by either using qualified names or importing the package and using simple names.

This method is dirt simple, but it forces you to keep the foreign model at a specific location, potentially forcing you to keep multiple copies of the model.

…with project references

In this method (available starting in version 1.2), the UML model you want to use is in a different project from the one you want to refer to it from. You just open the properties dialog for your TextUML (MDD) project and add the project providing the UML model you want to use to the list of referenced projects. Now models at the root of that project will also be visible from your TextUML (MDD) project; in other words, they are also seen as part of the same repository.

This method is more powerful than the previous one, as now models can be shared by reference instead of by copying. There is still a limitation: the models you use must be part of some project in the workspace, so you cannot refer to models stored at arbitrary locations in the file system or models that are contributed by Eclipse bundles.

…by explicitly loading them

For this method, you use the load directive to load an external model located by a URI.

This method allows you to load any model, provided its location can be represented as a resolvable URI. That is the case for any file system path, or Eclipse platform URLs. The caveat is that some URIs, such as file system URIs, are not portable, so they are bound to be invalid on other machines.

The entire truth…

…is that even if you are using only the TextUML Toolkit for building UML models, these are the same mechanisms you use to create models that make cross-package references (see also the TextUML Guide). For the TextUML Toolkit, all UML2-compatible models are equal, no matter how they were created.


TextUML Toolkit 1.2 M2 is now available

The second milestone build towards TextUML Toolkit release 1.2 is now available from the milestone update site. Get it while it is hot.

This milestone adds support for some more UML language features.

But what is really exciting is that this milestone is also the first build that includes a contribution from a new member of the TextUML Toolkit team: Massimiliano Federici provided a patch that squished bug 2127735 (revision 194). Kudos to Massimiliano; his contribution was greatly appreciated, and it seems it might just be the first of many!


Feature: primitive types

Yesterday I checked in support for UML primitive types in the TextUML Toolkit. As an example of the syntax, here is the UMLPrimitiveTypes model, shipped in Eclipse UML2, rendered by the TextUML Source viewer:

[Standard::ModelLibrary]
model UMLPrimitiveTypes;

apply Standard;

primitive Boolean;

primitive Integer;

primitive String;

primitive UnlimitedNatural;

end.

As you can see, primitive types are declared using the keyword ‘primitive’, and cannot specialize other types or have structural features (thus there is no ‘end’ terminating the type declaration). To be honest, I am not sure this is the intended semantics, as the UML specification does not seem to explicitly disallow that.

Here is the breakdown of the change sets (r188-191):

  • automated tests (r190)
  • parser and model generation (r191)
  • textual renderer (r189)
  • editor (r188)

To be frank, the only test case added was quite trivial:

public void testPrimitiveType() throws CoreException {
  String source = "";
  source += "model someModel;n";
  source += "primitive Primitive1;n";
  source += "end.";
  parseAndCheck(source);
  PrimitiveType found = (PrimitiveType) getRepository().findNamedElement(
    "someModel::Primitive1", IRepository.PACKAGE.getPrimitiveType(), null
  );
  assertNotNull(found);
}

But it is a good start. It ensures parsing works, and the model generated has the intended element created under the right parent package.

Any suggestions of what UML feature to cover next? And what is your take? Can UML primitive types have structural features or specialize other types/implement interfaces?

(By the way, if you want to understand how the TextUML Toolkit is implemented, feature posts like this are a good starting point)


Feature: data types

Just checked in a new feature in the TextUML Toolkit: support for data types (a.k.a. structs). This is an example of a data type declaration:

datatype UserName
  attribute firstName : String;
  attribute lastName : String;
end;

You can declare operations and specialize other classifiers as usual.

Change set

Here are the individual changes per plug-in (r177-r186):

There, nice and easy. Well, actually, I ended up doing a second commit because I forgot to ensure a data type specializes only other data types. But now that has been taken care of.

As usual, suggestions of UML features to support in the TextUML Toolkit are most welcome.


Slashdot: Is Open Source Software a Race To Zero?

Great discussion over @ Slashdot: Is Open Source Software a Race To Zero?

I really think the open source approach has lots of benefits, for the software itself and all parties involved. However, I would say it will probably take a decade before sound business models based on open source are really understood and start to become mainstream.

At this point in time (like most people), I still think it is considerably harder/trickier to make money developing software as open source than it is with closed source. At least for small companies. A few reasons:

  • reduced barrier to entry for new competitors, as they can easily leverage the fruits of your hard work. Even more so if you choose a more liberal license such as BSD, EPL or Apache (JBoss and MySQL use the GPL, for instance).
  • lower profit margins, if you decide to adopt a services-based business model instead of one based on selling product licenses, which is a common approach.
  • the overhead of maintaining the open source software while developing the closed source extensions or providing the related services, the very activities that will actually make money, could be unbearable.

The TextUML Toolkit has been open source (EPL) since release 1.1. The decision to make the TextUML Toolkit open source was based on the fact that I (a.k.a. Abstratt Technologies) never intended to make any money directly off of it, and wanted to attract external contributions and maybe gain some visibility for other future offerings. But I wouldn’t have done it if I had any plans of selling the TextUML Toolkit as a product on its own.

Well, I am interested in your thoughts. Do you know of cases of small companies making good money from developing and selling open source software (using liberal licenses such as the EPL)?
