We just released a new version of Cloudfier which sports a long-requested feature (in experimental status): reverse engineering a relational database schema as a Cloudfier application. It relies on offline database schema snapshots produced by the SchemaCrawler tool.
Steps
- Create a new folder (call it, say, ‘import-test’)
- Select the folder you just created, right click it, and choose Import > File or Zip Archive, then pick a database snapshot file created using SchemaCrawler on your computer (feel free to download this example snapshot). When asked whether the file should be unzipped, CHOOSE “NO”.
- Select the folder you just created, right click it, and choose Open Related > Shell
- type
cloudfier init-project .
- type
cloudfier app-deploy .
so the contents of the project are published
- type
cloudfier import-schema-crawler . offline.db_.zip
to import the SchemaCrawler snapshot as a TextUML model (provide the proper file name if your snapshot file has a different name)
- if you used the sample snapshot, delete the forlint.tuml file before you take the next step
- type
cloudfier full-deploy .
to deploy the application (the full command sequence is recapped below).
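For reference, here is the whole shell session in one place. This is only a sketch: it assumes the sample snapshot file name offline.db_.zip, so substitute your own file name where appropriate.

# initialize a Cloudfier project in the current folder
cloudfier init-project .
# publish the contents of the project
cloudfier app-deploy .
# import the SchemaCrawler snapshot as a TextUML model
cloudfier import-schema-crawler . offline.db_.zip
# (if you used the sample snapshot, delete forlint.tuml at this point)
# deploy the application
cloudfier full-deploy .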
Producing an offline database schema with SchemaCrawler
- Download and extract SchemaCrawler onto your computer.
- Open a terminal or command prompt and cd into the _schemacrawler directory inside the location where you extracted SchemaCrawler.
- Run the SchemaCrawler “serialize” command against your database, for instance (for a SQL Server database):
./schemacrawler.sh -command=serialize -server=sqlserver -infolevel=standard -password=DB_PASSWORD -user=DB_USER -database=DB_NAME -host=DB_HOST -o=my-offline-schema.zip
More details on running SchemaCrawler here.
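SchemaCrawler supports other database servers via the -server option. As a sketch, for a PostgreSQL database the same serialize command would look roughly like this (the connection details are placeholders to be replaced with your own):

./schemacrawler.sh -command=serialize -server=postgresql -infolevel=standard -password=DB_PASSWORD -user=DB_USER -database=DB_NAME -host=DB_HOST -o=my-offline-schema.zip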
Controlling the schema import operation
There are a bunch of options that you can add to your mdd.properties file to customize the import operation, some of which are demonstrated in this test class.
Sualeh Fatehi
July 28, 2016 at 10:57pm
Thank you for using SchemaCrawler creatively. Please make sure that the snapshots are made with the same version of SchemaCrawler as you use to produce the TUML files. Otherwise, Cloudfier may not be able to read the snapshot in order to create the TUML files.
Rafael Chaves
July 28, 2016 at 11:06pm
Thanks for making SchemaCrawler available, Sualeh. From what I could see, it looks solid! Before I knew about SchemaCrawler, I was considering using straight DatabaseMetadata, but SchemaCrawler’s API is so much nicer and easier to use.
Is there any backward compatibility between snapshots across SchemaCrawler versions? Or is using the exact same version for creating and reading snapshots the only way to go?
Sualeh Fatehi
August 11, 2016 at 10:46pm
Rafael, I am so glad you like it. Another way to go would be to serialize the catalog object in some abstract way, such as JSON. SchemaCrawler generates JSON natively. You would then have to write your own code to deserialize the JSON, and create TUML files.