Testing code generation templates – brainstorming

We would like to support automated testing of templates in AlphaSimple projects. I have been “test-infected” for most of my career, and the idea of writing code generation templates that are verified manually screams “unsustainable” to me. We need a cheap and easily repeatable way of ensuring code generation templates produce what they intend to produce.

Back-of-a-napkin design for code generation testing:

  1. by convention, for each test case, declare two transformations: one hardcodes the expected results, and the other triggers the transformation under test with some set of parameters (typically, an element of a model). We can pair transformations based on their names: “expected_foo” and “actual_foo” for a test case named “foo” (see the sketch after this list)
  2. if the results are identical, the test passes; otherwise, the test fails (optionally, use a warning for cases where the only differences are in layout, i.e., non-significant characters like spaces/newlines; this has to be optional because people generating Python code will care about layout)
  3. just as we do for model test failures, report template test failures as build errors
  4. run template tests after model tests, and only if those pass
  5. (cherry on top) report text differences in a sane way (there are libraries out there that can do text diffing)
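
As a rough illustration of items 1 and 2, here is a minimal JUnit-style sketch of what such a harness could look like; the TemplateRunner interface and the helper names are hypothetical, not part of AlphaSimple:

    import static org.junit.Assert.assertEquals;

    // Hypothetical support class; TemplateRunner stands in for whatever actually
    // executes a named transformation against a model element in the project.
    public class TemplateTestSupport {

        public interface TemplateRunner {
            String run(String transformationName, Object modelElement);
        }

        // Collapses insignificant whitespace so layout-only differences are tolerated
        // (see item 2); callers that do care about layout can skip normalization.
        public static String normalizeLayout(String text) {
            return text.trim().replaceAll("\\s+", " ");
        }

        // For a test case named "foo", runs "expected_foo" and "actual_foo" against
        // the same model element and compares the results (see item 1).
        public static void assertTemplate(TemplateRunner runner, String testCase, Object modelElement) {
            String expected = runner.run("expected_" + testCase, modelElement);
            String actual = runner.run("actual_" + testCase, modelElement);
            assertEquals(normalizeLayout(expected), normalizeLayout(actual));
        }
    }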

Does that make sense? Any suggestions/comments (simpler is better)? Have you done or seen anything similar?


5 thoughts on “Testing code generation templates – brainstorming”

  1. Aaron Digulla

    August 2, 2011 at 6:24am

    Some comments:

    I keep the expected results in files in the project. This has two advantages:

    1. Those files are usually pretty large (> 10 lines), and keeping them inline in the test code would make the source unreadable.

    2. I don’t have to escape the test results, which makes them easier to read in the “default editor”.

    The main drawback is that JUnit has no notion of “compare String to file content”. My solution is a readText() method that takes a file name and optionally an encoding (UTF-8 is the default), and then I use:

    File expectedFile = new File("expected/….");
    File actualFile = new File("tmp/….");
    String expected = readText(expectedFile);
    String actual = readText(actualFile);

    if (!expected.equals(actual)) {
        assertEquals(
            expectedFile + "\n" + expected,
            actualFile + "\n" + actual
        );
    }

    This way, I have the file names in the test comparison.

    All tests write their output to actualFile so I can open those in a “default editor”, too.
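
    A minimal sketch of such a readText() helper, assuming plain Java I/O (the class name and the exact implementation are guesses, not the code described above):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.Reader;

    public class TestFileUtils {

        // Reads a whole file into a String; UTF-8 is the default encoding.
        public static String readText(File file) throws IOException {
            return readText(file, "UTF-8");
        }

        public static String readText(File file, String encoding) throws IOException {
            Reader reader = new InputStreamReader(new FileInputStream(file), encoding);
            try {
                StringBuilder buffer = new StringBuilder();
                char[] chunk = new char[4096];
                for (int read = reader.read(chunk); read != -1; read = reader.read(chunk)) {
                    buffer.append(chunk, 0, read);
                }
                return buffer.toString();
            } finally {
                reader.close();
            }
        }
    }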

    I haven’t yet figured out a way to automatically test with some parameters; that depends on how “suitable” your model elements are for automatic generation in tests.

    To solve this for “cumbersome” model elements, I have test data factories which create suitable complex model mocks for me.
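
    For example, a factory along these lines (the model classes below are placeholders invented to show the shape, not a real metamodel API):

    // Hypothetical factory that assembles a small but complete model element,
    // so tests don't have to wire up cumbersome model mocks by hand.
    public class ModelTestFactory {

        // Placeholder model types; in a real project these would come from the metamodel API.
        public static class ModelClass {
            public final String name;
            public final java.util.List<Operation> operations = new java.util.ArrayList<Operation>();
            public ModelClass(String name) { this.name = name; }
        }

        public static class Operation {
            public final String name;
            public final String returnType;
            public Operation(String name, String returnType) {
                this.name = name;
                this.returnType = returnType;
            }
        }

        // Creates a class-like element with a couple of operations, enough to
        // exercise a typical code generation template.
        public static ModelClass simpleClassWithOperations(String className) {
            ModelClass element = new ModelClass(className);
            element.operations.add(new Operation("getName", "String"));
            element.operations.add(new Operation("setName", "void"));
            return element;
        }
    }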

  2. Kalle Launiala

    August 2, 2011 at 2:20pm

    Hi,

    Let me offer an “alteration of thought” for starters. In structured-generator scenarios, generators may be refactored intensively, and you don’t actually care about producing the same output.

    With such dynamic generation methodologies, what you care about testing is the final code that is the full result of the generation.

    I’m currently in the process of demonstrating software definition requirements through design-level abstraction.

    The funky stuff happens with performance-level requirements; I am also demonstrating a generic software-development building block, the “operation”, which has parameters and a result value (both optional).

    Properly generated, any software can be built from such blocks.

    Now, combined with performance-level requirements, some higher-level operations are, in a “test build”, injected with assertions that watch the required performance against the actual performance.

    On the other hand, this also provides a means of recording the actual data that goes through the operation chain.
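
    Roughly, an injected wrapper could look something like this (the service, the recorder, and the 50 ms budget are all invented for illustration):

    // In a "test build", the generator emits a wrapper around a higher-level
    // operation that checks required vs. actual performance and records the data
    // flowing through it. All names and the 50 ms budget below are made up.
    public class GeneratedOrderServiceWrapper {

        public interface OrderService { Receipt placeOrder(Order order); }
        public interface TestRecorder { void record(String operation, Object input, Object output); }
        public static class Order {}
        public static class Receipt {}

        private final OrderService delegate;
        private final TestRecorder recorder;

        public GeneratedOrderServiceWrapper(OrderService delegate, TestRecorder recorder) {
            this.delegate = delegate;
            this.recorder = recorder;
        }

        public Receipt placeOrder(Order order) {
            long start = System.nanoTime();
            Receipt result = delegate.placeOrder(order);
            long elapsedMillis = (System.nanoTime() - start) / 1000000;
            // Injected assertion: watchdog of required performance against actual performance.
            assert elapsedMillis <= 50 : "placeOrder exceeded its 50 ms budget: " + elapsedMillis + " ms";
            // Injected recording of the actual data going through the operation chain.
            recorder.record("placeOrder", order, result);
            return result;
        }
    }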

    If your architecture allows it and you have a decent amount of test data (which you can simply record from the “test recorder injected” generated code), or in some architectures (such as event-sourced ones), you can simply rerun all the cases through the altered generators.

    This of course relies mostly on full-chain tests (in from the service layer, assertions on what goes to the database), but if you get those tests “almost for free”, it’s a good start.

    Now, I haven’t looked at your generation model, but as you’re doing heavy generation already, I assume you can do the same kind of injection as we can.

    Sorry for dodging the question to offer an alternative solution, but in addition to testing the generations, if you can also get a relatively cheap way of testing the “final application”, it helps prove that the generators work as intended.

    Cheers,

    Kalle Launiala

  3. Walter Mourão

    August 4, 2011 at 9:30am

    Hi Rafael.
    Andromda works in a similar way… the cartridges are tested by running a project and comparing the generated files against a set of existing files.

    Cheers,

    Walter
