We would like to support automated testing of templates in AlphaSimple projects. I have been “test-infected” for most of my career, and the idea of writing code generation templates that are verified manually screams “unsustainable” to me. We need a cheap and easily repeatable way of ensuring code generation templates produce what they intend to produce.
Back-of-a-napkin design for code generation testing:
- by convention, for each test case, declare two transformations: one hardcodes the expected results, and the other invokes the transformation under test with some set of parameters (typically an element of a model). Transformations are paired by name: “expected_foo” and “actual_foo” form a test case named “foo” (see the pairing sketch after this list)
- if the two results are identical, the test passes; otherwise, it fails (optionally, downgrade the failure to a warning when the only differences are in layout, i.e. non-significant characters such as spaces and newlines; optional because people generating Python code will care about layout)
- just as we do for model test failures, report template test failures as build errors
- run template tests after model tests, and only if those pass
- (cherry on top) report text differences in a sane way (there are libraries that do text diffing; see the diff sketch below)
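
To make the pairing and comparison ideas concrete, here is a minimal sketch, not AlphaSimple’s actual API: `run_transformation` is a hypothetical callable that executes a named transformation and returns the generated text, and the pass/warn/fail logic follows the rules above.

```python
# Sketch only: run_transformation is a hypothetical hook into the template engine.
import re


def find_test_cases(transformation_names):
    """Pair 'expected_foo' / 'actual_foo' transformations into test cases named 'foo'."""
    expected = {n[len("expected_"):] for n in transformation_names if n.startswith("expected_")}
    actual = {n[len("actual_"):] for n in transformation_names if n.startswith("actual_")}
    return sorted(expected & actual)


def normalize_layout(text):
    """Collapse runs of whitespace so layout-only differences are ignored."""
    return re.sub(r"\s+", " ", text).strip()


def run_test_case(case, run_transformation, ignore_layout=False):
    expected = run_transformation("expected_" + case)
    actual = run_transformation("actual_" + case)
    if expected == actual:
        return "pass"
    if ignore_layout and normalize_layout(expected) == normalize_layout(actual):
        return "warn"   # layout-only difference
    return "fail"       # reported as a build error, like model test failures
```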
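
For the diffing part, Python’s standard difflib is one example of a library that already does this; assuming the same expected/actual strings as above, a failing case could be reported as a unified diff:

```python
import difflib


def report_difference(case, expected, actual):
    """Render a unified diff of expected vs. actual output for a failing test case."""
    diff = difflib.unified_diff(
        expected.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="expected_" + case,
        tofile="actual_" + case,
    )
    return "".join(diff)
```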
Does that make sense? Any suggestions/comments (simpler is better)? Have you done or seen anything similar?