
Test First Workflow – A Short Story

As a depiction of the typical approach taken when solving a problem with Test First practices in mind, below is a brief excerpt from a recent conversation with a colleague who asked how one generally goes about solving a problem using Test First methodologies. My explanation was rather simple and read somewhat like a short story, though I describe it as more of a step-by-step process from a Pair Programming perspective.

The general workflow conveyed in my description, while brief, covers the essentials:

  1. We have a problem to solve.
  2. We discuss the problem, asking questions as needed; then dig a bit deeper to ensure we understand what it is we are really trying to solve; and, most importantly, why.
  3. We consider potential solutions, identifying those most relevant, evaluating each against the problem; then agree upon one which best meets our needs.
  4. We define a placeholder test/spec where our solution will be exercised. It does nothing yet.
  5. We implement the solution in the simplest manner possible, directly within the test itself; the code is quite ugly, and that is perfectly fine for now. We run our test; it fails.
  6. We adjust our implementation, continuing to focus solely on solving the problem; all the while making sure not to become too distracted with implementation details at this point.
  7. We run our test again, it passes. We’re happy, we’ve solved the problem.
  8. We move our solution out of the test/spec to the actual method which is to be implemented, which, until now, had yet to exist.
  9. We update our test assertions/expectations against the actual SUT (system under test). We run our test; it passes.
  10. We’re happy, we have a working, tested solution; however, the implementation is substandard; this has been nagging at us all along, so we shift focus to our design; refactoring our code to a more elegant, performant solution; one which we can be proud of.
  11. We run our test again, it fails. That’s fine, perhaps even preferable, as it verifies our test is doing exactly what is expected of it; thus, we can continue to refactor in confidence.
  12. We adjust our code, continuing to make design decisions and implementation changes as needed. We run our test again, it passes.
  13. We refactor some more, now free to focus, without worry, on the soundness of our design and our implementation. We run our test again; it passes.

Rinse and Repeat…

While the above steps are representative of a typical development workflow based on Test First processes, it is worth noting that as one becomes more acclimated to such processes, certain steps often become unnecessary. For example, I generally omit Step #5 insofar as implementing the solution within the test/spec itself is concerned; rather, once I understand the problem to be solved, I determine an appropriate name for the method which is to be tested and implement the solution within the SUT itself, as opposed to the test/spec, effectively eliminating the need for Step #8. As such, the steps can be reduced to only those which experience proves most appropriate.

Concluding Thoughts

Having become such an integral part of my everyday workflow for many years now, I find it rather challenging to approach solving a problem without using Test First methodologies. In fact, attempting to solve a problem of even moderate complexity without approaching it from a testing perspective feels quite awkward.

The simple fact is, without following general Test First practices, we are just writing implementation code; and if we are just writing implementation code, then in turn we are likely not thinking through a problem in its entirety. Consequently, we are also not thinking through our solutions, and hence our designs, in their entirety. Because of this, solutions feel uncertain and ultimately leave us much less confident in the code we deliver.

Conversely, when following sound testing practices we afford our team and ourselves an unrivaled sense of confidence in terms of the specific problems we are solving, why we are solving them, and how we go about solving them; from that, we achieve a concerted understanding of the problem domain, as well as a much clearer, holistic understanding of our designs.

Preprocessing Modules with RequireJS Optimizer

RequireJS provides an effective means of bundling an application for production deployment by way of the RequireJS Optimizer, which allows for optimizing, concatenating and minifying modules via Node or Rhino.

The Optimizer is quite powerful in that fine-grained control over various aspects of the optimization process can be implemented with relative ease. One such example is the ability to specify a callback function to be invoked for each module in the build prior to optimization. This essentially provides the hooks needed to implement a preprocessor.

The onBuildWrite option

The optimize method of the RequireJS Optimizer accepts an onBuildWrite option which allows a callback to be specified. The callback is invoked prior to serialization of each module within the optimized bundle. Callbacks receive the name, path and contents of the module, and are always expected to return a value.

For example, consider a build configuration which demonstrates a basic onBuildWrite callback that simply logs the name of each module processed by the build to the console and returns the module’s content unmodified.
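A sketch of such a build profile follows; the file and module names here are illustrative, not from the original example:

```javascript
// build.js - an r.js build profile; run via: node r.js -o build.js
({
    baseUrl: '.',
    name: 'main',
    out: 'main-built.js',
    onBuildWrite: function (moduleName, path, contents) {
        // Log the name of each module processed by the build.
        console.log('module processed: ' + moduleName);
        // Return the contents unmodified.
        return contents;
    }
})
```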

Using the above configuration, when executed in Node against a (contrived) example consisting of just a single module, “ModuleA”, the callback would log the module’s name to the console.

If we were to inspect the contents of the optimized file, we would see the module’s contents written through unmodified.

With this in mind, a basic preprocessor can be implemented quite easily using the onBuildWrite option. Assume the main.js script has a token placeholder for the build version like so:
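A main.js along these lines illustrates the idea; the module body is hypothetical:

```javascript
// main.js - the "#version" token is a placeholder replaced at build time.
define(function () {
    return {
        version: '#version'
    };
});
```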

We can implement a simple preprocessor which replaces the #version token with a build date as follows:
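A sketch of such a preprocessor, with the token replacement factored into a plain function so it can be exercised outside the build; the helper name `preprocess` is hypothetical:

```javascript
// Replace every "#version" token in a module's contents with the
// current build date.
function preprocess(contents) {
    return contents.replace(/#version/g, new Date().toString());
}

// Hooked into the r.js build profile it would look something like:
// onBuildWrite: function (moduleName, path, contents) {
//     console.log(contents);           // log the original contents
//     var processed = preprocess(contents);
//     console.log(processed);          // log the processed result
//     return processed;
// }

module.exports = preprocess;
```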

The above onBuildWrite callback logs the original contents, replaces the #version token with the current date, logs the result and returns the processed content; in the build output, the token appears replaced with the build date.

As can be seen in the above examples, implementing a basic preprocessor is rather simple to accomplish with the RequireJS Optimizer. The ability to do so allows for some interesting possibilities and is certainly something worth checking out if you are leveraging AMD via RequireJS.

You can fork an example implementation at requirejs-preprocessor-example.

Practices of an Agile Developer

Of the many software engineering books I have read over the years, Practices of an Agile Developer in particular continues to be one book I find myself turning to time and time again for inspiration.

Written by two of my favorite technical authors, Andy Hunt and Venkat Subramaniam, and published as part of the Pragmatic Bookshelf, Practices of an Agile Developer provides invaluable, practical and highly inspirational solutions to the most common challenges we as software engineers face project after project.

What makes Practices of an Agile Developer truly special is the simplicity and easy-to-digest format in which it is written; readers can jump in at any chapter, or practically any page for that matter, and learn something new and useful in a matter of minutes.

While covering many of the most common subjects in software development, as well as many particularly unique ones, it is the manner in which the subjects are presented that makes the book itself quite unique. The chapters are formatted such that each provides an “Angel vs. Devil on your shoulders” perspective on its topic. This is quite useful, as one can briefly reference any topic and take away something useful by simply reading the chapter’s title and the “Angel vs. Devil” advice, and from that come to a quick understanding of the solution. Moreover, each chapter provides tips on “How it Feels” when following one of the prescribed approaches, which is powerful in that it instantly draws readers in for more detailed explanations. Complementary to this are the “Keeping Your Balance” suggestions, which provide useful insights into many of the challenges one might face when trying to apply the learnings of a particular subject, answering questions which would otherwise be left to the reader to figure out.

I first read Practices of an Agile Developer almost 4 years ago, and to this day I find myself returning to it for inspiration. A seminal text by all means, I highly recommend it as a must-read for software developers of all levels and disciplines.

Flex Mojos 3.2.0 Released

Sonatype recently released the latest version of Flex Mojos, which is now at version 3.2.0.

This latest update is a big step forward for Flex / AIR developers managing their project builds and dependencies with Maven 2, as the updates are focused on unit testing support improvements, including support for headless mode on Linux-based CI servers and, more importantly, a fix for running automated unit tests in multi-module builds, which was a big head-scratcher for me about a month ago!

Below is a list of what I feel are the most significant updates in 3.2.0:

  • Added support for SWF optimization
  • Multi-module builds now run tests correctly across projects
  • Changes to the way flex-mojos launches flash player when running test harness
  • Long-running flexunit tests no longer cause the build to fail
  • Fix for a NullPointerException during the flex-mojos:test-run goal

You can view the complete list of release notes here.

Refactoring Ant Builds with Macrodefs

Within the past few years the proliferation of Agile best practices has pushed the importance of refactoring front and center in the world of Object Oriented software design, yet for some odd reason build scripts seem to have been overlooked in this regard by many. Perhaps this is due to the risk and complexity involved in such an effort, as well as the lack of tools by which refactoring build scripts can safely be accomplished.

For instance, whereas refactoring in typical OO languages relies heavily on unit tests to ensure refactorings do not break existing code along the way, build scripts have no such safety net. Ant offers no compile-time type checking, and although build scripts are defined declaratively via XML markup, they cannot be validated, as there is no fixed DTD to validate them against. Perhaps most importantly, there are not many resources to turn to for guidance when it comes to refactoring build scripts. Most of what I have learned about the subject comes from Julian Simpson’s work in the ThoughtWorks Anthology, which I highly suggest reading for a much more exhaustive, yet comprehensive and succinct, essay on the subject. In any case, I am quite certain that all of these factors play a role in Ant scripts being overlooked with regard to refactoring.

So where do you begin?
That’s a really good question, one which I was forced to ask myself a while back when tasked with the daunting challenge of streamlining a very complex Build / CI process. At the time, I was responsible for modifying the build for a large enterprise-class Flex application which required build-time transformations of localized content, with varying modules being built for n locales depending on context-specific business rules, all of which needed to be built and deployed to multiple environments via a pre-existing CI process. Further complicating things, the builds were wrapped by nested DOS batch files and had dependencies on far more complex underlying build scripts. To make matters worse, up until that point no one, including myself, truly knew the build structure and all of its dependencies; it was very much a black box. Considering that I needed to modify the build, would be responsible for maintaining it moving forward, and had to streamline the existing scripts so they could scale to support additional applications seamlessly becoming part of the build, I was eager, to say the least, to learn the build scripts inside out before refactoring them.

The moral of the story I just bored you with is that if you have ever had to maintain a build, it probably sounds pretty familiar: you have a build script which is a black box that no one wants to deal with; it works, and that’s all that matters – until it needs to change, of course. So again, where does one begin when refactoring a build script? Well, let’s think in terms of typical OO refactoring.

Remove duplication
Perhaps the most obvious and easiest place to begin looking for refactoring candidates in an Object Oriented design is duplication; that is, to isolate and thin out common functionality so as to remove redundancy. Most Ant scripts are littered with such duplication, and should be viewed in the same manner as an Object Oriented application being refactored. In fact, the goal of refactoring is very much the same regardless of the paradigm – be it a declarative language such as Ant or an Object Oriented language such as ActionScript – to provide more efficient, maintainable and easier-to-work-with code.

I tend to think of build script design – yes, it is design – much the same as any other OO design. So just as one would strive to eliminate code duplication in an Object Oriented design, the same should apply to the design of a build script. For example, consider a build file containing targets which package a series of distributions:
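Targets along these lines are typical; the target names, properties and paths here are hypothetical:

```xml
<target name="package-src" depends="build">
    <tar destfile="${dist.dir}/${app.name}-src.tar.gz" compression="gzip">
        <tarfileset dir="${src.dir}" prefix="${app.name}"/>
    </tar>
</target>

<target name="package-docs" depends="build">
    <tar destfile="${dist.dir}/${app.name}-docs.tar.gz" compression="gzip">
        <tarfileset dir="${docs.dir}" prefix="${app.name}"/>
    </tar>
</target>

<target name="package-bin" depends="build">
    <tar destfile="${dist.dir}/${app.name}-bin.tar.gz" compression="gzip">
        <tarfileset dir="${bin.dir}" prefix="${app.name}"/>
    </tar>
</target>
```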

This kind of build script is common; however, if you were to think of it in terms of OO design, where each target is analogous to a method, you would quickly realize the code is very redundant. Moreover, the functionality provided by these targets, the packaging of distributions, is a very common task, so just as in an OO design, this functionality should be extracted into a reusable library. In Ant 1.6+ we can achieve the same kind of code reuse by extracting these common, redundant targets using macrodefs.

Use Macrodefs
In short, a macrodef, which is short for “macro definition”, is basically an extracted piece of reusable functionality in Ant that can be used across build scripts for performing common or specific tasks. Macrodefs can be thought of as a reusable API for Ant. You include macrodefs in your build scripts by importing the macrodef’s source file; this is analogous to how one would import a class.

So consider the redundant targets outlined above. Using macrodefs, we can extract these common tasks, refactoring them into a single macrodef, import the file which contains the macrodef into our build script, and then call the macrodef by wrapping it in a task.

To extract the target to a macrodef, we would first begin by creating a new XML document named after the functionality of the target; in this case we could call it “dist.xml”. This document would contain a project root node, just as any other Ant script would. We would then define a macrodef node and specify an identifier via the name attribute; this is how we reference the macrodef once it is imported into our build script.

Once we have defined the macrodef, we can add dynamic properties to its definition. These can be thought of as analogous to the arguments of a method signature. By specifying these attributes we can then assign their values whenever we invoke the macrodef. Default values can also be added if needed.

Finally, we specify the behavior of the macrodef via the sequential node; this is where the functional markup is defined. Note that we reference the properties internally using the @{property} notation, just as you normally would, except the token is prefixed with an @ sign rather than a $ sign.
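Putting those pieces together, a dist.xml along these lines captures the idea; the attribute names are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project name="dist-macrodef">
    <macrodef name="dist">
        <!-- Dynamic properties, analogous to method arguments. -->
        <attribute name="srcdir"/>
        <attribute name="destfile"/>
        <attribute name="prefix" default="${app.name}"/>
        <!-- Behavior; note the @{...} property notation. -->
        <sequential>
            <tar destfile="@{destfile}" compression="gzip">
                <tarfileset dir="@{srcdir}" prefix="@{prefix}"/>
            </tar>
        </sequential>
    </macrodef>
</project>
```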

We now have a parameterized, reusable piece of functionality which we can use across Ant builds, simplifying the build while promoting code reuse.

To use the macrodef in another Ant build we need only import it and create a target which wraps the macrodef. So we could refactor the distribution targets from the original build file example to the following:
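The importing build file could then reduce the redundant targets to thin wrappers around the macrodef; the file names and paths remain hypothetical:

```xml
<project name="app" default="package">
    <!-- Pull in the macrodef, analogous to importing a class. -->
    <import file="dist.xml"/>

    <target name="package" depends="build">
        <dist srcdir="${src.dir}"  destfile="${dist.dir}/${app.name}-src.tar.gz"/>
        <dist srcdir="${docs.dir}" destfile="${dist.dir}/${app.name}-docs.tar.gz"/>
        <dist srcdir="${bin.dir}"  destfile="${dist.dir}/${app.name}-bin.tar.gz"/>
    </target>
</project>
```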

And that’s the basics of using macrodefs to refactor an Ant build. There is a lot more which can be accomplished with macrodefs in regard to designing and refactoring Ant builds, specifically antlib, and I encourage you to give them a try, as I am sure you will be happy with the results.