
Cairngorm Abstractions: Business Delegates

In Part 1 of Cairngorm Abstractions I discussed common patterns which can be utilized in a design to simplify the implementation of concrete Cairngorm Commands and Responders. Applying such patterns helps facilitate code reuse, remove redundancy and provide a maintainable, scalable architecture.

In this post I will describe the same benefits which can be gained by defining common abstractions of Business Delegates.

Business Delegate Abstractions
A Business Delegate should provide an interface against the service which it references. This can be viewed as a one-to-one relationship in which the operations and signatures defined by a Service, be it an HTTPService, WebService, RemoteObject, DataService etc., dictate the Business Delegate’s API.

However, a rather common mistake I often find is that Business Delegates are defined in the context of the use case which invokes them, rather than the service against which they provide an interface.

Correcting this is quite simple: refactor the current implementation to follow the one-to-one relationship model between a Service and Business Delegate.

So for instance, if your application’s service layer specifies a “UserService”, your design should essentially have only one Business Delegate API for that Service. All of the operations provided by the “UserService” would be defined by an “IUserServiceDelegate” interface which would enforce the contract between the “UserService” and concrete Delegate implementations, regardless of their underlying service mechanism.

In this manner clients are coded against the abstraction (IUserServiceDelegate) and obtain references to concrete Business Delegate instances via a Delegate Factory, and as such remain completely agnostic of the underlying service implementation.

This could be implemented as follows:
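A minimal sketch of such an interface, assuming hypothetical getUser and updateUser operations on the UserService (package, operation and factory names are illustrative):

```actionscript
package com.somedomain.business
{
    import mx.rpc.IResponder;

    // Contract for all UserService delegates, regardless of the
    // underlying service implementation (RemoteObject, HTTPService, etc.)
    public interface IUserServiceDelegate
    {
        function getUser(responder:IResponder, userId:String):void;
        function updateUser(responder:IResponder, user:Object):void;
    }
}
```

A Command would then obtain a delegate via a factory, e.g. `var delegate:IUserServiceDelegate = UserServiceDelegateFactory.getDelegate();`, never knowing whether the concrete instance wraps a RemoteObject, an HTTPService or some other service type.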

Abstract Delegates
Perhaps the most common design improvement which can be made to improve the implementation and maintainability of Business Delegates is to define proper abstractions which capture the implementation common amongst all Business Delegates. In doing so you will also remove a significant amount of redundancy from your design.

For example, if you compare any two Business Delegates and find they have practically the exact same implementation, that is an obvious sign that a common abstraction should be defined.

Consider the following Business Delegate implementation:
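A sketch of such a delegate, assuming a RemoteObject-based UserService with hypothetical getUser and updateUser operations (note how every method repeats the same boilerplate):

```actionscript
package com.somedomain.business
{
    import mx.rpc.AsyncToken;
    import mx.rpc.IResponder;
    import mx.rpc.remoting.RemoteObject;

    public class UserServiceDelegate implements IUserServiceDelegate
    {
        private var service:RemoteObject;

        public function UserServiceDelegate()
        {
            service = new RemoteObject("userService");
        }

        // Each method creates a token and adds the responder to it
        public function getUser(responder:IResponder, userId:String):void
        {
            var token:AsyncToken = service.getUser(userId);
            token.addResponder(responder);
        }

        public function updateUser(responder:IResponder, user:Object):void
        {
            var token:AsyncToken = service.updateUser(user);
            token.addResponder(responder);
        }
    }
}
```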

The above example may look familiar, and with just a bit of thought as to its design it becomes apparent that there is quite a bit of redundancy, as every method essentially contains the same implementation code. That is, an AsyncToken is created referencing the operation to invoke against the service, and a reference to the responder is added to the token.

The overall design would benefit much more by refactoring the commonality implemented across all Business Delegate methods into an abstraction, which in its simplest form could be defined as follows:
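A sketch of such a base class for RemoteObject-based delegates (class and method names are illustrative):

```actionscript
package com.somedomain.business
{
    import mx.rpc.AsyncToken;
    import mx.rpc.IResponder;
    import mx.rpc.remoting.RemoteObject;

    // Encapsulates the token / responder boilerplate common
    // to all RemoteObject based Business Delegates
    public class AbstractServiceDelegate
    {
        protected var service:RemoteObject;

        protected function createCall(responder:IResponder,
                                      operation:String,
                                      ...args):AsyncToken
        {
            // invoke the named operation with the supplied arguments
            var token:AsyncToken =
                service.getOperation(operation).send.apply(null, args);
            token.addResponder(responder);
            return token;
        }
    }
}
```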

By defining a basic abstraction, the original implementation could then be refactored to the following:
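The refactored delegate might then look like this, assuming the hypothetical getUser/updateUser operations and an AbstractServiceDelegate base class providing a createCall helper:

```actionscript
package com.somedomain.business
{
    import mx.rpc.IResponder;
    import mx.rpc.remoting.RemoteObject;

    public class UserServiceDelegate extends AbstractServiceDelegate
                                     implements IUserServiceDelegate
    {
        public function UserServiceDelegate()
        {
            service = new RemoteObject("userService");
        }

        // Each method is now a one-liner; the token / responder
        // boilerplate lives in the abstraction
        public function getUser(responder:IResponder, userId:String):void
        {
            createCall(responder, "getUser", userId);
        }

        public function updateUser(responder:IResponder, user:Object):void
        {
            createCall(responder, "updateUser", user);
        }
    }
}
```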

The same basic abstractions could easily be defined for HTTPService, WebService and DataService specific Business Delegates (in fact I have a library of Cairngorm extensions which provides them; planning on releasing these soon). Pulling up common implementation code to higher level abstract types also simplifies writing tests against concrete Business Delegates as the abstraction itself would need only to be tested once.

There are many more Business Delegate abstractions I would recommend in addition to those outlined here, in particular configuring Delegate Factories via an IoC Container such as SAS. However, I would first suggest taking a good look at your current design before adding additional layers of abstraction; the most appropriate place to start is to define abstractions which encapsulate commonality, promote reuse and remove redundancy.

Refactoring Ant Builds with Macrodefs

Within the past few years the proliferation of Agile best practices has pushed the importance of refactoring front and center in the world of Object Oriented software design, yet for some odd reason build scripts seem to have been overlooked in this regard by many. Perhaps this is due to the risk and complexity involved in such an effort, as well as the lack of tools by which build scripts can safely be refactored.

For instance, whereas refactoring in typical OO languages relies heavily on Unit Tests to ensure refactorings do not break existing code along the way, build scripts have no such safety net. Ant is statically typed, yet it doesn’t provide compile-time type checking; additionally, build scripts are defined declaratively via XML mark-up, yet they cannot be validated as there is no fixed DTD to validate them against. Perhaps most importantly, there are not many resources to turn to for guidance when it comes to refactoring build scripts. Most of what I have learned about the subject comes from Julian Simpson’s work in the ThoughtWorks Anthology, which I highly suggest reading for a much more exhaustive, yet succinct essay on the subject. In any case, I am quite certain all of these factors play a role in Ant scripts being overlooked with regard to refactoring.

So where do you begin?
That’s a really good question, one I was forced to ask myself awhile back when tasked with the daunting challenge of streamlining a very complex Build / CI process. At the time, I was responsible for modifying the build of a large enterprise-class Flex application which required build-time transformations of localized content, with varying modules being built for n-locales depending on context-specific business rules, all of which needed to be built and deployed to multiple environments via a pre-existing CI process. Further complicating things, the builds were wrapped by nested DOS batch files and had dependencies on far more complex underlying build scripts. To make matters worse, up until that point no one, myself included, truly knew the build structure and all of its dependencies; it was very much a black box. Considering that I needed to modify the builds, would be responsible for maintaining them moving forward, and had to streamline the existing scripts so additional applications could seamlessly become part of the build, I was, to say the least, eager to learn the build scripts inside out before refactoring and maintaining them.

The moral to the story I just bored you with above is that if you have ever had to maintain a build before, this story probably sounds pretty familiar: you have a Build Script which is a black box that no one wants to deal with; it works, and that’s all that matters – until it needs to change, of course. So again, where does one begin when refactoring a Build Script? Well, let’s think in terms of typical OO refactoring.

Remove duplication
Perhaps the most obvious and easiest place to begin looking for refactoring candidates in an Object Oriented design is duplication; that is, to isolate and thin out common functionality so as to remove redundancy. Most Ant scripts are littered with such duplication, and as such should be viewed in the same manner as one would view an Object Oriented application being refactored. In fact, the goal of refactoring is very much the same regardless of the paradigm – be it a declarative language such as Ant or an Object Oriented language such as ActionScript – to provide more efficient, maintainable and easier to work with code.

I tend to think of Build Script design – yes, it is design – much the same as any other OO design. So just as one would strive to eliminate code duplication in an Object Oriented Design, the same should apply to the design of a Build Script. For example, consider the following build target which packages a series of distributions:
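A sketch of what such a build file might contain; target names, paths and the choice of the tar task are illustrative. Note that each target repeats the same packaging boilerplate:

```xml
<target name="package-flex" depends="build">
    <tar destfile="${dist.dir}/flex-app.tar.gz" compression="gzip">
        <tarfileset dir="${build.dir}/flex-app" prefix="flex-app"/>
    </tar>
</target>

<target name="package-air" depends="build">
    <tar destfile="${dist.dir}/air-app.tar.gz" compression="gzip">
        <tarfileset dir="${build.dir}/air-app" prefix="air-app"/>
    </tar>
</target>

<target name="package-docs" depends="docs">
    <tar destfile="${dist.dir}/docs.tar.gz" compression="gzip">
        <tarfileset dir="${build.dir}/docs" prefix="docs"/>
    </tar>
</target>
```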

This kind of Build Script is common; however, if you think of it in terms of OO design, where each target is analogous to a method, you quickly realize the code is very redundant. Moreover, the functionality provided by these targets – the packaging of distributions – is a very common task, so just as in an OO design this functionality should be extracted into a reusable library. In Ant 1.6+ we can achieve the same kind of code reuse by extracting these common, redundant targets using Macrodefs.

Use Macrodefs
In short, a Macrodef, which is short for “macro definition”, is basically an extracted piece of reusable functionality in Ant that can be used across Build Scripts for performing common or specific tasks. Macrodefs can be thought of as a reusable API for Ant. You include macrodefs in your build scripts by importing the macrodef source file, much as you would import a class.

So consider the redundant targets outlined above. Using macrodefs we can extract these common tasks, refactoring them into a single macrodef, import the file which contains the macrodef into our build script and then call the macrodef by wrapping it in a task.

To extract the target to a Macrodef we would first begin by creating a new XML document named after the functionality of the target; in this case we could call it “dist.xml”. This document would contain a project root node just as any other Ant script would. We would then define a macrodef node and specify an identifier via the name attribute; this is how we reference the macrodef once it is imported into our build script.

Once we have defined the macrodef we can add dynamic properties to its definition. This could be thought of as being analogous to the arguments of a method signature. By specifying these attributes we can then assign their values whenever we invoke the macrodef. Default values can also be added if needed.

Finally, we specify the behavior of the macrodef via the sequential node. This is where the functional markup is defined. Note that properties are referenced internally using the @{property} notation, just as you normally would, however the token is prefixed with an @ sign rather than a $ sign.
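Putting the steps above together, a hypothetical “dist.xml” might look like this (macrodef name, attributes and the tar task are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- dist.xml : reusable "dist" macrodef -->
<project name="dist">
    <macrodef name="dist">
        <!-- attributes are analogous to method arguments -->
        <attribute name="src"/>
        <attribute name="destfile"/>
        <attribute name="prefix" default=""/>
        <sequential>
            <!-- note @{...} rather than ${...} for macrodef attributes -->
            <tar destfile="@{destfile}" compression="gzip">
                <tarfileset dir="@{src}" prefix="@{prefix}"/>
            </tar>
        </sequential>
    </macrodef>
</project>
```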

We now have a parametrized, reusable piece of functionality which we can use across Ant builds, thus simplifying each build while promoting code reuse.

To use the macrodef in another Ant Build we need only import it and create a target which wraps the macrodef. So we could refactor the distribution targets from the original Build file example to the following:
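Assuming the hypothetical “dist” macrodef sketched earlier, the three redundant packaging targets collapse into a single target:

```xml
<import file="dist.xml"/>

<target name="package" depends="build">
    <dist src="${build.dir}/flex-app" destfile="${dist.dir}/flex-app.tar.gz" prefix="flex-app"/>
    <dist src="${build.dir}/air-app"  destfile="${dist.dir}/air-app.tar.gz"  prefix="air-app"/>
    <dist src="${build.dir}/docs"     destfile="${dist.dir}/docs.tar.gz"     prefix="docs"/>
</target>
```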

And that’s the basics of using macrodefs to refactor an Ant Build. There is a lot more which can be accomplished with macrodefs in regards to designing and refactoring Ant Builds, specifically antlib, and I encourage you to give it a try as I am sure you will be happy with the results.

What makes a good design?

One of my core job responsibilities for the past several years has been to conduct technical design and implementation (code) reviews during various phases of the software development life cycle. This is typically a highly collaborative process in which an individual engineer and I, or the team as a whole, begin by performing a detailed analysis of business requirements in order to gain an initial understanding of the specific component(s) being developed. Once an understanding of the requirements has been reached, a brainstorming session ensues which ultimately leads to various creative, technical solutions. After discussing the pros and cons of each, the best solutions quickly begin to reveal themselves, at which point it is simply a process of elimination until the most appropriate solution has surfaced.

The next step is to translate the requirements into the proposed technical solution in the form of a design document. The design is specified at a high level and is only intended to provide an overview of the technical road map which is to be implemented. This typically consists of higher level UML Sequence and Class diagrams, either in the form of actual diagrams produced in a UML editor, or simply a picture of UML drawn out during a whiteboarding session. The formality of the documented design is less important; what is important is that the design is captured in some form before it is implemented. Implementation-specific details such as exact class and method signatures are intentionally left out, as they are considered outside the scope of the design. See Let Design Guide, not Dictate for more on this subject. Once the design is documented it is reviewed and changes are made if needed. This process is repeated until all business and technical requirements have been satisfied, at which point the “all clear” is given to move forward with implementing the design.

But what exactly constitutes a good design? How does one determine a good design from a bad one? In reality it could vary significantly based on a number of factors, however in my experience I have found a design can almost always be judged according to three fundamental criteria: Correctness, Cohesion / Coupling and Scalability. For the most part everything falls into one of these three categories. Below is a brief description of the specific design questions each category sets out to answer.

  • Correctness
    Does the design solve the problems described in the requirements and discussed by the team? This is Correctness in the form of satisfying business requirements. Are the patterns implemented in the design appropriate, or are additional patterns being used just for the sake of using the pattern? This is Correctness in the form of technical requirements. A good design is well focused and only strives to provide a solution which meets the requirements specified by the business owners, client etc; it does not attempt to be overly clever.
  • Cohesion / Coupling
    Has a highly cohesive, loosely coupled design been achieved? Have the classes, interfaces and APIs been logically organized? Does each provide a specific, well-defined set of functionality? Is composition used over inheritance where applicable? Has related functionality been properly abstracted? Does changing this break that, does adding that break this, etc.
  • Scalability
    Does the solution scale well? Is it flexible? A good design strives to facilitate change with confidence, and with as little risk as possible. A good design also achieves transparency at some level in the areas where it is most applicable.

The concepts outlined above are crucial to achieving a good design, however they are often overlooked or misunderstood to some degree. Throughout the years I have begun to recognize some commonality in the design mistakes I find in Object Oriented designs in general, and within Flex projects in particular. Many of these can be attributed to violations of basic MVC principles, but most commonly the mistakes appear to be a negation of Separation of Concerns (SoC).

There are close relationships between Correctness, Cohesion / Coupling and Scalability, each of which plays a very significant role in the resulting design as a whole.

So let’s start with Correctness, which is by far the single most important facet of design, for if the design does not provide a solution which satisfies the specified requirements then it has failed – all other aspects of the design are, for the most part, details.

It is important to understand that Correctness has a dependency on Flexibility. As architects and developers, our understanding of the problem domain is constantly evolving as we gain experience in the domain, and since requirements may change significantly while a product is being developed, our designs must be able to adapt to these changes as well. Although this poses some challenges, it is wrong to suggest that requirements need to be locked down completely before the design phase begins; rather, requirements need only be clearly defined to the extent that the designer is aware of what is required at that point in time and how it fits into the “big picture”. A competent designer understands this well and makes careful considerations before committing to any design decisions. This is where the importance of Flexibility comes into play: in order for a design to be conceptually and technically correct it needs to be flexible enough to support change. This is why good design is so important – to easily facilitate change – and the flexibility to allow change should be evident throughout the design. A good example might be where the middle-tier has not decided which service layer implementation will be used (e.g. XML:80, WSDL, REST etc.), or the Information Architects have not decided what the constraints of each user role will be. A good design should be flexible enough to allow for changes such as these with confidence and, more importantly, little risk to other parts of the application; after all, you shouldn’t have to tear down the house just to renovate the bathroom. In addition to Correctness and Scalability, this is where Cohesion and Coupling come into play.

High Cohesion is vital to achieving a good design as it ensures related functionality and responsibilities are logically grouped together, encapsulated and abstracted. We have all seen the dreaded, all-encompassing class which assumes multiple responsibilities; classes such as these have low cohesion and are a sign of future challenges if not addressed immediately. At a higher level, a lack of cohesion is easy to notice when a single class comprises an entire API; quite often, however, low cohesion within classes is a bit more subtle, and only a code review will reveal the areas where it exists.

For example, consider the following Logging facility which is intended to provide a very simple logging implementation:
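A sketch of the kind of Logger in question (package and method names are illustrative); note that the class formats time stamps itself rather than delegating that concern:

```actionscript
package com.somedomain.someproject.logging
{
    public class Logger
    {
        public function log(message:String):void
        {
            trace("[" + timeStamp() + "] " + message);
        }

        // Date formatting is not a logging concern - low cohesion
        private function timeStamp():String
        {
            var now:Date = new Date();
            return pad(now.getHours()) + ":" +
                   pad(now.getMinutes()) + ":" +
                   pad(now.getSeconds());
        }

        private function pad(value:int):String
        {
            return value < 10 ? "0" + value : String(value);
        }
    }
}
```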

The above example is a classic case of low cohesion, and I see this kind of thing all the time. The problem is that the Logger class assumes the responsibility of creating and formatting a time stamp, functionality which is outside the responsibilities of a Logging API. The creation and formatting of a time stamp is not a concern of the Logger, but rather the responsibility of a separate date formatting utility whose sole purpose is to provide an API for formatting Date objects. Moving the Date formatting functionality out of the Logging API and into such a class would facilitate code reuse across many APIs, reduce redundancy and duplicated testing, and allow the Logger class to define only operations which are directly related to logging. A good design must achieve high cohesion if it is to be successful.

Coupling is just as essential in determining a good design. A good way to think of coupling is like this: think back to when you were a kid playing with blocks – you could easily take any number of different blocks and rearrange them to build whatever you liked; that’s loose coupling. Now compare that to a jigsaw puzzle, where the pieces only fit together in a very specific way; that’s tight coupling. A good design strives for loosely coupled APIs in order to facilitate change as well as reuse. A classic, yet less commonly mentioned, example of tight coupling is in the packaging of APIs: often designers will achieve loosely coupled APIs, yet the APIs themselves remain tightly coupled to the application namespace.

Consider the Logging API example from above, and note that the API is defined under the package com.somedomain.someproject.logging. Even if the example were refactored to achieve high cohesion it would still be tightly coupled to the project-specific namespace. This is a bad design, as in the event another product should need to use the Logging API it would first need to be refactored to a common namespace. A better design would be to define the Logging API under the less specific namespace of com.somedomain.logging. This is important as the Logging facility itself should be generic enough to be used across multiple projects. Something as simple as proper packaging of generic and specific components plays a key role in a good design. The following refactoring achieves both high cohesion and loose coupling:
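A sketch of the refactored Logger under the generic namespace, delegating time stamp creation to a hypothetical DateUtil formatting utility:

```actionscript
// com/somedomain/logging/Logger.as - generic, project agnostic namespace
package com.somedomain.logging
{
    import com.somedomain.utils.DateUtil; // hypothetical formatting utility

    public class Logger
    {
        public function log(message:String):void
        {
            // time stamp creation and formatting is now the
            // responsibility of the date utility, not the Logger
            trace("[" + DateUtil.format(new Date()) + "] " + message);
        }
    }
}
```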

As with all design, technical design is subjective. Architects and Engineers can spend an infinite amount of time debating the various points of design. In my experience it really comes down to organization and efficiency, that is, organization of responsibilities and concerns, and the efficiency of their implementation both individually and as a whole.

It may sound cliché, however before you begin a new design, or review an existing one, consider the following quote – it pretty much sums up what good design is:

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
– Antoine de Saint-Exupery

Continuous Integration with Hudson

Continuous Integration is a fundamental Agile development practice in which members of a team integrate their changes on a regular basis, ideally multiple times per day. The integration process itself is facilitated by an automated integration build which is triggered at a specific interval to check for new commits to the central repository, or mainline. This allows changes which could potentially break the build to be detected as quickly as possible; it is typically easier to fix these errors sooner rather than later, resulting in significantly fewer integration issues, especially on large, collaborative projects where multiple members of a team develop against the same codebase.

Continuous Integration does not necessarily require any specific tooling, however it is very common to incorporate a build management tool in order to automate the builds. The most common tool of choice to facilitate automated builds is Cruise Control. I have been using Cruise Control for years to automate Flex builds, as it contains everything one typically needs for automating an integration, staging and production build process. Setting up Cruise Control and modifying the configuration to add new projects is a straightforward process, however I have always felt it to be a bit tedious.

Considering Cruise Control is an industry standard, as well as the fact that it provides pretty much everything I have ever needed for automating enterprise build processes, I never had a compelling reason to investigate the other tools out there. Then last week I received an email from our engineering group’s build manager stating that a different build management tool called Hudson was being considered. Initially I questioned why Hudson was being considered over Cruise Control, so I decided to do a little investigating to see for myself: first by reading the documentation, then by running the simple test drive available at the Hudson site, which you can use to get an idea of how the tool works.

Downloading Hudson is straightforward, and best of all, installation is a breeze; simply deploy the web archive (hudson.war) to an existing servlet container (I am using Tomcat 6.0.14) and you’re ready to go.

Once deployed you can go to the Hudson console, typically located at http://<host>:<port>/hudson/ (e.g. http://somedomain:8400/hudson/). The console has all of the features you might expect, as well as some other useful tools. From the console you can easily configure Hudson, create new projects, monitor build progress, view build logs, kick off a build, schedule builds and so forth. Setting up a project is one of the things I really like about Hudson, as opposed to configuring a project in Cruise Control, as Hudson provides a super simple GUI for creating new projects. This is one of its main attractions in my opinion, as new builds can be configured in next to no time at all. The Dashboard and UI in general are very intuitive and easy to use, and if you’re like me and still like to have the ability to look under the hood and modify the configurations, you have that level of control as well. Kicking off a build manually or scheduling builds is just as easy.

Overall I have to say that I am a fan of Hudson. Admittedly I just started using it, so I am certain there is much more to learn about the tool; however, if the rest of the functionality is as easy to configure as the basic features then this is a sure sell. I estimate it took me approximately 10 minutes total to download, install, configure a project and run a build (successfully, I might add).

Some key features:

  • Master / Slave builds
  • Distributed Builds
  • GUI based project configuration
  • E-mail / AIM / RSS build notifications
  • Plug-ins / extensibility support
  • Simple Installation
  • Console Auto-refresh

So if you are interested in streamlining your Continuous Integration process I recommend taking a quick test drive of Hudson.

Flex Unit TestSuiteHelper

Test Driven Development is an essential part of any software project. Leveraging Flex Unit in your Flex and AIR projects is a well-known best practice and will inevitably result in fewer unknowns and a much more reliable codebase; however, like most things in life that provide value, unit testing comes at a cost: time.

In general, writing unit tests is often an overlooked process as it ultimately equates to an additional level of effort (time) in a project plan. However, writing unit tests should not be overlooked, for a number of reasons; most importantly, in my opinion, because it forces developers to understand the problems the code they write is intended to solve, thus resulting in a better understanding of the problem domain.

Now without getting into a lengthy discussion about the benefits of Test Driven Development, as there is plenty of information available on the subject, I am going to focus on how to make it easier in Flex.

Typically when I write a unit test I stub it out and then run the test runner to make sure it fails. I usually do this immediately after writing the test, only to find that my test did not come up because I never added it to the test suite. Obviously something like adding tests to a test suite could be an automated process, which would allow unit tests to be developed a little faster, as all that would be needed is to write the test.

So in an effort to simplify the process of adding tests to a test suite I have developed a TestSuiteHelper: a simple utility class that automates adding unit tests to a test suite. Rather than manually adding each test, the TestSuiteHelper automates the process; simply invoke a static method, passing in the test class from which to extract the tests.
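A sketch of how such a helper might be implemented; package and class names are illustrative, and the classic FlexUnit TestCase constructor, which accepts a test method name, is assumed:

```actionscript
package com.somedomain.test
{
    import flash.utils.describeType;
    import flexunit.framework.TestSuite;

    public class TestSuiteHelper
    {
        // Reflects on the given TestCase class and adds every public
        // method whose name begins with "test" to a new TestSuite
        public static function createSuite(testClass:Class):TestSuite
        {
            var suite:TestSuite = new TestSuite();
            var info:XML = describeType(testClass);

            // <factory> describes instance members of the class
            for each (var method:XML in info.factory.method)
            {
                var name:String = method.@name;
                if (name.indexOf("test") == 0)
                {
                    suite.addTest(new testClass(name));
                }
            }
            return suite;
        }
    }
}
```

A test runner would then simply call `TestSuiteHelper.createSuite(MyComponentTest)` for each test class, and newly written test methods are picked up automatically.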

So if you would like to simplify your team’s unit testing workflow feel free to utilize the TestSuiteHelper.