Flex Mojos 3.2.0 Released

Sonatype recently released the latest version of Flex Mojos, which is now at version 3.2.0.

This latest update is a big step forward for Flex / AIR developers managing their project builds and dependencies with Maven 2, as the updates are focused on improvements to unit testing support, including support for headless mode on Linux-based CI servers and, more importantly, a fix for running automated unit tests in multi-module builds, which was a big head scratcher for me about a month ago!

Below is a list of what I feel are the most significant updates in 3.2.0:

  • Added support for SWF optimization
  • Multi-module builds now run tests correctly across projects
  • Changes to the way flex-mojos launches the Flash Player when running the test harness
  • Long-running FlexUnit tests no longer cause the build to fail
  • Fixed a NullPointerException during the flex-mojos:test-run goal

You can view the complete list of release notes here.

    Cairngorm Abstractions: Business Delegates

    In Part 1 of Cairngorm Abstractions I discussed the common patterns which can be utilized in a design to simplify the implementation of concrete Cairngorm Commands and Responders. Applying such patterns helps facilitate code reuse, remove redundancy, and provide a maintainable, scalable architecture.

    In this post I will describe the same benefits which can be gained by defining common abstractions of Business Delegates.

    Business Delegate Abstractions
    A Business Delegate should provide an interface against the service it references. This can be viewed as a one-to-one relationship, whereby the operations and signatures defined by a Service, be it an HTTPService, WebService, RemoteObject, DataService, etc., dictate the Business Delegate’s API.

    However, a rather common mistake I often find is that Business Delegates are defined in the context of the use case which invokes them, rather than the service they provide an interface against.

    Correcting this is quite simple: refactor the current implementation to follow the one-to-one relationship model between a Service and Business Delegate.

    So for instance, if your application’s service layer specifies a “UserService”, your design should essentially have only one Business Delegate API for that Service. All of the operations provided by the “UserService” would be defined by an “IUserServiceDelegate” interface, which would enforce the contract between the “UserService” and concrete Delegate implementations, regardless of their underlying service mechanism.

    In this manner clients can be coded against the abstraction (IUserServiceDelegate) and obtain references to concrete Business Delegate instances via a Delegate Factory, and as such remain completely agnostic of the underlying service implementation.

    This could be implemented as follows:
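
    A minimal sketch of this arrangement might look something like the following, where IUserServiceDelegate, DelegateFactory and UserServiceDelegate are illustrative names rather than Cairngorm APIs:

    ```actionscript
    // IUserServiceDelegate.as -- one delegate contract per service, not per use case
    package com.example.business
    {
        import mx.rpc.IResponder;

        public interface IUserServiceDelegate
        {
            function getUser( id:String, responder:IResponder ):void;
            function updateUser( user:Object, responder:IResponder ):void;
        }
    }

    // DelegateFactory.as -- hands back concrete delegates, typed only to the abstraction
    package com.example.business
    {
        public class DelegateFactory
        {
            private static var instance:DelegateFactory;

            public static function getInstance():DelegateFactory
            {
                if ( instance == null )
                    instance = new DelegateFactory();
                return instance;
            }

            public function getUserServiceDelegate():IUserServiceDelegate
            {
                // concrete, RemoteObject based implementation of IUserServiceDelegate
                return new UserServiceDelegate();
            }
        }
    }
    ```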

    Abstract Delegates
    Perhaps the most common design improvement for the implementation and maintainability of Business Delegates is to define proper abstractions which capture the implementation common to all Business Delegates. Additionally, in doing so you will remove a significant amount of redundancy from your design.

    For example, if you compare any two Business Delegates and find they have practically the exact same implementation, that is an obvious sign that a common abstraction should be defined.

    Consider the following Business Delegate implementation:
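
    A typical implementation along these lines might look something like the sketch below (the UserServiceDelegate class, its operations and the “userService” destination are illustrative):

    ```actionscript
    package com.example.business
    {
        import com.adobe.cairngorm.business.ServiceLocator;
        import mx.rpc.AsyncToken;
        import mx.rpc.IResponder;
        import mx.rpc.remoting.RemoteObject;

        public class UserServiceDelegate implements IUserServiceDelegate
        {
            private var service:RemoteObject;

            public function UserServiceDelegate()
            {
                service = ServiceLocator.getInstance().getRemoteObject( "userService" );
            }

            public function getUser( id:String, responder:IResponder ):void
            {
                // create the token for the operation and wire up the responder
                var token:AsyncToken = service.getUser( id );
                token.addResponder( responder );
            }

            public function updateUser( user:Object, responder:IResponder ):void
            {
                // exactly the same boilerplate as above
                var token:AsyncToken = service.updateUser( user );
                token.addResponder( responder );
            }
        }
    }
    ```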

    The above example may look familiar, and with just a bit of thought as to its design it becomes apparent that there is quite a bit of redundancy, as every method essentially contains the same implementation code. That is, an AsyncToken is created, referencing the operation to invoke against the service, and a reference to the responder is added to the token.

    The overall design would benefit much more by refactoring the commonality implemented across all Business Delegate methods into an abstraction, which in its simplest form could be defined as follows:
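
    One possible sketch of such an abstraction for RemoteObject based delegates (the AbstractRemoteObjectDelegate name and its createCall helper are illustrative):

    ```actionscript
    package com.example.business
    {
        import com.adobe.cairngorm.business.ServiceLocator;
        import mx.rpc.AbstractOperation;
        import mx.rpc.AsyncToken;
        import mx.rpc.IResponder;
        import mx.rpc.remoting.RemoteObject;

        public class AbstractRemoteObjectDelegate
        {
            protected var service:RemoteObject;

            public function AbstractRemoteObjectDelegate( serviceId:String )
            {
                service = ServiceLocator.getInstance().getRemoteObject( serviceId );
            }

            // invokes the named operation and wires the responder to the resulting token
            protected function createCall( operation:String, responder:IResponder, ...args ):AsyncToken
            {
                var op:AbstractOperation = service.getOperation( operation );
                var token:AsyncToken = op.send.apply( op, args ) as AsyncToken;
                token.addResponder( responder );
                return token;
            }
        }
    }
    ```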

    By defining a basic abstraction, the original implementation could then be refactored to the following:
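
    Continuing the sketch, the delegate might then reduce to something like:

    ```actionscript
    package com.example.business
    {
        import mx.rpc.IResponder;

        public class UserServiceDelegate extends AbstractRemoteObjectDelegate implements IUserServiceDelegate
        {
            public function UserServiceDelegate()
            {
                super( "userService" );
            }

            public function getUser( id:String, responder:IResponder ):void
            {
                createCall( "getUser", responder, id );
            }

            public function updateUser( user:Object, responder:IResponder ):void
            {
                createCall( "updateUser", responder, user );
            }
        }
    }
    ```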

    The same basic abstractions could easily be defined for HTTPService, WebService and DataService specific Business Delegates (in fact I have a library of Cairngorm extensions which provides them; I am planning on releasing these soon). Pulling up common implementation code to higher level abstract types also simplifies writing tests against concrete Business Delegates, as the abstraction itself need only be tested once.

    There are many more Business Delegate abstractions I would recommend in addition to what I have outlined here, in particular configuring Delegate Factories via an IoC container such as SAS. However, I would first suggest taking a good look at your current design before adding additional layers of abstraction, and the most appropriate place to start is to define abstractions which encapsulate commonality, promote reuse and remove redundancy.

    Cairngorm Abstractions: Commands and Responders

    It is quite common to find a significant amount of code redundancy in Flex applications built on Cairngorm. This is by no means a fault of the framework itself; quite the contrary, as Cairngorm is designed with simplicity in mind, opting appropriately for a less-is-more approach over a more prescriptive framework, defining only the implementation classes necessary to facilitate the “plumbing” behind the framework. Everything else is really just an interface.

    With this amount of flexibility comes additional responsibility, in that developers must decide what the most appropriate design is based on their application’s specific context. Moreover, as with any design, there is never truly a one-size-fits-all approach which can be applied to any problem domain; there are really only common patterns and conventions which can be applied across domains and applications. This, IMHO, is what has allowed the framework to be a success, and it is important to understand that this simplicity also requires developers to give their designs the same attention one would give to any Object Oriented design.

    However, over the years I have found a significant amount of redundancy in Flex applications built on Cairngorm. This appears to be, more often than not, the result of developers implementing Cairngorm examples verbatim in real world applications, and in doing so failing to define proper abstractions for commonly associated concerns and related responsibilities. The most common examples of this are the typical implementations of Commands, Responders, Business Delegates and Presentation Models.

    For some of you this may all seem quite obvious, and for others hopefully this series will provide some insight as to how one can reduce code redundancy across your Cairngorm applications by implementing abstractions for common implementations.

    This topic will be a multi-part series in which I will provide some suggestions surrounding the common patterns of abstractions which can be implemented in an application built on Cairngorm, with this first installment based on common abstractions of Cairngorm Commands and Responders. Other areas in future posts will cover Business Delegate and Presentation Model abstractions. So let’s get started…

    Command Abstractions
    First let’s begin by looking at what is arguably the simplest abstraction one could define in a Cairngorm application to simplify code and eliminate areas of redundancy – Command abstractions. This example assumes the concern of mx.rpc.IResponder implementations is abstracted to a separate object. For more on this subject see my post regarding IResponder and Cairngorm.

    A traditional Cairngorm Command is typically implemented as something to the extent of the following:
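
    A sketch of such a Command might look like the following; only ICommand and CairngormEvent are Cairngorm APIs, while the command, model, delegate and responder class names are illustrative:

    ```actionscript
    package com.example.command
    {
        import com.adobe.cairngorm.commands.ICommand;
        import com.adobe.cairngorm.control.CairngormEvent;
        import com.example.business.IUserServiceDelegate;
        import com.example.business.LoadUserResponder;
        import com.example.business.UserServiceDelegate;
        import com.example.model.ApplicationModelLocator;

        public class LoadUserCommand implements ICommand
        {
            public function execute( event:CairngormEvent ):void
            {
                // every command performs its own Singleton look-up...
                var model:ApplicationModelLocator = ApplicationModelLocator.getInstance();
                model.userLoading = true;

                // ...before handing the actual work off to a delegate and responder
                var delegate:IUserServiceDelegate = new UserServiceDelegate();
                delegate.getUser( model.selectedUserId, new LoadUserResponder() );
            }
        }
    }
    ```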

    The problem with the above Command implementation is that it results in numerous look-ups of the ModelLocator Singleton instance, as every execute implementation which needs the ModelLocator performs its own look-up.

    A simpler design would be to define an abstraction for all Commands which contains this reference, as in the following:
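
    For example, a minimal sketch of such an abstraction (ApplicationModelLocator stands in for your application specific ModelLocator):

    ```actionscript
    package com.example.command
    {
        import com.adobe.cairngorm.commands.ICommand;
        import com.adobe.cairngorm.control.CairngormEvent;
        import com.example.model.ApplicationModelLocator;

        public class AbstractCommand implements ICommand
        {
            // single, shared reference to the application's ModelLocator
            protected var model:ApplicationModelLocator = ApplicationModelLocator.getInstance();

            public function execute( event:CairngormEvent ):void
            {
                // AS3 has no abstract classes, so subclasses are expected to override
                throw new Error( "AbstractCommand.execute() must be overridden." );
            }
        }
    }
    ```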

    As in any OO system there are many benefits to defining abstractions, and a good design certainly reflects this. For example, just by defining a very basic abstraction for all Commands we have eliminated the repeated look-ups on the ModelLocator for every Command in the application, as well as the redundant imports. By defining an abstraction for common references your code will become easier to read and maintain as the number of lines of code is reduced.

    Commands are by far the easiest to create an abstraction for, as most Commands will typically reference the ModelLocator; if so, they can simply extend an AbstractCommand, and if not, they can implement ICommand as they traditionally would.

    So the first example could now be refactored to the following:
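
    Continuing the sketch, the Command might then reduce to:

    ```actionscript
    package com.example.command
    {
        import com.adobe.cairngorm.control.CairngormEvent;
        import com.example.business.IUserServiceDelegate;
        import com.example.business.LoadUserResponder;
        import com.example.business.UserServiceDelegate;

        public class LoadUserCommand extends AbstractCommand
        {
            override public function execute( event:CairngormEvent ):void
            {
                // the inherited "model" reference replaces the per-command look-up
                model.userLoading = true;

                var delegate:IUserServiceDelegate = new UserServiceDelegate();
                delegate.getUser( model.selectedUserId, new LoadUserResponder() );
            }
        }
    }
    ```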

    You could take these abstractions a step further and define additional abstractions for related behavior and contexts, all of which would also extend the AbstractCommand if a reference to the application’s ModelLocator is needed.

    Responder Abstractions
    Now let’s take a look at an abstraction which is much more interesting – Responder abstractions. In this example we will focus on the most common Responder implementation, mx.rpc.IResponder, however the same could easily apply to an LCDS Responder implementation for a DataService.

    A separate RPC responder could be defined as an abstraction for HTTPServices, WebServices and RemoteObjects as each request against any of these services results in a response of either result or fault, hence the IResponder interface’s contract.

    For example, consider a typical Responder implementation which could be defined as follows:
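
    A sketch of such a Responder might look like the following (the class, model and value object names are illustrative):

    ```actionscript
    package com.example.business
    {
        import com.example.model.ApplicationModelLocator;
        import com.example.vo.User;
        import mx.controls.Alert;
        import mx.rpc.IResponder;
        import mx.rpc.events.FaultEvent;
        import mx.rpc.events.ResultEvent;

        public class LoadUserResponder implements IResponder
        {
            public function result( data:Object ):void
            {
                // the same cast and ModelLocator look-up are repeated in every responder
                var resultEvent:ResultEvent = data as ResultEvent;
                var model:ApplicationModelLocator = ApplicationModelLocator.getInstance();
                model.currentUser = resultEvent.result as User;
            }

            public function fault( info:Object ):void
            {
                // ...as is the fault handling
                var faultEvent:FaultEvent = info as FaultEvent;
                Alert.show( faultEvent.fault.faultString, "Error" );
            }
        }
    }
    ```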

    By defining a Responder abstraction each concrete Responder implementation would result in significantly less code as the redundant cast operations could be abstracted, and, as with Command Abstractions, a convenience reference to the application specific ModelLocator could also be defined. Moreover, a default service fault implementation could be defined from which each service fault could be handled uniformly across the application.

    Thus we could define an abstraction for RPC Responders as follows:
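
    A minimal sketch of such an abstraction might look like this (the AbstractRPCResponder name and its onResult / onFault hooks are illustrative):

    ```actionscript
    package com.example.business
    {
        import com.example.model.ApplicationModelLocator;
        import mx.controls.Alert;
        import mx.rpc.IResponder;
        import mx.rpc.events.FaultEvent;
        import mx.rpc.events.ResultEvent;

        public class AbstractRPCResponder implements IResponder
        {
            // convenience reference to the application's ModelLocator
            protected var model:ApplicationModelLocator = ApplicationModelLocator.getInstance();

            public function result( data:Object ):void
            {
                // perform the cast once, then hand the typed event to the subclass
                onResult( data as ResultEvent );
            }

            public function fault( info:Object ):void
            {
                onFault( info as FaultEvent );
            }

            protected function onResult( event:ResultEvent ):void
            {
                // subclasses override to handle the typed result
            }

            protected function onFault( event:FaultEvent ):void
            {
                // default, application-wide fault handling
                Alert.show( event.fault.faultString, "Error" );
            }
        }
    }
    ```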

    We could now refactor the original Responder implementation to the following simplified implementation:
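
    Continuing the sketch, the Responder might then reduce to:

    ```actionscript
    package com.example.business
    {
        import com.example.vo.User;
        import mx.rpc.events.ResultEvent;

        public class LoadUserResponder extends AbstractRPCResponder
        {
            override protected function onResult( event:ResultEvent ):void
            {
                // the cast, fault handling and model look-up now live in the abstraction
                model.currentUser = event.result as User;
            }
        }
    }
    ```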

    As you can see, just by pulling up common references and functionality into two abstractions we can significantly reduce redundancy across all Commands and Responders. This allows designs to improve dramatically, as it allows for the isolation of tests and limits the amount of concrete implementation code developers need to sift through when working with your code.

    It is important to understand that a design which is built in part on Cairngorm must still adhere to the same underlying Object Oriented Design principles as any other API would, and in doing so you will end up with a much simpler design which can easily scale over time.

    Flex Quick Tip: ASDoc Package comments

    Documenting ActionScript elements such as a Class, Interface, method, property, event, style and so forth with ASDoc is a rather simple and straightforward process. However, one element which is a bit trickier to document with ASDoc is the package declaration.

    Determining how to document package declarations is a bit more complicated than, say, documenting classes, as there is no specific ASDoc tag intended for documenting packages in source code form. To do so, one might assume they could simply add an ASDoc comment to a package declaration, such as the following:
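
    A sketch of that approach, with an illustrative package and class:

    ```actionscript
    /**
     * Contains the value objects shared between the client and the service layer.
     */
    package com.example.vo
    {
        public class User
        {
            public var id:String;
            public var name:String;
        }
    }
    ```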

    However the above will not generate ASDoc source documentation; it will simply be ignored by the asdoc compiler.

    In order to document packages you need to specify the -package argument to the asdoc compiler. This is specified in the form:
    -package <package-name> "<package-comment>"
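
    For example, an invocation along the following lines (the paths and package name are illustrative) would attach a comment to a com.example.vo package:

    asdoc -source-path src -doc-sources src -output docs -package com.example.vo "Contains the value objects shared between the client and the service layer."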

    In general, I think most developers would prefer not to specify any kind of source file documentation via a compiler argument, as this type of metadata ideally should be defined as an annotation (in the documentation sense of the word) within the actual source file itself. However, if you find a need to document packages, the -package compiler option will do the job.

    Refactoring Ant Builds with Macrodefs

    Within the past few years the proliferation of Agile best practices has pushed the importance of refactoring front and center in the world of Object Oriented Software Design, yet for some odd reason build scripts seem to have been overlooked in this regard by many. Perhaps this is due to the risk and complexity involved in such an effort, as well as the lack of tools by which build scripts can safely be refactored.

    For instance, whereas refactoring in typical OO languages relies heavily on Unit Tests for ensuring refactorings do not break existing code along the way, build scripts have no such safety net. Ant is statically typed, however it doesn’t provide compile-time type checking; additionally, build scripts are defined declaratively via XML mark-up, yet they cannot be validated as there is no fixed DTD to validate them against. Perhaps most importantly, there are not many resources to turn to for guidance when it comes to refactoring build scripts. For example, most of what I have learned about the subject comes from Julian Simpson’s work in the ThoughtWorks Anthology, which I highly suggest reading for a much more exhaustive, yet comprehensive and succinct, essay on the subject. In any case, I am quite certain that all of these factors play a role in Ant scripts somehow being overlooked with regard to refactoring.

    So where do you begin?
    That’s a really good question, and one I was forced to ask myself a while back when tasked with the daunting challenge of streamlining a very complex Build / CI process. At the time, I was responsible for modifying the build for a large enterprise-class Flex application which required build-time transformations of localized content, with varying modules being built for n locales depending on context-specific business rules, all of which needed to be built and deployed to multiple environments via a pre-existing CI process. Further complicating things, the builds were wrapped by nested DOS batch files and had dependencies on far more complex underlying build scripts. To make matters worse, up until that point no one, including myself, truly knew the build structure and all of its dependencies; it was very much a black box. So, considering that I needed to modify the build, would be responsible for maintaining it moving forward, and also needed to streamline the existing build scripts so they could scale to support additional applications seamlessly becoming part of the build, I was, to say the least, eager to learn the Build Scripts inside and out if I was to refactor and maintain them.

    The moral of the story I just bored you with above is that if you have ever had to maintain a build before, this probably sounds pretty familiar: you have a Build Script which is a black box that no one wants to deal with; it works, and that’s all that matters – until it needs to change, of course. So again, where does one begin when refactoring a Build Script? Well, let’s think in terms of typical OO refactoring.

    Remove duplication
    Perhaps one of the most obvious and easiest places to look for refactoring candidates in an Object Oriented Design is duplication; that is, to isolate and thin out common functionality so as to remove redundancy and duplication. Most Ant Scripts are littered with such duplication, and as such should be viewed in the same manner as one would view an Object Oriented application when refactoring. In fact, the goal of refactoring is very much the same regardless of the paradigm – be it a declarative language such as Ant or an Object Oriented language such as ActionScript – to provide more efficient, maintainable and easier to work with code.

    I tend to think of Build Script design – yes, it is design – much the same as any other OO design. So just as one would strive to eliminate code duplication in an Object Oriented Design, the same should apply to the design of a Build Script. For example, consider the following build targets, which package a series of distributions:
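
    A sketch of the kind of build file I mean (the target names, paths and properties are illustrative):

    ```xml
    <project name="example" default="package-all" basedir=".">

        <property name="output.dir" location="bin"/>
        <property name="dist.dir"   location="dist"/>
        <property name="app.name"   value="example-app"/>

        <!-- each target repeats the same mkdir / zip boilerplate -->
        <target name="dist-web">
            <mkdir dir="${dist.dir}/web"/>
            <zip destfile="${dist.dir}/web/${app.name}-web.zip">
                <fileset dir="${output.dir}/web"/>
            </zip>
        </target>

        <target name="dist-desktop">
            <mkdir dir="${dist.dir}/desktop"/>
            <zip destfile="${dist.dir}/desktop/${app.name}-desktop.zip">
                <fileset dir="${output.dir}/desktop"/>
            </zip>
        </target>

        <target name="dist-docs">
            <mkdir dir="${dist.dir}/docs"/>
            <zip destfile="${dist.dir}/docs/${app.name}-docs.zip">
                <fileset dir="${output.dir}/docs"/>
            </zip>
        </target>

        <target name="package-all" depends="dist-web, dist-desktop, dist-docs"/>

    </project>
    ```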

    This kind of Build Script is common, however if you were to think of this in terms of OO Design, where each target is analogous to a method, you would quickly realize the code is very redundant. Moreover, the functionality provided by these targets, the packaging of distributions, is a very common task, so just as in an OO design this functionality should be extracted into a reusable library. In Ant 1.6+ we can achieve the same kind of code reuse by extracting these common, redundant targets using Macrodefs.

    Use Macrodefs
    In short, a Macrodef, which is short for “macro definition”, is basically an extracted piece of reusable functionality in an Ant build that can be used across Build Scripts for performing common or specific tasks. Macrodefs can be thought of as a reusable API for Ant. You include macrodefs in your build scripts by importing the macrodef source file; this is analogous to how one would import a class.

    So consider the redundant targets outlined above. Using macrodefs we can extract these common tasks, refactor them into a single macrodef, import the file which contains the macrodef into our build script, and then call the macrodef by wrapping it in a target.

    To extract the target to a Macrodef we would first begin by creating a new XML document named after the functionality of the target, in this case we could call it “dist.xml”. This document would contain a project root node just as any other Ant Script would. We would then define a macrodef node and specify an identifier via the name attribute; this is how we can reference the macrodef once imported to our build script.

    Once we have defined the macrodef we can add dynamic properties to its definition. This can be thought of as being analogous to the arguments of a method signature. By specifying these arguments we can then assign their values whenever we invoke the macrodef. Default values can also be added if needed.

    Finally, we specify the behavior of the macrodef via the sequential node. This is where the functional markup is defined. Note that we reference the properties internally using the @{property} notation, just as we normally would, except the token is prefixed with an @ sign rather than a $ sign.
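
    Putting those steps together, an illustrative dist.xml might look something like the following (the macrodef name, attributes and packaging behavior are just examples):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <project name="dist-macrodefs">

        <!-- "dist" is the identifier used to invoke the macrodef once imported -->
        <macrodef name="dist">

            <!-- dynamic properties, analogous to the arguments of a method signature -->
            <attribute name="srcdir"/>
            <attribute name="destdir"/>
            <attribute name="zipname" default="dist.zip"/>

            <!-- the behavior of the macrodef -->
            <sequential>
                <mkdir dir="@{destdir}"/>
                <zip destfile="@{destdir}/@{zipname}">
                    <fileset dir="@{srcdir}"/>
                </zip>
            </sequential>

        </macrodef>

    </project>
    ```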

    We now have a parameterized, reusable piece of functionality which we can use across Ant Builds, simplifying the build while promoting code reuse.

    To use the macrodef in another Ant Build we need only import it and create a target which wraps the macrodef. So we could refactor the distribution targets from the original Build file example to the following:
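
    Continuing the illustrative example, the refactored build file might look something like this:

    ```xml
    <project name="example" default="package-all" basedir=".">

        <!-- pull in the reusable macrodef, analogous to importing a class -->
        <import file="dist.xml"/>

        <property name="output.dir" location="bin"/>
        <property name="dist.dir"   location="dist"/>
        <property name="app.name"   value="example-app"/>

        <!-- each target now simply wraps the dist macrodef -->
        <target name="dist-web">
            <dist srcdir="${output.dir}/web" destdir="${dist.dir}/web" zipname="${app.name}-web.zip"/>
        </target>

        <target name="dist-desktop">
            <dist srcdir="${output.dir}/desktop" destdir="${dist.dir}/desktop" zipname="${app.name}-desktop.zip"/>
        </target>

        <target name="dist-docs">
            <dist srcdir="${output.dir}/docs" destdir="${dist.dir}/docs" zipname="${app.name}-docs.zip"/>
        </target>

        <target name="package-all" depends="dist-web, dist-desktop, dist-docs"/>

    </project>
    ```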

    And that’s the basics of using macrodefs to refactor an Ant Build. There is a lot more that can be accomplished with macrodefs with regard to designing and refactoring Ant Builds, specifically with antlib, and I encourage you to give it a try as I am sure you will be happy with the results.

    Pattern Recognition

    It has been said that the true sign of intelligence lies in one’s ability to recognize patterns – and there is a lot to be said of that statement, as patterns can be found everywhere, in everything, in everyday life.

    One of the greatest strengths of human intelligence is our ability to recognize patterns and abstract symbolic representations, even when they occur in contexts different from those in which we originally learned them. It’s why hard-to-grasp concepts which are foreign or new to us become very clear when explained through metaphor.

    This ability to recognize patterns is essential to our survival, and always has been. For example, practically all ancient civilizations had a very, very good understanding of the recurring patterns in their environment; something we like to call seasons. This understanding of patterns in time and climate was crucial to the survival of those early civilizations. Our ability to recognize patterns is essential to our learning and understanding of the world around us. Pattern recognition is a cognitive process much like intuition; arguably the two are inter-related, or possibly one and the same.

    Suppose you want to lose a few pounds, or save a little extra money, or learn a new programming language, etc., but you are not seeing the results you would like. By recognizing patterns in your behavior you will begin to notice areas which need to be adjusted, and from that determine an appropriate solution and the necessary adjustments to be made in order to achieve your goal. For example, maybe you’ve been trying to save some extra money and after a few months realize you are getting nowhere. You then analyze your behavior for recurring patterns and realize you’re spending half your pay every weekend on beer, just kidding, but you get my point.

    Pattern Recognition in Software Development

    In the world of software development patterns apply in pretty much just the same way – our ability to recognize them is essential to ensuring the success of a software application. When we discover patterns of recurring problems in software we are then able to consider various potential patterns from a catalog of named solutions, i.e. Design Patterns. Once an appropriate solution is found we can apply it to resolve the problem regardless of the domain.

    When designing software, patterns are something that should reveal themselves, never be forced into a design. This is how patterns should always be applied: you have a problem, and based on that problem you begin to recognize common patterns, or maybe new ones, which can be applied as a solution to resolve the problem. It should never be the other way around, that is, a solution (a Pattern) looking for a problem. However, this happens quite often and is pretty evident in many software applications. Many refer to this as “pattern fever”; personally I like to call it “patterns for patterns’ sake”, or simply “for patterns’ sake”, because that’s really what it is.

    For example, have you ever found a Singleton implementation where an all-static class would have sufficed (e.g. utilities)? Or a code-behind implementation class which masquerades as an abstract class? Or an Interface where there is clearly only a need for a single concrete implementation (e.g. data-centric implementations), or a marker Interface which serves no purpose at all? The list goes on and on.

    In some cases it very well may just be an innocent flaw in the design, however the majority of the time it’s a telltale sign of someone learning a new pattern and knowingly, albeit mistakenly, attempting to implement the pattern in production code. This is clearly the wrong way to learn a new pattern. Learning new design patterns is great and a lot of fun, but remember, there is a time and place for everything, and production code isn’t it.

    Learning Patterns

    One of the best ways to learn a new pattern (or anything new for that matter) is to explore it. Begin by reading enough about it to let some of the basic concepts sink in a bit. Put it into context; think of it in terms of metaphor, in ways that make sense to you; remember, you are learning this. Question it. Then experiment with it. See how it works, see how it doesn’t work, break it, figure out how to put it back together, and so on; whatever works best for you. Most importantly, always do it as a separate effort, such as a POC, and never in production code.

    Once you get this down and understand the various patterns you’ll find you never need to look for them, for if they are needed they will reveal themselves sure enough.