Flex Quick Tip: ASDoc Package comments

Documenting ActionScript elements such as classes, interfaces, methods, properties, events, styles and so forth with ASDoc is a rather simple and straightforward process. However, one element which is a bit trickier to document with ASDoc is the package declaration.

Determining how to document package declarations is a bit more complicated than, say, documenting classes, as there is no specific ASDoc tag intended for documenting packages in source code form. One might assume it is enough to simply add an ASDoc comment to a package declaration, such as the following:
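For illustration, a sketch of what such an attempt might look like (the package name is hypothetical):

```actionscript
/**
 * Contains general purpose utility classes.
 *
 * This comment will simply be ignored by the asdoc compiler.
 */
package com.somedomain.utils
{
    // class definitions...
}
```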

However, the above will not generate any ASDoc documentation; it will simply be ignored by the asdoc compiler.

In order to document packages you need to specify the -package argument to the asdoc compiler. This argument takes the form:
-package <package-name> "<package-comment>"
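For example, a hypothetical invocation might look something like the following (the source path, package name and output directory are placeholders):

```
asdoc -source-path src -doc-sources src \
    -package com.somedomain.utils "Contains general purpose utility classes." \
    -output asdoc-output
```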

In general, I think most developers would prefer not to specify source file documentation via a compiler argument, as this type of metadata ideally should be defined as an annotation (in the documentation sense of the word) within the source file itself. However, if you find a need to document packages, the -package compiler option will do the job.

Refactoring Ant Builds with Macrodefs

Within the past few years the proliferation of Agile best practices has pushed the importance of refactoring front and center in the world of Object Oriented software design, yet for some reason build scripts seem to have been overlooked in this regard by many. Perhaps this is due to the risk and complexity involved in such an effort, as well as the lack of tools by which build scripts can safely be refactored.

For instance, whereas refactoring in typical OO languages relies heavily on unit tests to ensure refactorings do not break existing code along the way, build scripts have no such safety net. Ant does not provide compile-time type checking, and although build scripts are defined declaratively via XML markup, they cannot be validated as there is no fixed DTD to validate them against. Perhaps most importantly, there are not many resources to turn to for guidance when it comes to refactoring build scripts. Most of what I have learned about the subject comes from Julian Simpson's work in the ThoughtWorks Anthology, which I highly suggest reading for a much more exhaustive, yet comprehensive and succinct essay on the subject. In any case, I am quite certain that all of these factors play a role in Ant scripts being overlooked with regard to refactoring.

So where do you begin?
That's a really good question, one which I was forced to ask myself a while back when tasked with the daunting challenge of streamlining a very complex build / CI process. At the time, I was responsible for modifying the build of a large enterprise-class Flex application which required build-time transformations of localized content, with varying modules being built for n locales depending on context-specific business rules, all of which needed to be built and deployed to multiple environments via a pre-existing CI process. Further complicating things, the builds were wrapped by nested DOS batch files and had dependencies on far more complex underlying build scripts. To make matters worse, up until that point no one, including myself, truly knew the build structure and all of its dependencies; it was very much a black box. Considering that I needed to modify the builds, would be responsible for maintaining them moving forward, and had to streamline them to scale to support additional applications, I was, to say the least, eager to learn the build scripts inside out before refactoring them.

The moral of the story I just bored you with is that if you have ever had to maintain a build before, this probably sounds pretty familiar: you have a build script which is a black box that no one wants to deal with; it works, and that's all that matters – until it needs to change, of course. So again, where does one begin when refactoring a build script? Well, let's think in terms of typical OO refactoring.

Remove duplication
Perhaps the most obvious and easiest place to begin looking for refactoring candidates in an Object Oriented design is duplication; that is, isolating and thinning out common functionality so as to remove redundancy. Most Ant scripts are littered with such duplication, and as such should be viewed in the same manner as one would view an Object Oriented application when refactoring. In fact, the goal of refactoring is very much the same regardless of the paradigm – be it a declarative language such as Ant or an Object Oriented language such as ActionScript – to provide more efficient, maintainable and easier to work with code.

I tend to think of Build Script design – yes, it is design – much the same as any other OO design. So just as one would strive to eliminate code duplication in an Object Oriented Design, the same should apply to the design of a Build Script. For example, consider the following build target which packages a series of distributions:
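The following is a sketch of the sort of build file I mean (the module names and directory properties are hypothetical):

```xml
<target name="package-client-dist" depends="build">
    <tar destfile="${dist.dir}/client.tar.gz" compression="gzip">
        <tarfileset dir="${build.dir}/client" prefix="client"/>
    </tar>
</target>

<target name="package-reports-dist" depends="build">
    <tar destfile="${dist.dir}/reports.tar.gz" compression="gzip">
        <tarfileset dir="${build.dir}/reports" prefix="reports"/>
    </tar>
</target>

<target name="package-admin-dist" depends="build">
    <tar destfile="${dist.dir}/admin.tar.gz" compression="gzip">
        <tarfileset dir="${build.dir}/admin" prefix="admin"/>
    </tar>
</target>
```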

This kind of build script is common; however, if you think of it in terms of OO design, where each target is analogous to a method, you will quickly realize the code is very redundant. Moreover, the functionality provided by these targets – the packaging of distributions – is a very common task, so just as in an OO design this functionality should be extracted into a reusable library. In Ant 1.6+ we can achieve the same kind of code reuse by extracting these common, redundant targets using macrodefs.

Use Macrodefs
In short, a macrodef, which is short for "macro definition", is basically an extracted piece of reusable functionality in Ant that can be used across build scripts for performing common or specific tasks. Macrodefs can be thought of as a reusable API for Ant. You include macrodefs in your build scripts by importing the macrodef source file, which is analogous to how one would import a class.

So consider the redundant targets outlined above. Using macrodefs we can extract these common tasks, refactor them into a single macrodef, import the file which contains the macrodef into our build script, and then call the macrodef by wrapping it in a task.

To extract the target to a macrodef we would begin by creating a new XML document named after the functionality of the target; in this case we could call it "dist.xml". This document contains a project root node just as any other Ant script would. We would then define a macrodef node and specify an identifier via the name attribute; this is how we reference the macrodef once it is imported into our build script.

Once we have defined the macrodef we can add dynamic attributes to its definition. These can be thought of as being analogous to the arguments of a method signature. By specifying these attributes we can assign their values whenever we invoke the macrodef. Default values can also be added if needed.

Finally, we specify the behavior of the macrodef via the sequential node. This is where the functional markup is defined. Note that we reference the attributes internally using the @{property} notation, just as you would normally, except the token is prefixed with an @ sign rather than a $ sign.
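Putting the steps above together, a minimal dist.xml might look something like the following sketch (the macrodef and attribute names are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project name="dist">

    <!-- the name attribute is how the macrodef is referenced once imported -->
    <macrodef name="package-dist">

        <!-- attributes are analogous to the arguments of a method signature -->
        <attribute name="module"/>
        <attribute name="build.dir" default="build"/>
        <attribute name="dist.dir" default="dist"/>

        <!-- sequential defines the behavior; attributes are referenced via @{...} -->
        <sequential>
            <tar destfile="@{dist.dir}/@{module}.tar.gz" compression="gzip">
                <tarfileset dir="@{build.dir}/@{module}" prefix="@{module}"/>
            </tar>
        </sequential>

    </macrodef>

</project>
```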

We now have a parameterized, reusable piece of functionality which we can use across Ant builds, thereby simplifying each build while promoting code reuse.

To use the macrodef in another Ant Build we need only import it and create a target which wraps the macrodef. So we could refactor the distribution targets from the original Build file example to the following:
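For illustration, assuming a macrodef named package-dist defined in a dist.xml file (names hypothetical), the redundant targets might reduce to something like:

```xml
<import file="dist.xml"/>

<target name="package-dists" depends="build">
    <package-dist module="client"/>
    <package-dist module="reports"/>
    <package-dist module="admin"/>
</target>
```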

And that's the basics of using macrodefs to refactor an Ant build. There is a lot more which can be accomplished with macrodefs with regard to designing and refactoring Ant builds, specifically antlib, and I encourage you to give them a try as I am sure you will be happy with the results.

Pattern Recognition

It has been said that the true sign of intelligence lies in one's ability to recognize patterns – and there is a lot to be said for that statement, as patterns can be found everywhere, in everything, in everyday life.

One of the greatest strengths of human intelligence is our ability to recognize patterns and abstract symbolic representations, even when they occur in contexts different from that in which we originally learned them. It's why hard-to-grasp concepts which are foreign or new to us become very clear when explained through metaphor.

This ability to recognize patterns is essential to our survival, and always has been. For example, practically all ancient civilizations had a very good understanding of the recurring patterns in their environment – something we like to call seasons. This understanding of patterns in time and climate was crucial to the survival of those early civilizations. Our ability to recognize patterns is essential to our learning and understanding of the world around us. Pattern recognition is a cognitive process much like intuition; arguably they are inter-related, or possibly one and the same.

Suppose you want to lose a few pounds, save a little extra money, or learn a new programming language, but you are not seeing the results you would like. By recognizing patterns in your behavior you will begin to notice areas which need to be adjusted, and from that determine an appropriate solution and the necessary adjustments to be made in order to achieve your goal. For example, maybe you've been trying to save some extra money and after a few months realize you are getting nowhere. You then analyze your behavior for recurring patterns and realize you're spending half your pay every weekend on beer – just kidding, but you get my point.

Pattern Recognition in Software Development

In the world of software development patterns apply in pretty much just the same way – our ability to recognize them is essential to ensuring the success of a software application. When we discover patterns of recurring problems in software we are then able to consider various potential patterns from a catalog of named solutions, i.e. Design Patterns. Once an appropriate solution is found we can apply it to resolve the problem regardless of the domain.

When designing software, patterns are something that should reveal themselves, never be forced into a design. This is how patterns should always be applied: you have a problem, and based on that problem you begin to recognize common patterns, or maybe new ones, which can be applied as a solution. It should never be the other way around – that is, a solution (pattern) looking for a problem. However, this happens quite often and is pretty evident in many software applications. Many refer to this as "pattern fever"; personally I like to call it "patterns for patterns' sake", or simply "for patterns' sake", because that's really what it is.

For example, have you ever found a Singleton implementation where an all-static class would have sufficed (e.g. utilities)? Or a code-behind implementation class which masquerades as an abstract class? Or an interface where there is clearly only a need for a single concrete implementation (e.g. data-centric implementations), or a marker interface which serves no purpose at all? The list goes on and on.

In some cases it may well be an innocent flaw in the design; however, the majority of the time it's a telltale sign of someone learning a new pattern and knowingly, albeit mistakenly, attempting to implement the pattern in production code. This is clearly the wrong way of learning a new pattern. Learning new design patterns is great and a lot of fun, but remember, there is a time and place for everything, and production code isn't it.

Learning Patterns

One of the best ways to learn a new pattern (or anything new for that matter) is to explore it. Begin by reading enough about it for some of the basic concepts to sink in a bit. Put it into context; think of it in terms of metaphor – in ways that make sense to you. Remember, you are learning this. Question it. Then experiment with it. See how it works, see how it doesn't work, break it, figure out how to put it back together, and so on – whatever works best for you. Most importantly, always do it as a separate effort, such as a POC, and never in production code.

Once you get this down and understand the various patterns, you'll find you never need to look for them; if they are needed, they will reveal themselves sure enough.

What makes a good design?

One of my core job responsibilities for the past several years has been to conduct technical design and implementation (code) reviews during various phases of the software development life cycle. This is typically a highly collaborative process in which I and an individual engineer, or the team as a whole, begin by performing a detailed analysis of business requirements in order to gain an initial understanding of the specific component(s) being developed. Once an understanding of the requirements has been reached, a brainstorming session ensues which ultimately leads to various creative, technical solutions. After discussing the pros and cons of each, the best solutions quickly begin to reveal themselves, at which point it is simply a process of elimination until the most appropriate solution has surfaced.

The next step is to translate the requirements into the proposed technical solution in the form of a design document. The design is specified at a high level and is only intended to provide an overview of the appropriate technical road map which is to be implemented. This typically consists of higher-level UML sequence and class diagrams, either actual diagrams produced in a UML editor or simply a picture captured from UML drawn out during a whiteboarding session. The formality of the documented design is less important; what is important is that the design is captured in some form before it is implemented. Implementation-specific details such as exact class and method signatures are intentionally left out, as they are considered outside the scope of the design. See Let Design Guide, not Dictate for more on this subject. Once the design is documented it is reviewed and changes are made if needed. This process is repeated until all business and technical requirements have been satisfied, at which point the "all clear" is given to move forward with implementing the design.

But what exactly constitutes a good design? How does one distinguish a good design from a bad one? In reality it can vary significantly based on a number of factors; however, in my experience I have found a design can almost always be judged according to three fundamental criteria: Correctness, Cohesion / Coupling and Scalability. For the most part everything falls into one of these three categories. Below is a brief description of the specific design questions each category sets out to answer.

  • Correctness
    Does the design solve the problems described in the requirements and discussed by the team? This is Correctness in the form of satisfying business requirements. Are the patterns implemented in the design appropriate, or are additional patterns being used just for the sake of using the pattern? This is Correctness in the form of technical requirements. A good design is well focused and only strives to provide a solution which meets the requirements specified by the business owners, client etc; it does not attempt to be overly clever.
  • Cohesion / Coupling
    Has a highly cohesive, loosely coupled design been achieved? Have the classes, interfaces and APIs been logically organized? Does each provide a specific, well-defined set of functionality? Is composition used over inheritance where applicable? Has related functionality been properly abstracted? Does changing this break that, does adding that break this, etc.
  • Scalability
    Does the solution scale well? Is it flexible? A good design strives to facilitate change with confidence, and with as little risk as possible. A good design also achieves transparency at some level in the areas where it is most applicable.

The concepts outlined above are crucial to achieving a good design, however they are often overlooked or misunderstood to some degree. Throughout the years I have begun to recognize some commonality in the design mistakes I find in Object Oriented designs in general, and within Flex projects in particular. Many of these can typically be attributed to violations of basic MVC principles, but most commonly the design mistakes appear to be a negation of Separation of Concerns (SoC).

There are close relationships between Correctness, Cohesion / Coupling and Scalability, each of which plays a very significant role in the resulting design as a whole.

So let's start with Correctness, which is by far the single most important facet of design, for if the design does not provide a solution which satisfies the specified requirements then it has failed – all other aspects of the design are, for the most part, details.

It is important to understand that Correctness has a dependency on flexibility. As architects and developers, our understanding of the problem domain is constantly evolving as we gain experience in the domain, and requirements may change significantly while a product is being developed, so our designs must be able to adapt to these changes as well. Although this poses some challenges, it is wrong to suggest that requirements need to be locked down completely before the design phase begins; rather, requirements need only be defined clearly enough that the designer is aware of what is required at that point in time and how it fits into the "big picture". A competent designer understands this well and makes careful considerations before committing to any design decisions. This is where the importance of flexibility comes into play: in order for a design to be conceptually and technically correct it needs to be flexible enough to support change. This is why good design is so important – to easily facilitate change – and the flexibility to allow change should be evident throughout the design. A good example might be where the middle tier has not decided which service layer implementation will be used (e.g. XML:80, WSDL, REST etc.), or the information architects have not decided what the constraints of each user role will be. A good design should be flexible enough to allow for changes such as these with confidence and, more importantly, little risk to other parts of the application; after all, you shouldn't have to tear down the house just to renovate the bathroom. In addition to Correctness and Scalability, this is where Cohesion and Coupling come into play.

High cohesion is vital to achieving a good design, as it ensures related functionality and responsibilities are logically grouped together, encapsulated and abstracted. We have all seen the dreaded, all-encompassing class which assumes multiple responsibilities. Classes such as these have low cohesion and are a sign of future challenges if not addressed immediately. At a higher level, a lack of cohesion is easy to notice, as there will typically be a single class which comprises an entire API; quite often, however, low cohesion in classes is a bit more subtle, and only a code review will reveal where it exists.

For example, consider the following Logging facility which is intended to provide a very simple logging implementation:
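As a sketch of the sort of implementation I am describing (the class and method names are hypothetical):

```actionscript
package com.somedomain.someproject.logging
{
    public class Logger
    {
        public static function log(message:String) : void
        {
            trace("[" + getTimeStamp() + "] " + message);
        }

        // date formatting is not a logging concern: a sign of low cohesion
        private static function getTimeStamp() : String
        {
            var now:Date = new Date();
            return now.getHours() + ":" + now.getMinutes() + ":" + now.getSeconds();
        }
    }
}
```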

The above example is a classic case of low cohesion; I see this kind of thing all the time. The Logger class has low cohesion because it assumes the responsibility of creating and formatting a time stamp, functionality which is outside the responsibilities of a Logging API. Creating and formatting a time stamp is not a concern of the Logger, but rather the responsibility of a separate date formatting utility whose sole purpose is to provide an API for formatting Date objects. Moving the date formatting functionality out of the Logging API and into a class responsible for formatting Date objects would facilitate code reuse across many APIs, reduce redundancy and testing, and allow the Logger class to define only operations which are directly related to logging. A good design must achieve high cohesion if it is to be successful.

Coupling is also essential in determining a good design. A good way to think of coupling is this: think back to when you were a kid playing with blocks; you could easily take any number of different blocks and rearrange them to build whatever you like – that's loose coupling. Now compare that to a jigsaw puzzle, where the pieces only fit together in a very specific way – that's tight coupling. A good design strives for loosely coupled APIs in order to facilitate change as well as reuse. A classic, yet less commonly mentioned, example of tight coupling is in the packaging of APIs. Quite often designers will achieve loosely coupled APIs, however the APIs themselves are tightly coupled to the application namespace.

Consider the Logging API example from above, and note that the API is defined under the package com.somedomain.someproject.logging. Even if the example were refactored to achieve high cohesion it would still be tightly coupled to the project-specific namespace. This is a bad design, as in the event another product needed to use the Logging API it would first need to be refactored to a common namespace. A better design would be to define the Logging API under the less specific namespace com.somedomain.logging. This is important, as the Logging facility itself should be generic enough to be used across multiple projects. Something as simple as proper packaging of generic and specific components plays a key role in a good design. A better design for the above example, one which achieves both high cohesion and loose coupling, would be as follows:
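A sketch of such a refactoring, with the date formatting extracted to a generic utility and both APIs defined under less project-specific namespaces (the class and method names are hypothetical):

```actionscript
// com/somedomain/logging/Logger.as : concerned only with logging
package com.somedomain.logging
{
    import com.somedomain.utils.DateUtil;

    public class Logger
    {
        public static function log(message:String) : void
        {
            trace("[" + DateUtil.formatTime(new Date()) + "] " + message);
        }
    }
}

// com/somedomain/utils/DateUtil.as : reusable across any API
package com.somedomain.utils
{
    public class DateUtil
    {
        public static function formatTime(date:Date) : String
        {
            return date.getHours() + ":" + date.getMinutes() + ":" + date.getSeconds();
        }
    }
}
```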

As with all design, technical design is subjective. Architects and Engineers can spend an infinite amount of time debating the various points of design. In my experience it really comes down to organization and efficiency, that is, organization of responsibilities and concerns, and the efficiency of their implementation both individually and as a whole.

It may sound cliché, however before you begin a new design, or review an existing one, consider the following quote – it pretty much sums up what good design is:

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
– Antoine de Saint-Exupéry

AS3 Quick Tip: Object setPropertyIsEnumerable

Most ActionScript developers are very familiar with anonymous Objects in ActionScript 3.0, as they provide a convenient mechanism by which properties of an object which are not known at design time can be defined at runtime. This post is just a quick tip on the Object.setPropertyIsEnumerable() method; for more detailed information on using Objects in ActionScript 3.0, visit livedocs.

The Object class has a few useful methods, specifically toString(), hasOwnProperty() and propertyIsEnumerable(). There is also another method of the Object class which is very useful: setPropertyIsEnumerable(), which can be utilized to explicitly omit a property from an enumeration of an object.

Consider the following example:
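A minimal sketch of such an object and its enumeration (the property names and values are hypothetical):

```actionscript
var obj:Object = {propA: "value A", propB: "value B", propC: "value C"};

for (var prop:String in obj)
{
    trace(obj[prop]);
}
```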

When enumerating the object instance all properties and their values can easily be retrieved; however, it is important to keep in mind that when enumerating an Object the order in which properties are retrieved is not guaranteed. So executing the same loop against the same object instance could yield different results each time, such as "value B, value A, value C" as opposed to what you might have expected, i.e. "value A, value B, value C".

Now suppose you do not want a property to be exposed when the object is enumerated; this can be achieved via setPropertyIsEnumerable():
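The method is invoked on an object instance; as I recall the signature, it takes the property name and a flag:

```actionscript
// passing false for isEnum omits the named property from
// subsequent for..in / for each..in enumerations
setPropertyIsEnumerable(name:String, isEnum:Boolean = true) : void
```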

So based on the above examples, if we did not want to expose the "propA" property in a for…in loop, we could omit the property as follows:
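A sketch of how this might look (property names and values hypothetical, self-contained for clarity):

```actionscript
var obj:Object = {propA: "value A", propB: "value B", propC: "value C"};
obj.setPropertyIsEnumerable("propA", false);

// propA is no longer enumerated; only propB and propC are traced
for (var prop:String in obj)
{
    trace(obj[prop]);
}
```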

So when you need to omit properties from an enumeration of an object, Object.setPropertyIsEnumerable() proves very useful.

Continuous Integration with Hudson

Continuous Integration is a fundamental Agile development practice in which members of a team integrate changes on a regular basis, ideally multiple times per day. The integration process itself is facilitated by an automated build which is triggered on a specific interval to check for new commits to the central repository, or mainline. This is necessary for detecting changes which could potentially break the build as quickly as possible, as it is typically easier to fix these errors sooner rather than later, resulting in significantly fewer integration issues, especially on large, collaborative projects where multiple members of a team are developing against the same codebase.

Continuous Integration does not necessarily require any specific tooling; however, it is very common to incorporate a build management tool in order to automate the builds. The most common tool of choice among build managers is Cruise Control. I have been using Cruise Control for years to automate Flex builds, as it contains everything one typically needs to automate an integration, staging and production build process. Setting up Cruise Control and modifying the configuration to add new projects is a straightforward process, however I have always found it a bit tedious.

Considering that Cruise Control is an industry standard, and that it provides pretty much everything I have ever needed for automating enterprise build processes, I never had a compelling reason to investigate other tools. Then last week I received an email from our engineering group's build manager stating that a different build management tool called Hudson was being considered. Initially I questioned why Hudson was being considered over Cruise Control, as CC is, for the most part, an industry standard, so I decided to do a little investigating to see for myself – first by reading the documentation, then by running the simple test drive available at the Hudson site which you can use to get an idea of how the tool works.

Downloading Hudson is straightforward, and best of all, installation is a breeze: simply deploy the web archive (hudson.war) to an existing servlet container (I am using Tomcat 6.0.14) and you're ready to go.

Once deployed you can go to the Hudson console, typically located at http://<host:port>/hudson/ (e.g. http://somedomain:8400/hudson/). The console has all of the features you might expect, as well as some other useful tools. From the console you can easily configure Hudson, create new projects, monitor build progress, view build logs, kick off a build, schedule builds and so forth. Setting up a project is one of the things I really like about Hudson – as opposed to configuring a project in Cruise Control – as Hudson provides a super simple GUI for creating new projects. This is one of its main attractions in my opinion, as new builds can be configured in next to no time at all. The dashboard and UI in general are very intuitive and easy to use, and if you're like me and still want the ability to look under the hood and modify the configurations, you have that level of control as well. Kicking off a build manually or scheduling builds is just as easy.

Overall I have to say that I am a fan of Hudson. Admittedly, I just started using it, so I am certain there is much more to learn about the tool; however, if the rest of the functionality is as easy to configure as the basic features then this is a sure sell. I estimate it took me approximately 10 minutes total to download, install, configure a project and run a build (successfully, I might add).

Some key features:

  • Master / Slave builds
  • Distributed Builds
  • GUI based project configuration
  • E-mail / AIM / RSS build notifications
  • Plug-ins / extensibility support
  • Simple Installation
  • Console Auto-refresh

So if you are interested in streamlining your Continuous Integration process I recommend taking a quick test drive of Hudson.