
JavaScript Comma Delimiter Pattern

Perhaps the most common syntactic error I find myself correcting in JavaScript is that of a simple missing comma separator on new lines. Such marginal oversights can become quite a nuisance to correct time and time again. Fortunately, there is a rather simple pattern which can be used to help avoid these errors with very little effort – only requiring code to be rewritten in a manner that reads more naturally while also being syntactically correct.

Typical missing comma scenario

If we are to consider the most common scenario where a comma is unintentionally omitted (and a subsequent exception is thrown), it can most likely be found within The Single var Pattern (irrespective of any personal preference towards or against the pattern itself).

For example, consider a typical block of code implementing a single var:
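Something along the following lines, with illustrative names:

```javascript
var user     = { name: 'jane' },
    users    = [ user ],
    selected = users[0],
    total    = users.length;
```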

The above example, being rather typical and admittedly simple, does not pose much of a maintenance issue itself. However, if one is to refactor these declarations into a different order, add additional declarations, or specify more complex assignments, the probability of a comma being omitted unintentionally or becoming out of place increases.

In addition to The Single var Pattern, object literals are also a good source of occasional commas being omitted unintentionally, especially when nested within additional literals:
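For example, a nested literal such as the following (property names are illustrative):

```javascript
var config = {
    api: {
        host: 'example.com',
        port: 8080
    },
    retries: 3,
    timeout: 5000
};
```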

A Simple Solution

To help avoid such mistakes altogether, we can simply place commas before declarations, rather than after. At first, this may feel a bit awkward, but in time it becomes quite easy to get used to.

With this in mind, by placing commas first, the above could easily be refactored to the following:
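Using the same illustrative declarations from above:

```javascript
var user     = { name: 'jane' }
  , users    = [ user ]
  , selected = users[0]
  , total    = users.length;
```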

As can be seen in the above example, considering we generally read code left to right, it becomes immediately apparent if a comma is missing.

For instance, take note of which implementation makes it easier to notice the missing comma:
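Both snippets below intentionally omit the same comma (the declarations are again illustrative):

```javascript
// commas last – the comma missing after "selected = users[0]" is easy to overlook
var user     = { name: 'jane' },
    users    = [ user ],
    selected = users[0]
    total    = users.length;

// commas first – the missing comma leaves an obvious gap at the start of the line
var user     = { name: 'jane' }
  , users    = [ user ]
  , selected = users[0]
    total    = users.length;
```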

I suspect most would agree that commas placed before stand out much more, and so it becomes much more apparent when one is missing. The difference appears to be that with commas placed after, one needs to look for what is missing, whereas with commas placed before, one simply sees what is missing.

As humans, our tendency towards patterns in general, and visual patterns in particular, cannot be overstated. As developers, considering patterns are a significant part of our work, we should strive to take advantage of the most natural ones, even for things as seemingly marginal as placing comma separators first.

Organizing Require JS Dependencies

When developing large scale web applications leveraging RequireJS, at times, even the most highly cohesive of modules will require quite a few other modules as dependencies. As such, maintaining the order of these dependencies can become somewhat tedious. Fortunately, RequireJS provides a means of simplifying how modules may define dependencies for such cases.

Ordering Dependencies

If we are to consider how a typical module definition specifies dependencies, it becomes clear that one must ensure each module dependency name and its corresponding definition function argument are listed in the same order:
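A sketch of such a definition, using the dependencies discussed below:

```javascript
define([
    'jquery',
    'underscore',
    'backbone'
], function ($, _, Backbone) {
    // each argument corresponds, by position, to a dependency name above
    return Backbone.View.extend({});
});
```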

In the above example, jQuery, Underscore and Backbone are specified as the module's dependencies via an array of dependency names passed as the first argument to define(). Once all dependencies have been loaded, the module's definition function is invoked, with each dependency passed in the same order in which it was listed in the dependency array.

From both a design and client implementation perspective, this one-to-one correlation between dependency ordering and definition function argument ordering makes perfect sense, of course, for it would obviously be extremely confusing otherwise. In general this is rarely a concern, though when a module has many dependencies it can become cumbersome.

Adding Dependencies

A necessary side effect of the dependency/argument ordering is that, as dependencies are added, time must be spent ordering and re-ordering them if one takes care to group dependencies categorically in order to improve readability (e.g. models…, collections…, views…, etc.).

For example, consider the following:
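A sketch of such a module: framework dependencies (jQuery, Underscore, Backbone) followed by an application specific Model and Collection; the module paths are illustrative:

```javascript
define([
    'jquery',
    'underscore',
    'backbone',
    'models/user',
    'collections/users'
], function ($, _, Backbone, User, Users) {
    return Backbone.View.extend({
        // view implementation ...
    });
});
```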

If we were to decide that this module also needed, say, Handlebars, we could simply add the new dependency to the end of the dependencies array, and then just add it to the end of the factory function’s arguments as follows:
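Continuing the sketch above:

```javascript
define([
    'jquery',
    'underscore',
    'backbone',
    'models/user',
    'collections/users',
    'handlebars' // appended to the end, breaking the framework grouping
], function ($, _, Backbone, User, Users, Handlebars) {
    return Backbone.View.extend({
        // view implementation ...
    });
});
```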

While the above approach will certainly work, it fails to aid in readability as Handlebars is grouped with the application specific dependencies – the Model and Collection, as opposed to being grouped with the module’s framework dependencies. This may seem like a trivial detail, however, considering code is typically read many more times than it is written, it makes sense to organize dependencies as they are added in order to save ourselves and others time in the future when viewing the dependencies.

And so, ideally a team would have an established pattern of grouping dependencies in some kind of logical order. For example, framework specific dependencies could be listed first, followed by application specific dependencies etc. This ordering could be as simple or complex as a team collectively decides, though I would recommend keeping it generally simple.

With this in mind we could improve the above example as follows:
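Again, continuing the sketch:

```javascript
define([
    // framework dependencies
    'jquery',
    'underscore',
    'backbone',
    'handlebars',
    // application dependencies
    'models/user',
    'collections/users'
], function ($, _, Backbone, Handlebars, User, Users) {
    return Backbone.View.extend({
        // view implementation ...
    });
});
```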

Organizing Dependencies

If we are to consider the above example as being somewhat typical, then it becomes rather clear that with each new dependency added we will likely have to repeat the ordering process. Again, while this may seem insignificant, it can easily lead to exceptions being thrown if any dependencies are out of order.

Fortunately, RequireJS provides a simplified CommonJS wrapping implementation, or Sugar syntax, which can be used to solve such issues. This syntax (which will feel natural to those who use Node) allows one to simply pass the module's definition function to define, specifying require as its single argument, as follows:
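In its simplest form:

```javascript
define(function (require) {
    var $ = require('jquery');
    // module implementation ...
});
```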

Using this pattern, we can refactor the above example to be more easily managed as follows:
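Applying this to the sketch above:

```javascript
define(function (require) {
    // framework dependencies
    var $          = require('jquery'),
        _          = require('underscore'),
        Backbone   = require('backbone'),
        Handlebars = require('handlebars'),
        // application dependencies
        User       = require('models/user'),
        Users      = require('collections/users');

    return Backbone.View.extend({
        // view implementation ...
    });
});
```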

With this pattern of dependency mapping it becomes much easier to add and remove dependencies as needed, with the added benefit of reading much more naturally. This pattern also feels more familiar as it is similar to import directives in other languages.

Conclusion

Managing module dependencies in RequireJS is quite simple and becomes even simpler when leveraging the Sugar syntax described above. When doing so, it is important to keep in mind that this syntax relies on Function.prototype.toString(), which, while having good support in most modern browsers, does not provide predictable results in certain older browsers. However, as the documentation states, using an optimizer to normalize dependencies – such as the very powerful RequireJS Optimizer – will ensure this approach works across all browsers.

As a general rule of thumb, I typically use the Sugar syntax approach when there are more than 4-5 dependencies and have found it has simplified managing dependencies in modules rather nicely.

Decoupling Backbone Modules

One of the principal design philosophies I have advocated over the years, especially through various articles on this site, has been the importance of decoupling. And while I could go into significant detail to elaborate on the importance of decoupling, suffice it to say that all designs – from simple APIs to complex applications – can benefit considerably from a decoupled design; namely, with respect to testability, maintainability and reuse.

Decoupling in Backbone

Many of the examples which can be found around the web on Backbone are intentionally simple in that they focus on higher level concepts without diverging into specific implementation or design details. Of course, this makes sense in the context of basic examples and is certainly the right approach to take when explaining or learning something new. Once you get into real-world applications, though, one of the first things you’ll likely want to improve on is how modules communicate with each other; specifically, how modules can communicate without directly referencing one another.

As I have mentioned previously, Backbone is an extremely flexible framework, so there are many approaches one could take to facilitate the decoupling of modules in Backbone; the most common of which, and my preferred approach, is decoupling by way of events.

Basic Decoupling with Events

The simplest way to facilitate communication between discrete modules in Backbone is to have each module reference a shared event broker (a pub/sub implementation). Modules can register themselves with the broker to listen for events of interest, and can also communicate with other modules via events as needed. Implementing such an API in Backbone is amazingly simple; in fact, so much so that the documentation provides an example in the following one-liner:
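Underscore's clone is all that is required:

```javascript
var dispatcher = _.clone(Backbone.Events);
```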

Essentially, the dispatcher simply clones (or alternately, extends) the Backbone.Events object. Different modules can reference the same dispatcher to publish and subscribe to events of interest. For example, consider the following:
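A sketch of two such modules (the collection, view and event wiring here are illustrative, and both are assumed to reference the shared dispatcher created above):

```javascript
// publishes an event whenever a user is added, with no knowledge of subscribers
var Users = Backbone.Collection.extend({
    initialize: function () {
        this.on('add', function (user) {
            dispatcher.trigger('users:add', user);
        });
    }
});

// reacts to the event without referencing the Users collection directly
var UserEditor = Backbone.View.extend({
    initialize: function () {
        dispatcher.on('users:add', this.edit, this);
    },
    edit: function (user) {
        // render an edit form for the newly added user ...
    }
});
```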

In the above example, the Users Collection is completely decoupled from the UserEditor View, and vice-versa. Moreover, any module can subscribe to the 'users:add' event without having any knowledge of the module from which the event was published. Such a design is extremely flexible and can be leveraged to support any number of events and use-cases. The above example is rather simple; however, it demonstrates just how easy it is to decouple modules in Backbone with a shared EventBroker.

Namespacing Events

As can be seen in the previous example, the add event is prefixed with a users string followed by a colon. This is a common pattern used to namespace an event in order to ensure events with the same name which are used in different contexts do not conflict with one another. As a best practice, even if an application initially only has a few events, the events should be namespaced accordingly. Doing so will help to ensure that as an application grows in scope, adding additional events will not result in unintended behaviors.

A General Purpose EventBroker API

To help facilitate the decoupling of modules via namespaced events, I implemented a general purpose EventBroker which builds on the default implementation of the Backbone Events API, adding additional support for creating namespace specific EventBrokers and registering multiple events of interest for a given context.

Basic Usage

The EventBroker can be used directly to publish and subscribe to events of interest:
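Since the EventBroker builds on the Backbone Events API, basic usage might look like the following sketch (the event name and payload are illustrative):

```javascript
// subscribe to an event of interest
Backbone.EventBroker.on('users:add', function (user) {
    console.log('user added:', user);
});

// publish the event from elsewhere in the application
Backbone.EventBroker.trigger('users:add', { name: 'jane' });
```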

Creating namespaced EventBrokers

The EventBroker API can be used to create and retrieve any number of specific namespaced EventBrokers. A namespaced EventBroker ensures that all events are published and subscribed against a specific namespace.

Namespaced EventBrokers are retrieved via Backbone.EventBroker.get(namespace). If an EventBroker has not been created for the given namespace, it will be created and returned. All subsequent retrievals will return the same EventBroker instance for the specified namespace; i.e. only one unique EventBroker is created per namespace.

Since namespaced EventBrokers ensure events are only piped through the EventBroker of the given namespace, it is not necessary to prefix event names with the specific namespace to which they belong. While this can simplify implementation code, you can still prefix event names to aid readability if desired.
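For example (a sketch based on the API described above):

```javascript
// retrieve (or lazily create) the EventBroker for the 'users' namespace
var usersBroker = Backbone.EventBroker.get('users');

// no 'users:' prefix is needed within this broker
usersBroker.on('add', function (user) { /* ... */ });
usersBroker.trigger('add', { name: 'jane' });
```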

Registering Interests

Modules can register events of interest with an EventBroker via the default on method or the register method. The register method allows for registering multiple event/callback mappings for a given context in a manner similar to that of the events hash in a Backbone.View.
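A sketch of registering multiple event/callback mappings for a given context (the event names and methods are illustrative):

```javascript
var UserEditor = Backbone.View.extend({
    initialize: function () {
        Backbone.EventBroker.register({
            'users:add'   : 'edit',
            'users:select': 'edit'
        }, this);
    },
    edit: function (user) { /* ... */ }
});
```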

Alternately, modules can simply define an “interests” property containing their particular event/callback mappings and register themselves with an EventBroker:
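For instance, a sketch of interests based registration:

```javascript
var UserEditor = Backbone.View.extend({
    // event/callback mappings of interest
    interests: {
        'users:add'   : 'edit',
        'users:select': 'edit'
    },
    initialize: function () {
        Backbone.EventBroker.register(this);
    },
    edit: function (user) { /* ... */ }
});
```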

For additional examples, see the backbone-eventbroker project on github.

Persisting Backbone Collections

Backbone.js provides a light-weight, extensible API which focuses primarily on general structure, and the utility of that structure, without requiring a rigid adherence to specific patterns or prescribing certain design philosophies. This focused approach affords developers a significant level of flexibility which can prove to be essential to the success of many modern client side web applications – especially when considering the challenges surrounding the ever changing landscape of the web.

Naturally, it follows that such flexibility imparts additional responsibility on the developer. In my experience with architectural frameworks in general, and Backbone in particular, many times this kind of trade off can be quite preferable.

Extending Backbone

As mentioned above, Backbone implements the essential building blocks necessary to provide general structure within an application. If a feature is needed which is not provided by default, by design, Backbone allows for easily extending the core APIs in order to implement additional features as needed. In fact, developers are encouraged to do so.

Perhaps a good example of this can be found in the need to persist collections in Backbone. Interestingly, while the Backbone Model API implements a default save method, the Backbone Collection does not provide a similar API for explicitly saving the models within a collection; that is, saving the collection itself. This feature may have been left out intentionally as part of an initial design decision, as there are some complexities to saving an entire collection as a general use-case, such as saving deltas, propagating saved state to models within the collection and so forth; or perhaps it was simply an oversight. Regardless of the reason, in certain scenarios it can be useful to save an entire collection rather than saving each model individually. Fortunately, given the flexibility of Backbone, writing an extension to implement this feature is quite simple.

The PersistableCollection

As a basic solution for saving a Backbone Collection, I wrote a general abstraction – PersistableCollection, which extends Backbone.Collection, implementing a default save method.

The PersistableCollection API is quite simple in that it essentially just implements a proxy which wraps the collection in a Backbone Model, and ensures there is only one proxy created per collection. The url and toJSON methods, respectively, of the proxy Model reference the url and toJSON methods of the collection to which the proxy is bound. You can either extend PersistableCollection or generate a proxy for an existing collection via the static createProxy method. Examples of each follow.
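The actual source is linked below; a minimal sketch of the approach described above might look like this (the _proxy property is illustrative):

```javascript
var PersistableCollection = Backbone.Collection.extend({
    // persist the entire collection via its proxy Model
    save: function (options) {
        return PersistableCollection.createProxy(this).save({}, options);
    }
}, {
    // static: create (at most) one proxy Model per collection
    createProxy: function (collection) {
        if (!collection._proxy) {
            var Proxy = Backbone.Model.extend({
                url: function () {
                    return _.result(collection, 'url');
                },
                toJSON: function () {
                    return collection.toJSON();
                }
            });
            collection._proxy = new Proxy();
        }
        return collection._proxy;
    }
});
```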

Using a PersistableCollection

A PersistableCollection can be instantiated and used directly, just like a Backbone.Collection:
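For example (the endpoint and data are illustrative):

```javascript
var users = new PersistableCollection();
users.url = '/users';
users.add([{ name: 'jane' }, { name: 'john' }]);

// persists the entire collection via the proxy Model
users.save();
```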

Extending PersistableCollection

You can extend PersistableCollection and simply invoke save to persist the models in the collection:
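For instance:

```javascript
var Users = PersistableCollection.extend({
    url: '/users'
});

var users = new Users([{ name: 'jane' }, { name: 'john' }]);
users.save();
```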

Creating a proxy for an existing collection

An existing collection can be saved by creating a proxy for it:
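For example:

```javascript
// an existing, plain Backbone.Collection
var users = new Backbone.Collection([{ name: 'jane' }]);
users.url = '/users';

// create a proxy bound to the collection and save it
PersistableCollection.createProxy(users).save();
```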

The PersistableCollection can be used as a general extension for saving entire collections based on the representation of all models within the collection. If you are interested in extending it to account for more specific use-cases, you can simply override the default save implementation as needed.

You can find the gist for the source and spec here.

Preprocessing Modules with RequireJS Optimizer

RequireJS provides an effective means of bundling an application for production deployment by way of the RequireJS Optimizer; which allows for optimizing, concatenating and minifying modules via Node or Rhino.

The Optimizer is quite powerful insofar as fine-grained control over various aspects of the optimization process can be implemented with relative ease. One such example is the ability to specify a callback function to be invoked for each module in the build prior to optimization. This essentially provides the hooks needed to implement a preprocessor.

The onBuildWrite option

The optimize method of the RequireJS Optimizer accepts an onBuildWrite option which allows a callback to be specified. The callback is invoked prior to serialization of each module within the optimized bundle. Callbacks receive the name, path and contents of the module, and must return the contents to be written.

For example, consider the following build configuration, which demonstrates a basic onBuildWrite callback that simply logs the name of each module processed by the build to the console and returns the module's contents unmodified.
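A sketch of such a build configuration (the file names are illustrative):

```javascript
// build.js
({
    baseUrl: '.',
    name: 'main',
    out: 'main-built.js',
    onBuildWrite: function (moduleName, path, contents) {
        console.log('processing module: ' + moduleName);
        // return the module contents unmodified
        return contents;
    }
})
```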

Using the above configuration, when executed against a (contrived) example consisting of just a single module, “ModuleA”, in Node, it would output the following to the console:

If we were to print out the contents of the files we would see something like this:

With this in mind, a basic preprocessor can be implemented quite easily using the onBuildWrite option. Assume the main.js script has a token placeholder for the build version like so:
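Something along these lines:

```javascript
// main.js – '#version' is a placeholder to be replaced at build time
define(['ModuleA'], function (ModuleA) {
    var version = '#version';
    console.log('build: ' + version);
});
```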

We can implement a simple preprocessor which replaces the #version token with a build date as follows:
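A sketch of the preprocessing callback:

```javascript
// build.js
({
    baseUrl: '.',
    name: 'main',
    out: 'main-built.js',
    onBuildWrite: function (moduleName, path, contents) {
        console.log('before: ' + contents);
        // replace the #version token with the current build date
        var processed = contents.replace(/#version/g, new Date().toString());
        console.log('after: ' + processed);
        return processed;
    }
})
```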

The above onBuildWrite callback logs the original contents, replaces the #version token with the current date, logs the result and returns the processed content. The output of which would be similar to the following:

As can be seen in the above examples, implementing a basic preprocessor is rather simple to accomplish with the RequireJS Optimizer. The ability to do so allows for some interesting possibilities and is certainly something worth checking out if you are leveraging AMD via RequireJS.

You can fork an example implementation at requirejs-preprocessor-example.

Testing Handlebars Helpers with Jasmine

For some time now, I have primarily been using logic-less templating solutions as they allow for a greater separation of concerns in comparison to many of their logic-based counterparts. By design, the decoupling of logic-less templates imparts greater overall maintainability in that templates become considerably less complex, and therefore, considerably easier to maintain and test.

Handlebars, my preferred logic-less templating engine, simplifies testing even further via its elegant Helper API. While Handlebars may not be the fastest templating solution available, I have found it to be the most testable, reusable and, thus, maintainable.

Custom Handlebars Helpers

Since Handlebars is a logic-less templating engine, the interpolation of values which require logical operations and/or computed values is facilitated via Helpers. This design is quite nice in that template logic can be tested in isolation from the context in which it is used; i.e. the templates themselves. In addition to the common built-in Block Helpers, custom Helpers can easily be registered in order to encapsulate the logic used by your templates.

Registering Custom Helpers

Registering custom Helpers is as simple as invoking Handlebars.registerHelper, passing the string name of the Helper to be registered, followed by a callback which defines the Helper's implementation.

Consider the following custom Helper example, which, given a string of text, replaces plain-text URLs with anchor tags:
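A sketch of such a Helper (the regular expression here is intentionally simplified; the linked Gist contains the original implementation):

```javascript
Handlebars.registerHelper('enhance', function (text) {
    // match bare URLs, skipping those already preceded by a quote or '>'
    var pattern = /(^|[^"'>])(https?:\/\/[^\s<]+)/g,
        result  = (text || '').replace(pattern, '$1<a href="$2">$2</a>');

    // SafeString prevents Handlebars from escaping the generated markup
    return new Handlebars.SafeString(result);
});
```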

(Gist)

As can be seen in the above example, custom Handlebars Helpers are registered outside the context of the templates in which they are used. This allows us to test our custom Helpers quite easily.

Testing Custom Helpers

Generally, I prefer to abstract testing custom Helpers specifically, and test the actual templates which use the Helpers independently from the Helpers. This allows for greater portability as it promotes reuse in that common custom Helpers (and their associated tests) can then be used across multiple projects, regardless of the templates which use them. While one can test Handlebars implementation code with any testing framework, in this example I will be using Jasmine.

Essentially, testing custom Helpers is much the same as testing any other method. The only point to note is that we first need to reference the Helper from the Handlebars.helpers namespace. Ideally this could be avoided since, should the namespace happen to change, our tests would need to change as well. That said, such a change is unlikely.

Using the above example, in a Jasmine spec, the enhance helper can be referenced as follows:
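For example, a sketch of the spec setup (the specs that follow are assumed to live within this describe block):

```javascript
describe('enhance helper', function () {
    var enhance = Handlebars.helpers.enhance;
    // specs follow ...
});
```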

Then we can test that the helper was registered:
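A sketch of such a spec:

```javascript
it('is registered with Handlebars', function () {
    expect(enhance).toBeDefined();
});
```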

We can then test any expectation. For example, the enhance helper should return a Handlebars.SafeString. We can test this as follows:
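For instance:

```javascript
it('returns a Handlebars.SafeString', function () {
    var result = enhance('some text');
    expect(result instanceof Handlebars.SafeString).toBe(true);
});
```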

The enhance helper is expected to replace plain-text URLs with anchor tags. Before testing this, though, we should first test that it preserves existing markup. In order to test this use-case, we need to access the return value from our custom Helper, which we can do by referencing the string property of the Handlebars.SafeString it returns:
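Along these lines:

```javascript
it('preserves existing anchor tags', function () {
    var markup = 'visit <a href="http://example.com">example</a>',
        result = enhance(markup);

    // the string property exposes the underlying value of the SafeString
    expect(result.string).toEqual(markup);
});
```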

Finally, we test that our enhance Helper replaces URLs with anchor tags using the above techniques:
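Putting it together:

```javascript
it('replaces plain-text URLs with anchor tags', function () {
    var result = enhance('visit http://example.com');

    expect(result.string).toEqual(
        'visit <a href="http://example.com">http://example.com</a>'
    );
});
```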

(Gist)

We now have a complete test against our custom Helper, and all is green:
Custom Helper Spec
Note: The above Spec Runner is using the very nice jasmine.BootstrapReporter

And that’s all there is to it. You can fork the example at handlebars-helpers-jasmine.