
BDD/TDD Mental Models

Recently, I shared a simple 8-step procedure with my team which outlines some of the general questions I tend to ask myself when writing tests, even if, perhaps, only subconsciously so.

While quite simple in form, and somewhat obvious in process, this procedure helps develop a useful mental model from which practical steps can be applied to common testing scenarios. This, in turn, provides clarity around general design considerations while also helping to guide specific implementation decisions.

First things First

Arguably, the single most important aspect of testing (and of software development in general, for that matter) is acquiring a solid understanding of the problem domain; for without at least a general understanding of the problem one intends to solve, important details are likely to be omitted that would otherwise have been considered, and thus covered by our tests. Spend time understanding exactly what problem your code is intended to solve, then begin thinking about what to test for. Understand the Problem.

Small Steps

Once confident that a good understanding of the problem has been reached, we can get started on writing our initial tests. Consider this a first pass, if you will, in which we are only concerned with getting our tests to pass in the simplest (and typically least elegant) way possible. The initial implementation code can be as raw (and ugly) as needed, as this can (and will) be addressed after our initial tests are passing. If we are writing tests against code that does not yet exist, we first write the implementation code (the code being tested) directly within the test case itself. Once the test passes, we can refactor the code out of the test and into the SUT (the code we are testing). If the code already exists (we are writing new tests against existing code), we still need to understand and consider the implementation of the code itself, and not simply write tests against it. Reviewing and critiquing existing code is an excellent way of gaining a quick understanding of a given system. Seize initial opportunities. Start off slow.
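As a minimal sketch of this first pass – assuming a Jasmine-style runner and a hypothetical formatName utility – the implementation can live directly inside the spec until it passes, at which point it is refactored out into the SUT:

    describe('formatName', function () {
        it('returns "Last, First" given a first and last name', function () {
            // First pass: the implementation lives directly in the spec.
            // Once green, this function is moved out into the SUT.
            function formatName(first, last) {
                return last + ', ' + first;
            }
            expect(formatName('Ada', 'Lovelace')).toEqual('Lovelace, Ada');
        });
    });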

Clean Pass

Once we’ve written our initial tests and they are passing, we can safely go back into our new or existing implementation code and refactor it to our heart’s content. After all, one of the most rewarding aspects afforded by unit testing is the ability to refactor our code freely, with little worry or concern that we will unintentionally break something without knowing; if something breaks, our tests will inform us. Tomorrow never comes in Software Development. Clean up as you go along.

Negative Tests

The most obvious tests to write are those which verify the things we expect the code to do. But what if the code is used incorrectly? What if a required argument is not provided, or is of an invalid type? Does our code throw an exception? Does it simply return undefined? What should it do? These are all questions we should be asking ourselves once our expected test cases are passing. From there, we need to start thinking about ways to have our code respond appropriately to negative cases – we don’t want the entire app to be left in an unpredictable state just because an uncaught exception was thrown due to some simple string formatting argument not being passed. Test the exceptional; Test the unexpected.
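A brief sketch of what such negative tests might look like, again using the hypothetical formatName utility from above, here written to throw on invalid input:

    describe('formatName – negative cases', function () {
        // Hypothetical SUT: throws a TypeError when required arguments
        // are missing or of an invalid type.
        function formatName(first, last) {
            if (typeof first !== 'string' || typeof last !== 'string') {
                throw new TypeError('formatName requires two string arguments');
            }
            return last + ', ' + first;
        }

        it('throws when a required argument is missing', function () {
            expect(function () { formatName('Ada'); }).toThrow();
        });

        it('throws when an argument is of an invalid type', function () {
            expect(function () { formatName('Ada', 42); }).toThrow();
        });
    });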

Stateless Tests

One of the most important considerations to make, both during and especially after all of the above points have been considered, is the statelessness of the system while being tested. Always ask yourself, “Am I resetting the state of all of my test’s dependencies back to an expected state?”. This is perhaps one of the most commonly overlooked, yet crucially important, considerations to make. A good example illustrating why this is important can be found in the common scenario of a test that invokes a method which triggers an event. If any previously executed test which handles the event has not been properly torn down (e.g. in afterEach), the handling object will still exist, and will thus handle the event. This typically results in a change in state, more often than not causing an unexpected error to be thrown.

Always use set-ups (e.g. beforeEach) to configure your test environment, fixtures, and any dependencies your tests require to operate properly. If you are setting values on anything outside the context of your tests, always use mocks, stubs and tear-down methods (e.g. afterEach) to reset them back to an expected state. Remember, while your tests are not part of your application’s source, they are certainly part of your project’s source; this, in effect, requires them to be viewed as first-class citizens, subject to the same quality of design and implementation as project source. Tests will need to evolve and be continually maintained. Treat the test environment with respect; ensure you return it in a predictable state. Leave it the way you found it.
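A minimal sketch of this set-up/tear-down discipline, assuming Backbone and Underscore are available:

    describe('stateless event handling', function () {
        var dispatcher, received;

        beforeEach(function () {
            // Set up: a fresh dispatcher and clean fixtures for every spec.
            dispatcher = _.clone(Backbone.Events);
            received = [];
            dispatcher.on('users:add', function (user) { received.push(user); });
        });

        afterEach(function () {
            // Tear down: remove all handlers so no state leaks into later specs.
            dispatcher.off();
            dispatcher = null;
        });

        it('handles the event exactly once per spec', function () {
            dispatcher.trigger('users:add', { name: 'Ada' });
            expect(received.length).toEqual(1);
        });
    });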

Continued Improvement

While the above description of Stateless Tests clearly states that the test environment should remain stateless, and thus “remain as we found it” prior to our tests, our actual implementation code should always be improved when improvements can be made; hence, The Broken Windows Theory is one we should all strive to live by. This especially holds true in the context of writing tests/specs against existing code. If the code is not up to par in any way – fix it. Ask yourself: “How easy was it for me to understand what this code does?”, “Is it documented in a meaningful way?”, “Would it be easier to understand if I added some quick examples?” (Often, adding examples is simply a matter of pointing to, or annotating the source with, the test cases themselves). We can have the greatest, most elegant framework and foundation on which to build the greatest apps in the world, but if we allow our code quality to degrade, our apps will gradually decay into chaos. Set a higher standard, and live by it. Leave the source better than you found it.

Meaningful Tests

It is quite easy to get caught up in the perceived quality of a system’s tests simply by measuring them against general Code Coverage metrics. This is a subject I have spoken to at length many times. While code coverage certainly has its purpose, and can be helpful, it is often not very reflective of reality. Judge your tests not by the number of test cases or units tested, but rather by the meaningfulness of each specific test case itself. Ask yourself “What is the overall value of this test?”, “Am I testing the obvious?” (such as a simple getter/setter). Focus on what’s important; test what’s of most value first. This affords the satisfaction of knowing that if time constraints arise, or something comes up which requires shifting focus elsewhere, the most important test cases are covered. Focus on what’s important.

Know when you are done

It is quite possible to go on refactoring beyond what is essential. As such, it’s important to know when you’re done. Some questions to ask yourself are: “Does the code do what it needs to do?”, “Is the code clean and understandable, performant, efficient, etc.?”, “Does it have adequate coverage?”. If these questions can be answered in the affirmative, then you’re most likely done. Many times it’s tempting to continually refactor, as the more one refactors, the more opportunities for further abstraction arise. When confident that your most important objectives have been met, you’re done. Know when to stop.

Concluding Thoughts

It is important to note that the above considerations are by no means exhaustive – and this is intentionally so; as each point is specifically intended to provide just enough guidance to sufficiently ask the right questions, and thus solve problems in a pragmatic manner.

Over the years, I have found that it can be particularly helpful for developers new to a specific domain, or new to TDD/BDD in general, to consider the steps listed above from time to time in a general, summarized form. After doing this regularly, it becomes second nature, ingrained in one’s daily development process.

  1. Understand the Problem
  2. Start off slow
  3. Clean up as you go along
  4. Test the unexpected
  5. Leave the test environment the way you found it
  6. Leave the source better than you found it
  7. Focus on what’s important
  8. Know when to stop

Test First Workflow – A Short Story

As a depiction of the typical approach taken when solving a problem with Test First practices in mind, below is a brief excerpt from a recent conversation with a colleague who inquired as to how one generally goes about solving a problem using Test First methodologies. My explanation was rather simple, and read somewhat like a short story, though I describe it as being more of a step-by-step process from a Pair Programming perspective.

The general workflow conveyed in my description, while brief, covers the essentials:

  1. We have a problem to solve.
  2. We discuss the problem, asking questions as needed; then dig a bit deeper to ensure we understand what it is we are really trying to solve; and, most importantly, why.
  3. We consider potential solutions, identifying those most relevant, evaluating each against the problem; then agree upon one which best meets our needs.
  4. We define a placeholder test/spec where our solution will be exercised. It does nothing yet.
  5. We implement the solution in the simplest manner possible, directly within the test itself; the code is quite ugly, and that is perfectly fine, for now. We run our test; it fails.
  6. We adjust our implementation, continuing to focus solely on solving the problem; all the while making sure not to become too distracted with implementation details at this point.
  7. We run our test again, it passes. We’re happy, we’ve solved the problem.
  8. We move our solution out of the test/spec to the actual method which is to be implemented, which, until now, had yet to exist.
  9. We update our test assertions/expectations against the actual (SUT). We run our test, it passes.
  10. We’re happy, we have a working, tested solution; however, the implementation is substandard; this has been nagging at us all along, so we shift focus to our design; refactoring our code to a more elegant, performant solution; one which we can be proud of.
  11. We run our test again, it fails. That’s fine, perhaps even preferable, as it verifies our test is doing exactly what is expected of it; thus, we can continue to refactor in confidence.
  12. We adjust our code, continuing to make design decisions and implementation changes as needed. We run our test again, it passes.
  13. We refactor some more, continuing to focus freely, and without worry on the soundness of our design and our implementation. We run our test again, it passes.

Rinse and Repeat…

While the above steps are representative of a typical development workflow based on Test First processes, it is worth noting that as one becomes more acclimated to such processes, certain steps often become unnecessary. For example, I generally omit Step #5 insofar as implementing the solution within the test/spec itself is concerned; instead, once I understand the problem to be solved, I determine an appropriate name for the method to be tested, and implement the solution within the SUT itself, as opposed to the test/spec, effectively eliminating the need for Step #8. As such, the steps can be reduced down to only those which experience proves most appropriate.

Concluding Thoughts

Having become such an integral part of my everyday workflow for many years now, I find it rather challenging to approach solving a problem without using Test First methodologies. In fact, attempting to solve a problem of even moderate complexity without approaching it from a testing perspective feels quite awkward.

The simple fact is, without following general Test First practices, we are just writing implementation code; and if we are just writing implementation code, then, in turn, we are likely not thinking through a problem in its entirety. Consequently, it follows that we are also not thinking through our solutions in their entirety, and hence our designs. Because of this, solutions feel uncertain, and ultimately leave us feeling much less confident in the code we deliver.

Conversely, when following sound testing practices we afford our team and ourselves an unrivaled sense of confidence in terms of the specific problems we are solving, why we are solving them, and how we go about solving them; from that, we achieve a concerted understanding of the problem domain, as well as a much clearer, holistic understanding of our designs.

Simplifying Designs with Parameter Objects

Recently, while reading the HTML5 Doctor interview with Ian Hickson, when asked what some of his regrets have been over the years, the one he mentions – rather comically, as his “favorite mistake” – also happened to be the one which stood out to me most; that is, his disappointment with pushState; specifically, the fact that of the three arguments accepted, the second argument is now ignored.

I can empathize with his (Hixie’s) frustration here; not simply because he is one of the most influential figures on the web – particularly for his successful work surrounding CSS, HTML5, and his responsibilities at the WHATWG in general – but also because it is quite understandable how such a seemingly insignificant design shortcoming would bother such an obviously talented individual, especially considering that pushState’s parameters simply could not be changed once the feature was in use prior to its completion. Indeed, the Web Platform poses some very unique and challenging constraints under which one must design.

While the ignored pushState argument is a rather trivial issue, I found it to be of particular interest as I often employ Parameter Objects to avoid similar design issues.

Parameter Objects

The term “Parameter Object” is one I use rather loosely to describe any object that simply serves as a wrapper from which all arguments are provided to a function. In the context of JavaScript, object literals serve quite well in this capacity, even for simpler cases where a function would otherwise require only a few arguments of the same type.

Parameter Objects are quite similar to an “Options Argument” – a pattern commonly implemented by many JavaScript libraries to simplify providing optional arguments to a function. However, I tend to use the term Parameter Object more broadly, to describe a single object parameter from which all arguments are provided to a function, optional arguments included. The two terms are often used interchangeably to describe the same pattern; I specifically reserve the term Options Argument for a single object which provides optional arguments only, and is always defined as the last parameter of a function, following all required arguments.

Benefits

Parameter Objects can prove beneficial in that they afford developers the ability to defer having to make any final design decisions with regard to what particular inputs are accepted by a function; thus, allowing an API to evolve gracefully over time.

For instance, using a Parameter Object, one can circumvent the general approach of implementing functions which define a fixed, specific order of parameters. As a result, should it be determined that any one particular parameter is no longer needed, API designers need not be concerned with requiring calling code to be refactored in order to allow for the removal of the parameter. Likewise, should any additional parameters need to be added, they can simply be defined as additional properties of the Parameter Object, irrespective of any particular ordering of previous parameters defined by the function.

As an example, consider a theoretical rotation function which defines five parameters:
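A sketch of such a function might look like the following (the parameter names are illustrative assumptions):

    // A rotation function defining five ordered parameters.
    function rotate(degrees, x, y, z, duration) {
        // ... perform the rotation ...
    }

    // Callers must remember the exact order of every argument.
    rotate(45, 0, 0, 1, 300);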

Using a Parameter Object, we can refactor the above function to the following:
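Continuing the illustrative sketch:

    // The same function, refactored to accept a single Parameter Object.
    function rotate(params) {
        var degrees  = params.degrees,
            x        = params.x,
            y        = params.y,
            z        = params.z,
            duration = params.duration;
        // ... perform the rotation ...
    }

    rotate({ degrees: 45, x: 0, y: 0, z: 1, duration: 300 });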

Should we wish to remove a parameter from the function, doing so simply requires making the appropriate changes at the API level without changing the actual signature of the function (assuming of course, there are no specific expectations already being made by calling code regarding the argument to be removed). Likewise, should additional parameters need to be added, such as a completion callback, etc., doing so, again, only requires making the appropriate API changes, and would not impact current calling code.

Additionally, taking these potential changes as an example, we can also see that with Parameter Objects, implementation specifics can be delegated to the API itself, rather than to client code, insofar as the provided arguments can be used to determine the actual behavior of the function. In this respect, Parameter Objects can also double as an Options Argument. For example, should the arguments required to perform a 3D rotation be omitted from the Parameter Object, the function can default to a 2D rotation based on the provided arguments.

Convenience

Parameter Objects are rather convenient in that they require less mental overhead than a function with ordered arguments; this is especially true for cases where a function defines numerous parameters, or successive parameters of the same type.

Since code is generally read much more frequently than it is written, it can be easier to understand what is being passed to a function when reading explicit property names of an object, in which each property name maps to a parameter name, and each property value maps to a parameter’s argument. This aids readability where one would otherwise have to decipher the rather ambiguous positional arguments passed to a function. For example:
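Using the illustrative rotate function from above:

    // Positional arguments: which value is which?
    rotate(45, 0, 0, 1, 300);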

With Parameter Objects it becomes more apparent as to which arguments correspond to each specific parameter:
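Again with the illustrative rotate function:

    // Each argument is labeled by the parameter name it maps to.
    rotate({ degrees: 45, x: 0, y: 0, z: 1, duration: 300 });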

As mentioned, if a function accepts multiple arguments of the same type, the likelihood increases that users of the API may accidentally pass them in an incorrect order. This can result in errors that are likely to fail silently, possibly leaving the application (or a portion thereof) in an unpredictable state. With Parameter Objects, such unintentional errors are less likely to occur.

Considerations

While Parameter Objects allow for implementing flexible parameter definitions, the arguments for which are provided by a single object, they are obviously not intended as a replacement for normal function parameters: should a function only require a few arguments, and the function’s parameters be unlikely to change, then using a Parameter Object in place of normal function parameters is not recommended. One could perhaps argue that creating an additional object to store parameter/argument mappings, where normal arguments would suffice, adds unnecessary overhead; however, considering how marginal the additional footprint would be, this point is rather moot, as the benefits outweigh the cost.

A Look at pushState’s Parameters

Consider the parameters defined by pushState:

  1. data: Object
  2. title: String
  3. url: String

The second parameter, title, is the parameter of interest here, as it is no longer used. Thus, calling pushState requires passing either null or an empty String (recommended) as the second argument (i.e. title) before one can pass the third argument, url. For example:
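Using the actual pushState API:

    // The unused title argument must still be supplied – typically as an
    // empty string – before the url can be passed.
    history.pushState({ id: 42 }, '', '/items/42');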

Using a Parameter Object, pushState could have been, theoretically, implemented such that only a single argument was required:

  1. params: Object
    • data: Object
    • title: String
    • url: String

Thus, the ignored title argument could be safely removed from current calling code:
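Under this theoretical signature, a call might simply read:

    // Theoretical Parameter Object form: title is omitted entirely.
    history.pushState({ data: { id: 42 }, url: '/items/42' });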

And simply ignored in previously implemented calls:
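Again, purely hypothetical:

    // Older calls still pass title, which is simply ignored.
    history.pushState({ data: { id: 42 }, title: 'Item 42', url: '/items/42' });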

As can be seen, the difference between the two is quite simple: the specification for pushState accepts three arguments, whereas the theoretical Parameter Object implementation accepts a single object as an argument, which in turn provides the original arguments.

Concluding Thoughts

I certainly do not assume to understand the details surrounding pushState well enough to assert that the use of a Parameter Object would have addressed the issue. Thus, while this article may reference pushState as a basic example to illustrate how the use of a Parameter Object may have proved beneficial, it is really intended to highlight the value of using Parameter Objects from a general design perspective, by describing common use-cases in which they can prove useful. As such, Parameter Objects provide a valuable pattern worth considering when a function requires flexibility.

Basic Dependency Injection with RequireJS

Recently, I was having a conversation about the basic concepts of IoC/DI, and, specifically, how they pertain to modern (single page) JavaScript Web Applications. This discussion was quite interesting, and so I felt inclined to share some thoughts on the subject with a wider audience.

Dependency Injection in JavaScript

Because JavaScript is a dynamic language, when designing JavaScript-based architectures one is typically less inclined to consider the relevance of, or immediate need for, a complete IoC container than when designing under the constraints of a statically typed language. Of course, context is key, and so there are certainly JavaScript applications which can benefit from an IoC container (such as wire.js); it would not be prudent to suggest otherwise. Rather, this is simply to say that for the majority of JavaScript applications, standard AMD loaders provide a sufficient means of managing dependencies, as the rigidness inherent to statically typed languages, which IoC containers help manage, is generally less relevant to dynamic languages.

With that being said, while a robust IoC container may not be necessary for the majority of JavaScript applications, it is quite important to emphasize the benefits of employing basic dependency management and Dependency Injection; as this is an essential design characteristic which is critical to the success and overall maintainability of large scale client-side web applications.

Facilitating Code Reuse

Anyone who has been responsible for developing and maintaining specific core features across multiple applications is likely to understand that the ability to facilitate reuse of JavaScript modules is crucial. This is particularly important in the context of architectures which must support multiple implementations of the same application across different form-factors, where the ability to manage and configure dependencies can prove paramount, allowing for a framework upon which various form-factor specific implementations of an application can be supported.

In addition to this, as one might expect, having the flexibility necessary for configuring dependencies lends itself, quite naturally so, to various unit testing scenarios.

Configuring Dependencies with RequireJS

Though not always immediately apparent, applications leveraging RequireJS are essentially using a basic form of Dependency Injection out of the box – even if not in the most purist sense of the term. The simple matter of mapping module names to module implementations can be considered, in and of itself, a basic form of Dependency Injection; or perhaps one could argue it is more of a Service Locator, as RequireJS does not instantiate dependencies on a client’s behalf. Regardless of the preferred classification, this mechanism of defining dependencies is quite important, as it affords developers the ability to change module implementations as desired without the need to change client code. Of course, such modules must adhere to a specific contract (interface) so as to ensure clients which depend on specific named modules receive the expected API.

Explicit Dependencies

Consider a rather contrived example of a shared Application module which is used across two separate applications; one for Mobile and one for Desktop; with the Application module having a dependency on an AppHelper module:
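A sketch of what this might look like (the method names here are assumptions):

    // app.js – shared by both the Mobile and Desktop applications
    define(['AppHelper'], function (helper) {
        return {
            start: function () {
                // Delegates to whichever AppHelper implementation
                // the application's configuration has mapped in.
                helper.init();
            }
        };
    });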

Both the Mobile and Desktop applications can easily map the AppHelper module to a context specific implementation via their respective main.js configurations:
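One way to express such a mapping is via RequireJS’s map configuration; a sketch, with module paths assumed:

    // mobile/main.js
    require.config({
        map: { '*': { 'AppHelper': 'MobileHelper' } }
    });

    // desktop/main.js
    require.config({
        map: { '*': { 'AppHelper': 'DesktopHelper' } }
    });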

Based on the above, it is rather evident that the AppHelper module is mapped to the appropriate application specific implementation; MobileHelper for mobile, and DesktopHelper for desktop. Additional context specific APIs can just as easily be defined, and thus provided as dependencies to other modules as needed using this very simple pattern.

Implicit Dependencies

Dependencies need not always be explicit; they can also be implicitly mapped based on the path at which each application’s main.js configuration resides, or based on the configured baseUrl path.

For instance, given the above example, we can map a Templates Module, and implicitly inject the path to each context specific template based on the application’s default path, or baseUrl path:
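A sketch of how the two configurations might differ only in their baseUrl (the shared module paths are assumptions):

    // mobile/main.js
    require.config({
        baseUrl: 'mobile',
        paths: {
            'Templates':      '../shared/templates',
            'TemplateSource': '../shared/template-source'
        }
    });

    // desktop/main.js
    require.config({
        baseUrl: 'desktop',
        paths: {
            'Templates':      '../shared/templates',
            'TemplateSource': '../shared/template-source'
        }
    });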

As can be seen, each application’s main.js defines a Templates module and a TemplateSource module, respectively, with each being shared amongst both the Mobile and Desktop specific applications. The Templates and TemplateSource modules are defined as follows:
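A sketch of the two modules, assuming the RequireJS text plugin and Underscore templates:

    // TemplateSource.js – template paths resolve against each app's baseUrl,
    // so 'app/templates/some-view.tpl' loads the context-specific template.
    define(['text!app/templates/some-view.tpl'], function (someView) {
        return {
            'some-view': someView
        };
    });

    // Templates.js – compiles and provides templates to clients.
    define(['underscore', 'TemplateSource'], function (_, source) {
        return {
            get: function (name) {
                return _.template(source[name]);
            }
        };
    });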

While both the Mobile and Desktop applications may share the same Templates and TemplateSource modules, the specific implementation of the templates loaded from TemplateSource is determined via each application’s base path; thus, the path to app/templates/some-view.tpl automatically resolves to the context specific template; i.e.: mobile/app/templates/some-view.tpl for Mobile, and desktop/app/templates/some-view.tpl for Desktop.

Concluding Thoughts

While the above examples are rather basic, they do serve well to demonstrate just how easily one can design for module reuse across different applications with RequireJS, which itself allows for much more robust configurations of modules; such as loading context specific modules at runtime, augmenting modules for differing contexts with mixins, providing third-party libraries based on a particular form-factor (e.g. jQuery for Desktop, Zepto for Mobile, etc.), and more.

You can clone the above example here.

Decoupling Backbone Modules

One of the principal design philosophies I have advocated over the years, especially through various articles on this site, has been the importance of decoupling. And while I could go into significant detail to elaborate on the importance of decoupling, suffice it to say that all designs – from simple APIs to complex applications – can benefit considerably from a decoupled design; namely, with respect to testability, maintainability and reuse.

Decoupling in Backbone

Many of the examples which can be found around the web on Backbone are intentionally simple, in that they focus on higher level concepts without delving into specific implementation or design details. Of course, this makes sense in the context of basic examples, and is certainly the right approach to take when explaining or learning something new. Once you get into real-world applications, though, one of the first things you’ll likely want to improve on is how modules communicate with each other; specifically, how modules can communicate without directly referencing one another.

As I have mentioned previously, Backbone is an extremely flexible framework, so there are many approaches one could take to facilitate the decoupling of modules in Backbone; the most common of which, and my preferred approach, is decoupling by way of events.

Basic Decoupling with Events

The simplest way to facilitate communication between discrete modules in Backbone is to have each module reference a shared event broker (a pub/sub implementation). Modules can register themselves with the broker to listen for events of interest, and can also communicate with other modules via events as needed. Implementing such an API in Backbone is amazingly simple; in fact, so much so that the documentation provides an example in the following one-liner:
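That one-liner, from the Backbone documentation:

    var dispatcher = _.clone(Backbone.Events);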

Essentially, the dispatcher simply clones (or alternately, extends) the Backbone.Events object. Different modules can reference the same dispatcher to publish and subscribe to events of interest. For example, consider the following:
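A sketch along the lines described below (the module and event names match the discussion that follows):

    // The Users collection publishes through the shared dispatcher;
    // it holds no reference to any view.
    var Users = Backbone.Collection.extend({
        initialize: function () {
            this.on('add', function (user) {
                dispatcher.trigger('users:add', user);
            });
        }
    });

    // The UserEditor view subscribes through the same dispatcher;
    // it holds no reference to the Users collection.
    var UserEditor = Backbone.View.extend({
        initialize: function () {
            dispatcher.on('users:add', this.edit, this);
        },
        edit: function (user) {
            // ... render an editor for the newly added user ...
        }
    });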

In the above example, the Users Collection is completely decoupled from the UserEditor View, and vice-versa. Moreover, any module can subscribe to the 'users:add' event without having any knowledge of the module from which the event was published. Such a design is extremely flexible and can be leveraged to support any number of events and use-cases. The above example is rather simple; however, it demonstrates just how easy it is to decouple modules in Backbone with a shared EventBroker.

Namespacing Events

As can be seen in the previous example, the add event is prefixed with a users string followed by a colon. This is a common pattern used to namespace an event in order to ensure events with the same name which are used in different contexts do not conflict with one another. As a best practice, even if an application initially only has a few events, the events should be namespaced accordingly. Doing so will help to ensure that as an application grows in scope, adding additional events will not result in unintended behaviors.

A General Purpose EventBroker API

To help facilitate the decoupling of modules via namespaced events, I implemented a general purpose EventBroker which builds on the default implementation of the Backbone Events API, adding support for creating namespace-specific EventBrokers and for registering multiple events of interest for a given context.

Basic Usage

The EventBroker can be used directly to publish and subscribe to events of interest:
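A sketch of direct usage; since the EventBroker builds on Backbone.Events, on and trigger behave as expected:

    var logger = {
        added: function (user) { console.log('added: ' + user.name); }
    };

    // Subscribe directly against the global EventBroker...
    Backbone.EventBroker.on('users:add', logger.added, logger);

    // ...and publish from any other module, with no direct coupling.
    Backbone.EventBroker.trigger('users:add', { name: 'Ada' });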

Creating namespaced EventBrokers

The EventBroker API can be used to create and retrieve any number of specific namespaced EventBrokers. A namespaced EventBroker ensures that all events are published and subscribed against a specific namespace.

Namespaced EventBrokers are retrieved via Backbone.EventBroker.get(namespace). If an EventBroker has not been created for the given namespace, it will be created and returned. All subsequent retrievals will return the same EventBroker instance for the specified namespace; i.e. only one unique EventBroker is created per namespace.

Since namespaced EventBrokers ensure events are only piped through the EventBroker of the given namespace, it is not necessary to prefix event names with the specific namespace to which they belong. While this can simplify implementation code, you can still prefix event names to aid readability if desired.
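For example:

    // Retrieve (or lazily create) the EventBroker for the 'users' namespace.
    var usersBroker = Backbone.EventBroker.get('users');

    // No 'users:' prefix is necessary; the namespace already scopes the event.
    usersBroker.on('add', function (user) { /* ... */ });
    usersBroker.trigger('add', { name: 'Ada' });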

Registering Interests

Modules can register events of interest with an EventBroker via the default on method or the register method. The register method allows for registering multiple event/callback mappings for a given context in a manner similar to that of the events hash in a Backbone.View.
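A sketch of the register method with explicit mappings (the event names are assumptions):

    var UserView = Backbone.View.extend({
        initialize: function () {
            // Register multiple event/callback mappings for this context,
            // much like the events hash in a Backbone.View.
            Backbone.EventBroker.register({
                'users:add':    'added',
                'users:remove': 'removed'
            }, this);
        },
        added:   function (user) { /* ... */ },
        removed: function (user) { /* ... */ }
    });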

Alternately, modules can simply define an “interests” property containing particular event/callback mappings of interest, and register themselves with an EventBroker directly.
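A sketch of the interests-based form:

    var UserView = Backbone.View.extend({
        // Event/callback mappings of interest to this module.
        interests: {
            'users:add':    'added',
            'users:remove': 'removed'
        },
        initialize: function () {
            // The broker reads the interests property from the context itself.
            Backbone.EventBroker.register(this);
        },
        added:   function (user) { /* ... */ },
        removed: function (user) { /* ... */ }
    });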

For additional examples, see the backbone-eventbroker project on github.

Jasmine matchers for Sinon.JS

Lately I have been finding myself writing custom Jasmine matchers which wrap the Sinon.JS API as a convenience. After repeating this process quite a few times I took a step back to see if there was a similar solution already available.

After a brief search, I quickly came across jasmine-sinon which, to my surprise, provides a very similar matcher API to the one I had begun implementing. This library really is quite nice, as it essentially provides matchers for all common Sinon.JS spies, stubs, and mocking use-cases. I am sure jasmine-sinon has saved many developers, as it has me, a lot of time otherwise spent writing their own custom matchers.

And so, if you are using both Jasmine and Sinon.JS, and find yourself writing convenience matchers to simplify Sinon.JS integration, jasmine-sinon is an excellent addition which complements both, and is certainly worth considering.
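A brief sketch of the kind of spec this enables, with matcher names based on the jasmine-sinon README and Backbone/Underscore assumed available:

    it('notifies subscribers exactly once', function () {
        var dispatcher = _.clone(Backbone.Events),
            callback = sinon.spy();

        dispatcher.on('users:add', callback);
        dispatcher.trigger('users:add', { name: 'Ada' });

        // jasmine-sinon matchers read naturally against Sinon.JS spies.
        expect(callback).toHaveBeenCalledOnce();
        expect(callback).toHaveBeenCalledWith({ name: 'Ada' });
    });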