BDD/TDD Mental Models

Recently, I shared a simple 8-step procedure with my team which outlines the general questions I tend to ask myself when writing tests, even if only subconsciously.

While quite simple in form, and somewhat obvious in process, this procedure helps develop a useful mental model from which practical steps can be applied to common testing scenarios. This, in turn, provides clarity around general design considerations while also helping to guide specific implementation decisions.

First things First

Arguably, the single most important aspect of testing (and software development in general, for that matter) is acquiring a solid understanding of the problem domain. Without at least a general understanding of the problem one intends to solve, important details are likely to be omitted which would otherwise have been considered, and thus covered, by our tests. Spend time understanding exactly what problem your code is intended to solve, then begin thinking about what to test for. Understand the Problem.

Small Steps

Once confident that a good understanding of the problem has been reached, we can get started on writing our initial tests. Consider this a first pass, if you will, wherein we are only concerned with getting our tests to pass in the simplest (typically, least elegant) way possible. The initial implementation code can be as raw (and ugly) as needed, as this can (and will) be addressed after our initial tests are passing. If we are writing tests against code that does not yet exist, we first write the implementation code (the code being tested) directly within the test case itself. Once the test passes, we can then refactor that code out of the test and into the SUT (the code we are testing). If the code already exists (we are writing new tests against existing code), we still need to understand and consider the implementation of the code itself, and not simply write tests against it. Reviewing and critiquing existing code is an excellent way of gaining a quick understanding of a given system. Seize initial opportunities. Start off slow.
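
To illustrate this first pass, here is a minimal sketch, assuming a Jasmine-style runner and a hypothetical formatPrice helper which does not yet exist outside the test:

    describe('formatPrice', function() {
        it('formats a number as a currency string', function() {
            // First pass: the implementation lives directly inside the test
            // case, written inline simply to get a passing test.
            var formatPrice = function(amount) {
                return '$' + amount.toFixed(2);
            };
            expect(formatPrice(9.5)).toEqual('$9.50');
        });
    });

Once the test passes, formatPrice is refactored out of the spec and into its own module (the SUT), and the spec is updated to reference the extracted implementation.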

Clean Pass

Once we’ve written our initial tests and they are passing, we can safely go back into our new or existing implementation code and refactor it to our heart’s content. After all, one of the most rewarding aspects afforded by unit testing is the ability to refactor our code freely, with little worry or concern that we will unintentionally break something without knowing; if something does break, our tests will inform us. Tomorrow never comes in Software Development. Clean up as you go along.

Negative Tests

The most obvious tests to write are those which verify the things we expect the code to do. But what if the code is used incorrectly? What if a required argument is not provided, or is of an invalid type? Does our code throw an exception? Does it simply return undefined? What should it do? These are all questions we should be asking ourselves once our expected test cases are passing. After that, we need to start thinking about ways to have our code respond appropriately to negative cases – we don’t want the entire app to end up in an unpredictable state just because an uncaught exception was thrown due to some simple string-formatting argument not being passed. Test the exceptional; test the unexpected.
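
Continuing the hypothetical formatPrice example from above (a sketch, assuming the helper has since been extracted into its own module and is available to the spec), negative cases might be specified as follows:

    describe('formatPrice - negative cases', function() {
        it('throws when no amount is provided', function() {
            expect(function() {
                formatPrice();
            }).toThrow();
        });

        it('throws when the amount is not a number', function() {
            expect(function() {
                formatPrice('nine fifty');
            }).toThrow();
        });
    });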

Stateless Tests

One of the most important considerations to make, both during and especially after all of the above points have been considered, is the statelessness of the system while under test. Always ask yourself, “Am I resetting the state of all my test’s dependencies back to an expected state?”. This is perhaps one of the most commonly overlooked, yet crucially important, considerations to make. A good example illustrating why this is important can be found in the common scenario of a test that invokes a method which triggers an event. If a previously executed test which handles that event has not been properly torn down (e.g. in afterEach), the handling object will still exist, and thus still handle the event. This typically results in a change in state, more often than not causing an unexpected error to be thrown.

Always use set-ups (e.g. beforeEach) to configure your test environment, fixtures, and any dependencies your tests require to operate properly. If you are setting values on anything outside the context of your tests, always use mocks, stubs and tear-down methods (e.g. afterEach) to reset them back to an expected state. Remember, while your tests are not part of your application’s source, they are certainly part of your project’s source; this, in effect, requires them to be viewed as first-class citizens, subject to the same quality of design and implementation as project source. Tests will need to evolve and be continually maintained. Treat the test environment with respect; ensure you return it in a predictable state. Leave it the way you found it.
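
As a simple illustration (a sketch, assuming a Jasmine-style runner and a hypothetical Backbone view, OrderView, which re-renders whenever its model changes):

    describe('OrderView', function() {
        var model, view;

        beforeEach(function() {
            // Set-up: configure fixtures and dependencies to a known state.
            model = new Backbone.Model({ status: 'pending' });
            view  = new OrderView({ model: model });
        });

        afterEach(function() {
            // Tear-down: remove the view and unbind its handlers so that
            // later tests which trigger model events do not invoke stale
            // handlers from this spec.
            view.remove();
            model.off();
        });

        it('re-renders when the model changes', function() {
            model.set('status', 'shipped');
            expect(view.$el.text()).toContain('shipped');
        });
    });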

Continued Improvement

While the above description of Stateless Tests clearly states that the test environment should remain stateless, and thus be left “as we found it” prior to our tests, our actual implementation code should always be improved when improvements can be made; hence, the Broken Windows Theory is one we should all strive to live by. This especially holds true in the context of writing tests/specs against existing code. If the code is not up to par in any way – fix it. Ask yourself: “How easy was it for me to understand what this code does?”, “Is it documented in a meaningful way?”, “Would it be easier to understand if I added some quick examples?” (often, adding examples is simply a matter of pointing to, or annotating the source with, the test cases themselves). We can have the greatest, most elegant framework and foundation on which to build the greatest apps in the world, but if we allow our code quality to degrade, our apps will gradually decay into chaos. Set a higher standard, and live by it. Leave the source better than you found it.

Meaningful Tests

It is quite easy to get caught up in the perceived quality of a system’s tests simply by measuring it against general code coverage metrics. This is a subject I have spoken to at length many times. While code coverage certainly has its purpose, and can be helpful, it is often not very reflective of reality. Judge your tests not by the number of test cases or units tested, but rather by the meaningfulness of each specific test case itself. Ask yourself “What is the overall value of this test?”, “Am I testing the obvious?” (such as a simple getter/setter). Focus on what’s important; test what’s of most value first. This affords the satisfaction of knowing that, if time constraints or shifting priorities require moving focus elsewhere, the most important test cases are already covered. Focus on what’s important.

Know when you are done

It is quite possible to go on refactoring beyond what is essential, so it’s important to know when you’re done. Some questions to ask yourself: “Does the code do what it needs to do?”, “Is the code clean, understandable, performant, and efficient?”, “Does it have adequate coverage?” If these questions can be answered in the affirmative, then you’re most likely done. It is often tempting to continually refactor, as the more one refactors, the more opportunities for further abstraction arise. When confident that your most important objectives have been met, you’re done. Know when to stop.

Concluding Thoughts

It is important to note that the above considerations are by no means exhaustive – and this is intentionally so, as each point is intended to provide just enough guidance to ask the right questions, and thus solve problems in a pragmatic manner.

Over the years, I have found it particularly helpful for developers new to a specific domain, or new to TDD/BDD in general, to revisit the steps listed above from time to time in a general, summarized form. Done regularly, this becomes second nature; ingrained in one’s daily development process.

  1. Understand the Problem
  2. Start off slow
  3. Clean up as you go along
  4. Test the unexpected
  5. Leave the test environment the way you found it
  6. Leave the source better than you found it
  7. Focus on what’s important
  8. Know when to stop

GitHub Quick Tip

Recently, I was looking for a simple way to view an HTML page which is part of a GitHub project’s source. In particular, I wanted to provide links on the Backbone.EventBroker project’s main page so users could run the specs and view the example.

Fortunately, as is the case with most things on GitHub, this is quite easy to accomplish, requiring nothing more than appending the URL of the page in the project’s source to the http://htmlpreview.github.com/? query string:
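
(The user, repository and file names below are placeholders.)

    http://htmlpreview.github.com/?https://github.com/<user>/<repo>/blob/master/index.html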

Simple and useful indeed!

Clearing Web Notifications Permissions in Chrome

When implementing features which leverage HTML5 Web Notifications, specifically in Chrome, it can be rather convenient to have the ability to clear notification permissions from the host for which the feature is being implemented.

As would be expected, Chrome allows for easily managing any setting; however, one needs to navigate through quite a few of Chrome’s settings in order to locate Web Notification permissions, the path to which is:
Settings > Show advanced settings… > Content Settings > Notifications > Manage Settings…

Navigation of settings can be simplified by using Chrome’s Search box, which allows for quickly navigating to any specific setting.

While Chrome’s settings Search feature is quite useful, it still requires more interactions than desired. Fortunately, there is an even simpler approach for quickly accessing a specific setting, in this case, Notifications.

Simplifying Clearing Web Notification Permissions

Web Notification Settings can be directly accessed for all hosts via the following path:
chrome://settings/contentExceptions#notifications

As with any URL, the above path can be bookmarked, allowing for easy and convenient management of Notifications.

Bookmarking Chrome Settings

While the focus here may be on the topic of managing Web Notification settings, it is important to note that any Chrome setting can be bookmarked as a convenience shortcut for quickly navigating to any given setting. For instance, Chrome’s cache setting can be bookmarked at:
chrome://settings/clearBrowserData

Then, commonly used settings can be accessed simply from their respective bookmarks.

Simplifying Testing of Sign-up Processes

When testing the various steps of account sign-up, creation and activation processes, one must be mindful of designs which call for a unique email address to be confirmed for each new account. While this is a generally accepted approach, it does pose a slight challenge when end-to-end testing requires email activation from the address with which the account was created.

Creating multiple Accounts under the same Email Address

Recently, while implementing an account registration feature, I was looking for a simple means of creating and testing multiple user accounts against a single email address, so as to allow for continuous testing without the need to use different email addresses for each test.

Fortunately, a few clever solutions have been available all along in Gmail which can be leveraged for this very problem (in addition to others).

The Plus (+) Trick

Perhaps the simplest means of generating unique email addresses for account creation, all of which will be delivered to the same inbox, is to leverage the Plus (+) Trick.

For instance, suppose you are testing account creation and activation and want to confirm the sign-up process at someuser@gmail.com. Typically, one would want to test this feature multiple times – either by means of automated testing, manual integration testing, or the like. In order to test the account creation feature continuously using the same someuser@gmail.com account, one can simply create new accounts by appending a plus sign (+) to the Gmail username, followed by a unique string of characters:
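
    someuser+test1@gmail.com
    someuser+test2@gmail.com
    someuser+signup@gmail.com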

In this example, emails sent to any of the addresses above would all go to someuser@gmail.com. With this in mind, it is quite easy to continuously test account creation processes without the need to use multiple (real) email addresses.

The Dot (.) Trick

As with the Plus (+) Trick, the Dot (.) Trick can similarly be employed to test account creation under the same email address.

For instance, to test an account creation and activation process continuously using the same someuser@gmail.com account, one can simply create new accounts by inserting a dot (.) within the Gmail username:
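
    some.user@gmail.com
    someu.ser@gmail.com
    s.o.m.e.u.s.e.r@gmail.com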

In this example, emails sent to any of these addresses above would all be sent to someuser@gmail.com.

Concluding Thoughts

As can be seen in the above examples, both the Plus (+) Trick and Dot (.) Trick, respectively, can be used to create and test account sign-up and activation processes against a single email address, greatly simplifying testing.

While the Dot (.) Trick is quite useful, it is obviously limited to a finite number of pseudo-unique email addresses, and is thus better suited to smaller testing scenarios.

For more extensive testing scenarios, the Plus (+) Trick is much better suited, as it allows for seemingly infinite permutations of the same email address, and is ideal for generating addresses from which account creation is to be tested.

Function Modules in RequireJS

Having leveraged RequireJS as part of my preferred client-side web stack for some time now, I find it rather surprising how often I recall various features from the API docs which I have yet to use, and how they may apply to a specific solution I am implementing.

One such feature is the ability to define a module as a function. While at first this may seem a rather basic feature, and indeed it is, there are quite a few practical cases in which returning a function as a module definition can prove useful.

Function Modules

As one may expect, returning a function as a module in RequireJS is rather straightforward. For example, a simple random function can be defined as a module as follows:
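
(A minimal sketch; the module name random.js and its min/max parameters are illustrative.)

    // random.js - a module defined as a function which returns a random
    // integer between min and max (defaulting to 0 and 100, respectively).
    define(function() {
        return function(min, max) {
            min = (typeof min === 'number') ? min : 0;
            max = (typeof max === 'number') ? max : 100;
            return Math.floor(Math.random() * (max - min + 1)) + min;
        };
    });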

Then, the function module can simply be invoked as follows:
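
(Again illustrative, assuming the random.js module defined above.)

    require(['random'], function(random) {
        console.log(random());       // a random integer between 0 and 100
        console.log(random(1, 10));  // a random integer between 1 and 10
    });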

Function Modules as Factories

Perhaps a more practical example of implementing a function module is in the context of a Factory Method.

For instance, a function module can be used as a convenient means of creating a specific type of object on a client’s behalf based on certain conditions.

Take the (intentionally simple) example of a function module which, given a specific role type, returns a corresponding view implementation (in this example, a Backbone View) for an Editor feature:
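
(A sketch; the role names, view modules and paths are hypothetical.)

    // editors.js - a function module acting as a factory method which
    // returns the appropriate Editor view instance for a given role type.
    define([
        'views/editor/AdminEditorView',
        'views/editor/MemberEditorView',
        'views/editor/ReadOnlyEditorView'
    ], function(AdminEditorView, MemberEditorView, ReadOnlyEditorView) {

        return function(role) {
            switch (role) {
                case 'admin':
                    return new AdminEditorView();
                case 'member':
                    return new MemberEditorView();
                default:
                    return new ReadOnlyEditorView();
            }
        };
    });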

Client code can then simply invoke the factory in order to retrieve the appropriate Editor view implementation based on the specified role type:
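
(Also a sketch; the currentUser object and its role property are assumed.)

    require(['editors'], function(editorFactory) {
        // The client simply asks the factory for the appropriate Editor view;
        // it need not know which concrete view implementation is returned.
        var editor = editorFactory(currentUser.role);
        editor.render();
    });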

Defining modules as functions which serve as factory methods can help simplify client code implementations, as the responsibility of determining the type of object to create, configure, etc., can be delegated to a dedicated object; thus allowing for simpler designs which better facilitate code reuse, testing, and maintenance.

As a general rule of thumb, I typically reserve implementing modules as functions for cases in which a package-level method would be appropriate, or for factory implementations. However, there are other scenarios in which function modules may apply, and so they are certainly worth noting.

You can fork the above example here.