
Simplified Partial Application with ES6

Wednesday, June 1st, 2016

When implementing Partial Application in ES6, implementations naturally become much easier to reason about, as default parameters, rest parameters, and arrow functions can be leveraged to provide a far more comprehensive implementation.

While on the surface this may appear insignificant, when compared to relying almost exclusively on the arguments object and Array.prototype to provide the same functionality in ES5, the benefits become rather apparent.

For instance, consider a simple multiply function which, depending on the arity of the invocation, either computes basic multiplication against the provided parameters, or returns a partial application. That is to say, if invoked as a unary function (a single argument), the function returns a partial application (a new function which multiplies by the given argument). If invoked as a variadic function (a variable number of arguments), the function returns the product of the arguments.

In ES5, we could implement such a function as follows:

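A sketch of such an implementation (the exact details may vary, but the mechanics are as follows):

```js
function multiply(multiplier) {
    // Default the multiplier to 1 should it be omitted or NaN.
    multiplier = isNaN(multiplier) ? 1 : multiplier;

    // Convert the remaining arguments to an Array via Array.prototype.slice.
    var nums = Array.prototype.slice.call(arguments, 1);

    // Unary invocation: return a partial application.
    if (nums.length === 0) {
        return function () {
            return multiply.apply(null, [multiplier].concat(
                Array.prototype.slice.call(arguments)
            ));
        };
    }

    // Variadic invocation: return the product of the arguments.
    return nums.reduce(function (product, n) {
        return product * n;
    }, multiplier);
}

multiply(4)(5);    // 20
multiply(2, 4, 6); // 48
```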

Given the above example, in order to inspect and iterate over the provided arguments, we need to rely on Array.prototype; specifically, we need to invoke Array.prototype.slice via Function.prototype.call in order to convert the arguments object to an Array. Additionally, we also have to account for a default value of arguments[0] should it be omitted or NaN.

Not only does this require a superfluous amount of code, but it also results in a considerably more verbose implementation that is, as a result, more difficult to reason about; especially for developers who may not be familiar with the specific mechanisms employed.

ES6 to the rescue …

With the introduction of default parameters, rest parameters, and Arrow Functions (fat arrows) in ES6, the implementation of the above example can be significantly reduced, and as a result, becomes considerably easier to understand, as we can simply rewrite the multiply function as:

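A sketch of the equivalent ES6 implementation:

```js
const multiply = (multiplier = 1, ...nums) =>
    nums.length === 0
        ? (...rest) => multiply(multiplier, ...rest)
        : nums.reduce((product, n) => product * n, multiplier);

multiply(4)(5);    // 20
multiply(2, 4, 6); // 48
```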

As can be seen, implementing the multiply function in ES6 not only reduces the SLOC to half that of the previous ES5 implementation but, more importantly, by using rest parameters, it allows us to determine and work with the function’s arity in a much more natural way. Moreover, both iterating over the provided arguments and returning the partial application become considerably more concise simply by using arrow functions, and the need to account for undefined arguments becomes moot thanks to default parameters.

In addition, variadic invocations of such functions can also be simplified considerably using the ES6 spread operator. For example, in order to pass an Array of arguments to a function in ES5, one would need to invoke Function.prototype.apply against the function, like so:
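```js
var args = [2, 4, 6];

// ES5: given the multiply function above, apply the Array of arguments.
multiply.apply(null, args); // 48
```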

With ES6 spread operators, however, we can simply invoke the function directly with the given array preceded by the spread operator:
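```js
const args = [2, 4, 6];

// ES6: spread the Array directly into the invocation.
multiply(...args); // 48
```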

Simple!

Hopefully this article has shed some light on a few of the features available in ES6 which allow for implementations that not only read much more naturally, but can also be written with considerably less mental overhead.

IIFE in ES6

Wednesday, April 6th, 2016

TL;DR: In ES6, an IIFE is implemented as follows:
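```js
(() => {
    // ...
})();
```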


Unlike ES5, which is syntactically less opinionated, in ES6 parenthetical order matters when writing an IIFE with an arrow function.

For instance, in ES5 an IIFE could be written either as:
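```js
(function () {
    // ...
}());
```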

or
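```js
(function () {
    // ...
})();
```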

As can be seen in the above examples, in ES5 one could either wrap and invoke a function expression in parentheses, or wrap the function expression in parentheses and invoke the function outside of the parentheses.

However, in ES6 the former throws an exception; thus, one cannot use:
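```js
// SyntaxError in ES6:
(() => {
    // ...
}());
```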

But rather, the invocation must be made outside of the parentheses as follows:
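```js
(() => {
    // ...
})();
```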

As an aside for those who are curious, these syntax requirements are specific to ES6 and are not a by-product of any particular transpiler (e.g. Babel, Traceur, etc.).

Polymer Behaviors in ES6

Friday, March 25th, 2016

Inheritance and mixins, being typical aspects of Object Oriented Design, provide the means by which modular reuse patterns can be facilitated within a given system. Similarly, Polymer facilitates code reuse patterns by employing the notion of shared behavior modules. Let’s take a quick look at how to leverage them in Polymer when using ES6 classes.

Implementing Behaviors

Implementing a Behavior is quite simple: just define an object within a block expression or an IIFE, and expose it via a namespace, or the module loader of your choice:

some-behavior.js
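A sketch (the app.behaviors namespace and member names here are simply illustrative):

```js
(() => {
    'use strict';

    // Expose the behavior via a namespace (or the module loader of choice).
    window.app = window.app || {};
    app.behaviors = app.behaviors || {};

    app.behaviors.SomeBehavior = {
        properties: {
            greeting: {
                type: String,
                value: 'Hello'
            }
        },
        greet(name) {
            return `${this.greeting}, ${name}!`;
        }
    };
})();
```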

Then, include the behavior in a corresponding .html document of the same name so as to allow the behavior to be imported by subsequent elements:

some-behavior.html
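```html
<!-- some-behavior.html: wraps the script so elements can import it -->
<script src="some-behavior.js"></script>
```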

Extending Behaviors

After having defined and exposed a given Behavior, the Behavior can then be extended from element classes by defining a behaviors getter / setter as follows:
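A sketch, using the Polymer 1.x ES6 class registration pattern (element and behavior names are illustrative):

```js
class SomeElement {
    beforeRegister() {
        this.is = 'some-element';
    }
    // Extend the behavior by returning it from a behaviors getter.
    get behaviors() {
        return [app.behaviors.SomeBehavior];
    }
    attached() {
        console.log(this.greet('World')); // "Hello, World!"
    }
}
Polymer(SomeElement);
```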

Once the behavior has been extended, simply import the behavior in the element’s template (or element bundle, etc.) and it is available to the element’s class:
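```html
<link rel="import" href="some-behavior.html">
```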


Implementing Multiple Behaviors

Similar to individual behaviors, multiple behaviors can also be defined and extended:

first-behavior.js
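A sketch (names are illustrative):

```js
window.app = window.app || {};
app.behaviors = app.behaviors || {};

app.behaviors.FirstBehavior = {
    first() {
        return 'first';
    }
};
```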

second-behavior.js
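```js
app.behaviors.SecondBehavior = {
    second() {
        return 'second';
    }
};
```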

In certain cases, I have found it helpful to group related behaviors within a new behavior (an Array) which bundles the individual behaviors together:
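For example (a sketch, with ThirdBehavior and FourthBehavior as illustrative names):

```js
app.behaviors.ThirdBehavior = {
    third() {
        return 'third';
    }
};

// FourthBehavior bundles previously defined behaviors as an Array.
app.behaviors.FourthBehavior = [
    app.behaviors.FirstBehavior,
    app.behaviors.SecondBehavior,
    app.behaviors.ThirdBehavior
];
```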

Note: As can be seen in the FourthBehavior, a behavior can also be implemented as an Array of previously defined behaviors.

Extending Multiple Behaviors

As with extending individual behaviors, multiple behaviors can also be extended using a behaviors getter / setter. However, when extending multiple behaviors in ES6, there are syntactic differences which one must take note of. Specifically, the behaviors getter must be implemented as follows:
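A sketch (again, element and behavior names are illustrative):

```js
class ManyElement {
    beforeRegister() {
        this.is = 'many-element';
    }
    // With multiple behaviors, the getter must return a single Array
    // containing each behavior to be mixed in.
    get behaviors() {
        return [
            app.behaviors.FirstBehavior,
            app.behaviors.SecondBehavior,
            app.behaviors.ThirdBehavior
        ];
    }
}
Polymer(ManyElement);
```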


And that’s basically all there is to it. Hopefully this article helped outline how Polymer Behaviors can easily be leveraged when implementing elements as ES6 classes. Enjoy.

BDD/TDD Mental Models

Thursday, February 13th, 2014

Recently, I shared a simple 8-step procedure with my team which outlines some of the general questions I tend to ask myself when writing tests, even if, perhaps, only subconsciously so.

While quite simple in form, and somewhat obvious in process, this procedure helps to develop a useful mental model from which practical steps can be applied to common testing scenarios. This, in turn, helps to provide clarity around general design considerations, while also helping to guide specific implementation decisions.

First things First

Arguably, the single most important aspect of testing (and software development in general, for that matter) is to acquire a solid understanding of the problem domain; for, without having (at minimum) a general understanding of the problem one is intending to solve, important details are likely to be omitted which would have otherwise been considered, and thus, covered by our tests. Spend time understanding exactly what problem your code is intended to solve, then begin thinking about what to test for. Understand the Problem.

Small Steps

Once confident that a good understanding of the problem has been reached, we can then get started on writing our initial tests. Consider this a first pass, if you will, where we are only concerned with getting our tests to pass in the simplest (typically, least elegant) way possible. The initial implementation code can be as raw (and ugly) as needed, as this can (and will) be addressed after our initial tests are passing. If we are writing tests against code that does not yet exist, then we first write the implementation code (the code being tested) directly within the test case itself. Once the test passes, we can then refactor the code out from our test and into the SUT (the code we are testing). If the code already exists (we are writing new tests against existing code), we still need to understand and consider the implementation of the code itself, and not simply write tests against it. Reviewing and critiquing existing code is an excellent way of gaining a quick understanding of a given system. Seize initial opportunities. Start off slow.

Clean Pass

Once we’ve written our initial tests and they are passing, we can then safely go back into our new or existing implementation code and refactor it to our heart’s content. After all, one of the most rewarding aspects afforded by unit testing is the ability to refactor our code freely, with little worry or concern that we will unintentionally break something without knowing; if something breaks, our tests will inform us. Tomorrow never comes in Software Development. Clean up as you go along.

Negative Tests

The most obvious tests to write are those which assert the things we expect the code to do. But what if the code is used incorrectly? What if an argument is required and it is not provided, or it is of an invalid type? Does our code throw an exception? Does it simply return undefined? What should it do? These are all questions we should be asking ourselves once our expected test cases are passing. After that, we need to start thinking about ways to have our code respond appropriately to negative cases – we don’t want the entire app to end up in an unpredictable state just because an uncaught exception was thrown due to some simple string formatting argument not being passed, etc. Test the exceptional; test the unexpected.
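For example, a hypothetical sketch (assuming a Jasmine-style API and a simple format helper which requires a string template):

```js
describe('format', function () {
    it('formats the template when given valid arguments', function () {
        expect(format('Hello, {0}!', 'World')).toBe('Hello, World!');
    });

    // Negative test: assert the expected failure mode explicitly,
    // rather than letting an uncaught exception surface elsewhere.
    it('throws when the template argument is omitted', function () {
        expect(function () {
            format();
        }).toThrow();
    });
});
```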

Stateless Tests

One of the most important considerations to make, both during and especially after all of the above points have been considered, is the statelessness of the system while being tested. Always ask yourself: “Am I resetting the state of all my test’s dependencies back to an expected state?”. This is perhaps one of the most commonly overlooked, yet crucially important, considerations to make. A good example illustrating why this is important can be found in the common scenario of a test that invokes a method which triggers an event. If any previously executed tests which handle the event have not been properly torn down (e.g. in afterEach), the object will still exist, and thus handle the event. This typically results in a change in state, more often than not causing an unexpected error to be thrown. Always use set-ups (e.g. beforeEach) to configure your test environment, fixtures, and any dependencies your tests require to operate properly. If you are setting values on anything outside the context of your tests, always use mocks, stubs, and tear-down methods (e.g. afterEach) to reset them back to an expected state. Remember, while your tests are not part of your application’s source, they are certainly part of your project’s source; this, in effect, requires them to be viewed as first-class citizens, subject to the same quality design and implementation as project source. Tests will need to evolve and be continually maintained. Treat the test environment with respect; ensure you return it in a predictable state. Leave it the way you found it.
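A hypothetical sketch (assuming a Jasmine-style API and a Backbone-style OrderView which binds event handlers):

```js
describe('OrderView', function () {
    var view;

    beforeEach(function () {
        // Set up: configure a fresh fixture for every test.
        view = new OrderView({model: new Backbone.Model()});
        view.render();
    });

    afterEach(function () {
        // Tear down: remove the view (and its event handlers) so that
        // this test cannot handle events triggered by later tests.
        view.remove();
        view = null;
    });

    it('updates the model when the form is submitted', function () {
        view.$('form').trigger('submit');
        expect(view.model.get('submitted')).toBe(true);
    });
});
```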

Continued Improvement

While the above description of Stateless Tests clearly states that the test environment should remain stateless, and thus “remain as we found it” prior to our tests, our actual implementation code should always be improved when improvements can be made; hence, The Broken Windows Theory is one we should all strive to live by. This especially holds true in the context of writing tests/specs against existing code. If the code is not up to par in any way – fix it. Ask yourself: “How easy was it for me to understand what this code does?”. “Is it documented in a meaningful way?”. “Would it be easier to understand if I added some quick examples?” (Often, adding examples is simply a matter of pointing to, or annotating the source with, the test cases themselves). We can have the greatest, most elegant framework and foundation on which to build the greatest apps in the world, but if we allow our code quality to degrade, our apps will gradually decay into chaos. Set a higher standard, and live by it. Leave the source better than you found it.

Meaningful Tests

It is quite easy to get caught up in the perceived quality of a system’s tests simply by measuring it against general Code Coverage metrics. This is a subject I have spoken to at length many times. While code coverage certainly has its purpose, and can be helpful, it is often not very reflective of reality. Judge your tests not by the number of test cases or units tested; rather, judge based on the meaningfulness of each specific test case itself. Ask yourself: “What is the overall value of this test?”, “Am I testing the obvious?” (such as a simple getter/setter). Focus on what’s important; test what’s of most value first. This affords the satisfaction of knowing that, should time constraints or shifting priorities require moving focus elsewhere, the most important test cases are covered. Focus on what’s important.

Know when you are done

It is quite possible to go on refactoring beyond what is essential. As such, it’s important to know when you’re done. Some questions to ask yourself: “Does the code do what it needs to do?”, “Is the code clean and understandable, performant, efficient, etc.?”, “Does it have adequate coverage?” If these questions can be answered in the affirmative, then you’re most likely done. Many times it’s tempting to continually refactor, as the more one refactors, the more opportunities for further abstractions arise. When confident that your most important objectives have been met, you’re done. Know when to stop.

Concluding Thoughts

It is important to note that the above considerations are by no means exhaustive – and this is intentionally so; as each point is specifically intended to provide just enough guidance to sufficiently ask the right questions, and thus solve problems in a pragmatic manner.

Over the years, I have found that it can be particularly helpful for developers new to a specific domain, or new to TDD/BDD in general, to consider the steps listed above from time to time in a general, summarized form. After doing this regularly, it becomes second nature; engrained in one’s daily development process.

  1. Understand the Problem
  2. Start off slow
  3. Clean up as you go along
  4. Test the unexpected
  5. Leave the test environment the way you found it
  6. Leave the source better than you found it
  7. Focus on what’s important
  8. Know when to stop

Quick Tip: Backbone Collection Validation

Sunday, January 19th, 2014

Oftentimes I find the native Backbone Collection implementation to be lacking when compared to its Backbone.Model counterpart. In particular, Collections generally lack direct integration with a backend persistence layer, as well as the ability to validate models within the context of the collection as a whole.

Fortunately, such shortcomings can easily be circumvented thanks to the extensibility of Backbone’s design as a generalized framework. In fact, throughout my experience utilizing Backbone, there has yet to be a problem I have come across which I was unable to easily solve, either by leveraging one of the many Backbone extensions or, more often than not, by simply overriding Backbone’s default implementation of a given API.

Validating Collections

Perhaps a common use-case for validating a collection of Models can be found when implementing editors which allow for adding multiple entries of a given form section (implemented as separate Views), whereby each section has a one-to-one correlation with an individual model. Rather than invoke validation on models from each individual view, and manage which models are in an invalid state from the context of a composite view, it can be quite useful to simply validate the collection from the composite view; this, in turn, results in all models being validated and their associated views updating accordingly.

Assuming live validation is not being utilized, validation is likely to occur when the user submits the form. As such, it becomes necessary to validate each model after their views have updated them as a result of the form being submitted. This can be achieved quite easily by implementing an isValid method on the collection which simply invokes isValid on each model within the collection (or optionally, against specific models within the collection). A basic isValid implementation for a Collection is as follows:
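A sketch (the collection name is illustrative; Underscore is available wherever Backbone is):

```js
var FormSectionCollection = Backbone.Collection.extend({
    isValid: function (options) {
        // Validate every model (each invalid model triggers its own
        // "invalid" event), then report whether all models passed.
        return _.every(this.map(function (model) {
            return model.isValid(options);
        }));
    }
});
```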

As can be seen in the above example, the Collection’s isValid method simply invokes isValid on its models. This causes each model to be re-validated which, in turn, results in any invalid models triggering their corresponding invalidation events, allowing views to automatically display validation indicators, messages, and the like; particularly when leveraging the Backbone.Validation Plugin.

This example serves well to demonstrate that, while Backbone may not provide everything one could ever ask for “out of the box”, it does provide a design which affords developers the ability to quickly, easily, and effectively extend the native framework as needed.

Function Modules in RequireJS

Wednesday, October 16th, 2013

Having leveraged RequireJS as part of my preferred client-side web stack for some time now, I find it rather surprising how often I recall various features from the API docs which I have yet to use, and how they may apply to a specific solution I am implementing.

One such feature is the ability to define a module as a function. While at first this may seem a rather basic feature, and indeed it is, there are quite a few practical purposes in which returning a function as a module definition can prove useful.

Function Modules

As one may expect, returning a function as a Module in RequireJS is rather straight-forward. For example, a simple random function can be defined as a module as follows:
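```js
// random.js (a sketch): the module definition is itself a function
// which returns a random integer between min and max, inclusive.
define(function () {
    return function (min, max) {
        return Math.floor(Math.random() * (max - min + 1)) + min;
    };
});
```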

Then, the function module can simply be invoked as follows:
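```js
require(['random'], function (random) {
    console.log(random(1, 100)); // e.g. 42
});
```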

Function Modules as Factories

Perhaps a more practical example of implementing a function module is in the context of a Factory Method.

For instance, a function module can be used as a convenient means of creating a specific type of object on a client’s behalf based on certain conditions.

Take (an intentionally simple) example of a function module which, given a specific role type, returns a corresponding view implementation (in this example, a Backbone View) for an Editor feature:
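A sketch (module paths, role types, and view names are illustrative):

```js
// editor-factory.js
define([
    'editors/AdminEditorView',
    'editors/MemberEditorView'
], function (AdminEditorView, MemberEditorView) {
    // The module is a Factory Method: given a role type, it creates
    // and returns the corresponding Editor view.
    return function (role) {
        return role === 'admin' ? new AdminEditorView()
                                : new MemberEditorView();
    };
});
```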

Client code can then simply invoke the factory in order to retrieve the appropriate Editor view implementation based on the specified role type:
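```js
require(['editor-factory'], function (editorFactory) {
    var editor = editorFactory('admin'); // an AdminEditorView instance
    editor.render();
});
```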

Defining modules as functions which serve as factory methods can help simplify client code implementations, as the responsibility of determining the type of object to create, configure, etc., can be delegated to a dedicated object; thus allowing for simpler designs which better facilitate code reuse, testing, and maintenance.

As a general rule of thumb, I typically reserve implementing modules as functions for cases in which a package level method would be appropriate, or for factory implementations. However, there are other scenarios in which function modules may apply, and so they are certainly worth noting.
