
Synchronizing package.json with yarn.lock

Tuesday, November 13th, 2018

After using Yarn almost exclusively for the past couple of years, one nagging issue has continually cropped up: the inability to keep a project’s package.json dependency versions in sync with the actual versions in yarn.lock. While running yarn upgrade updates all packages to the latest versions permitted by their semver ranges, the versions defined in package.json are not updated to reflect what was actually installed.

This can prove problematic, as one cannot easily discern a project’s installed dependency versions simply by viewing their respective values in package.json.

In particular, as part of my release process, after each production release I run scripts which automate updating all project dependencies to their latest Minor and Patch revisions before reopening master for new development. While the scripts manage the updates and commits internally, each project’s package.json would remain unmodified, making it challenging to determine which packages had been upgraded and which had not. Having to inspect the yarn.lock files, whether via automation or manually, is less than ideal, and quite cumbersome to say the least.

Fortunately, like most things in the JavaScript world, there is a package for this: syncyarnlock, which provides exactly what is needed to keep the dependency versions defined in package.json in sync with the project’s yarn.lock.

Simply install syncyarnlock and execute it with the options applicable to your needs.

For example, to sync a project’s package.json with its yarn.lock, keeping the semver ranges intact while updating the versions to reflect what will actually be installed:
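```shell
# one-time install (globally here; installing as a dev dependency works as well)
yarn global add syncyarnlock

# -s saves the synced versions to package.json
# -k keeps the existing semver range operators (^, ~, etc.) in place
syncyarnlock -s -k
```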

This results in the dependency ranges being preserved, while the versions themselves are updated to reflect what will actually be installed.

And with that, we have proper syncing. A definite time-saver!

Separation of Concerns: propTypes and Immutable.js

Wednesday, July 25th, 2018

When considering the separation of concerns between Container and Presentational Components (stateful / stateless components), I find it useful to leverage the core concepts of these patterns in order to define a clear boundary between where Immutable data types are used, and where raw JavaScript types are referenced exclusively.

By having a clear separation which compartmentalizes where Immutable types are used and where they are not, team members can easily determine a component’s propTypes; without a clear cut-off point, one must stop and consider whether a given prop passed down to a component will be an Immutable object or not.

It’s no stretch of the imagination to see how this can quickly lead to code which becomes much harder to maintain than it needs to be. As such, the Container / Presentational Component pattern provides a rather natural boundary for separating these concerns.

Unfortunately, while such a boundary may seem rather obvious, it is not always clearly defined, and this tends to lead to overly complex propType declarations.

For instance, on a number of occasions I’ve seen propTypes declared similar to the following:
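```javascript
// A representative reconstruction – the exact declarations are illustrative.
SomePresentationalComponent.propTypes = {
  // sometimes a plain array, sometimes an Immutable.List
  someList: PropTypes.oneOfType([
    PropTypes.array,
    PropTypes.instanceOf(Immutable.List)
  ]),
  // sometimes a plain object, sometimes an Immutable.Map
  someItem: PropTypes.oneOfType([
    PropTypes.object,
    PropTypes.instanceOf(Immutable.Map)
  ])
};
```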

Given the above example, it is clear that the original implementor (or current maintainer) of SomePresentationalComponent was unsure what the expected propTypes would ultimately be. In certain cases, it appears someList could be of type array, whereas in other cases it could be of type object (e.g. an Immutable.List). Likewise, in some cases someItem could be a plain object, whereas in others it could be an Immutable.Map.

As you can see, this is problematic and a very good candidate for bugs (not to mention a maintenance headache).

Moreover, it results in all sorts of unnecessary type-check permutations before accessing properties. For example, just to check the length of the list:
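```javascript
// a sketch of the permutations in question: Immutable.List exposes .size,
// whereas a plain array exposes .length
const size = Immutable.List.isList(someList)
  ? someList.size
  : someList.length;
```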

Likewise, just to get the id of someItem:
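```javascript
// and again, just to read a single property
const id = Immutable.Map.isMap(someItem)
  ? someItem.get('id')
  : someItem.id;
```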

Needless to say, this is far from ideal.

Now, obviously the developer could simply define a single propType and refactor the Containers which are passing an invalid type; however, it may not always be clear what that type should be. If, say, the component is used by multiple applications to which the developer does not have access, and some of those applications are not using Immutable.js, it would be best to disallow Immutable from the component altogether and have consumers of the component update their Containers. In any event, this is symptomatic of a team not having a clear understanding of which components work exclusively with Immutable data types, and which do not.

Solutions

Fortunately, as one might imagine, there are a couple of very simple solutions to this problem:

  1. Only use Immutable types throughout the entire application.
  2. Segment which components use Immutable types, and which do not.

Now, in some cases the argument for Option #1 may very well be a valid one; however, I find Option #2 to be much more feasible (and flexible), as it helps ensure Presentational components are kept pure – and that means using only plain JavaScript types. For my purposes this is especially important, as I maintain a shared library which must limit its dependencies as much as possible; some consuming projects use Immutable, Redux, etc., and some do not. As always – consider the context.

Pros

By having an internal design contract (or convention) which mandates that Container components only ever work with Immutable types, and that Presentational components are only ever passed plain JavaScript types, it becomes much clearer to team members where the boundary is defined, and thus much easier to maintain a large application over time.

Furthermore, it allows less experienced developers to become gradually acclimated to the React ecosystem by assigning them tasks focused on presentational features. This can be very useful, as such tasks require knowledge of only the core concepts, without the developer being inundated with additional libraries and APIs. It also affords more experienced team members the ability to focus on the more complex portions of the application (application logic, reducers, containers, etc.).

In addition, destructuring, rest parameters, and related ES6 features can be used much more extensively to simplify implementations when working with JavaScript types exclusively, helping to ensure Presentational components are kept intentionally “dumb”. Testing also becomes considerably less complex when working with native JavaScript types – which is equally important when helping newer developers become productive while still getting up to speed.

And, while not always likely, reducing our dependency on Immutable.js positions us for a much easier migration path should we decide to swap out Immutable for another library in the future.

Cons

Arguably, one could be justified in asserting that only Immutable data types should be used by both Container and Presentational components (Option #1), and that would indeed be a fair argument if toJS() would otherwise need to be called frequently when passing props down to Presentational components (as there is an inherent expense in doing so).

That being said, there is no reason one would need to call toJS when passing props to Presentational components, as the Immutable API can be utilized to reduce the given props before they are passed down to child components. In either case, a Higher-Order Component can be defined to handle the conversion, which can simplify implementation considerably.
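As a rough sketch of such an HOC – assuming Immutable v3, whose Iterable.isIterable predicate detects Immutable values, and with the toJS name itself being purely illustrative:

```javascript
import React from 'react';
import { Iterable } from 'immutable';

// Converts any Immutable props to their plain JavaScript equivalents
// before they reach the wrapped (Presentational) component.
const toJS = (WrappedComponent) => (wrappedProps) => {
  const props = Object.keys(wrappedProps).reduce((acc, key) => {
    const value = wrappedProps[key];
    acc[key] = Iterable.isIterable(value) ? value.toJS() : value;
    return acc;
  }, {});
  return <WrappedComponent {...props} />;
};

export default toJS;
```

A Container can then export toJS(SomePresentationalComponent) once, rather than converting each prop by hand.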

Summary

Like most design decisions, there is rarely a one-size-fits-all approach that perfectly solves any given problem, and what ultimately makes sense in one context may not be appropriate in another. However, when it comes to where and when Immutable types are used, in most cases it is fair to say there should always be a clearly defined boundary, regardless of where that boundary lies.

React PropTypes and ES6 Destructuring

Monday, April 24th, 2017

At times one may be justified in the argument that cognitive (over)load is just an expected part of the overall developer experience. Fortunately, there are numerous steps we can take to minimize the general noise which tends to distract our intended focus. One particularly simple – yet effective – example is to remove unnecessary redundancy wherever possible. In doing so, we afford both our peers and ourselves a codebase which, over time, becomes considerably easier to maintain.

For instance, when performing code reviews, more often than not I tend to see considerable redundancy when specifying React PropTypes. Typically, something along the lines of:
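```javascript
// a hypothetical example – each validator repeats the full lookup path
SomeComponent.propTypes = {
  name: React.PropTypes.string.isRequired,
  description: React.PropTypes.string,
  total: React.PropTypes.number.isRequired,
  onSelect: React.PropTypes.func
};
```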

As can be seen, with each new prop we are redundantly referencing React PropType lookup paths. And, while the ideal components will have a limited number of props (either connected directly, or passed down), the redundancy still remains for any component which references the same prop type. Considering the number of components a given application may contain, we can rightfully assume that the above redundancy will grow proportionally.

With the above in mind, we can easily reduce the redundancy (as well as micro-optimize the lookup paths) by simply destructuring the PropTypes of interest as follows:
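```javascript
// destructure the validators once, then reference them directly
// (continuing the hypothetical example above)
const { string, number, func } = React.PropTypes;

SomeComponent.propTypes = {
  name: string.isRequired,
  description: string,
  total: number.isRequired,
  onSelect: func
};
```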

While I would consider the above simplified enough, one could also take this a step further and destructure the isRequired validators, which in some circumstances may be useful as well:
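```javascript
// aliasing the pre-bound isRequired validators as well
const { string, number, func } = React.PropTypes;
const { isRequired: requiredString } = string;
const { isRequired: requiredNumber } = number;

SomeComponent.propTypes = {
  name: requiredString,
  description: string,
  total: requiredNumber,
  onSelect: func
};
```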

Admittedly, this example is rather straightforward; however, it does help to emphasize the point that only through consistent vigilance can we ensure our source will continue to evolve organically while remaining as simple as possible.

NPM & Root Permissions

Tuesday, November 1st, 2016

When dealing with NPM permissions, it can often be tempting to resort to installing modules as root (sudo), especially when under tight time constraints, where troubleshooting such issues will only serve to delay your progress.

Admittedly, I have been guilty of this more often than I care to admit. That said, with the Broken Windows Theory always in the back of my mind, I knew this workaround needed to be properly resolved as soon as I had a moment to dig into it a bit more.

Previously, I had followed the instructions from docs.npmjs; however, they focus more on installations of global dependencies rather than local ones. Fortunately, after a few quick searches it became apparent that the issue could easily be resolved by simply changing the permissions on the ~/.npm directory – all that is needed is to change its owner from root to the current user.

To do so, simply run the following:
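```shell
# make the current user the owner of everything under ~/.npm
sudo chown -R $(whoami) ~/.npm
```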

Likewise, you can use your username explicitly; e.g.:
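```shell
# the same change, with the username (here, a placeholder) spelled out
sudo chown -R someuser ~/.npm
```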

And with that, the issue is safely resolved, allowing you to run npm install as expected without having to fall back to sudo.

Quick Tip: Backbone Collection Validation

Sunday, January 19th, 2014

I often find the native Backbone.Collection implementation lacking when compared to its Backbone.Model counterpart. In particular, Collections generally lack direct integration with a backend persistence layer, as well as the ability to validate models within the context of the collection as a whole.

Fortunately, such shortcomings can easily be circumvented thanks to the extensibility of Backbone’s design as a generalized framework. In fact, throughout my experience utilizing Backbone, there has yet to be a problem I have come across which I was unable to solve easily, either by leveraging one of the many Backbone extensions or, more often than not, by simply overriding Backbone’s default implementation of a given API.

Validating Collections

A common use-case for validating a collection of Models arises when implementing editors which allow multiple entries of a given form section (implemented as separate Views), whereby each section has a one-to-one correlation with an individual model. Rather than invoking validation on models from each individual view, and managing which models are in an invalid state from the context of a composite view, it can be quite useful to simply validate the collection from the composite view – which, in turn, results in all models being validated and their associated views updating accordingly.

Assuming live validation is not being utilized, validation is likely to occur when the user submits the form. As such, it becomes necessary to validate each model after their views have updated them as a result of the form being submitted. This can be achieved quite easily by implementing an isValid method on the collection which simply invokes isValid on each model within the collection (or optionally, against specific models within the collection). A basic isValid implementation for a Collection is as follows:
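```javascript
// a basic sketch (the collection name is illustrative): validate every
// model – mapping first ensures each model is validated rather than
// short-circuiting at the first failure – then report the overall result
var SomeCollection = Backbone.Collection.extend({
    isValid: function (options) {
        return this.map(function (model) {
            return model.isValid(options);
        }).indexOf(false) === -1;
    }
});
```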

As can be seen in the above example, the Collection’s isValid method simply invokes isValid on its models. This causes each model to be re-validated which, in turn, results in any invalid models triggering their corresponding invalid events, allowing views to automatically display validation indicators, messages, and the like – particularly when leveraging the Backbone.Validation plugin.

This example serves well to demonstrate that, while Backbone may not provide everything one could ever ask for “out of the box”, it does provide a design which affords developers the ability to quickly, easily, and effectively extend the native framework as needed.

Simplifying Testing of Sign-up Processes

Thursday, November 14th, 2013

When testing the various steps of account sign-up, creation, and activation processes, one must be mindful of designs which call for a unique email address to be confirmed for each new account. While this is a generally accepted approach, it does pose a slight challenge when end-to-end testing requires email activation from the address with which the account was created.

Creating multiple Accounts under the same Email Address

Recently, while implementing an account registration feature, I was looking for a simple means of creating and testing multiple user accounts against a single email address, so as to allow for continuous testing without the need to use different email addresses for each test.

Fortunately, a few clever solutions have been available all along in Gmail which can be leveraged for this very problem (in addition to others).

The Plus (+) Trick

Perhaps the simplest means of generating unique email addresses for account creation which will all deliver to the same inbox is to leverage the Plus (+) Trick.

For instance, suppose you are testing account creation and activation and want to confirm the sign-up process at someuser@gmail.com. Typically, one would want to test this feature multiple times – either by means of automated testing, manual integration testing, or the like. In order to test the account creation feature continuously using the same someuser@gmail.com account, one can simply create new accounts by appending a plus sign (+) to the Gmail username, followed by a unique string of characters:
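```
someuser+test1@gmail.com
someuser+test2@gmail.com
someuser+any-unique-string@gmail.com
```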

In this example, emails sent to any of the addresses above will all be delivered to someuser@gmail.com. With this in mind, it is quite easy to continuously test account creation processes without the need to use multiple (real) email addresses.

The Dot (.) Trick

Similar to the Plus (+) Trick, the Dot (.) Trick can also be employed to test account creation under the same email address.

For instance, to test an account creation and activation process continuously using the same someuser@gmail.com account, one can simply create new accounts by inserting a dot (.) anywhere within the Gmail username:
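```
some.user@gmail.com
someu.ser@gmail.com
s.o.m.e.user@gmail.com
```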

In this example, emails sent to any of the addresses above will likewise all be delivered to someuser@gmail.com.

Concluding Thoughts

As can be seen in the above examples, both the Plus (+) Trick and the Dot (.) Trick can be used to create and test account sign-up and activation processes against a single email address, greatly simplifying testing.

While the Dot (.) Trick is quite useful, it is obviously limited to a finite number of pseudo-unique email addresses, and is thus better suited to scenarios requiring fewer test accounts.

For more extensive testing scenarios, the Plus (+) Trick is much better suited, as it allows for practically unlimited variations of the same email address, and is ideal for generating addresses from which account creation is to be tested.