The Pipe Operator: A Glimpse into the Future of Functional JavaScript


In the dynamic landscape of JavaScript, the TC39 proposal for the Pipe Operator stands out as an interesting progression: it streamlines function composition in a way that increases readability, maintainability, and overall developer experience (DX).

In this article, we dive a bit deeper into functional programming in JavaScript, and how upcoming language features such as the Pipe Operator facilitate a more declarative approach to composing functions.

At its core, the Pipe Operator, denoted by |>, introduces syntactic sugar for function composition, allowing developers to pass the result of an expression as an argument to a function. And, while the syntax may appear somewhat unfamiliar at first glance, this seemingly simple language feature has rather profound implications for code clarity and maintainability.

Before diving into some examples, let’s first take a look at how functions are typically composed in JavaScript, and then touch on some of the drawbacks that result from these traditional approaches.

For instance, consider this simple example which demonstrates how one could compose three functions:
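
(For illustration, assume three hypothetical helpers, addA, addB, and addC, each of which appends a character to a string.)

```javascript
const addA = (str) => str + 'a';
const addB = (str) => str + 'b';
const addC = (str) => str + 'c';

// To produce "abc", the calls must be read inside-out: addA runs first,
// yet appears innermost in the expression.
const result = addC(addB(addA('')));

console.log(result); // "abc"
```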

As can be seen in the above, composing functions in this manner is cumbersome at best. Moreover, implementations such as this suffer in readability, as they effectively obscure intent; that is, we simply want to end up with “abc”, but expressing that requires an inversion of our thinking.

Of course, we can simplify things quite a bit by implementing a simple compose function (or utilizing a utility library, such as lodash/fp), which we can then leverage for composing functions in a more natural way:
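
(A minimal sketch, reusing the hypothetical addA, addB, and addC helpers from above.)

```javascript
// Right-to-left composition: the last function passed is applied first.
const compose = (...fns) => (input) =>
  fns.reduceRight((value, fn) => fn(value), input);

const makeAbc = compose(addC, addB, addA);

// Invocation can be deferred until the composed function is actually needed.
console.log(makeAbc('')); // "abc"
```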

With the above implementation, managing the composition of functions becomes easier, and we can also defer invoking the composed function to a later time. Yet, it still leaves much to be desired, especially in terms of maintainability. For instance, should we need to change the composition, the order of the arguments passed to compose must be changed accordingly.

Alternatively, developers may choose to bypass chaining altogether and opt for a temporary variable approach in order to simplify implementation and readability. For example:
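
(Again using the hypothetical helpers from above.)

```javascript
const withA = addA('');
const withAB = addB(withA);
const result = addC(withAB);

console.log(result); // "abc"
```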

While this is rather subjective, the use of temporary variables arguably creates unnecessary cognitive load: one must follow the order of assignments and contend with intermediate values which, if not declared as constants, could be inadvertently reassigned.

Whereas traditional nested function calls must be read from the inside out, which is challenging to follow, the Pipe Operator turns this paradigm on its head, so to speak, by enabling left-to-right composition of functions, which more closely reflects our natural way of reading and reasoning about a flow of data, as can be seen in the following:
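
(A sketch of the general shape, using placeholder names; under the Hack-style variant of the proposal, each step would reference the topic token instead, e.g. functionA(%).)

```javascript
const result = expression
  |> functionA
  |> functionB
  |> functionC;
```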

In the above example, expression is the value that is first passed to functionA, the result of which (i.e. the value returned from functionA) is then passed to functionB, and so on until the last function (functionC) returns, at which point the final value is assigned to result. The readability of this approach as compared to traditional function composition is self-evident, reducing cognitive load and making the flow of data much more apparent.

Given the previous examples, with the Pipe Operator, we can now simplify the implementation in a much more natural way:
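
(Revisiting the earlier “abc” example; as before, the exact per-step syntax depends on the proposal variant being targeted.)

```javascript
const result = ''
  |> addA
  |> addB
  |> addC;

console.log(result); // "abc"
```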

The simplicity and utility of the Pipe Operator result in much more succinct expressions, which in turn reduces the mental overhead of reading and understanding the implementation's intent.

The practical applications of the Pipe Operator are vast, as they can be used to simplify compositions for everything from data processing pipelines to event handling flows.

For instance, consider a scenario where we need to process a dataset through a series of transformations. Using the Pipe Operator, we can accomplish this in a simple and concise manner:
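
(A sketch using hypothetical filterInvalid, normalizeResults, enrichResults, and sortByDate functions applied to a rawData input.)

```javascript
const processed = rawData
  |> filterInvalid
  |> normalizeResults
  |> enrichResults
  |> sortByDate;
```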

With the streamlined syntax of the Pipe Operator, both intent and the flow of control become much clearer. In addition, maintainability is vastly improved, as we can change the order of the processes with considerably less effort. For example, if we decide we want to enrich the results prior to normalizing, we simply change the order of the steps:
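
(Same hypothetical pipeline, with enrichResults now applied before normalizeResults.)

```javascript
const processed = rawData
  |> filterInvalid
  |> enrichResults
  |> normalizeResults
  |> sortByDate;
```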

As we can see, changing the order of invocations amounts to little more than moving a line, which goes a long way toward maintainability.

A particularly intriguing aspect of the Pipe Operator proposal is the inclusion of “Topic References”: a concept which increases the expressiveness and utility of the Pipe Operator by providing direct access to the piped value via a topicToken.

Topic References allow for elegant handling of the current value within the pipeline, using a symbol (currently, %) as a placeholder reference to the value. This feature enables more complex operations and interactions with the piped value beyond that of simply passing the value as an argument to a function.

The main purpose of topic references is to enhance readability and flexibility for use-cases which involve multiple transformations or operations. By using a placeholder for the current value, developers can clearly express operations like method calls, arithmetic operations, and more complex function invocations directly on the value being piped through, without needing to wrap these operations in additional functions.

Consider a scenario where you’re processing a string to ultimately transform it into a formatted message. Without topic references, each step would require an additional function, even for simple operations. With topic references, however, the process becomes much more direct and readable:
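
(A sketch assuming the Babel default topic token, %.)

```javascript
const message = ' hello world '
  |> %.trim()
  |> %.toUpperCase()
  |> `Message: ${%}!`;

console.log(message); // "Message: HELLO WORLD!"
```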

One point to note regarding the topicToken is that it has not been finalized, thus the token is subject to change but will ultimately be one of the following: %, ^^, @@, ^, or #. Currently, @babel/plugin-proposal-pipeline-operator defaults to %, which can be configured to use one of the proposed topicTokens.

Through the use of topic references, the Pipe Operator proposal not only adheres to traditional functional programming principles, but also enhances developer experience by allowing for more intuitive and maintainable implementations. Features such as these represent a significant step forward in providing more declarative and expressive patterns in JavaScript.

The Pipe Operator proposal is currently in the pipeline for standardization, reflecting a collective effort within the JavaScript community to adopt functional programming paradigms. By facilitating a more declarative approach to coding, this proposal aligns with the language’s evolution towards offering constructs that support modern development practices.

Key benefits of the Pipe Operator include:

  • Enhanced Readability: Allows for a straightforward expression of data transformations, improving the readability of the code and making it more accessible to developers.
  • Reduced Complexity: Simplifies complex expressions that would otherwise require nested function calls or intermediate variables, thereby reducing the potential for errors.
  • A More Functional Paradigm: By promoting function composition, the Pipe Operator strengthens JavaScript’s capabilities as a language well-suited for functional programming.

As the JavaScript ecosystem continues to evolve, TC39 proposals such as the Pipe Operator are set to play an important role in shaping the future of the language, especially from a functional programming perspective.

While the proposal is still under consideration, its potential to enhance developer experience and promote functional programming principles is most certainly something to look forward to.

(Update: as of August 2021, the proposal has moved to Stage 2.)

ES2020 Optional Chaining & Nullish Coalescing

Of the various features introduced in ES2020, perhaps two of the simplest will prove to be the most useful, at least as far as simplification and maintenance are concerned.

Specifically, the Optional Chaining Operator and the Nullish Coalescing Operator are of particular interest, as they are certain to result in less verbose, less error-prone expressions.

In a nutshell, Optional Chaining provides a syntax for undefined / null checks when performing nested object references, using a question mark followed by a dot (?.).

For instance, consider how many times you may have written defensive expressions similar to the following:
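
(A sketch using a hypothetical user object which may or may not contain a nested address.)

```javascript
const street = user && user.address && user.address.street
  ? user.address.street
  : undefined;
```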

Or perhaps you have assigned intermediate values to temporary variables to perform the same:
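
(Same hypothetical user object as above.)

```javascript
const address = user ? user.address : undefined;
const street = address ? address.street : undefined;
```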

The need to check for possible reference errors quickly becomes tedious, and with each lookup we increase the potential for introducing bugs. Utilities can be implemented for delegating these checks, but ultimately, this just moves the problem from one context to another, resulting in additional points of failure.

With Optional Chaining, however, accessing properties safely becomes considerably less verbose, as the examples above can be simplified to:
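
(Continuing with the hypothetical user object.)

```javascript
// Resolves to undefined if user or user.address is missing.
const street = user?.address?.street;
```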

Reference checks when invoking functions also become simplified:
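
(Assuming a hypothetical getProfile method which may or may not be defined.)

```javascript
// Invokes getProfile only if both user and getProfile are defined.
const profile = user?.getProfile?.();
```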

And dynamic property references can safely be performed as well:
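
(Here, propertyName is a hypothetical variable holding the key to look up.)

```javascript
const value = user?.[propertyName];
```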

In addition, combined with the Nullish Coalescing Operator, Optional Chaining becomes even more succinct as one can specify a value to resolve to rather than the default (undefined) by simply using a double question mark (??) notation. For example:
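
(Continuing with the hypothetical user object, falling back to a default street name.)

```javascript
// 'Unknown' is used only when the chain resolves to undefined or null.
const street = user?.address?.street ?? 'Unknown';
```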

Moreover, Nullish Coalescing, while intended as a complement to Optional Chaining, also solves additional problems when dealing with falsy values. For instance, consider how many times you may have written something similar to the following:
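
(A sketch using a hypothetical settings object and the logical OR operator for defaults.)

```javascript
// || treats every falsy value as "missing", so legitimate values are lost.
const pageSize = settings.pageSize || 10;   // 0 becomes 10
const label = settings.label || 'Untitled'; // '' becomes 'Untitled'
```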

With the Nullish Coalescing Operator, we can avoid the problems outlined above, as the fallback is applied only when the value is undefined or null, so other falsy values are safe:
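
(The same sketch, rewritten with Nullish Coalescing.)

```javascript
// The fallback applies only when the value is undefined or null.
const pageSize = settings.pageSize ?? 10;   // 0 stays 0
const label = settings.label ?? 'Untitled'; // '' stays ''
```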

Since Nullish Coalescing only checks for undefined and null, the same holds true for all other falsy values, so false, empty strings, and NaN are preserved as well.

One thing to note is that Optional Chaining does not guard against destructuring an undefined result. So, for example, the following will throw an exception:
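
(Assuming user.address is undefined.)

```javascript
// Destructuring undefined throws a TypeError, despite the optional chain.
const { street } = user?.address;
```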

Interestingly, when combined with Nullish Coalescing, an exception will not be raised; note, however, that the default will not be assigned, either:
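
(A sketch of one way this plays out, again assuming user.address is undefined; the fallback value itself never ends up in street.)

```javascript
// No exception is thrown, but the fallback never reaches street, since
// destructuring looks for a `street` property on the fallback value itself.
const { street } = user?.address ?? 'No Address';

console.log(street); // undefined
```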

As can be seen, ES2020 has no shortage of new features to be excited about and, while arguably less flashy than some of the others, Optional Chaining combined with Nullish Coalescing will certainly prove to be valuable additions.

Both Optional Chaining and Nullish Coalescing proposals are currently at Stage 4 and are available in most modern browsers as well as via the following babel plugins: @babel/plugin-proposal-optional-chaining and @babel/plugin-proposal-nullish-coalescing-operator.

Benefits of JavaScript Generators


One of the more nuanced features introduced in ES6 is that of Generator functions. Generators offer a powerful, yet often misunderstood mechanism for controlling the flow of operations, allowing developers to implement solutions with improved readability and efficiency. This article briefly delves into a few of the benefits that JavaScript Generators have to offer, elucidating their purpose, functionality, and specific scenarios which can benefit from their usage.

A Generator function is a special type of function that can pause execution and subsequently resume at a later time, making it quite valuable for handling asynchronous operations as well as many other use cases. Unlike regular functions which run to completion upon invocation, Generator functions return an Iterator through which their execution can be controlled. It is important to note that while generators facilitate asynchronous operations, they do so by yielding Promises and require external mechanisms, such as async/await or libraries, to handle the asynchronous resolution.

Generators are defined with the function keyword followed by an asterisk (*), i.e. function*, and are instantiated when called, but not executed immediately. Rather, they wait for the caller to request the next result. This is achieved using the iterator's next() method, which resumes execution until the next yield statement is encountered, or the generator function returns.
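
A minimal sketch of this behavior:

```javascript
function* greet() {
  yield 'hello';
  yield 'world';
}

const iterator = greet();

// Execution pauses at each yield until next() is called again.
console.log(iterator.next()); // { value: 'hello', done: false }
console.log(iterator.next()); // { value: 'world', done: false }
console.log(iterator.next()); // { value: undefined, done: true }
```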

As mentioned, Generator functions return an iterator; therefore, all of the functionality of iterables is available to them, such as for...of loops, destructuring, rest / spread syntax, etc.:
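
(A sketch using a simple letters generator.)

```javascript
function* letters() {
  yield 'a';
  yield 'b';
  yield 'c';
}

// for...of iterates the yielded values.
for (const letter of letters()) {
  console.log(letter); // 'a', 'b', 'c'
}

// Destructuring and rest elements work as well.
const [first, ...rest] = letters();
console.log(first, rest); // 'a' ['b', 'c']

// As does the spread operator.
console.log([...letters()]); // ['a', 'b', 'c']
```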

Generators allow for the creation of custom iteration logic, such as generating sequences without the need to pre-calculate the entire set. For example, one can generate a Fibonacci sequence using generators as follows:
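
(A sketch of one possible implementation.)

```javascript
// Values are computed lazily; the sequence is never pre-calculated.
function* fibonacci() {
  let [previous, current] = [0, 1];
  while (true) {
    yield current;
    [previous, current] = [current, previous + current];
  }
}

const sequence = fibonacci();
console.log(sequence.next().value); // 1
console.log(sequence.next().value); // 1
console.log(sequence.next().value); // 2
console.log(sequence.next().value); // 3
console.log(sequence.next().value); // 5
```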

Generators have the ability to maintain state between yields, thus they are quite useful for managing stateful iterations. This feature can be leveraged in scenarios such as those which require pause and resume logic based on runtime conditions. For instance:
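
(A sketch of a hypothetical gameState generator which adjusts a score based on the actions passed into it.)

```javascript
function* gameState() {
  let score = 0;
  while (true) {
    // Pause here; the value passed to game.next(action) becomes `action`.
    const action = yield score;
    if (action === 'win') {
      score += 10;
    } else if (action === 'lose') {
      score -= 5;
    }
  }
}

const game = gameState();

console.log(game.next());       // { value: 0, done: false }  - starts the generator
console.log(game.next('win'));  // { value: 10, done: false }
console.log(game.next('lose')); // { value: 5, done: false }
```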

It may initially seem confusing as to how the value passed to game.next(value) is referenced within the Generator function. However, it is important to understand how this mechanism works as it is a core feature of generators, allowing them to interact dynamically with external input. Below is a breakdown outlining this behavior in the context of the above example:

  1. Starting the Generator: When game.next() is first called, the gameState generator function begins execution until it reaches the first yield statement. This initial call starts the generator but does not yet pass any value into it, as the generator is not yet paused at a yield that could receive a value.
  2. Pausing Execution: The yield statement pauses the generator’s execution and waits for the next input to be provided. This pausing mechanism is what differentiates generators from regular functions, allowing for a two-way exchange of values.
  3. Resuming with a Value: After the generator is initiated and paused at a yield, calling game.next(value) resumes execution, passing the value into the generator. This passed value is received by the yield expression where the generator was paused.
  4. Processing and Pausing Again: Once the generator function receives the value and resumes execution, it processes operations following the yield until it either encounters the next yield (and pauses again, awaiting further input), reaches a return statement (effectively ending the generator’s execution), or completes its execution block.

This interactive capability of generators to receive external inputs and potentially alter their internal state or control flow based on those inputs is what makes them particularly powerful for tasks requiring stateful iterations or complex control flows.

In addition to yielding values with yield, generators have a distinct behavior when it comes to the return statement. A return statement inside a generator function does not merely exit the function, but instead, it provides a value that can be retrieved by the iterator. This behavior allows generators to signal a final value before ceasing their execution.

When a generator encounters a return statement, it returns an object with two properties: value, which is the value specified by the return statement, and done, which is set to true to indicate that the generator has completed its execution. This is different from the yield statement, which also returns an object but with done set to false until the generator function has fully completed.
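
For example, consider the following sketch of a generator that yields intermediate results and returns a final one:

```javascript
function* processTasks() {
  yield 'task 1 complete';
  yield 'task 2 complete';
  return 'all tasks complete';
}

const tasks = processTasks();

console.log(tasks.next()); // { value: 'task 1 complete', done: false }
console.log(tasks.next()); // { value: 'task 2 complete', done: false }
console.log(tasks.next()); // { value: 'all tasks complete', done: true }
console.log(tasks.next()); // { value: undefined, done: true }
```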

This example illustrates that after the return statement is executed, the generator indicates it is done, and no further values can be yielded. However, the final value returned by the generator can be used to convey meaningful information or a result to the iterator, effectively providing a clean way to end the generator’s execution while also returning a value.

Generators also provide a return() method that can be used to terminate the generator's execution prematurely. When return() is called on a generator object, the generator is immediately terminated and returns an object with a value property set to the argument provided to return(), and a done property set to true. This method is especially useful for allowing clients to cleanly exit generator functions, such as to ensure resources are released appropriately.
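
For instance, a simple sketch:

```javascript
function* numbers() {
  yield 1;
  yield 2;
  yield 3;
}

const iterator = numbers();

console.log(iterator.next());     // { value: 1, done: false }
// Terminate early, supplying a final value.
console.log(iterator.return(99)); // { value: 99, done: true }
console.log(iterator.next());     // { value: undefined, done: true }
```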

In this example, after the first yield is consumed, return() is invoked on the generator. This action terminates the generator, returns the provided value, and sets the done property of the resulting object to true, indicating that the generator has completed and will no longer yield values.

This capability of generators to be terminated early and cleanly, returning a specified value, provides developers fine-grained control over generator execution.

Generators provide a robust mechanism for error handling, allowing errors to be thrown back into the generator’s execution context. This is accomplished using the generator.throw() method. When an error is thrown within a generator, the current yield expression is replaced by a throw statement, causing the generator to resume execution. If the thrown error is not caught within the generator, it propagates back to the caller.

This feature is particularly useful for managing errors in asynchronous operations, enabling developers to handle errors in a synchronous-like manner within the asynchronous control flow of a generator.
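
Consider the following sketch, in which an error is thrown into a hypothetical taskRunner generator and handled internally:

```javascript
function* taskRunner() {
  try {
    yield 'step 1';
    yield 'step 2';
  } catch (error) {
    console.log('Recovered from:', error.message);
    yield 'fallback step';
  }
}

const runner = taskRunner();

console.log(runner.next()); // { value: 'step 1', done: false }

// The paused yield is replaced by a throw; the catch block handles it
// and the generator resumes, yielding the fallback step.
console.log(runner.throw(new Error('step failed')));
// logs "Recovered from: step failed", then { value: 'fallback step', done: false }

console.log(runner.next()); // { value: undefined, done: true }
```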

This example illustrates how generator.throw() can be used to simulate error conditions and test error handling logic within generators. It also shows how generators maintain their state and control flow, even in the presence of errors, providing a powerful tool for asynchronous error management.

One particularly interesting feature of Generators is that they can be composed of other generators via the yield* operator.
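
For example, a simple sketch of delegation via yield*:

```javascript
function* inner() {
  yield 'b';
  yield 'c';
}

function* outer() {
  yield 'a';
  yield* inner(); // delegate to the inner generator
  yield 'd';
}

console.log([...outer()]); // ['a', 'b', 'c', 'd']
```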

The ability to compose Generators allows for implementing various levels of abstraction and reuse, making their usage much more flexible.

Generators can be used for many purposes, ranging from basic use-cases such as generating a sequence of numbers, to more complex scenarios such as handling streams of data so as to allow for processing input as it arrives. Through the brief examples above, we’ve seen how Generators can improve the way we, as developers, approach implementing solutions for asynchronous programming, iteration, and state management.

Quick Tip: React Spring & Babel Loader

Recently, I integrated React Spring within an application, and while it is one of the best animation libraries for React I have come across in quite some time, unfortunately I encountered some issues when running tests and production builds.

Essentially, the issues I experienced were related to the imported modules being written in ES6. This was an issue for me as I prefer to have webpack babel-loader configured to exclude node_modules and only transpile project sources.
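
(For context, a typical babel-loader rule scoped to project sources only might look something like this.)

```javascript
// webpack.config.js (excerpt)
module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/, // only transpile project sources
        use: 'babel-loader',
      },
    ],
  },
};
```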

Fortunately, the workaround for this is quite simple: just import the CommonJS modules (i.e. the .cjs extensions) rather than their ES6 counterparts (i.e. no extension).

Thus, simply changing:
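
(For illustration, assuming the v8 render-props API; the exact module specifier will vary with your React Spring version and entry point.)

```javascript
import { Spring } from 'react-spring/renderprops'
```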

To:
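
```javascript
// same import, now referencing the CommonJS build
import { Spring } from 'react-spring/renderprops.cjs'
```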

Resolves the issue.

And so, should you happen to come across build issues when using React Spring, a nice alternative to having babel-loader transpile the node_modules directory (or specific dependencies) is to simply import the CommonJS modules.

Synchronizing package.json with yarn.lock


After having used Yarn almost exclusively for the past couple of years, one nagging issue seemed to continually crop up: the inability to keep a project's package.json dependency versions in sync with the actual versions in yarn.lock. While running yarn upgrade results in all packages being updated to the latest versions (as allowed by the given semver ranges), the versions defined in package.json are not updated to reflect the versions they have been upgraded to.

This can prove problematic, as one cannot easily discern a project's installed dependency versions by simply viewing their respective values in package.json.

In particular, as part of our release process, after each production release I have scripts which automate updating all project dependencies to their respective latest minor and patch revisions prior to opening master for new development. While the scripts manage the updates and commits internally, each project's package.json would remain unmodified, making it challenging to determine which packages have been upgraded and which have not. Having to inspect the yarn.lock files, whether manually or via automation, is less than ideal, and quite cumbersome to say the least.

Fortunately, like most things in the JavaScript world, there is a package for this: syncyarnlock, which provides exactly what is needed to ensure that the dependency versions defined in package.json are kept in sync with the project's yarn.lock.

Simply install syncyarnlock, and execute with the options applicable to your needs.

For example, to sync a project’s package.json with the project’s yarn.lock, and have the ranges remain intact while updating the versions to reflect what will actually be installed, simply run: syncyarnlock -s -k.

This results in the dependency ranges being preserved exactly as written, while their versions are updated to match what will actually be installed.

And with that, we have proper syncing. A definite time-saver!

Separation of Concerns: propTypes and Immutable.js

When considering the separation of concerns between Container and Presentational Components (stateful / stateless components), I find it useful to leverage the core concepts of these patterns in order to define a clear boundary between where Immutable data types are used, and where raw JavaScript types are referenced exclusively.

By having a clear separation which compartmentalizes where Immutable types are used and where they are not, team members can easily determine a component's propTypes; without a clear cut-off point, one must stop and consider whether a prop passed down to a component will be an Immutable object or not.

It’s no stretch of the imagination to see how this can quickly lead to code which becomes much harder to maintain than it needs to be. As such, the Container / Presentational Component pattern provides a rather natural boundary for separating these concerns.

Unfortunately, however, while such a boundary may seem rather obvious, it is not always clearly defined, and this tends to lead to overly complex propType declarations.

For instance, on a number of occasions I’ve seen propTypes declared similar to the following:
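
(A sketch approximating the kind of declarations in question; SomePresentationalComponent and its props are hypothetical.)

```jsx
import PropTypes from 'prop-types';
import { Map } from 'immutable';

const SomePresentationalComponent = ({ someList, someItem }) => (
  /* render someList and someItem ... */
  null
);

SomePresentationalComponent.propTypes = {
  // sometimes a plain array, sometimes an Immutable.List (hence "object")
  someList: PropTypes.oneOfType([PropTypes.array, PropTypes.object]),
  // sometimes a plain object, sometimes an Immutable.Map
  someItem: PropTypes.oneOfType([PropTypes.object, PropTypes.instanceOf(Map)]),
};
```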

Given the above example, it’s obvious that it was unclear to the original implementor (or current maintainer) of SomePresentationalComponent as to what the expected propTypes will ultimately be. In certain cases, it appears someList could be of type array; whereas, in other cases, it could be of type object (e.g. Immutable.List). Likewise, in some cases someItem could be an object, whereas in others it could be an Immutable.Map.

As you can see, this is obviously problematic and a very good candidate for a bug (not to mention a maintenance headache).

Moreover, it results in all sorts of unnecessary type check permutations before accessing properties. For example, just to check the length of the list:
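
(A sketch of the kind of checks that result; getListLength is a hypothetical helper.)

```javascript
import { List } from 'immutable';

const getListLength = (someList) =>
  List.isList(someList)
    ? someList.size
    : Array.isArray(someList)
      ? someList.length
      : 0;
```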

Likewise, just to get the id of someItem:
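
(Another sketch, with a hypothetical getItemId helper.)

```javascript
import { Map } from 'immutable';

const getItemId = (someItem) =>
  Map.isMap(someItem)
    ? someItem.get('id')
    : someItem && someItem.id;
```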

At best, this is far from ideal, to say the very least …

Now, obviously the developer could simply define a single propType and refactor Containers which are passing an invalid type; however, it may not always be clear what the type should be. If, say, the component is used by multiple applications to which the developer does not have access, and some of those applications are not using Immutable.js, it would be best to simply disallow Immutable from the component altogether and have consumers of the component update their Containers. In any event, declarations like these are symptomatic of a team not having a clear understanding of which components work exclusively with Immutable data types, and which do not.

Solutions

Fortunately, as one might imagine, there are a couple of very simple solutions to this problem:

  1. Only use Immutable types throughout the entire application.
  2. Segment which components use Immutable types, and which do not.

Now, in some cases the argument for Option #1 may very well be a valid one; however, I find Option #2 to be much more feasible (and flexible), as it helps to ensure Presentational components are kept pure, which means only using native JavaScript types. For my purposes, this is especially important as I maintain a shared library which must limit dependencies as much as possible; some of the consuming projects use Immutable, Redux, etc., and some do not. As always – consider the context.

Pros

By having an internal design contract (or convention) which mandates that Container components only ever work with Immutable types, and Presentational components are only ever passed native JavaScript data types, it becomes much clearer to team members where the boundary is defined, and thus much easier to maintain a large application over time.

Furthermore, it allows less experienced developers to gradually become acclimated with the React Ecosystem by assigning them tasks focused on presentational features. This can be very useful as it only requires knowledge of core concepts without being inundated with additional libraries and APIs. This approach also affords team members with more experience to focus on the more complex portions of the application (application logic, reducers, containers, etc.).

In addition, destructuring, …rest parameters and related ES6 features can be used much more extensively to simplify implementation when using JavaScript types exclusively, helping to ensure Presentational components are kept intentionally “dumb”. Not to mention, in doing so, testing becomes considerably less complex when working with native JavaScript types – and this is equally important when helping newer developers become productive while still getting up to speed.

And, while not always likely, by reducing our dependency on Immutable.js we position ourselves for a much easier migration path in the event we decide to swap out Immutable for another library in the future.

Cons

Arguably, one could be justified in the assertion that only Immutable Data types should be used by both Container and Presentational components (Option #1), and indeed that would be a fair argument if you will be calling toJS() frequently when passing props down to Presentational Components (as there is obviously an inherent expense in doing so).

That being said, there is no reason one would need to call toJS when passing props to Presentational Components, as the Immutable API can be utilized to reduce the given props before they are passed down to child components. In either case, a Higher Order Component can be defined to handle the conversion, which can simplify implementation considerably.
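
For instance, a minimal sketch of such a Higher Order Component (assuming Immutable.js v3, where Iterable.isIterable is available; v4 exposes Immutable.isImmutable instead) might look like the following:

```jsx
import React from 'react';
import { Iterable } from 'immutable';

// Converts any Immutable props to plain JavaScript types before they
// reach the wrapped Presentational component.
const toJS = (WrappedComponent) => (props) => {
  const jsProps = Object.keys(props).reduce((acc, key) => {
    const value = props[key];
    acc[key] = Iterable.isIterable(value) ? value.toJS() : value;
    return acc;
  }, {});

  return <WrappedComponent {...jsProps} />;
};

// Usage: export default toJS(SomePresentationalComponent);
```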

Summary

Like most design decisions, there is rarely a one-size-fits-all approach that perfectly solves any given problem, and what ultimately makes sense in one context, may not always be appropriate in another. However, in the context of when and where Immutable types are used, in most cases it is fair to say there should always be a clear boundary defined, regardless of where that boundary must be.