
Front-End System Design: 12 Fundamentals You Need to Know

To commemorate the 20th Anniversary of this Blog, I wanted to publish an Article which would stand the test of time, providing value to developers of all levels, both now and for many years to come. In considering what subject to cover, I reflected on the recurring issues I have observed over the course of the past two decades. In nearly all cases, the issues encountered were almost exclusively related to a lack of awareness of specific design fundamentals. As such, the subject for this article naturally revealed itself …

The world in which we live consists of one philosophical constant – change. This fact is ever apparent in the continually evolving field of front-end web development. The overwhelming number of emerging libraries, frameworks, tools, and methodologies may seem daunting at times. However, by understanding the 12 fundamental system design principles outlined in this article, developers can move forward with confidence, knowing that they have strategies to employ which transcend technologies and trends.

This article aims to provide insight into each principle, presenting a succinct overview as well as a mental model, by way of metaphor, to help form a deeper conceptual framework. Understanding these fundamental principles and, more importantly, applying them in practice will serve you well when making design decisions, both now and well into the future …

Note, references to current frameworks, methodologies, and language specifics have been intentionally omitted so as to reflect the timelessness of each principle.

Fundamental to software architecture are specific tried and tested core principles which provide a conceptual baseline upon which a system can be designed. While there are many aspects to consider, the 12 Principles discussed in this article have been chosen specifically due to their being fundamental in nature. They are as follows:

  1. Separation of Concerns
  2. Abstraction
  3. Encapsulation
  4. Loose Coupling
  5. Composition Over Inheritance
  6. Single Source of Truth
  7. High Cohesion
  8. DRY (Don’t Repeat Yourself)
  9. KISS (Keep It Simple, Stupid)
  10. YAGNI (You Aren’t Gonna Need It)
  11. Law of Demeter (Principle of Least Knowledge)
  12. SOLID Principles

Each of these foundational principles embodies a core tenet of quality software design. They transcend technologies and frameworks, and applying them in practice will naturally translate across all programming paradigms. It is important to develop an understanding of each, at every scale, to the point where they become second nature when making design decisions – be it a relatively simple function or a large enterprise-class application.

If one were to ask any member of the teams I have guided throughout my career which single principle I have championed most, I am rather certain they would unanimously respond with: Separation of Concerns – and for good reason. Software architecture is all about managing complexity, and by utilizing Separation of Concerns, systems become significantly easier to manage. In fact, each subsequent principle outlined within this article is inherently related to Separation of Concerns. It is therefore appropriate to include Separation of Concerns as the first fundamental.


Separation of Concerns involves structuring a system into distinct sections, each addressing a specific concern. All Front-End Developers are familiar with this concept at a high level, as separating markup (HTML), styling (CSS), and behavior (JS) is an age-old best practice. This same concept can be applied across the entirety of the system, from individual pages and features down to low-level implementation specifics such as functions and conditions.

For instance, we have all seen components which start out relatively simple and are focused on a single concern, only to degrade over time to include additional responsibilities which are beyond the scope of the component’s original intent. More often than not, this is the result of developers failing to take a step back to think through how changes should be integrated when tasked with introducing new requirements, and instead, making the mistake of just adding additional logic and behaviors to the existing component. This very issue is intrinsically related to many of the principles included in this article, specifically, Abstraction, Loose Coupling, and the Open / Closed Principle.

Mental Model

Think of a house, where the concept of Separation of Concerns is similar to the distinct roles each area has within the home. Just as a house is divided into different rooms, each serving a specific purpose, the principle of Separation of Concerns involves structuring a software system into distinct sections, each concerned with a specific aspect of the system.

In a typical home, you may have a kitchen, living room, bedrooms, and a bathroom. Each room is designed and equipped for a specific function. The kitchen has appliances for cooking, the living room has seating and entertainment options, bedrooms have beds and wardrobes, bathrooms have showers and toilets, and so on. This intentional separation allows for efficient organization and effective maintenance. This same concept can then be further iterated on for each individual area, such as separating the concerns for each function within a given area. By utilizing Separation of Concerns, a structured and organized living space which is much easier to maintain can be achieved; for instance, if you want to remodel the bathroom, you don’t need to make changes to any other areas of the home.

Just as a well-designed house provides a comfortable and functional living space, applying Separation of Concerns in software development results in a modular system which is much easier to maintain and facilitate scale. Each ‘room’ or component can be implemented, tested, and modified independently, resulting in the system as a whole being much easier to adapt to change.


Separation of Concerns is the foundational principle upon which all subsequent principles build; it is simply about dividing a system into specific areas (concerns) based on what they are responsible for. By taking the time to define each concern based on its context and scope within the system, and implementing dedicated modules for each concern, the cumulative result will be a maintainable and scalable system which simply could not be achieved otherwise.
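As a minimal sketch of the idea, consider splitting a single "get and display a user" task into three dedicated functions, one per concern. The function names and data shape here are illustrative assumptions, not a prescribed API:

```javascript
// Concern 1: data access – knows how to find a user, nothing else.
function getUser(users, id) {
  return users.find((user) => user.id === id) ?? null;
}

// Concern 2: transformation – knows how to produce a display label.
function formatUserLabel(user) {
  return `${user.lastName}, ${user.firstName}`;
}

// Concern 3: presentation – knows how to produce markup (as a string here).
function renderUserBadge(label) {
  return `<span class="badge">${label}</span>`;
}

// Each concern can now be implemented, tested, and changed independently.
const users = [{ id: 1, firstName: 'Ada', lastName: 'Lovelace' }];
const badge = renderUserBadge(formatUserLabel(getUser(users, 1)));
```

Swapping the data source, the label format, or the markup now touches exactly one function, rather than one entangled block.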

Just as with Separation of Concerns, if one were to ask what the second most advocated principle I champion has been, I am certain they would respond with: Abstraction – and likewise for good reason, as a key aspect of Abstraction is that it serves as the primary agent of reuse – a very important point to keep in mind. In addition, Abstraction is the key mechanism by which the DRY Principle is facilitated.


The concept of Abstraction simply refers to the process of breaking down complexity into smaller, more discrete units, each responsible for a specific behavior in order to facilitate reuse and simplification.

Mental Model

Abstraction is perhaps one of the most omnipresent concepts in life, so much so that it often goes completely unnoticed due to its ubiquity. The simplification of highly complex concepts and representations into easily comprehensible constructs can be found literally everywhere. For instance, your name is merely an abstraction of what would otherwise be a highly complex representation encompassing your personality, experiences, characteristics, and so forth. The examples of abstraction are virtually endless, as it is employed everywhere to simplify our day-to-day lives.


By utilizing Abstraction consistently, over time you will get to the point where you intuitively “see” what should be abstracted, and when it should be abstracted – without having to give it much thought at all.

Abstraction requires discipline, as developers may initially feel that it results in “more work”; however, I strongly advise against this mindset, as it can easily be argued that not abstracting will ultimately result in considerably more work down the line, at which point addressing the technical debt will undoubtedly require more effort. Pay now, or pay more later.

A final takeaway I can attest to based on my experience is that by utilizing abstraction, you will become a significantly faster developer. I state this with confidence, as the additional time required when abstracting needs to be managed against your agreed-upon deliverables. However, by making a commitment to seize opportunities for reuse via abstraction, you will inevitably become a much better, and much faster, developer. Couple this with the additional level of reuse abstraction affords, and your future self and co-workers will thank you for having made the investment up front.
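A minimal sketch of abstraction as the agent of reuse: three near-identical, concrete formatters collapse into one parameterized helper. The currency symbols and two-decimal convention are illustrative assumptions:

```javascript
// Before: duplicated, concrete implementations, imagined scattered across
// components:
//   `$${value.toFixed(2)}`, `€${value.toFixed(2)}`, `£${value.toFixed(2)}`

// After: a single abstraction capturing the shared behavior, with the
// varying part (the symbol) promoted to a parameter.
function formatPrice(value, symbol = '$') {
  return `${symbol}${value.toFixed(2)}`;
}

const usd = formatPrice(19.5);      // dollar by default
const eur = formatPrice(19.5, '€'); // any other currency via the parameter
```

The abstraction is also the single place to change when, say, the decimal convention needs to differ per locale.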

A core concept in object-oriented programming is that of Encapsulation, which involves co-locating state and the APIs which operate on that state within a single unit. While in traditional OOP, the unit would typically refer to a class, this concept can also be applied to virtually anything; from a large component down to a small function.


The idea behind Encapsulation is simply to restrict access to internal state to the unit itself, preventing accidental public exposure and unintentional access and / or misuse of the operations and state; access should only ever be exposed via a public API. The less state and internals exposed by a component, the better.

Mental Model

Encapsulation in software development can be metaphorically compared to a vending machine. Imagine a vending machine filled with various snacks and drinks. From the outside, users interact with a simple interface: they select a product and make a payment. However, the internal mechanisms of the vending machine – the way it stores items, manages inventory, and processes payments – are hidden from the user.

In this analogy, the vending machine represents a component or module in the system. Clients (users of the component), much like customers of the vending machine, only see and interact with its public interface. The internal state and logic of the component – akin to the inner workings of the vending machine – are encapsulated and hidden from the outside world.

This encapsulation ensures that the internal state of a component is protected from unintended interference and misuse, similar to how a vending machine’s inventory and internal mechanisms are safeguarded behind its exterior. This allows developers to change the internal implementation without affecting other parts of the system that depend on the component, just as the internals of a vending machine can be changed without altering the customer’s experience.
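The vending machine metaphor can be sketched directly with a closure, where the internal state (`stock`, `credit`) is unreachable from outside and only the public API can operate on it. The item names and the flat price of 1 are illustrative assumptions:

```javascript
function createVendingMachine() {
  // Private state: no reference to these ever escapes this scope.
  const stock = new Map([['soda', 2]]);
  let credit = 0;

  // Only this public API is exposed.
  return {
    insertCoin(amount) {
      credit += amount;
    },
    buy(item) {
      const remaining = stock.get(item) ?? 0;
      // Assumed pricing: every item costs 1 credit.
      if (remaining === 0 || credit < 1) return null;
      stock.set(item, remaining - 1);
      credit -= 1;
      return item;
    },
  };
}

const machine = createVendingMachine();
machine.insertCoin(1);
const purchase = machine.buy('soda');
```

Because `stock` and `credit` are closed over rather than attached to the returned object, callers cannot corrupt them; the internals could later become a class with private fields without changing the public API.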


In front-end development, encapsulation allows us to create components that hide their internal state and behavior, and only expose what is absolutely necessary; thus, providing a clear and consistent API. This not only enhances readability and maintainability but also improves data integrity and security within the application. Effective use of encapsulation leads to more modular and scalable solutions, allowing developers to change one part of the system without affecting others – which is a good segue into our next fundamental.

We have all experienced this at one time or another – you make a change in one component and inadvertently introduce a regression in another. This is almost always the result of tight-coupling, which is precisely what Loose Coupling aims to address.


Loose Coupling is the concept of designing components such that changes in one area will not affect those in other areas. This decoupling can be accomplished utilizing many of the other 12 Fundamental Principles, specifically, Separation of Concerns, Abstraction, Encapsulation, Single Responsibility Principle, and the Dependency Inversion Principle.

Mental Model

Metaphorically, Loose Coupling can be compared to an entertainment system in a home where you have a TV, a speaker system, a streaming device, and a remote control. Each component operates independently: the TV displays content, the speakers provide audio, the streaming device offers various content options, and the remote controls these devices.

While each is connected to form the whole, they are not connected in such a way that a change in one component should necessitate changes in other components. For instance, if you replace the streaming device with a newer model, this doesn’t require a new TV or speaker system; they continue to function together seamlessly. This independence and interchangeability reflects the essence of loose coupling in software, whereby different modules or components interact with each other through well-defined interfaces, without being heavily reliant on the implementation specifics of the others in the system.


By designing components such that each operates independently and only interacts with other parts of the system based on specific integration points, we can build robust systems which are not prone to the negative effects of brittle, tightly coupled designs.

Loose Coupling always results in systems which are significantly easier to maintain and scale, as, should a change be required in one area, the impact of that change on other areas will be minimal. These benefits are greatly enhanced when utilizing the Dependency Inversion Principle, which is also outlined in this article.

A final takeaway on Loose Coupling based on my experience: as a general rule of thumb, if it is hard to unit test, the SUT (system under test) is likely not utilizing Loose Coupling.
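One common way to achieve this decoupling is a minimal publish / subscribe broker: the two "components" below never reference each other, only the shared event name. The `'cart:updated'` channel and the header / cart roles are illustrative assumptions:

```javascript
function createEventBus() {
  const handlers = new Map();
  return {
    on(event, handler) {
      const list = handlers.get(event) ?? [];
      list.push(handler);
      handlers.set(event, list);
    },
    emit(event, payload) {
      (handlers.get(event) ?? []).forEach((handler) => handler(payload));
    },
  };
}

const bus = createEventBus();
const received = [];

// "Header" component: knows only the event name, not the cart.
bus.on('cart:updated', (count) => received.push(count));

// "Cart" component: knows only the event name, not the header.
bus.emit('cart:updated', 3);
```

Replacing either side, for instance swapping the header for a different badge component, requires no change to the other, which is precisely the loose coupling being described.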

Having its origins in the context of Object-Oriented Programming, the Principle of Composition Over Inheritance simply refers to composing a class from other classes to provide specific functionality, rather than extending an existing class and providing the functionality directly in the sub-class.


While Composition Over Inheritance is traditionally an aspect of Object-Oriented Programming – and remains relevant in the design of non-visual front-end APIs – more generally speaking, in front-end development, preferring composition over inheritance equates to designing complex features as a composition of many smaller sub-features. This affords the ability to modify a part of the system without needing to make changes to other parts, or to the system as a whole.

Mental Model

Metaphorically, composition over inheritance is akin to assembling a kitchen within a home based on various standalone appliances. For example, imagine a kitchen where you have a coffee maker, a blender, a toaster, and a microwave. Each appliance performs a specific function and can be used independently or in combination with others to accomplish a task, such as making breakfast. If you need new functionality, say, a food processor, you can simply add it to your kitchen without altering the existing appliances.

This modular approach, where you compose a kitchen with different appliances, mirrors the principle of composition over inheritance in software. Rather than inheriting all functionalities from a monolithic parent component (a multi-function appliance), you compose a component with other components as needed (like adding various kitchen appliances). This in turn provides a great amount of flexibility and ease of maintenance; allowing for more customizable and adaptable systems.


While inheritance isn’t always directly relevant in functional, component-based frameworks, composition most certainly is. It is a primary tenet of modularity, and by composing features from a number of sub-features, the resulting benefit is a system which can adapt to change much more easily while implicitly adhering to many of the 12 Principles, specifically Loose Coupling, Abstraction, and Separation of Concerns.

As with all of the 12 Fundamental Principles, the scope of Composition Over Inheritance can be applied at scale to all aspects of the system, such as breaking down large conditions into a set of smaller conditions, or large functions into a composition of smaller functions, composing larger features based on a set of smaller sub-features, etc.
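The kitchen-appliance metaphor can be sketched as small, standalone capabilities that are composed into an object, rather than inherited from a deep class hierarchy. The capability names (`canBrew`, `canBlend`) and the factory shape are illustrative assumptions:

```javascript
// Each capability is a small, standalone "appliance".
const canBrew = (state) => ({
  brew: () => `${state.name} brews coffee`,
});

const canBlend = (state) => ({
  blend: () => `${state.name} blends smoothies`,
});

// Compose only the capabilities a given appliance needs; adding a new
// capability later does not alter the existing ones.
function createAppliance(name, ...capabilities) {
  const state = { name };
  return Object.assign({ name }, ...capabilities.map((cap) => cap(state)));
}

const comboMachine = createAppliance('Combo', canBrew, canBlend);
const result = comboMachine.brew();
```

A coffee-only appliance simply omits `canBlend`; there is no parent class to refactor when requirements diverge.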

Most developers have come across this problem at one time or another: a ticket gets created for a bug where users complain that they are making updates in one area of the application, but those changes are not being reflected in another area. This issue is almost always related to violating the Single Source of Truth Principle, as what should be represented as a single state, in a single location within the application, is instead duplicated and referenced differently by different components.


The Principle of Single Source of Truth simply entails having a single, authoritative location where a particular piece of information (state, etc.) resides. In the context of front-end development, this generally equates to having immutable state management strategies where the state of the application is managed centrally rather than scattered across various components.

Mental Model

Single Source of Truth can be compared to a bank account number. Just as a bank account number uniquely identifies your account and all transactions associated with it, serving as the single, authoritative reference to your account, Single Source of Truth in software ensures that every piece of data exists in only a single location. In the context of this metaphor, regardless of how many times you might access your account — be it through an ATM, online banking, or from a physical branch — the account number is always a reference to the same account. This prevents discrepancies and confusion.


It is important to design systems such that all references to application and / or asynchronous state are unified. Transformations of shared state should be the responsibility of the components which need to make such transformations, as opposed to duplicating state to accommodate the needs of specific components. By having a single, authoritative source of data, we can ensure that all components reference and / or update the same data / state, maintaining consistency and integrity throughout the application.
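A minimal sketch of this idea: one store owns the state, and every consumer derives its view from that single location instead of keeping a private copy. The store shape and the two derived "views" are illustrative assumptions:

```javascript
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    setState(partial) {
      // Immutable update: a new state object replaces the old one.
      state = { ...state, ...partial };
      listeners.forEach((listener) => listener(state));
    },
    subscribe(listener) {
      listeners.push(listener);
    },
  };
}

// The single, authoritative source of truth.
const store = createStore({ itemCount: 0 });

// Two "components" deriving from the same source rather than duplicating it.
let headerView = '';
let sidebarView = '';
store.subscribe((s) => { headerView = `Cart (${s.itemCount})`; });
store.subscribe((s) => { sidebarView = `${s.itemCount} item(s)`; });

store.setState({ itemCount: 2 });
```

Because both views are derived from the one store, they can never disagree, which is exactly the class of bug the principle prevents.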

Anyone who regularly conducts code reviews has inevitably found themselves dumbfounded at one time or another as to why a specific piece of code has been included within a certain context. For example, an implementation-specific, non-generic string utility added to a common String utilities module, or a specific translation entry added to a completely unrelated translation document, etc. These types of mistakes always result in Low Cohesion, which is certainly not desirable, and is effectively the inverse of High Cohesion.


High Cohesion in front-end development refers to the practice of co-locating related functionality so that each module or component is focused and designed around a specific task. High Cohesion is often correlated to Loose Coupling, and vice-versa.

Mental Model

High Cohesion can be likened to a well-organized toolbox. In a toolbox, tools are organized in a way that each tool is grouped by its specific function – for instance, screwdrivers of various sizes in one compartment, wrenches in another, and so forth. Each compartment represents a cohesive unit, holding tools that are closely related in terms of their function. This makes it easy to find and use the tools for a particular task, enhancing efficiency and orderliness.

Similarly, in software development, high cohesion refers to the practice of designing components or modules in a way that they are focused on a specific set of related tasks or functionalities. Like the compartmentalized toolbox where each section is dedicated to a specific type of tool, a highly cohesive software module or component focuses on a particular aspect of the application’s functionality, making the software easier to maintain, understand, and extend.
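As a small sketch of the "compartment" idea, all date-formatting concerns live together in one focused module object rather than being scattered across unrelated utility files. The module and function names are illustrative assumptions:

```javascript
// A cohesive "compartment": everything here is about formatting dates,
// and nothing about formatting dates lives anywhere else.
const dateUtils = {
  pad(n) {
    return String(n).padStart(2, '0');
  },
  toIsoDay(date) {
    return `${date.getFullYear()}-${this.pad(date.getMonth() + 1)}-${this.pad(date.getDate())}`;
  },
};

const day = dateUtils.toIsoDay(new Date(2024, 0, 5));
```

Anyone looking for date behavior knows exactly where to find it, which is the discoverability benefit described below.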


Utilizing high cohesion results in increased discoverability, which is crucial to facilitating reuse: by having related components and APIs intuitively structured, it becomes much easier for teams to understand the system and find what exists within the codebase, thus reducing the probability of duplication – which leads us to our next Principle …

Another common issue we often find ourselves having to contend with is that of repetition. This can manifest in many, many ways. For instance, something as seemingly simple as repeated references to the same property over and over again, rather than creating a variable which points to the property and using it throughout. Or developers simply copy / pasting implementations repeatedly across components, as opposed to utilizing Abstraction to create a reusable implementation for use by all components. The number of permutations of duplication is virtually endless; however, by thinking in terms of the DRY Principle, we can ensure such duplication is avoided.


The DRY (Don’t Repeat Yourself) Principle emphasizes the importance of reducing repetition and duplication across a system. This can be accomplished by simply ensuring each piece of functionality is implemented only once; thus, DRY helps promote a significantly cleaner, easier to comprehend codebase which promotes reuse and is easier to maintain.

Mental Model

Imagine you are baking cookies and you have a cookie cutter mold in a specific shape, say a star. This mold allows you to consistently produce star-shaped cookies without having to manually shape each one.

In a non-DRY approach, this is akin to not having a cookie cutter and shaping each cookie by hand. This method is not only time-consuming but also leads to inconsistencies in size and shape.

Applying the DRY principle is like using a cookie cutter mold. You create the design once, and then you use it repeatedly to produce consistent results. This ensures uniformity while also saving a significant amount of time. If you need to change the design, you simply modify the mold (the single implementation), and every new cookie (function or feature) produced thereafter will reflect this change automatically.

This metaphor illustrates how the DRY principle in software development encourages creating a single, reusable ‘mold’ for recurring patterns or functionality. Just like the cookie cutter allows for efficient and consistent cookie production, DRY leads to more efficient and consistent code by reducing repetition and simplifying maintenance. When a change is needed, we need only update a single implementation, which ensures all parts of the system which use the implementation (mold) are automatically updated.


In the context of software, DRY isn’t just about avoiding duplicate lines of code. It’s about recognizing patterns and abstracting them into reusable components – be it functions, classes, modules, etc. This abstraction means that a single change can be reflected consistently throughout the application, while also being testable in isolation from the rest of the system.

One point to keep in mind; however, is that it is important to strike a balance. That is, overzealous application of DRY, like overly relying on a single cookie cutter mold for multiple unrelated purposes, can lead to convoluted and rigid structures. The key is to identify and abstract genuinely reusable elements, ensuring that the abstraction makes logical sense and enhances the system’s maintainability and scalability.
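The cookie-cutter "mold" can be sketched as a single required-field check that is implemented once and reused across otherwise unrelated forms, instead of being re-written in every handler. The field names and payload shapes are illustrative assumptions:

```javascript
// The single "mold": one implementation of the required-field rule.
// Returns the names of any fields that are missing or empty.
function requireFields(payload, fields) {
  return fields.filter((field) => !payload[field]);
}

// Reused across unrelated forms; a change to the rule happens in one place.
const loginErrors = requireFields({ user: 'ada' }, ['user', 'password']);
const profileErrors = requireFields({ name: 'Ada', bio: 'x' }, ['name', 'bio']);
```

If the definition of "missing" ever changes (say, to also reject whitespace-only strings), only `requireFields` needs updating, and every form benefits automatically.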

When designing a system it can be tempting to anticipate additional features and requirements that will be needed in the future. However, it is important to exercise restraint and address any anticipated changes by ensuring our designs can support them, rather than adding them straight away. This mindset embodies the Principle of KISS (Keep It Simple, Stupid).


The KISS Principle encourages simplicity in design. In front-end development, this can encompass anything from favoring straightforward solutions over unnecessarily complex ones, to choosing simpler algorithms rather than overly arcane implementations, to resisting the urge to include additional libraries or heavy frameworks when doing so would be overkill.

Mental Model

The KISS Principle can be compared to cooking a meal. For instance, imagine you’re preparing a dish – you have a number of ingredients at your disposal. The KISS principle here would be synonymous with choosing to create a delicious, yet simple meal that requires fewer ingredients and straightforward techniques, rather than opting for an overly elaborate recipe that complicates the cooking process and increases the chances of something going wrong.

In the context of front-end development, KISS encourages the use of simple, straightforward solutions. It’s akin to choosing to make a classic dish that you know well, rather than attempting an intricate recipe for the first time when you have guests over in a few hours. For example, if you need to implement a feature that could be accomplished with basic JavaScript, it would be more in line with KISS to avoid introducing a new library that adds unnecessary complexity and overhead to your project.

It is important to note that this principle is not about always choosing the simplest possible solution, but rather, it is about not introducing unnecessary complexity – remember: software architecture is all about managing complexity.

In our cooking metaphor, it’s not about always opting to make a sandwich, but choosing not to prepare a five-course meal when a single, well-made dish would suffice. It’s about understanding the requirements and context, and then applying the most straightforward solution that effectively meets current requirements.
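In code, that judgment might look as small as this hypothetical sketch: de-duplicating an array with the simplest adequate built-in tool, rather than a hand-rolled loop or an external utility library:

```javascript
// The simple, adequate solution: a Set already guarantees uniqueness and
// preserves insertion order, so no library or manual bookkeeping is needed.
function uniqueTags(tags) {
  return [...new Set(tags)];
}

const tags = uniqueTags(['js', 'css', 'js', 'html']);
```

Reaching for a utility library here would add a dependency, and complexity, for something the platform already does in one line.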


In essence, KISS is about efficiency and simplicity. It serves as a reminder to favor simplicity over complication – to focus on the essentials and keep things as uncomplicated as possible, while also ensuring your designs can accommodate additional complexity when it is actually needed. The KISS Principle is closely related to our next principle as well …

Similar to the KISS Principle, the YAGNI Principle advocates refraining from adding anything that is not immediately necessary to satisfy the current requirements. By resisting the temptation to over-engineer parts of the system, we ensure our codebase remains lightweight and maintainable.


The YAGNI Principle advises against implementing features or functionalities until they are actually needed. In front-end development, this can include anything from adding additional libraries or tooling prior to having an actual need to include them, to delivering overly complex features prior to having an MVP released.

Mental Model

The YAGNI principle in software development can be likened to packing for a vacation – you have a suitcase and a wide variety of items you could potentially take with you. YAGNI is like having the wisdom to only pack what you know you will need for the trip, rather than burdening yourself with items for every possible – and unlikely – scenario. By leaving behind the items you would only be packing “just in case,” your planning and travel experience becomes much simpler – not to mention, you’ll have a lighter suitcase to carry around.

In front-end development, applying YAGNI means focusing on what is essential for the application’s current requirements. It’s akin to packing your suitcase with just the right amount of essentials necessary for your trip, and leaving behind the “just-in-case” items that are unlikely to be used and add nothing more than additional weight. For instance, if you’re building a simple web application, YAGNI would advocate for introducing only a proportionally simple framework rather than a potentially overly complex option, especially if it is known upfront that the application will not require additional features once completed.

In essence, YAGNI encourages developers to add complexity only when it is justified by a real and present need.


The YAGNI Principle is about avoiding the temptation to include unnecessary features, tools, or additional functionality based purely on the speculation that they might be useful in the future. This temptation often arises from developers wanting to utilize a new library or framework; however, the decision to do so should be based on the current needs of the business and the technical specifics of the project – there is no need to introduce a solution for which there is no problem.

By adhering to YAGNI, we can ensure that our application remains agile, lightweight, and easy to manage, only ever adding what is needed, when it is needed.

The Law of Demeter (Principle of Least Knowledge) advocates that a given unit of code should have limited knowledge about other units; that is, only closely related units should have cross-cutting concerns with other units in the system. In front-end development, this could be expressed as: a component should not directly manipulate or rely on the internal workings of other components beyond what is explicitly required.


The Law of Demeter, also referred to as The Principle of Least Knowledge, is a design guideline which emphasizes minimizing the knowledge that a particular module has about the internal workings of other modules in the system. This concept is fundamentally about reducing tight-coupling between components, so as to promote Loose Coupling.

Mental Model

Conceptually, this Principle can be thought of as having a “need-to-know” basis within an organization. For instance, imagine an organization where each department focuses solely on its specific responsibilities and communicates with other departments in a controlled and limited manner. Each department only knows and interacts with the information and resources that are directly related to its function, rather than being entangled in the specifics of other departments’ operations.

In the context of front-end development, this principle can be visualized as a component structure where each component is like an individual department. According to the Law of Demeter, a component should not have intricate knowledge of the internal workings of other components. It should interact with other components or sub-components in a restricted and well-defined manner. This approach is akin to departments communicating with one another through official channels and predefined protocols, ensuring that each maintains its autonomy and boundaries.

For example, in a web application, consider a parent component that renders child components. The Law of Demeter suggests that the parent should not directly manipulate the state or internal methods of the child components. Instead, it should pass state as needed, akin to sending official memos or directives. This encapsulates the child components’ internal logic, allowing them to operate independently of the parent’s implementation details, much like how a department functions autonomously within the larger framework of the organization.


The Law of Demeter in the context of front-end development is concerned with how components interact with each other: by maintaining their autonomy, and interacting through well-defined interfaces, much like departments within an organization communicate through official protocols. The result will be a much more modular, decoupled and cohesive system.
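A minimal sketch of the difference: the caller asks the order for what it needs instead of reaching through its internal structure with a "train wreck" chain. The object shape and names here are illustrative assumptions:

```javascript
const order = {
  customer: { address: { city: 'Oslo' } },
  // The order exposes exactly the knowledge collaborators need, and no more.
  shippingCity() {
    return this.customer.address.city;
  },
};

// Violates Demeter: couples the caller to the order's internal shape, so
// restructuring `customer` or `address` would break every such caller.
//   const city = order.customer.address.city;

// Respects Demeter: one well-defined interaction point; the internal
// structure can change freely behind it.
const city = order.shippingCity();
```

If the address later moves to a separate shipping record, only `shippingCity` changes; callers, like departments communicating through official channels, are unaffected.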

Arguably the most proliferated of all system design principles, SOLID Principles present a set of five specific design guidelines which collectively promote elegant software designs with an emphasis on maintainability and scale.

The 5 core Principles of SOLID are briefly described below.

  • The Single Responsibility Principle is essentially akin to Separation of Concerns, whereby each module or component in the system should have only a single responsibility, thus reducing complexity and improving maintainability.

    Mental Model

    The Single Responsibility Principle can be metaphorically compared to the roles of individuals in an organization whereby each employee has a distinct and specific role. For example, an accountant is responsible for managing finances, a human resources officer handles employee-related issues, a salesperson focuses on selling the company’s products or services, and so on. Just as the accountant doesn’t manage sales, and the salesperson doesn’t handle HR issues, in software design, each module should have one, and only one, responsibility, and thus, only one reason to change. This results in an encapsulation of all of the responsibilities associated with a specific role within the organization.

  • The Open / Closed Principle prescribes that components should be open for extension, but closed for modification. This helps promote scalability while reducing the risk of introducing breaking changes to existing code.

    Mental Model

    The Open / Closed Principle in software design can be compared to a concert stage. For instance, imagine the stage is initially set up for a particular performance, say, a classical music concert. The stage (software module) is designed and built (implemented and tested), and serves its purpose (satisfies the requirements) for the concert. Now, suppose the next event is for a completely different genre, say, an EDM festival. Rather than completely dismantling and rebuilding the stage (modifying the existing code), the stage is simply extended – perhaps by adding a new lighting system, an enhanced speaker array, etc. – thus extending the existing functionality.

  • The Liskov Substitution Principle specifies that objects within a system should be able to be used interchangeably with instances of their subtypes without altering the correctness of the program.

    Mental Model

    Think of this principle as being similar to interchangeable power adapters for electronic devices. Just as different adapters can be used interchangeably with various devices without affecting their operation, components in a system should be replaceable with instances of their subtypes without altering the program’s correctness.

  • The Interface Segregation Principle advocates that larger components should be broken down into smaller, more specific components so as to ensure each only needs to interact with the APIs that are relevant to them.

    Mental Model

    Consider a multi-purpose screwdriver with detachable bits, each bit serving a specific function, such as a Phillips-head bit, a flathead bit, etc. Users attach only the bit they need, similar to how larger components should be broken down into smaller, more specific components. This way, each part of the system only interacts with the interfaces it requires, making it more manageable and easier to understand.

  • The Dependency Inversion Principle states that systems are to depend on abstractions, rather than concrete implementations. In doing so, we afford ourselves the ability to decouple dependencies, which results in significantly more flexibility. This principle is closely related to the concepts of IoC Containers, Dependency Injection, and Inversion of Control.

    Mental Model

    This principle is like hiring a contractor to remodel your house. You’re not concerned with the specific tools they use; you’re interested in the skills and results they provide. In software, systems should depend on abstractions (the skills) rather than concrete implementations (the specific tools), allowing for greater flexibility and easier maintenance.
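As a brief illustration of the last of these, a Dependency Inversion sketch might look as follows (the class names and string-returning send methods are purely illustrative):

```javascript
// Concrete "tools": each satisfies the same implicit abstraction,
// namely anything with a send(message) method.
class EmailSender {
  send(message) {
    return `email: ${message}`;
  }
}

class SmsSender {
  send(message) {
    return `sms: ${message}`;
  }
}

// The notifier depends on the abstraction (the injected sender),
// never on a concrete transport, so transports can be swapped freely.
class ReportNotifier {
  constructor(sender) {
    this.sender = sender; // dependency injection
  }
  notify(report) {
    return this.sender.send(report);
  }
}

console.log(new ReportNotifier(new EmailSender()).notify('Q1 totals')); // email: Q1 totals
console.log(new ReportNotifier(new SmsSender()).notify('Q1 totals'));   // sms: Q1 totals
```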

Hopefully this article has provided some insight into the timeless benefits afforded by these 12 fundamental principles. If you are new to them, I encourage you to try a practical exercise in which you simply apply these principles in a context unrelated to software development.

For instance, make a list of the 12 principles and use them to re-organize your workspace, crossing out each as it is applied. Once completed, try applying them to each specific area of your workspace. Continue this exercise for other areas of your living space, such as re-organizing your closet or your kitchen cabinets. These exercises will not only lead to practical and productive results, but will also help you form a mental model of the principles in practice, which will naturally translate into your development practices. The more you apply them, the more intuitively you will recognize when to apply them in your day-to-day work.

The landscape of front-end web development is constantly evolving; however, by understanding these 12 Fundamental Principles, and more importantly, applying them in practice, developers and teams can confidently design elegant architectures which not only meet current and future demands, but also allow for seizing new opportunities.


Native Data Categorization with Object.groupBy

The introduction of Object.groupBy allows for a streamlined, native approach to organizing collections of data based on user defined criterion; thus simplifying the task of data analysis and categorization without the need for managing third-party dependencies.

Using Object.groupBy is simple and straightforward. If you have previously used Lodash’s groupBy, then you are already familiar with its API. Object.groupBy accepts an array and a callback function which defines the grouping logic, and returns an object of groupings based on the callback function’s returned key.

For example, we can group employees by department as follows:

In the above example, we see that the provided array can easily be grouped by a specific property, in this case, by department. We could just as easily have grouped the array by any other property, such as date of hire (doh), to categorize employees by tenure (more on this shortly).

Indeed, Object.groupBy is particularly useful for grouping collections of objects; however, it is not restricted to objects alone, as it can be used to group primitives as well:

While the above examples are useful in their own right, the real power of Object.groupBy is revealed when more complex logic is required for determining groupings. For example, we can group the employees array by tenure as follows:

New features such as Object.groupBy serve to highlight the TC39 Committee’s commitment to providing developers with powerful tools which simplify common tasks. By introducing a native facility for grouping objects, Object.groupBy reduces overhead and improves maintainability, while also opening up new opportunities for native data aggregation and analysis.

Update: November 28, 2023: Object.groupBy is now at Stage 4.

Simplified Error Handling with Error Causes

Exception handling is a critical aspect of ensuring the reliability and resilience of a system. Of equal importance, perhaps, is the ability for developers to easily trace exceptions back to their root cause. Traditionally, however, this process in JavaScript has required rather convoluted solutions, leading to intricate patterns that ultimately continued to obscure the underlying root cause. Fortunately, with the introduction of the error.cause property, JavaScript debugging has taken a significant step forward toward simplifying this process, providing native capabilities which facilitate improved error traceability.

In the legacy paradigm, JavaScript’s error handling was akin to a labyrinth, often requiring developers to traverse a complex maze of stack traces in order to pinpoint the origin of an issue. This often necessitated verbose logging mechanisms which, while somewhat effective, lacked fundamental standardization and tended to introduce additional layers of complexity that had to be integrated within a system and understood by team members.

While error cause contexts have long been available in numerous other languages such as Rust/WASM, Python, etc., JavaScript has historically lacked such a facility. Thus, to mitigate this shortcoming, developers would resort to basic workarounds, such as appending custom properties to re-thrown errors or concatenating error messages. Although these solutions provided a makeshift bridge to identify error causes, they were convoluted at best, and often led to fragmented and inconsistent implementations which never truly solved the problem at hand.

The error.cause property heralds a new era, providing a streamlined approach to attach and propagate the underlying cause of an error, offering a standardized approach for encapsulating the origin of subsequent errors.

Consider the traditional approach where a custom property might have been used to include information related to the originating error:
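A sketch of that legacy workaround (the fetchConfig function and the originalError property name are illustrative, there was no standard for this):

```javascript
function fetchConfig() {
  try {
    JSON.parse('{ invalid json'); // simulate a low-level failure
  } catch (err) {
    const wrapped = new Error('Failed to load configuration');
    wrapped.originalError = err; // ad-hoc, non-standard property
    throw wrapped;
  }
}

try {
  fetchConfig();
} catch (err) {
  console.log(err.message);                              // Failed to load configuration
  console.log(err.originalError instanceof SyntaxError); // true
}
```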

With error.cause, the same can now be achieved natively, while retaining the full stack trace back to the originating root cause:
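For example (again using an illustrative fetchConfig helper):

```javascript
function fetchConfig() {
  try {
    JSON.parse('{ invalid json'); // simulate a low-level failure
  } catch (err) {
    // The standardized mechanism: pass the originating error as the cause
    throw new Error('Failed to load configuration', { cause: err });
  }
}

try {
  fetchConfig();
} catch (err) {
  console.log(err.message);                      // Failed to load configuration
  console.log(err.cause instanceof SyntaxError); // true
}
```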

The benefits of adopting error.cause are manifold, resulting in a significant improvement to Developer Experience through native error traceability. A few key benefits include:

Clarity: Provides a clear lineage of errors, akin to a well-documented review process, making it much easier to understand the flow of exceptions.

Consistency: Promotes a more uniform error handling mechanism across applications.

Simplicity: Reduces the need for additional error handling constructs, streamlining error propagation and handling.

As with countless other language enhancements, the introduction of the error.cause property is a testament to JavaScript’s evolution, offering developers a robust and simplified error handling mechanism, providing a more reliable facility for error tracing, and reshaping the way debugging and exception management can be approached.

The Pipe Operator: A Glimpse into the Future of Functional JavaScript

In the dynamic landscape of JavaScript, the TC39 proposal for the Pipe Operator stands out as an interesting progression in streamlining function composition in a way that increases readability, maintainability, and DX.

In this article, we dive a bit deeper into the realms of functional programming in JavaScript, and how upcoming language features such as the Pipe Operator aid in the ability to facilitate a more declarative approach to functional programming.

At its core, the Pipe Operator, denoted by (|>), introduces syntactic sugar for function composition, allowing developers to pass the result of an expression as an argument to a function. And, while the syntax may appear somewhat unfamiliar at first glance, this seemingly simple language feature harbors some rather profound implications for code clarity and maintainability.

Before diving into some examples, let’s first take a look at how functions are typically composed in JavaScript, and then touch on some of the drawbacks that result from these traditional approaches.

For instance, consider this simple example which demonstrates how one could compose three functions:
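Something along these lines (the addA, addB, and addC helpers are illustrative stand-ins for arbitrary functions):

```javascript
const addA = (s) => s + 'a';
const addB = (s) => s + 'b';
const addC = (s) => s + 'c';

// To read this, one must start from the innermost call and work outward
const result = addC(addB(addA('')));
console.log(result); // 'abc'
```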

As can be seen in the above, composing functions together in this manner is cumbersome at best. Moreover, implementations such as this significantly lack in readability as they effectively obscure intent; that is, we simply want to end up with “abc”, but to do so requires an inversion of our thinking.

Of course, we can simplify things quite a bit by implementing a simple compose function (or utilizing a utility library, such as lodash/fp), which we can then leverage for composing functions in a more natural way:
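A minimal right-to-left compose helper, similar in spirit to what lodash/fp provides (the helper and function names are illustrative):

```javascript
// Composes functions right-to-left: compose(f, g, h)(x) === f(g(h(x)))
const compose = (...fns) => (input) =>
  fns.reduceRight((value, fn) => fn(value), input);

const addA = (s) => s + 'a';
const addB = (s) => s + 'b';
const addC = (s) => s + 'c';

// Invocation can now be deferred, but the arguments still read right-to-left
const makeAbc = compose(addC, addB, addA);
console.log(makeAbc('')); // 'abc'
```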

With the above implementation, managing the composition of functions becomes easier – and we can also defer invoking the function to a later time. Yet, it still leaves much to be desired, especially in terms of maintainability. For instance, should we need to change the composition, the order of arguments must be changed proportionately.

Alternatively, developers may choose to bypass chaining altogether and opt for a temporary variable approach in order to simplify implementation and readability. For example:
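Such an approach might look as follows (again using the illustrative addA/addB/addC helpers):

```javascript
const addA = (s) => s + 'a';
const addB = (s) => s + 'b';
const addC = (s) => s + 'c';

// Each intermediate result is assigned to its own constant
const withA = addA('');
const withB = addB(withA);
const result = addC(withB);
console.log(result); // 'abc'
```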

While this is rather subjective, the use of temporary variables arguably creates unnecessary cognitive load as one must follow the order of assignments, and contend with temporary values which, if not implemented as constants, could lead to potential mutations, etc.

In contrast to the traditional approach to nested function calls, which results in a right-ward drift that is challenging to read and understand, the Pipe Operator turns this paradigm on its head, so to speak, by enabling left-to-right composition of functions which organically reflects our natural way of thinking and recognizing patterns, as can be seen in the following:
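A sketch of the general shape (proposal syntax, shown in the F#-style reading described here; this is not yet runnable natively and requires a transform such as @babel/plugin-proposal-pipeline-operator; expression and the functions are placeholders):

```javascript
// Reads left-to-right: expression flows through functionA, then functionB, then functionC
const result = expression
  |> functionA
  |> functionB
  |> functionC;
```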

In the above example, expression is the value that is first passed to functionA, the result of which (i.e. the value returned from functionA) is then passed to functionB, and so on until the last function (functionC) returns, at which point the final value is assigned to result. The readability of this approach as compared to traditional function composition is self-evident, reducing cognitive load and making the flow of data much more apparent.

Given the previous examples, with the Pipe Operator, we can now simplify the implementation in a much more natural way:
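Revisiting the earlier "abc" composition under the proposal (again requiring a Babel transform; the addA/addB/addC helpers are illustrative):

```javascript
const addA = (s) => s + 'a';
const addB = (s) => s + 'b';
const addC = (s) => s + 'c';

const result = ''
  |> addA
  |> addB
  |> addC;
// result === 'abc'
```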

The simplicity and utility of the Pipe Operator results in much more succinct expressions, which in turn reduces the mental overhead of reading and understanding the implementation’s intent.

The practical applications of the Pipe Operator are vast, as they can be used to simplify compositions for everything from data processing pipelines to event handling flows.

For instance, consider a scenario where we need to process a dataset through a series of transformations. Using the Pipe Operator, we can accomplish this in a simple and concise manner:
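A sketch of such a pipeline (proposal syntax; rawData and the transformation functions are hypothetical names):

```javascript
// Each stage receives the result of the previous stage
const processed = rawData
  |> filterInvalid
  |> normalize
  |> enrich;
```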

With the streamlined syntax of the Pipe Operator, both intent and the flow of control become much clearer. In addition, maintainability is vastly improved, as we can change the order of the processes with considerably less effort. For example, if we decide we want to enrich the results prior to normalizing, we simply change the order accordingly:
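Only the order of the stages changes (same hypothetical names as above):

```javascript
const processed = rawData
  |> filterInvalid
  |> enrich
  |> normalize;
```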

As we can see, changing the order of invocations is rather simple.

A particularly intriguing aspect of the Pipe Operator proposal is the inclusion of “Topic References”; a concept which increases expressiveness and the utility of the Pipe Operator by providing direct access to values via a topicToken.

Topic References allow for elegant handling of the current value within the pipeline, using a symbol (currently, %) as a placeholder reference to the value. This feature enables more complex operations and interactions with the piped value beyond that of simply passing the value as an argument to a function.

The main purpose of topic references is to enhance readability and flexibility for use-cases which involve multiple transformations or operations. By using a placeholder for the current value, developers can clearly express operations like method calls, arithmetic operations, and more complex function invocations directly on the value being piped through, without needing to wrap these operations in additional functions.

Consider a scenario where you’re processing a string to ultimately transform it into a formatted message. Without topic references, each step would require an additional function, even for simple operations. With topic references, however, the process becomes much more direct and readable:
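A sketch of such a pipeline using the Hack-style topic form with % as the topicToken (proposal syntax; requires the Babel plugin noted below):

```javascript
// % refers to the current value flowing through the pipeline
const message = '  hello world  '
  |> %.trim()
  |> %.toUpperCase()
  |> `Formatted: ${%}`;
// message === 'Formatted: HELLO WORLD'
```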

One point to note regarding the topicToken is that it has not been finalized, thus the token is subject to change but will ultimately be one of the following: %, ^^, @@, ^, or #. Currently, @babel/plugin-proposal-pipeline-operator defaults to %, which can be configured to use one of the proposed topicTokens.

Through the use of topic references, the Pipe Operator proposal not only adheres to traditional functional programming principles, but also enhances developer experience by allowing for more intuitive and maintainable implementations. Features such as these represent a significant step forward in providing more declarative and expressive patterns in JavaScript.

The Pipe Operator proposal is currently in the pipeline for standardization, reflecting a collective effort within the JavaScript community to adopt functional programming paradigms. By facilitating a more declarative approach to coding, this proposal aligns with the language’s evolution towards offering constructs that support modern development practices.

Key benefits of the Pipe Operator include:

  • Enhanced Readability: Allows for a straightforward expression of data transformations, improving the readability of the code and making it more accessible to developers.
  • Reduced Complexity: Simplifies complex expressions that would otherwise require nested function calls or intermediate variables, thereby reducing the potential for errors.
  • A More Functional Paradigm: By promoting function composition, the Pipe Operator strengthens JavaScript’s capabilities as a language well-suited for functional programming.

As the JavaScript ecosystem continues to evolve, TC39 proposals such as the Pipe Operator are set to play an important role in shaping the future of the language, especially from a functional programming perspective.

While the proposal is still under consideration, its potential to enhance developer experience and promote functional programming principles is most certainly something to look forward to.

(Update: August, 2021, proposal has been moved to Stage 2)

ES2020 Optional Chaining & Nullish Coalescing

Of the various features proposed in ES2020, perhaps the two simplest will prove to be the most useful, at least as far as simplification and maintenance are concerned.

Specifically, the Optional Chaining Operator and Nullish Coalescing Operator are of particular interest as they are certain to result in less verbose, less error prone expressions.

In a nutshell, Optional Chaining provides a syntax for undefined / null checks when performing nested object references, using a simple question mark followed by a dot (?.) notation.

For instance, consider how many times you may have written defensive expressions similar to the following:
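For example, something along these lines (the user object's shape is illustrative):

```javascript
const user = { profile: { address: { city: 'Oslo' } } };

// Every level must be checked by hand to avoid a TypeError
const city = user && user.profile && user.profile.address && user.profile.address.city;
console.log(city); // 'Oslo'
```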

Or perhaps you have assigned intermediate values to temporary variables to perform the same:
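For instance (same illustrative shape):

```javascript
const user = { profile: { address: { city: 'Oslo' } } };

// Intermediate values guard each step of the lookup
const profile = user ? user.profile : undefined;
const address = profile ? profile.address : undefined;
const city = address ? address.city : undefined;
console.log(city); // 'Oslo'
```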

The need to check for possible reference errors quickly becomes tedious, and with each lookup we increase the potential for introducing bugs. Utilities can be implemented for delegating these checks, but ultimately, this just moves the problem from one context to another, resulting in additional points for failure.

With Optional Chaining, however, accessing properties safely becomes considerably less verbose, as the examples above can be simplified to:
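For example:

```javascript
const user = { profile: { address: { city: 'Oslo' } } };

// Each ?. short-circuits to undefined if the reference is null / undefined
console.log(user?.profile?.address?.city); // 'Oslo'
console.log(user?.settings?.theme);        // undefined (no TypeError)
```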

Reference checks when invoking functions also become simplified:
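For example (the api object is illustrative):

```javascript
const api = { onReady: () => 'ready' };

// ?.() only invokes the function if the reference is not null / undefined
console.log(api.onReady?.()); // 'ready'
console.log(api.onError?.()); // undefined, rather than a TypeError
```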

And dynamic property references can safely be performed as well:
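For example (illustrative translations object):

```javascript
const translations = { en: { greeting: 'Hello' } };
const locale = 'fr';

// ?.[] guards dynamic (bracket-notation) lookups in the same way
console.log(translations[locale]?.greeting); // undefined (no TypeError)
console.log(translations.en?.['greeting']);  // 'Hello'
```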

In addition, combined with the Nullish Coalescing Operator, Optional Chaining becomes even more succinct as one can specify a value to resolve to rather than the default (undefined) by simply using a double question mark (??) notation. For example:
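A small sketch of the two operators combined (the user object and 'light' fallback are illustrative):

```javascript
const user = {};

// Falls back to 'light' because user?.settings?.theme resolves to undefined
const theme = user?.settings?.theme ?? 'light';
console.log(theme); // 'light'
```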

Moreover, Nullish Coalescing, while intended as a complement to Optional Chaining, also solves additional problems when dealing with falsy values. For instance, consider how many times you may have written something similar to the following:
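For example (the settings object is illustrative):

```javascript
const settings = { retries: 0, label: '' };

// || treats ALL falsy values as "missing", silently discarding valid ones
console.log(settings.retries || 3);        // 3 (but 0 was intentional!)
console.log(settings.label || 'untitled'); // 'untitled' (but '' was intentional!)
```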

With the Nullish Coalescing Operator, we can avoid the problems outlined above as only undefined and null values will evaluate to true, so falsy values are safe:
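For example (same illustrative settings object):

```javascript
const settings = { retries: 0, label: '' };

// ?? only falls back on null or undefined, so intentional falsy values survive
console.log(settings.retries ?? 3);         // 0
console.log(settings.label ?? 'untitled');  // ''
console.log(settings.missing ?? 'default'); // 'default'
```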

Since Nullish Coalescing only checks for undefined and null, the above holds true for all other falsy values, so false, empty strings, and NaN are safe as well.

One thing to note is that Optional Chaining does not resolve when destructuring. So, for example, the following will throw an exception:
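For example (the config object is illustrative):

```javascript
const config = {};

try {
  // config?.server resolves to undefined, and destructuring undefined throws
  const { port } = config?.server;
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```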

Interestingly, though, combined with Nullish Coalescing, an exception will not be raised; though, the default will not be assigned, either:
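A sketch of this behavior, assuming an empty-object fallback via ?? (the config object is illustrative):

```javascript
const config = {};

// The ?? fallback prevents the exception, but port is simply left undefined
const { port } = config?.server ?? {};
console.log(port); // undefined
```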

As can be seen, ES2020 has no shortage of new features on offer to be excited about and, while arguably not as exciting as other features, Optional Chaining combined with Nullish Coalescing will certainly prove to be valuable additions.

Both Optional Chaining and Nullish Coalescing proposals are currently at Stage 4 and are available in most modern browsers as well as via the following babel plugins: @babel/plugin-proposal-optional-chaining and @babel/plugin-proposal-nullish-coalescing-operator.

Benefits of JavaScript Generators


One of the more nuanced features introduced in ES6 is that of Generator functions. Generators offer a powerful, yet often misunderstood, mechanism for controlling the flow of operations, allowing developers to implement solutions with improved readability and efficiency. This article briefly delves into a few of the benefits that JavaScript Generators have to offer, elucidating their purpose, functionality, and specific scenarios which can benefit from their usage.

A Generator function is a special type of function that can pause execution and subsequently resume at a later time, making it quite valuable for handling asynchronous operations as well as many other use cases. Unlike regular functions which run to completion upon invocation, Generator functions return an Iterator through which their execution can be controlled. It is important to note that while generators facilitate asynchronous operations, they do so by yielding Promises and require external mechanisms, such as async/await or libraries, to handle the asynchronous resolution.

Generators are defined with the function keyword followed by an asterisk (*); i.e. (function*), and are instantiated when called, but not executed immediately. Rather, they wait for the caller to request the next result. This is achieved using the next() method, which resumes execution until the next yield statement is encountered, or the generator function returns.

As mentioned, Generator functions return an Iterator; therefore, all of the functionality of Iterables is available to them, such as for...of loops, destructuring, spread syntax, etc.:
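For example (the letters generator is illustrative):

```javascript
function* letters() {
  yield 'a';
  yield 'b';
  yield 'c';
}

// Generators return an Iterator, so all Iterable consumers work:
for (const letter of letters()) {
  console.log(letter); // 'a', then 'b', then 'c'
}

const [first, ...rest] = letters();
console.log(first); // 'a'
console.log(rest);  // ['b', 'c']

console.log([...letters()]); // ['a', 'b', 'c']
```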

Generators allow for the creation of custom iteration logic, such as generating sequences without the need to pre-calculate the entire set. For example, one can generate a Fibonacci sequence using generators as follows:
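One such sketch, yielding the sequence lazily rather than pre-computing it:

```javascript
function* fibonacci() {
  let [current, next] = [0, 1];
  while (true) {
    yield current; // pause here; the next value is only computed on demand
    [current, next] = [next, current + next];
  }
}

const sequence = fibonacci();
const firstEight = Array.from({ length: 8 }, () => sequence.next().value);
console.log(firstEight); // [0, 1, 1, 2, 3, 5, 8, 13]
```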

Generators have the ability to maintain state between yields, thus they are quite useful for managing stateful iterations. This feature can be leveraged in scenarios such as those which require pause and resume logic based on runtime conditions. For instance:
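A minimal sketch of such a generator; the gameState name matches the walkthrough that follows, while the scoring logic itself is purely illustrative:

```javascript
function* gameState() {
  let score = 0;
  while (true) {
    // Pause here; score is yielded out, and the value passed
    // to the next next(points) call is received as points
    const points = yield score;
    score += points;
  }
}

const game = gameState();
console.log(game.next().value);   // 0  (runs to the first yield)
console.log(game.next(10).value); // 10
console.log(game.next(5).value);  // 15
```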

It may initially seem confusing how the value passed to next() is referenced within the Generator function. However, it is important to understand how this mechanism works, as it is a core feature of generators, allowing them to interact dynamically with external input. Below is a breakdown outlining this behavior in the context of the above example:

  1. Starting the Generator: When next() is first called, the gameState generator function begins execution until it reaches the first yield statement. This initial call starts the generator but does not yet pass any value into it, as the generator is not yet paused at a yield that could receive a value.
  2. Pausing Execution: The yield statement pauses the generator’s execution and waits for the next input to be provided. This pausing mechanism is what differentiates generators from regular functions, allowing for a two-way exchange of values.
  3. Resuming with a Value: After the generator is initiated and paused at a yield, calling next(value) resumes execution, passing the value into the generator. This passed value is received by the yield expression where the generator was paused.
  4. Processing and Pausing Again: Once the generator function receives the value and resumes execution, it processes operations following the yield until it either encounters the next yield (and pauses again, awaiting further input), reaches a return statement (effectively ending the generator’s execution), or completes its execution block.

This interactive capability of generators to receive external inputs and potentially alter their internal state or control flow based on those inputs is what makes them particularly powerful for tasks requiring stateful iterations or complex control flows.

In addition to yielding values with yield, generators have a distinct behavior when it comes to the return statement. A return statement inside a generator function does not merely exit the function, but instead, it provides a value that can be retrieved by the iterator. This behavior allows generators to signal a final value before ceasing their execution.

When a generator encounters a return statement, it returns an object with two properties: value, which is the value specified by the return statement, and done, which is set to true to indicate that the generator has completed its execution. This is different from the yield statement, which also returns an object but with done set to false until the generator function has fully completed.
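For example (the tasks generator is illustrative):

```javascript
function* tasks() {
  yield 'first';
  yield 'second';
  return 'all done'; // final value, delivered with done: true
}

const it = tasks();
console.log(it.next()); // { value: 'first', done: false }
console.log(it.next()); // { value: 'second', done: false }
console.log(it.next()); // { value: 'all done', done: true }
console.log(it.next()); // { value: undefined, done: true }
```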

This example illustrates that after the return statement is executed, the generator indicates it is done, and no further values can be yielded. However, the final value returned by the generator can be used to convey meaningful information or a result to the iterator, effectively providing a clean way to end the generator’s execution while also returning a value.

Generators also provide a return() method that can be used to terminate the generator’s execution prematurely. When return() is called on a generator object, the generator is immediately terminated and returns an object with a value property set to the argument provided to return(), and a done property set to true. This method is especially useful for allowing clients to cleanly exit generator functions, such as for ensuring resources are released appropriately, etc.
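For example (the numbers generator is illustrative):

```javascript
function* numbers() {
  yield 1;
  yield 2;
  yield 3;
}

const iter = numbers();
console.log(iter.next());            // { value: 1, done: false }
console.log(iter.return('stopped')); // { value: 'stopped', done: true }
console.log(iter.next());            // { value: undefined, done: true }
```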

In this example, after the first yield is consumed, return() is invoked on the generator. This action terminates the generator, returns the provided value, and sets the done property of the generator to true, indicating that the generator has completed and will no longer yield values.

This capability of generators to be terminated early and cleanly, returning a specified value, provides developers fine-grained control over generator execution.

Generators provide a robust mechanism for error handling, allowing errors to be thrown back into the generator’s execution context. This is accomplished using the generator.throw() method. When an error is thrown within a generator, the current yield expression is replaced by a throw statement, causing the generator to resume execution. If the thrown error is not caught within the generator, it propagates back to the caller.

This feature is particularly useful for managing errors in asynchronous operations, enabling developers to handle errors in a synchronous-like manner within the asynchronous control flow of a generator.
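For example (the fetchStates generator and its messages are illustrative):

```javascript
function* fetchStates() {
  try {
    while (true) {
      yield 'polling';
    }
  } catch (err) {
    // The error thrown via generator.throw() replaces the paused
    // yield expression, and is caught here inside the generator
    yield `recovered from: ${err.message}`;
  }
}

const g = fetchStates();
console.log(g.next().value);                      // 'polling'
console.log(g.throw(new Error('timeout')).value); // 'recovered from: timeout'
```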

This example illustrates how generator.throw() can be used to simulate error conditions and test error handling logic within generators. It also shows how generators maintain their state and control flow, even in the presence of errors, providing a powerful tool for asynchronous error management.

One particularly interesting feature of Generators is that they can be composed of other generators via the yield* operator.
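For example (the inner and outer generators are illustrative):

```javascript
function* inner() {
  yield 2;
  yield 3;
}

function* outer() {
  yield 1;
  yield* inner(); // delegate to another generator
  yield 4;
}

console.log([...outer()]); // [1, 2, 3, 4]
```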

The ability to compose Generators allows for implementing various levels of abstraction and reuse, making their usage much more flexible.

Generators can be used for many purposes, ranging from basic use-cases such as generating a sequence of numbers, to more complex scenarios such as handling streams of data so as to allow for processing input as it arrives. Through the brief examples above, we’ve seen how Generators can improve the way we, as developers, approach implementing solutions for asynchronous programming, iteration, and state management.