
Front-End System Design: 12 Fundamentals You Need to Know



To commemorate the 20th Anniversary of this Blog, I wanted to publish an Article which would stand the test of time, providing value to developers of all levels, both now and for many years to come. In considering what subject to cover, I reflected on the recurring issues I have observed over the course of the past two decades. In nearly all cases, the issues encountered were almost exclusively related to a lack of awareness of specific design fundamentals. As such, the subject for this article naturally revealed itself …


The world in which we live consists of one philosophical constant – change. This fact is ever apparent in the continually evolving field of front-end web development. The overwhelming number of emerging libraries, frameworks, tools, and methodologies may seem daunting at times. However, by understanding the 12 fundamental system design principles outlined in this article, developers can move forward with confidence, knowing they have strategies to employ which transcend technologies and trends.

This article aims to provide insight into each principle, presenting each with a succinct overview as well as a mental model by way of metaphor, so as to help form a deeper conceptual framework. Understanding these fundamental principles and, more importantly, applying them in practice will serve you well when making design decisions, both now and well into the future …

Note: references to current frameworks, methodologies, and language specifics have been intentionally kept to a minimum so as to reflect the timelessness of each principle; the brief code sketches accompanying some principles are purely illustrative and framework-agnostic.


Fundamental to software architecture are specific tried-and-tested core principles which provide a conceptual baseline upon which a system can be designed. While there are many aspects to consider, the 12 Principles discussed in this article have been chosen specifically due to their being fundamental in nature. They are as follows:

  1. Separation of Concerns
  2. Abstraction
  3. Encapsulation
  4. Loose Coupling
  5. Composition Over Inheritance
  6. Single Source of Truth
  7. High Cohesion
  8. DRY (Don’t Repeat Yourself)
  9. KISS (Keep It Simple, Stupid)
  10. YAGNI (You Aren’t Gonna Need It)
  11. The Law of Demeter (Principle of Least Knowledge)
  12. SOLID Principles

Each of these foundational principles embodies an innate tenet of quality software design. They transcend technologies and frameworks, and applying them in practice will naturally translate across all programming paradigms. It is important to develop an understanding of each, at every scale, to the point where they become second nature when making design decisions; be it a relatively simple function, or a large enterprise-class application.


If one were to ask any member of the teams I have guided throughout my career what the single most advocated principle I champion has been, I am rather certain they would unanimously respond with: Separation of Concerns – and for good reason. Software architecture is all about managing complexity, and by utilizing Separation of Concerns, systems become significantly easier to manage. In fact, each subsequent principle outlined within this article is inherently related to Separation of Concerns. Therefore, it is appropriate to include it as the first fundamental.

Overview

Separation of Concerns involves structuring a system into distinct sections, each addressing a specific concern. All front-end developers are familiar with this concept at a high level, as separating markup (HTML), styling (CSS), and behavior (JS) is an age-old best practice. This same concept can be applied across the entirety of the system, from individual pages and features, down to low-level implementation specifics such as functions and conditions.

For instance, we have all seen components which start out relatively simple and focused on a single concern, only to degrade over time to include additional responsibilities which are beyond the scope of the component’s original intent. More often than not, this is the result of developers failing to take a step back and think through how changes should be integrated when tasked with new requirements, instead making the mistake of simply adding additional logic and behaviors to the existing component. This very issue is intrinsically related to many of the principles included in this article; specifically, Abstraction, Loose Coupling, and the Open / Closed Principle.
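To make this concrete, below is a minimal, framework-agnostic sketch (the names and endpoint are hypothetical) in which a small feature keeps its data, formatting, and presentation concerns in separate units:

```typescript
// Hypothetical sketch: each unit addresses exactly one concern.
interface User {
  id: number;
  name: string;
}

// Data concern: retrieval only; no formatting or rendering logic.
async function fetchUsers(apiUrl: string): Promise<User[]> {
  const response = await fetch(apiUrl);
  return (await response.json()) as User[];
}

// Formatting concern: a pure transformation, trivially testable.
function formatUserLabel(user: User): string {
  return `#${user.id} – ${user.name}`;
}

// Presentation concern: composes the other two via their public contracts.
async function renderUserList(apiUrl: string): Promise<string> {
  const users = await fetchUsers(apiUrl);
  return users.map(formatUserLabel).join("\n");
}
```

Because each concern lives in its own unit, changing the data source, the label format, or the rendering never requires touching the other two.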

Mental Model

Think of a house, where the concept of Separation of Concerns is similar to the distinct roles each area has within the home. Just as a house is divided into different rooms, each serving a specific purpose, the principle of Separation of Concerns involves structuring a software system into distinct sections, each concerned with a specific aspect of the system.

In a typical home, you may have a kitchen, living room, bedrooms, and a bathroom. Each room is designed and equipped for a specific function. The kitchen has appliances for cooking, the living room has seating and entertainment options, bedrooms have beds and wardrobes, bathrooms have showers and toilets, and so on. This intentional separation allows for efficient organization and effective maintenance. This same concept can then be further iterated on for each individual area, such as separating the concerns for each function within a given area. By utilizing Separation of Concerns, a structured and organized living space which is much easier to maintain can be achieved; for instance, if you want to remodel the bathroom, you don’t need to make changes to any other areas of the home.

Just as a well-designed house provides a comfortable and functional living space, applying Separation of Concerns in software development results in a modular system which is much easier to maintain and facilitate scale. Each ‘room’ or component can be implemented, tested, and modified independently, resulting in the system as a whole being much easier to adapt to change.

Summary

Separation of Concerns is the foundational principle by which all subsequent principles abide; it is simply about dividing a system into specific areas (concerns) based on what they are responsible for. By taking the time to define each concern based on its context and scope within the system, and implementing dedicated modules for each concern, the cumulative result will be a maintainable and scalable system which simply could not be achieved otherwise.


Just as with Separation of Concerns, if one were to ask what the second most advocated principle I champion has been, I am certain they would respond with: Abstraction – and likewise for good reason, as a key aspect of Abstraction is that it serves as the primary agent for reuse – a very important point to keep in mind. In addition, Abstraction is the key mechanism by which the DRY Principle is facilitated.

Overview

The concept of Abstraction simply refers to the process of breaking down complexity into smaller, more discrete units, each responsible for a specific behavior in order to facilitate reuse and simplification.
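As a brief, hypothetical sketch, consider how fetch-and-parse logic repeated across call sites can be broken down into a single reusable unit:

```typescript
// Hypothetical sketch: transport details abstracted behind one helper.
async function getJson<T>(url: string): Promise<T> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return (await response.json()) as T;
}

interface Product {
  id: number;
  title: string;
}

// Call sites are now free of duplicated error handling and parsing.
const loadProducts = () => getJson<Product[]>("/api/products");
const loadProduct = (id: number) => getJson<Product>(`/api/products/${id}`);
```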

Mental Model

Abstraction is perhaps one of the most omnipresent concepts in life; so much so that it often goes completely unnoticed due to its ubiquity. The simplification of highly complex concepts and representations into easily comprehensible constructs can be found literally everywhere. For instance, your name is merely an abstraction of what would otherwise be a highly complex representation encompassing your personality, experiences, characteristics, and so forth. The examples of abstraction are virtually endless, as it is employed everywhere to simplify our day-to-day lives.

Summary

By utilizing Abstraction consistently, over time you will get to the point where you intuitively “see” what should be abstracted, and when it should be abstracted – without having to give it much thought at all.

Abstraction requires discipline, as developers may initially feel that doing so results in “more work”; however, I strongly advise against this mindset, as it can easily be argued that not doing so will ultimately result in considerably more work down the line, at which point addressing the technical debt will undoubtedly require more effort. Pay now, or pay more later.

A final takeaway I can attest to from experience is that by utilizing abstraction, you will become a significantly faster developer. I state this with confidence, as the additional time required when abstracting needs to be managed against your agreed-upon deliverables. However, by making a commitment to seize opportunities for reuse via abstraction, you will inevitably become a much better, and much faster, developer. Couple this with the additional level of reuse abstraction affords, and your future self and co-workers will thank you for having made the investment up front.


A core concept in object-oriented programming is that of Encapsulation, which involves co-locating state and the APIs which operate on that state within a single unit. While in traditional OOP, the unit would typically refer to a class, this concept can also be applied to virtually anything; from a large component down to a small function.

Overview

The idea behind Encapsulation is simply to restrict access to internal state to the unit itself, as a means of preventing accidental public exposure and unintentional access to, and / or misuse of, the unit’s operations and state; access should only ever be exposed via a public API. The less state and internals a component exposes, the better.
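A minimal, purely illustrative sketch, using a private class field as the encapsulation mechanism:

```typescript
// Hypothetical sketch: internal state is private; only a small public
// API is exposed.
class Counter {
  #count = 0; // inaccessible outside the class

  increment(): number {
    return ++this.#count;
  }

  get value(): number {
    return this.#count;
  }
}

const counter = new Counter();
counter.increment();
console.log(counter.value); // 1
// counter.#count = 99;     // compile-time error: private field
```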

Mental Model

Encapsulation in software development can be metaphorically compared to a vending machine. Imagine a vending machine filled with various snacks and drinks. From the outside, users interact with a simple interface: they select a product and make a payment. However, the internal mechanisms of the vending machine – the way it stores items, manages inventory, and processes payments – are hidden from the user.

In this analogy, the vending machine represents a component or module in the system. Clients (users of the component), much like customers of the vending machine, only see and interact with its public interface. The internal state and logic of the component – akin to the inner workings of the vending machine – are encapsulated and hidden from the outside world.

This encapsulation ensures that the internal state of components is protected from unintended interference and misuse, similar to how a vending machine’s inventory and internal mechanisms are safeguarded behind its exterior. This allows developers to change the internal implementation without affecting other parts of the system that depend on the component, just as the internals of a vending machine can be changed without altering the customer’s experience.

Summary

In front-end development, encapsulation allows us to create components that hide their internal state and behavior, and only expose what is absolutely necessary; thus, providing a clear and consistent API. This not only enhances readability and maintainability but also improves data integrity and security within the application. Effective use of encapsulation leads to more modular and scalable solutions, allowing developers to change one part of the system without affecting others – which is a good segue into our next fundamental.


We have all experienced this at one time or another – you make a change in one component and inadvertently introduce a regression in another. This is almost always the result of tight-coupling, which is precisely what Loose Coupling aims to address.

Overview

Loose Coupling is the concept of designing components such that changes in one area will not affect those in other areas. This decoupling can be accomplished utilizing many of the other 12 Fundamental Principles, specifically, Separation of Concerns, Abstraction, Encapsulation, Single Responsibility Principle, and the Dependency Inversion Principle.
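One common decoupling technique (among many) is an event, or publish / subscribe, mechanism; below is a minimal, hypothetical sketch in which two components react to one another without holding direct references:

```typescript
// Hypothetical sketch: a tiny event bus decouples publisher from subscriber.
type Handler = (payload?: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit(event: string, payload?: unknown): void {
    (this.handlers.get(event) ?? []).forEach((handler) => handler(payload));
  }
}

const bus = new EventBus();
// The cart component knows nothing about the header badge, and vice versa.
bus.on("cart:updated", (count) => console.log(`Badge count: ${count}`));
bus.emit("cart:updated", 3);
```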

Mental Model

Metaphorically, Loose Coupling can be compared to an entertainment system in a home where you have a TV, a speaker system, a streaming device, and a remote control. Each component operates independently: the TV displays content, the speakers provide audio, the streaming device offers various content options, and the remote controls these devices.

While each is connected to form the whole, they are not connected in such a way that a change in one component should necessitate changes in other components. For instance, if you replace the streaming device with a newer model, this doesn’t require a new TV or speaker system; they continue to function together seamlessly. This independence and interchangeability reflects the essence of loose coupling in software, whereby different modules or components interact with each other through well-defined interfaces, without being heavily reliant on the implementation specifics of the others in the system.

Summary

By designing components such that each operates independently and only interacts with other parts of the system based on specific integration points, we can build robust systems which are not prone to the negative effects of brittle, tightly coupled designs.

Loose Coupling always results in systems which are significantly easier to maintain and scale, as, should a change be required in one area, the impact of that change on other areas will be minimal. These benefits can be greatly enhanced when utilizing the Dependency Inversion Principle, which is also outlined in this article.

A final takeaway on Loose Coupling based on my experience: as a general rule of thumb, if a unit is hard to unit test, the system under test (SUT) is likely not utilizing Loose Coupling.


Having its origins in the context of Object-Oriented Programming, the Principle of Composition Over Inheritance simply refers to the process of having a class composed of other classes to provide specific functionality, rather than extending an existing class and providing the functionality directly in the sub-class.

Overview

While Composition Over Inheritance is traditionally an aspect of Object-Oriented Programming – and remains relevant in the design of non-visual front-end APIs – more generally speaking, in front-end development, preferring composition over inheritance equates to designing complex features as a composition of many smaller sub-features. This affords the ability to modify one part of the system without the need to make changes to other parts, or to the system as a whole.
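A brief, hypothetical sketch: building behavior by composing small functions, rather than extending a base implementation:

```typescript
// Hypothetical sketch: behavior assembled from small, independent units.
type Transform = (input: string) => string;

const trim: Transform = (s) => s.trim();
const lowercase: Transform = (s) => s.toLowerCase();
const hyphenate: Transform = (s) => s.replace(/\s+/g, "-");

// compose: combine units rather than inheriting from a monolith.
const compose =
  (...fns: Transform[]): Transform =>
  (input) =>
    fns.reduce((acc, fn) => fn(acc), input);

const slugify = compose(trim, lowercase, hyphenate);
console.log(slugify("  Front-End System Design  ")); // "front-end-system-design"
```

Adding, removing, or reordering a transform never requires modifying the others – precisely the flexibility the kitchen metaphor below describes.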

Mental Model

Metaphorically, composition over inheritance is akin to assembling a kitchen within a home based on various standalone appliances. For example, imagine a kitchen where you have a coffee maker, a blender, a toaster, and a microwave. Each appliance performs a specific function and can be used independently or in combination with others to accomplish a task, such as making breakfast. If you need new functionality, say, a food processor, you can simply add it to your kitchen without altering the existing appliances.

This modular approach, where you compose a kitchen with different appliances, mirrors the principle of composition over inheritance in software. Rather than inheriting all functionalities from a monolithic parent component (a multi-function appliance), you compose a component with other components as needed (like adding various kitchen appliances). This in turn provides a great amount of flexibility and ease of maintenance; allowing for more customizable and adaptable systems.

Summary

While inheritance isn’t always directly relevant in functional, component-based frameworks, composition most certainly is. It is a primary tenet of modularity, and by composing features from a number of sub-features, the resulting benefit is a system which can adapt to change much more easily while implicitly adhering to many of the 12 Principles; specifically, Loose Coupling, Abstraction, and Separation of Concerns.

As with all of the 12 Fundamental Principles, the scope of Composition Over Inheritance can be applied at scale to all aspects of the system, such as breaking down large conditions into a set of smaller conditions, or large functions into a composition of smaller functions, composing larger features based on a set of smaller sub-features, etc.


Most developers have come across this problem at one time or another: a ticket gets created for a bug where users are complaining that they are making updates in one area of the application, but those changes are not being reflected in another area. This issue is almost always related to violating the Single Source of Truth Principle; what should be represented as a single piece of state, in a single location within the application, is instead duplicated and referenced differently by different components.

Overview

The Principle of Single Source of Truth simply entails having a single, authoritative location where a particular piece of information (state, etc.) resides. In the context of front-end development, this generally equates to having immutable state management strategies where the state of the application is managed centrally rather than scattered across various components.
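Below is a minimal, hypothetical sketch of a single authoritative store; components read from and subscribe to it rather than keeping private copies of state:

```typescript
// Hypothetical sketch: one authoritative store; no component holds a copy.
interface AppState {
  cartItems: string[];
}

type Listener = (state: AppState) => void;

class Store {
  private state: AppState = { cartItems: [] };
  private listeners: Listener[] = [];

  getState(): AppState {
    return this.state;
  }

  setState(next: Partial<AppState>): void {
    this.state = { ...this.state, ...next }; // immutable update
    this.listeners.forEach((listener) => listener(this.state));
  }

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }
}

const store = new Store();
// Both components derive from the same source of truth.
store.subscribe((s) => console.log(`Header badge: ${s.cartItems.length}`));
store.subscribe((s) => console.log(`Cart page: ${s.cartItems.join(", ")}`));
store.setState({ cartItems: ["Widget"] });
```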

Mental Model

Single Source of Truth can be compared to a bank account number. Just as a bank account number uniquely identifies your account and all transactions associated with it, serving as the single, authoritative reference to your account, Single Source of Truth in software ensures that every piece of data exists in only a single location. In the context of this metaphor, regardless of how many times you might access your account – be it through an ATM, online banking, or a physical branch – the account number always references the same account. This prevents discrepancies and confusion.

Summary

It is important to design systems such that all references to application and / or asynchronous state are unified. Transformations of shared state should be the responsibility of the components which need to make such transformations; as opposed to duplicating state to accommodate the needs of specific components. By having a single, authoritative source of data, we can ensure that all components reference and / or update the same data / state, maintaining consistency and integrity throughout the application.


Anyone who regularly conducts code reviews has inevitably found themselves dumbfounded at one time or another as to why a specific piece of code has been included within a certain context. For example, an implementation specific, non-generic string utility added to a common String utilities module. Or a specific translation entry added to a completely unrelated translation document, etc. These types of mistakes always result in Low Cohesion, which is certainly not desirable, and is effectively the inverse of High Cohesion.

Overview

High Cohesion in front-end development refers to the practice of co-locating related functionality so that each module or component is focused and designed around a specific task. High Cohesion is often correlated to Loose Coupling, and vice-versa.
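As a small, hypothetical sketch, a cohesive module co-locates closely related, generic utilities – and nothing else:

```typescript
// string-utils.ts – hypothetical sketch of a cohesive module: every
// export is a generic string operation; nothing here is feature-specific.
export const capitalize = (s: string): string =>
  s.charAt(0).toUpperCase() + s.slice(1);

export const truncate = (s: string, max: number): string =>
  s.length <= max ? s : `${s.slice(0, max - 1)}…`;

// An implementation-specific formatter (one that embeds a feature's
// business rules) belongs with that feature, not in this module.
```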

Mental Model

High Cohesion can be likened to a well-organized toolbox. In a toolbox, tools are organized such that each tool is grouped by its specific function – for instance, screwdrivers of various sizes in one compartment, wrenches in another, and so forth. Each compartment represents a cohesive unit, holding tools that are closely related in terms of their function. This makes it easy to find and use the tools for a particular task, enhancing efficiency and orderliness.

Similarly, in software development, high cohesion refers to the practice of designing components or modules such that they are focused on a specific set of related tasks or functionalities. Like the compartmentalized toolbox where each section is dedicated to a specific type of tool, a highly cohesive software module or component focuses on a particular aspect of the application’s functionality, making the software easier to maintain, understand, and extend.

Summary

Utilizing High Cohesion results in increased discoverability, which is crucial to facilitating reuse: by having related components and APIs intuitively structured, it becomes much easier for teams to understand the system and find what already exists within the codebase, thus reducing the probability of duplication – which leads us to our next Principle …


Another common issue we often find ourselves having to contend with is that of repetition. This can manifest in many, many ways: for instance, something as seemingly simple as repeated references to the same property over and over again, rather than creating a variable which points to the property and using it throughout. Or developers simply copy / pasting implementations repeatedly across components, as opposed to utilizing Abstraction to create a reusable implementation for use by all components. The permutations of duplication are virtually endless; however, by thinking in terms of the DRY Principle, we can ensure such duplication is avoided.

Overview

The DRY (Don’t Repeat Yourself) Principle emphasizes the importance of reducing repetition and duplication across a system. This can be accomplished by simply ensuring each piece of functionality is implemented only once; thus, DRY yields a significantly cleaner, easier-to-comprehend codebase which facilitates reuse and is easier to maintain.
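As a minimal, hypothetical sketch: rather than each component re-implementing its own price formatting, a single shared implementation is defined once and reused everywhere:

```typescript
// Hypothetical sketch: one implementation, reused everywhere; a change
// here propagates consistently across the application.
export function formatPrice(amount: number, currency = "USD"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(amount);
}

console.log(formatPrice(19.99));    // "$19.99"
console.log(formatPrice(5, "EUR")); // "€5.00"
```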

Mental Model

Imagine you are baking cookies and you have a cookie cutter mold in a specific shape, say a star. This mold allows you to consistently produce star-shaped cookies without having to manually shape each one.

In a non-DRY approach, this is akin to not having a cookie cutter and shaping each cookie by hand. This method is not only time-consuming but also leads to inconsistencies in size and shape.

Applying the DRY principle is like using a cookie cutter mold. You create the design once, and then you use it repeatedly to produce consistent results. This ensures uniformity while also saving a significant amount of time. If you need to change the design, you simply modify the mold (the single implementation), and every new cookie (function or feature) produced thereafter will reflect this change automatically.

This metaphor illustrates how the DRY principle in software development encourages creating a single, reusable ‘mold’ for recurring patterns or functionality. Just like the cookie cutter allows for efficient and consistent cookie production, DRY leads to more efficient and consistent code by reducing repetition and simplifying maintenance. When a change is needed, we need only update a single implementation, which ensures all parts of the system which use the implementation (mold) are automatically updated.

Summary

In the context of software, DRY isn’t just about avoiding duplicate lines of code. It’s about recognizing patterns and abstracting them into reusable components – be it functions, classes, modules, etc. This abstraction means that a single change can be reflected consistently throughout the application, while also being testable in isolation from the rest of the system.

One point to keep in mind, however, is that it is important to strike a balance. Overzealous application of DRY, like overly relying on a single cookie cutter mold for multiple unrelated purposes, can lead to convoluted and rigid structures. The key is to identify and abstract genuinely reusable elements, ensuring that the abstraction makes logical sense and enhances the system’s maintainability and scalability.


When designing a system it can be tempting to anticipate additional features and requirements that will be needed in the future. However, it is important to exercise restraint and address any anticipated changes by ensuring our designs can support them, rather than adding them straight away. This mindset embodies the KISS Principle (Keep It Simple, Stupid).

Overview

The KISS Principle encourages simplicity in design. In front-end development, this can encompass anything from favoring straightforward solutions over unnecessarily complex ones, to choosing simpler algorithms rather than overly arcane implementations, to resisting the urge to include additional libraries or heavy frameworks when doing so would be overkill.

Mental Model

The KISS Principle can be compared to cooking a meal. For instance, imagine you’re preparing a dish – you have a number of ingredients at your disposal. The KISS principle here would be synonymous with choosing to create a delicious, yet simple meal that requires fewer ingredients and straightforward techniques, rather than opting for an overly elaborate recipe that complicates the cooking process and increases the chances of something going wrong.

In the context of front-end development, KISS encourages the use of simple, straightforward solutions. It’s akin to choosing to make a classic dish that you know well, rather than attempting an intricate recipe for the first time when you have guests over in a few hours. For example, if you need to implement a feature that could be accomplished with basic JavaScript, it would be more in line with KISS to avoid introducing a new library that adds unnecessary complexity and overhead to your project.

It is important to note that this principle is not about always choosing the simplest possible solution, but rather, it is about not introducing unnecessary complexity – remember: software architecture is all about managing complexity.

In our cooking metaphor, it’s not about always opting to make a sandwich, but choosing not to prepare a five-course meal when a single, well-made dish would suffice. It’s about understanding the requirements and context, and then applying the most straightforward solution that effectively meets current requirements.

Summary

In essence, KISS is about efficiency and simplicity. It serves as a reminder to favor simplicity over complication – to focus on the essentials and keep things as uncomplicated as possible, while also ensuring your designs can accommodate additional complexity when it is actually needed. The KISS Principle is closely related to our next principle as well …


Similar to the KISS Principle, the YAGNI Principle advocates refraining from adding anything that is not immediately necessary to satisfy the current requirements. By resisting the temptation to over-engineer parts of the system, we ensure our codebase remains lightweight and maintainable.

Overview

The YAGNI Principle advises against implementing features or functionalities until they are actually needed. In front-end development, this can include anything from adding additional libraries or tooling prior to having an actual need to include them, to delivering overly complex features prior to having an MVP released.

Mental Model

The YAGNI principle in software development can be likened to packing for a vacation – you have a suitcase and a wide variety of items you could potentially take with you. YAGNI is like having the wisdom to only pack what you know you will need for the trip, rather than burdening yourself with items for every possible – and unlikely – scenario. By omitting unnecessary items just in case you might need them, your planning and travel experience becomes much more simplified, not to mention, you’ll have a lighter suitcase to carry around.

In front-end development, applying YAGNI means focusing on what is essential for the application’s current requirements. It’s akin to packing your suitcase with just the right amount of essentials necessary for your trip, and leaving behind the “just-in-case” items that are unlikely to be used and add nothing more than additional weight. For instance, if you’re building a simple web application, YAGNI would advocate introducing a proportionally simple framework rather than a potentially overly complex option, especially if it is known upfront that the application will not require additional features once completed.

In essence, YAGNI encourages developers to add complexity only when it is justified by a real and present need.

Summary

The YAGNI Principle is about avoiding the temptation to include unnecessary features, tools, or additional functionality based purely on the speculation that they might be useful in the future. This temptation often arises from developers wanting to utilize a new library or framework; however, the decision to do so should be based on the current needs of the business and the technical specifics of the project – there is no need to introduce a solution for which there is no problem.

By adhering to YAGNI, we can ensure that our application remains agile, lightweight, and easy to manage, only ever adding what is needed, when it is needed.


The Law of Demeter (Principle of Least Knowledge) advocates that a given unit of code should have limited knowledge about other units; that is, only closely related units should have cross-cutting concerns with other units in the system. In front-end development, this could be expressed as: a component should not directly manipulate or rely on the internal workings of other components beyond what is explicitly required.

Overview

The Law of Demeter, also referred to as The Principle of Least Knowledge, is a design guideline which emphasizes minimizing the knowledge that a particular module has about the internal workings of other modules in the system. This concept is fundamentally about reducing tight-coupling between components, so as to promote Loose Coupling.
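A classic symptom of violating this principle is the “train wreck” call chain; below is a brief, hypothetical sketch contrasting the two approaches:

```typescript
// Hypothetical sketch: callers should ask a unit directly, rather than
// traversing its collaborators' internals.
interface Address {
  city: string;
}
interface Customer {
  address: Address;
}

class Order {
  constructor(private customer: Customer) {}

  // Violates Demeter: exposes internal structure to every caller.
  getCustomer(): Customer {
    return this.customer;
  }

  // Respects Demeter: one stable contract; internals can change freely.
  get shippingCity(): string {
    return this.customer.address.city;
  }
}

const order = new Order({ address: { city: "Berlin" } });
console.log(order.getCustomer().address.city); // brittle chain
console.log(order.shippingCity);               // single point of knowledge
```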

Mental Model

Conceptually, this Principle can be thought of as having a “need-to-know” basis within an organization. For instance, imagine an organization where each department focuses solely on its specific responsibilities and communicates with other departments in a controlled and limited manner. Each department only knows and interacts with the information and resources that are directly related to its function, rather than being entangled in the specifics of other departments’ operations.

In the context of front-end development, this principle can be visualized as a component structure where each component is like an individual department. According to the Law of Demeter, a component should not have intricate knowledge of the internal workings of other components. It should interact with other components or sub-components in a restricted and well-defined manner. This approach is akin to departments communicating with one another through official channels and predefined protocols, ensuring that each maintains its autonomy and boundaries.

For example, in a web application, consider a parent component that renders child components. The Law of Demeter suggests that the parent should not directly manipulate the state or internal methods of the child components. Instead, it should pass state as needed, akin to sending official memos or directives. This encapsulates the child components’ internal logic, allowing them to operate independently of the parent’s implementation details, much like how a department functions autonomously within the larger framework of the organization.

Summary

The Law of Demeter in the context of front-end development is concerned with how components interact with each other: by maintaining their autonomy, and interacting through well-defined interfaces, much like departments within an organization communicate through official protocols. The result will be a much more modular, decoupled and cohesive system.


Arguably the most widely propagated of all system design principles, the SOLID Principles present a set of five specific design guidelines which collectively promote elegant software designs with an emphasis on maintainability and scale.

The 5 core Principles of SOLID are briefly described below.

  • The Single Responsibility Principle is essentially akin to Separation of Concerns, whereby each module or component in the system should have only a single responsibility, thus reducing complexity and improving maintainability.

    Mental Model

    The Single Responsibility Principle can be metaphorically compared to the roles of individuals in an organization whereby each employee has a distinct and specific role. For example, an accountant is responsible for managing finances, a human resources officer handles employee-related issues, a salesperson focuses on selling the company’s products or services, and so on. Just as the accountant doesn’t manage sales, and the salesperson doesn’t handle HR issues, in software design, each module should have one, and only one, responsibility, and thus, only one reason to change. This results in an encapsulation of all of the responsibilities associated with a specific role within the organization.

  • The Open / Closed Principle prescribes that components should be open for extension, but closed for modification. This helps promote scalability while reducing the risk of introducing breaking changes to existing code.

    Mental Model

    The Open / Closed Principle in software design can be compared to a concert stage. For instance, imagine the stage is initially set up for a particular performance, say, a classical music concert. The stage (software module) is designed and built (implemented and tested), and serves its purpose (satisfies the requirements) for the concert. Now, suppose the next event is for a completely different genre, say, an EDM festival. Rather than completely dismantling and rebuilding the stage (modifying the existing code), the stage is simply extended – perhaps by adding a new lighting system, an enhanced speaker array, etc. – thus extending the existing functionality.

  • The Liskov Substitution Principle specifies that objects within a system should be able to be used interchangeably with instances of their subtypes without altering the correctness of the program.

    Mental Model

    Think of this principle as being similar to interchangeable power adapters for electronic devices. Just as different adapters can be used interchangeably with various devices without affecting their operation, components in a system should be replaceable with instances of their subtypes without altering the program’s correctness.

  • The Interface Segregation Principle advocates that larger components should be broken down into smaller, more specific components, so as to ensure each client only needs to interact with the APIs that are relevant to it.

    Mental Model

    Consider a multi-purpose screwdriver with detachable bits, each bit serving a specific function – such as a Phillips-head bit, a flathead bit, etc. Users attach only the bit they need, similar to how larger components should be broken down into smaller, more specific components. This way, each part of the system only interacts with the interfaces it requires, making it more manageable and easier to understand.

  • The Dependency Inversion Principle states that systems are to depend on abstractions, rather than concrete implementations. In doing so, we afford ourselves the ability to decouple dependencies, which results in significantly more flexibility. This principle is closely related to the concept of IoC Containers, Dependency Injection, and Inversion of Control.

    Mental Model

    This principle is like hiring a contractor to remodel your house. You’re not concerned with the specific tools they use; you’re interested in the skills and results they provide. In software, systems should depend on abstractions (the skills) rather than concrete implementations (the specific tools), allowing for greater flexibility and easier maintenance – a brief sketch follows this list.
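Below is a brief, hypothetical sketch of the Dependency Inversion Principle, in which a high-level service depends only on an abstraction and concrete implementations are injected:

```typescript
// Hypothetical sketch: the service depends on an abstraction; concrete
// providers are supplied from the outside (dependency injection).
interface StorageProvider {
  save(key: string, value: string): void;
}

class LocalStorageProvider implements StorageProvider {
  save(key: string, value: string): void {
    localStorage.setItem(key, value);
  }
}

class InMemoryStorageProvider implements StorageProvider {
  private data = new Map<string, string>();
  save(key: string, value: string): void {
    this.data.set(key, value);
  }
}

class PreferencesService {
  // Depends only on the abstraction – not on any concrete provider.
  constructor(private storage: StorageProvider) {}

  setTheme(theme: string): void {
    this.storage.save("theme", theme);
  }
}

// Production vs. test wiring – no changes to the service itself.
const service = new PreferencesService(new InMemoryStorageProvider());
service.setTheme("dark");
```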


Hopefully this article has provided some insight into the timeless benefits afforded by these 12 fundamental principles. If you are new to them, I encourage you to try a practical exercise where you simply apply these principles in a context unrelated to software development.

For instance, make a list of the 12 principles and utilize them to re-organize your workspace, crossing out each as it is applied. Once completed, try applying them to each specific area of your workspace. Continue this exercise for other areas of your living space, such as re-organizing your closet, or your kitchen cabinets, etc. These exercises will not only lead to practical and productive results, but will help you form a mental model of the principles in practice, which will naturally translate into your development practices. The more you apply them, the more intuitively you will recognize when to apply them in your day-to-day work.

The landscape of front-end web development is constantly evolving; however, by understanding these 12 Fundamental Principles, and more importantly, applying them in practice, developers and teams can design elegant architectures with confidence – architectures which not only meet current and future demands, but also allow for seizing new opportunities as well.

Cross-published at medium.com

Code Review Essentials

Code Reviews are an essential part of Software Engineering, providing numerous benefits for teams and the products they deliver. Having spent a significant amount of time conducting them over many years, in this article I will touch upon some key aspects to consider which, generally speaking, are of particular importance.

Similar to functional testing, Code Reviews provide a unique set of quality controls which help ensure standards are upheld; affording teams the ability to verify a number of critical concerns early on and within the confines of engineering specific constructs. This almost certainly yields a higher return as the time investment required to address issues at this stage requires minimal involvement across teams and functions.

Code Reviews also serve to aid in the verification and upholding of best practices, standards, and conventions across teams and within organizations. These standards can cover a broad range of concerns such as consistency, facilitation of reuse, scalability, security, optimization, readability, simplification, and any other auxiliary criteria specific to a given organization.

Additionally, the Code Review helps to confirm that requirements have been fulfilled in the context of the underlying feature being reviewed, as it is not uncommon for developers to misinterpret requirements.

Likewise, developers are generally focused on solving various small problems in a very particular and limited scope. Because of this, it is inevitable that opportunities will be missed, and oversights will be made. One of the primary responsibilities of the Reviewer is to provide a holistic and broad perspective which takes into account not only the soundness of the code being reviewed, but also how it measures, complies, and integrates in the context of the larger system as a whole.

By having another set of eyes, so to speak, we arm ourselves with a very important second line of defense, as well as an agent for opportunity.

One of the most beneficial aspects of Code Reviews is the investment in overall knowledge throughout the team; and ultimately, the ROI it provides. As such, core to the Code Review is the proliferation of knowledge. This applies to both the Reviewee, and the Reviewer alike.

For the Reviewee, when areas of improvement, best practices, optimizations, abstractions, and the like are outlined, an opportunity is presented to learn new (often improved) techniques which they may not have been aware of otherwise. This holds particularly true for more junior developers who have simply yet to acquire the experiential knowledge obtained by their more senior counterparts. By learning from the experiences of others, the Reviewee can expedite their own growth as a developer. Here, the expectation is that, over time, each Reviewee will have fewer and fewer of the same review comments to address, as they now have a dedicated platform (even if unofficially so) from which to continually learn.

For the Reviewer, Code Reviews provide an opportunity to share knowledge and insight, while affording one the ability to obtain a broader understanding of the system in its entirety, as this knowledge is vital to providing a successful review.

Additionally, it may be necessary for a Reviewer to devise and provide solutions to problems which they may not have encountered previously; and, in order to be effective, a Reviewer must be confident in the feedback and solutions they are providing. This alone affords Reviewers the ability to gain a deeper understanding of their own knowledge, while also challenging themselves to obtain the information necessary to do so. Thus, for the Reviewer, Code Reviews present a tremendous opportunity not only to provide value to others, but also to obtain and enhance their own value as well.

In general, developers tend to work in a rather siloed manner, primarily focusing on one particular problem space (particularly in the scope of a given feature), and only collaborating when necessitated by DSMs, meetings, or when they or another team member runs into a problem and needs assistance. While much of this is a rather natural by-product of feature development, so too can it be said that Code Reviews naturally cultivate collaboration; thus, collaboration can be built into our processes by default.

With Code Reviews, no one Developer is ever working completely on their own. This has numerous benefits, many of which have already been outlined above, yet perhaps one of the most significant benefits is that developers are much more likely to double check their work and submit something that they can be proud of when they know someone else on their team will be reviewing their work. Likewise, Reviewers, no matter how experienced, are much more likely to validate and double check their feedback for the exact same reasons. This alone lends itself to higher quality output across the board.

Key Aspects to Consider

While numerous aspects must be considered with respect to conducting Code Reviews, generally speaking, there are common considerations which by and large tend to hold true. While certainly not an exhaustive treatise, what follows is a brief outline of those I have found to provide particular value.

Atomicity: PRs should be atomic (relatively small in nature). If a PR is excessively large, it should be rejected and the engineer informed to break the PR out into smaller submissions (generally these smaller submissions can be merged to an intermediary branch before being merged to the intended target branch). This is crucial, as the surface area for mistakes and missed opportunities is proportional to the amount of code being reviewed. In addition, requiring PRs which are smaller in scope encourages developers to think in terms of smaller units of function and subsystems, which in turn leads to clearer separation of concerns and encapsulation. As such, it is often helpful to impose a change threshold for submitted PRs; a sketch of one possible automation follows.
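A change threshold can be automated; the minimal sketch below assumes a Danger JS setup (a dangerfile.ts), and the 400-line threshold is an arbitrary, illustrative value:

```typescript
// Minimal sketch of an automated PR size check, assuming Danger JS.
import { danger, warn } from "danger";

const changedLines =
  danger.github.pr.additions + danger.github.pr.deletions;

if (changedLines > 400) {
  warn(
    `This PR changes ${changedLines} lines; consider splitting it into ` +
      `smaller, atomic submissions (an intermediary branch can help stage them).`
  );
}
```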

Compatibility: Changes should remain backwards compatible and not introduce breaking changes (unless expressly coordinated across teams). Reviewers need not check out each PR and explicitly test each feature being submitted; rather, they should always be watchful for breaking changes, particularly in terms of APIs (e.g. argument positions changing, etc.).

Consistency: PRs must fully adhere to well-documented and established standards and conventions, typically supported via commit convention tooling. This is crucial, as consistency and conformity to standards lead to a unified codebase in which developers can work across packages and features with very limited effort: the overall structure and coding style are consistent, making it much easier to know where everything should be (and is) defined and how modules are organized, while readability is immediate as formatting and structure remain the same across packages and modules.

Clarity: All modules, functions, classes, types, etc. should always be clearly named and defined, remain properly encapsulated, and reside within a logical and appropriate location.

Readability: Readability should be favored over excessive succinctness or overly “clever” implementations which do not read well. Conversely, overly verbose implementations are to be avoided as well. It is important to remain cognizant of the fact that code is read many, many more times than it is written. Moreover, when implementations become hard to reason about, that is often a sign of a poor implementation (usually the result of a specific unit doing far too much). Succinct, yet meaningful names must always be used. Strive to ensure code is self-documenting in terms of its intention.

Reusability: Implementations must take reuse into account at all times; be it abstractions to common packages, abstractions within a particular project, or abstractions within a particular scope of a project. In addition, Reviewers should always be on the lookout for additions which are redundant and should be removed and replaced with existing APIs; this includes both internal APIs, as well as third-party libraries. Always ensure native APIs are being leveraged (Array.forEach, etc. rather than explicit for loops) as well as standard third-party libraries (lodash.debounce, etc. rather than custom implementations). No redundancies should be introduced, and implementations should fully utilize existing APIs, Modules, Components, etc. throughout the available packages; a brief illustration follows.
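A small, illustrative sketch (assuming the lodash.debounce package is available):

```typescript
// Illustrative sketch: prefer native and well-vetted APIs over
// hand-rolled re-implementations.
import debounce from "lodash.debounce";

const prices = [19.99, 5, 42];

// Native reduction instead of an explicit for loop with an accumulator.
const total = prices.reduce((sum, price) => sum + price, 0);

// An established utility instead of a custom debounce implementation.
const onResize = debounce(() => console.log("resized"), 250);
window.addEventListener("resize", onResize);

console.log(total); // 66.99
```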

Simplicity: Solutions should always be implemented in the simplest way possible. Less is more; this extends down to each line of code. Keep things as simple as possible, but no simpler.

Scalability: Implementations must be performant and optimized to an acceptable and expected level – generalized optimizations must be made, while premature optimizations should be suggested only when genuinely necessary.

Securability: Implementations must be secure, keeping standardized security measures in place and ensuring attack vectors and cumulative surfaces are fully understood, accounted for, and securely addressed.

Discoverability: Documentation and / or related tools should follow specific conventions and remain succinct and to the point. Ideal documentation provides a meaningful, yet brief description, followed by a useful example which speaks for itself (often, unit test expectations can be used verbatim here). On a related note, sources should not contain overly verbose inline comments either. When, for example, a function has more lines of inline comments than actual implementation code, that is usually a sign that the code does not read well, or the developer has merely been leaving “note to self” comments. In such cases, strive to provide ways to simplify the implementation such that it achieves better readability by being self-documenting.

Accountability: It is crucial that all team members are aware of the criteria against which their code will be reviewed, as doing so essentially holds developers accountable for ensuring they not only understand what is expected, but are diligent in reviewing their own work prior to submission. Developers should be encouraged to pre-submit PRs for a “self review” prior to officially submitting and / or assigning a reviewer. This approach is quite valuable as it provides the developer with a high-level overview of their changes outside of the environment they have been working in, and within the context of the branch to which their changes will be integrated.

While there are certainly other factors to consider when conducting Code Reviews, the above considerations touch upon some of the more fundamental aspects. The key point, hopefully, is apparent: perhaps the most important trait of a successful reviewer is the ability to clearly express intent while passing that knowledge on to others.

Leveraging GPT to Revolutionize Workflows and Processes

In the history of technological breakthroughs, Generative Pre-trained Transformers (GPT) stand out as a monumental leap in Artificial Intelligence, with the potential to fundamentally transform the way we, as Developers, work.

This highly advanced and sophisticated AI Language Model offers a plethora of ground-breaking software engineering applications, ranging from code generation to automating complex, repetitive tasks. This article explores the concept of GPT, its various applications, limitations, and tips for optimal utilization in the context of Software Engineering.

What is GPT?

GPT, or Generative Pre-trained Transformer, is a Machine Learning model which utilizes Deep Learning techniques to produce human-like natural language text. It can be applied to a wide range of tasks, such as answering intricate questions within context, summarizing text, code generation, language translation, as well as numerous other applications.

GPT-3.5: The current version of GPT, GPT-3.5, is based on a dataset of billions of webpages, books, and text-based information (up until 2021), and contains 175 billion parameters.

GPT-4: The next release of GPT, GPT-4, is anticipated to feature a vastly larger dataset of webpages, books, and other textual sources, and is speculated (though unconfirmed) to contain over 100 trillion parameters.

How can GPT be used today?

There are numerous Tools on the market that are built on GPT Technology, and, from a Developer perspective, the following outlines those which are most likely to provide the best entry point for enhancing DX.

ChatGPT: The most common entry into GPT, ChatGPT is a language model that is trained on a massive amount of textual data. This allows it to generate human-like text and respond to a wide range of prompts with impressively high accuracy. Conceptually, ChatGPT can be thought of as a successor to traditional search in that it essentially cuts out the entire process of searching, identifying relevant results, following links to those results, sifting through content, and trying to arrive at an answer. GPT eliminates this by providing answers or relevant information directly in response to questions in a natural and intuitive manner.

GPT API: The GPT API allows developers to access GPT’s capabilities via a REST API. The API can be used to generate text, translate text, and answer questions. API access is based on a pay-per-use basis, with pricing dependent on the number of requests issued and the amount of text generated. A free tier for developers to test the API is also available, as well as custom pricing for enterprise customers with high volume usage.
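As a minimal sketch of direct API usage (assuming an API key in the OPENAI_API_KEY environment variable and a modern Node 18+ ES module; the endpoint and model names reflect the completions API available at the time of writing):

```typescript
// Minimal sketch: a single completion request against the GPT REST API.
const response = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "text-davinci-003",
    prompt: "Summarize the SOLID principles in one sentence each.",
    max_tokens: 200,
  }),
});

const data = await response.json();
console.log(data.choices[0].text.trim());
```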

GPT Playground: Similar to ChatGPT, yet fully configurable and more stable, the OpenAI GPT Playground allows users to experiment with the full set of GPT’s capabilities, including model selection, introspection, and much more.

Additional Tools built on GPT: There are far too many tools built on GPT to list within the scope of this short article; however, a few notable mentions are the ChatGPT – Genie AI VSCode Plugin, as well as the OpenAI NPM Package.

How can GPT Enhance Developer Experience?

While there are numerous applications for which GPT technology can be utilized to provide an enhanced Developer Experience (DX), below is a brief summary of a few of the most common.

Unit Test Generation: GPT can be used to generate test cases and setup, allowing developers to expedite the process of test setup, configuration, and initial test cases.

Debugging: GPT can be utilized to help debug issues in source code, identify misconfigurations, and more.

Code Generation: GPT can be utilized to generate source code, examples for specific languages and frameworks, convert source code from one language to another, and much more.

Streamlining Workflows: GPT can be integrated into development tools, such as IDEs and issue tracking systems, to automate repetitive tasks and streamline workflows.

Technical Documentation: GPT can be utilized to generate technical documents, such as API docs, design specifications, and more, thus improving the quality and accuracy of the information available to developers and teams.

Automating Repetitive Tasks: GPT can be trained to handle repetitive tasks such as scheduling builds, deployments, responding to common queries, and more, freeing up developers’ time for more important tasks.

Streamlining Communication: GPT can be integrated into communication tools such as Jira, Teams, etc., allowing Developers to quickly and easily communicate with team members, saving time and improving efficiency.

Identifying Patterns and Trends: GPT can be leveraged to analyze large amounts of data, such as engineering analytics, project management information, etc. to identify patterns and trends that may be difficult for humans to detect, helping Teams to make informed decisions.

Current Limitations

As with any relatively new product, certain limitations and issues are to be expected as the platform matures; namely, they are as follows.

Error Prone: GPT is regularly prone to error, and in certain cases, once an error is encountered, the conversation cannot be continued, leaving one to have to start their prompts over again within a new chat.

Accuracy and Completeness: GPT’s accuracy and completeness is often quite limited, and so it is crucial that Developers be prudent in validating outputs. Moreover, as the Model’s dataset cutoff date was in 2021, not all prompt outputs are currently relevant.

User Experience: The ChatGPT UX is lacking in many areas and doesn’t quite do the underlying platform justice. The UI is often slow and a bit disjointed; however, when it is stable, it is certainly quite usable and helps to accomplish one’s goals – this is particularly true when using a ChatGPT Plus account.

Tips and Considerations

As with any tool, it is crucial to have an understanding of its capabilities and best practices in order to get the most from the experience. A few mentionable items are as follows.

Utilize Prompt Engineering: Be specific and focus on one particular topic or aspect of a topic. Resist the urge to use polite expressions such as “please”, “thank you”, etc. Instead, focus on including the necessary input required to receive the desired output.

Provide Specific Context: The more specific the information you provide to the model, the better the output will be. This can be done by providing a clear and concise, yet very specific question, including the necessary context required for the task you want the model to perform. Likewise, be mindful of ethical considerations – do not interact with ChatGPT in an unethical manner.

Be Mindful of Sensitive Information: Inputs provided to ChatGPT should always be assumed to be persisted and potentially made publicly available. Do not provide any sensitive or proprietary information, such as usernames, passwords, keys, domain specifics, or business specifics.

Validate and Verify Output: Always make sure to validate and verify received output. Never use output directly without first vetting it for accuracy, completeness, etc.

Explore the OpenAI Playground: Once you are comfortable using ChatGPT, try the OpenAI Playground, as it provides low-level access to GPT, such as switching models, configuring token length, and numerous additional configurations.

Innovative Use-Cases

While it is inevitable that there will be countless applications for utilizing GPT technology in Software Development, the following outlines some exciting possibilities on the horizon.

Application Source Ingestion and Optimization: Utilizing GPT to ingest application source code provides significantly enhanced analysis. Such integrations can create a model of an application’s data and control flow and suggest opportunities for optimization, reactively identify issues, and generate comprehensive design documentation.

Automated Code Reviews: Integrating GPT as an NLP tool to perform automated code reviews based on organization and team best practices, industry best practices, and historical data from previous code reviews can streamline the process. This can be integrated directly within IDEs, significantly speeding up existing code review processes.

Application Integration: Integrating GPT within applications can streamline help documentation, how-to guides, and augment existing features, providing users with a more seamless experience.

Enhanced API Docs: Integration within platforms can optimize adoptability via enhanced API examples. For instance, a Swagger implementation where a user simply states what they are trying to do, and instantly receives a complete example, streamlining the development process.

Conclusion

GPT offers a transformative leap in Natural Language Processing, significantly impacting developers and engineering managers by streamlining workflows, automating repetitive tasks, and providing advanced capabilities in various aspects of software development. As the technology continues to evolve, it is essential for developers and engineering teams to stay informed about the latest developments, limitations, and best practices to make the most out of this powerful AI tool.

Test First Workflow – A Short Story

As a depiction of the typical approach taken when solving a problem with Test First practices in mind, below is a brief excerpt from a recent conversation with a colleague who inquired as to how one generally goes about solving a problem using Test First methodologies. My explanation was rather simple and read somewhat like a short story, though I describe it as being more of a step-by-step process from a Pair Programming perspective.

The general workflow conveyed in my description, while brief, covers the essentials (a minimal code sketch follows the list):

  1. We have a problem to solve.
  2. We discuss the problem, asking questions as needed; then dig a bit deeper to ensure we understand what it is we are really trying to solve; and, most importantly, why.
  3. We consider potential solutions, identifying those most relevant, evaluating each against the problem; then agree upon one which best meets our needs.
  4. We define a placeholder test/spec where our solution will be exercised. It does nothing yet.
  5. We implement the solution in the simplest manner possible, directly within the test itself; the code is quite ugly, and that is perfectly fine, for now. We run our test; it fails.
  6. We adjust our implementation, continuing to focus solely on solving the problem; all the while making sure not to become too distracted with implementation details at this point.
  7. We run our test again, it passes. We’re happy, we’ve solved the problem.
  8. We move our solution out of the test/spec to the actual method which is to be implemented, which, until now, had yet to exist.
  9. We update our test assertions/expectations against the actual implementation (the SUT). We run our test, it passes.
  10. We’re happy, we have a working, tested solution; however, the implementation is substandard; this has been nagging at us all along, so we shift focus to our design; refactoring our code to a more elegant, performant solution; one which we can be proud of.
  11. We run our test again, it fails. That’s fine, perhaps even preferable, as it verifies our test is doing exactly what is expected of it; thus, we can continue to refactor in confidence.
  12. We adjust our code, continuing to make design decisions and implementation changes as needed. We run our test again, it passes.
  13. We refactor some more, continuing to focus freely, and without worry on the soundness of our design and our implementation. We run our test again, it passes.

Rinse and Repeat…
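
To help ground the steps above, below is a minimal sketch of Steps #4–#9, assuming a Jest-style test runner; the problem being solved (clamping a number to a range) and the names clamp and clamp.spec.ts are hypothetical, chosen purely for illustration.

```typescript
// clamp.ts – the SUT; per Step #8, this module did not exist until
// the working solution was moved out of the spec.
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// clamp.spec.ts – the test/spec which drove the implementation (Steps #4–#7).
import { clamp } from "./clamp";

describe("clamp", () => {
  it("returns the value unchanged when within range", () => {
    expect(clamp(5, 0, 10)).toBe(5);
  });

  it("clamps values below the lower bound", () => {
    expect(clamp(-3, 0, 10)).toBe(0);
  });

  it("clamps values above the upper bound", () => {
    expect(clamp(42, 0, 10)).toBe(10);
  });
});
```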

While the above steps are representative of a typical development workflow based on Test First processes, it is worth noting that as one becomes more acclimated with such processes, certain steps often become unnecessary. For example, I generally omit Step #5 insofar as implementing the solution within the test/spec itself is concerned; rather, once I understand the problem to be solved, I determine an appropriate name for the method to be tested and implement the solution within the SUT itself, as opposed to within the test/spec – effectively eliminating the need for Step #8. As such, the steps can be reduced to only those which experience proves most appropriate.

Concluding Thoughts

Test First methodologies have become such an integral part of my everyday workflow over the years that I now find it rather challenging to approach a problem without them. In fact, attempting to solve a problem of even moderate complexity without approaching it from a testing perspective feels quite awkward.

The simple fact is, without following general Test First practices, we are just writing implementation code; and if we are just writing implementation code, then we are likely not thinking through a problem in its entirety. It follows that we are also not thinking through our solutions – and hence our designs – in their entirety. Because of this, solutions feel uncertain, and we are ultimately left feeling much less confident in the code we deliver.

Conversely, when following sound testing practices, we afford our team and ourselves an unrivaled sense of confidence regarding the specific problems we are solving, why we are solving them, and how we go about solving them. From that, we achieve a shared understanding of the problem domain, as well as a much clearer, holistic understanding of our designs.

Practices of an Agile Developer

Of the many software engineering books I have read over the years, Practices of an Agile Developer in particular continues to be one book I find myself turning to time and time again for inspiration.

Written by two of my favorite technical authors, Andy Hunt and Venkat Subramaniam, and published as part of the Pragmatic Bookshelf, Practices of an Agile Developer provides invaluable, practical and highly inspirational solutions to the most common challenges we as software engineers face project after project.

What makes Practices of an Agile Developer truly special is the simple, easy-to-digest format in which it is written; readers can jump in at any chapter – or practically any page, for that matter – and learn something new and useful in a matter of minutes.

While covering many of the most common subjects in software development, as well as some particularly unique ones, it is the manner in which the subjects are presented that makes the book itself quite unique. Each chapter provides an “Angel vs. Devil on your shoulders” perspective on its topic, allowing one to briefly reference any topic and take away something useful by simply reading the chapter’s title and the “Angel vs. Devil” advice. Moreover, each chapter provides tips on “How it Feels” when following one of the prescribed approaches – a powerful device which instantly draws readers in for the more detailed explanations. Complementary to this are the “Keeping your Balance” suggestions, which provide useful insights into many of the challenges one might face when trying to apply the learnings of a particular subject, answering questions which would otherwise be left to the reader to figure out.

I first read Practices of an Agile Developer almost 4 years ago, and to this day I regularly find myself returning to it for inspiration. A seminal text by any measure, I highly recommend it as a must-read for software developers of all levels and disciplines.

Some Useful Tips to Keep in Mind

Throughout my career I have always been drawn to books which provide a practical way of thinking about software. Books of this nature tend to have an emphasis on fundamental principles which apply to all software engineering disciplines, and form much of the basis of the Agile methodologies many of us have come to appreciate.

Often, I find myself going back to the seminal text The Pragmatic Programmer, as it provides a great resource for important things I like to keep in mind from day to day. And so, I wanted to take a moment to share some of the best tips from the book which I have found to be particularly useful and inspiring.

Care About Your Craft

Why spend your life developing software unless you care about doing it well?

Provide Options, Don’t Make Lame Excuses

Instead of excuses, provide options. Don’t say it can’t be done; explain what can be done.

Critically Analyze What You Read and Hear

Don’t be swayed by vendors, media hype, or dogma. Analyze information in terms of you and your project.

Design with Contracts

Use contracts to document and verify that code does no more and no less than it claims to do.
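
As a brief illustration of this tip, the following sketch expresses preconditions and postconditions directly in TypeScript; the requireThat/ensureThat helpers and the withdraw example are hypothetical, intended only to show a contract being documented and verified in code.

```typescript
// Hypothetical design-by-contract helpers: they document and verify
// that code does no more and no less than it claims to do.
function requireThat(condition: boolean, message: string): void {
  if (!condition) throw new Error(`Precondition violated: ${message}`);
}

function ensureThat(condition: boolean, message: string): void {
  if (!condition) throw new Error(`Postcondition violated: ${message}`);
}

function withdraw(balance: number, amount: number): number {
  requireThat(amount > 0, "amount must be positive");
  requireThat(amount <= balance, "amount must not exceed balance");

  const newBalance = balance - amount;

  ensureThat(newBalance >= 0, "balance must never go negative");
  return newBalance;
}

withdraw(100, 25);     // 75
// withdraw(100, 125); // would throw: Precondition violated: amount must not exceed balance
```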

Refactor Early, Refactor Often

Just as you might weed and rearrange a garden, rewrite, rework, and re-architect code when it needs it. Fix the root of the problem.

Costly Tools Don’t Produce Better Designs

Beware of vendor hype, industry dogma, and the aura of the price tag. Judge tools on their merits.

Start When You’re Ready

You’ve been building experience all your life. Don’t ignore niggling doubts.

Don’t Be a Slave to Formal Methods

Don’t blindly adopt any technique without putting it into the context of your development practices and capabilities.

It’s Both What You Say and the Way You Say It

There’s no point in having great ideas if you don’t communicate them effectively.

You Can’t Write Perfect Software

Software can’t be perfect. Protect your code and users from the inevitable errors.

Build Documentation In, Don’t Bolt It On

Documentation created separately from code is less likely to be correct and up to date.

Put Abstractions in Code, Details in Metadata

Program for the general case, and put the specifics outside the compiled code base.
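
As a hedged sketch of this tip, the following keeps a generic retry abstraction in code while sourcing the specifics from metadata; the retry.config.json file and its fields are hypothetical.

```typescript
// The abstraction (generic retry logic) lives in code; the details
// (how many attempts, how long to wait) live in metadata.
import { readFileSync } from "fs";

// Hypothetical metadata file, retry.config.json:
// { "maxAttempts": 3, "delayMs": 500 }
const { maxAttempts, delayMs } = JSON.parse(
  readFileSync("retry.config.json", "utf-8")
);

async function withRetry<T>(operation: () => Promise<T>): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```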

Work with a User to Think Like a User

It’s the best way to gain insight into how the system will really be used.

Program Close to the Problem Domain

Design and code in your user’s language.

Use a Project Glossary

Create and maintain a single source of all the specific terms and vocabulary for a project.

Be a Catalyst for Change

You can’t force change on people. Instead, show them how the future might be and help them participate in creating it.

DRY – Don’t Repeat Yourself

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
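
A minimal sketch of this tip in TypeScript: the knowledge (a password policy, in this hypothetical example) is given a single, authoritative representation from which everything else derives.

```typescript
// Single, authoritative representation of one piece of knowledge:
// the minimum password length is defined exactly once.
export const MIN_PASSWORD_LENGTH = 12;

export function isValidPassword(password: string): boolean {
  return password.length >= MIN_PASSWORD_LENGTH;
}

// The UI hint and the validation rule derive from the same source,
// so a policy change happens in one place only.
export const passwordHint =
  `Passwords must be at least ${MIN_PASSWORD_LENGTH} characters.`;
```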

Eliminate Effects Between Unrelated Things

Design components that are self-contained, independent, and have a single, well-defined purpose.

Iterate the Schedule with the Code

Use experience you gain as you implement to refine the project time scales.

Use the Power of Command Shells

Use the shell when graphical user interfaces don’t cut it.

Don’t Panic When Debugging

Take a deep breath and THINK! about what could be causing the bug.

Don’t Assume It – Prove It

Prove your assumptions in the actual environment—with real data and boundary conditions.

Write Code That Writes Code

Code generators increase your productivity and help avoid duplication.

Test Your Software, or Your Users Will

Test ruthlessly. Don’t make your users find bugs for you.

Don’t Gather Requirements—Dig for Them

Requirements rarely lie on the surface. They’re buried deep beneath layers of assumptions, misconceptions, and politics.

Abstractions Live Longer than Details

Invest in the abstraction, not the implementation. Abstractions can survive the barrage of changes from different implementations and new technologies.

Don’t Think Outside the Box—Find the Box

When faced with an impossible problem, identify the real constraints. Ask yourself: “Does it have to be done this way? Does it have to be done at all?”

Some Things Are Better Done than Described

Don’t fall into the specification spiral—at some point you need to start coding.

Don’t Use Manual Procedures

A shell script or batch file will execute the same instructions, in the same order, time after time.

Test State Coverage, Not Code Coverage

Identify and test significant program states. Just testing lines of code isn’t enough.
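
As a brief sketch of the distinction, the hypothetical classify function below has only a few lines, yet several significant states; the Jest-style tests exercise the states at and around each boundary rather than merely touching each line once.

```typescript
// Testing significant program states, not just lines of code.
function classify(age: number): "minor" | "adult" | "senior" {
  if (age < 18) return "minor";
  if (age < 65) return "adult";
  return "senior";
}

describe("classify", () => {
  // Each boundary contributes distinct states worth exercising.
  it("treats ages below 18 as minors", () => {
    expect(classify(17)).toBe("minor");
  });

  it("treats the lower boundary, 18, as adult", () => {
    expect(classify(18)).toBe("adult");
  });

  it("treats 65 and above as senior, but not 64", () => {
    expect(classify(64)).toBe("adult");
    expect(classify(65)).toBe("senior");
  });
});
```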

Gently Exceed Your Users’ Expectations

Come to understand your users’ expectations, then deliver just that little bit more.

Don’t Live with Broken Windows

Fix bad designs, wrong decisions, and poor code when you see them.

Remember the Big Picture

Don’t get so engrossed in the details that you forget to check what’s happening around you.

Make It Easy to Reuse

If it’s easy to reuse, people will. Create an environment that supports reuse.

There Are No Final Decisions

No decision is cast in stone. Instead, consider each as being written in the sand at the beach, and plan for change.

Estimate to Avoid Surprises

Estimate before you start. You’ll spot potential problems up front.

Use a Single Editor Well

The editor should be an extension of your hand; make sure your editor is configurable, extensible, and programmable.

Fix the Problem, Not the Blame

It doesn’t really matter whether the bug is your fault or someone else’s—it is still your problem, and it still needs to be fixed.

“select” Isn’t Broken

It is rare to find a bug in the OS or the compiler, or even a third-party product or library. The bug is most likely in the application.

Learn a Text Manipulation Language

You spend a large part of each day working with text. Why not have the computer do some of it for you?

Use Exceptions for Exceptional Problems

Exceptions can suffer from all the readability and maintainability problems of classic spaghetti code. Reserve exceptions for exceptional things.
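
A minimal sketch of the distinction in TypeScript; the findUser and loadUsers functions are hypothetical. An expected outcome (a missing lookup) is modeled in the return type, while a truly exceptional condition throws.

```typescript
// Expected outcome: a user may simply not exist. Model it in the
// return type rather than abusing exceptions for control flow.
function findUser(id: string, users: Map<string, string>): string | undefined {
  return users.get(id);
}

// Exceptional problem: a corrupt data store is not a normal outcome,
// so throwing is appropriate here.
function loadUsers(raw: string): Map<string, string> {
  const parsed = JSON.parse(raw); // throws on malformed input – genuinely exceptional
  if (typeof parsed !== "object" || parsed === null) {
    throw new Error("User store is corrupt");
  }
  return new Map(Object.entries(parsed));
}
```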

Minimize Coupling Between Modules

Avoid coupling by writing “shy” code and applying the Law of Demeter.
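
As a brief illustration, the hypothetical sketch below contrasts a Law of Demeter violation with “shy” code which exposes only what its collaborators need.

```typescript
// Violates the Law of Demeter: the caller reaches through Order
// into Customer, and then into Address:
//   order.getCustomer().getAddress().getZipCode();

class Address {
  constructor(private zipCode: string) {}
  getZipCode(): string {
    return this.zipCode;
  }
}

class Customer {
  constructor(private address: Address) {}
  getZipCode(): string {
    return this.address.getZipCode();
  }
}

class Order {
  constructor(private customer: Customer) {}
  // "Shy" code: Order exposes only what its collaborators need,
  // hiding the structure of Customer and Address.
  getShippingZipCode(): string {
    return this.customer.getZipCode();
  }
}

const order = new Order(new Customer(new Address("10001")));
console.log(order.getShippingZipCode()); // "10001"
```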

Design Using Services

Design in terms of services: independent, concurrent objects behind well-defined, consistent interfaces.

Don’t Program by Coincidence

Rely only on reliable things. Beware of accidental complexity, and don’t confuse a happy coincidence with a purposeful plan.

Organize Teams Around Functionality

Don’t separate designers from coders, testers from data modelers. Build teams the way you build code.

Test Early. Test Often. Test Automatically.

Tests that run with every build are much more effective than test plans that sit on a shelf.

Find Bugs Once

Once a human tester finds a bug, it should be the last time a human tester finds that bug. Automatic tests should check for it from then on.

Sign Your Work

Craftsmen of an earlier age were proud to sign their work. You should be, too.

It is my hope that you will find some of these tips helpful and, if so, I suggest keeping those which resonate with you (as well as some of your own) someplace visible for reference.