Property Change Observers in Polymer

Wednesday, January 6th, 2016

When building Web Components, the ability to observe property / attribute changes on custom elements and respond to them accordingly can prove quite useful.

Fortunately, Polymer makes this incredibly easy. Let’s take a quick look …
(note, we’ll be using ES6 here)

Single Property Observers

In its most basic form, a Single Property Observer can be defined by simply implementing a method and adding it to the property’s observer configuration:
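(A minimal sketch; the element and property names here are purely illustrative.)

```js
Polymer({
  is: 'x-login',

  properties: {
    username: {
      type: String,
      // invoked whenever `username` changes
      observer: '_usernameChanged'
    }
  },

  _usernameChanged(newValue, oldValue) { /* ... */ }
});
```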

Now, whenever the property changes, Polymer will automatically invoke the observer method, handily passing two arguments: the updated value and the previous value:
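(Again, a sketch; the method lives on the element defined above.)

```js
_usernameChanged(newValue, oldValue) {
  console.log(`username changed from "${oldValue}" to "${newValue}"`);
}
```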

Try it

Pretty cool, right? It gets even better…

Multi-Property Observers

In addition to Single Property Observers, multiple properties can be observed for changes using the observers array:
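(A sketch along the following lines; the element and property names are illustrative.)

```js
Polymer({
  is: 'x-user-card',

  properties: {
    firstName: String,
    lastName: String
  },

  // invoked whenever firstName or lastName changes
  // (once both have been defined at least once)
  observers: [
    '_nameChanged(firstName, lastName)'
  ],

  _nameChanged(firstName, lastName) {
    console.log(`name is now ${firstName} ${lastName}`);
  }
});
```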

The observers array is rather self-explanatory: each item is simply a string representation of the method to be invoked, with the observed properties specified as arguments.

Try it.

For more information, see multi-property-observers.

Sub-Property Observers

Similar to Multi-Property Observers, sub-properties can be observed as well (e.g. user.username, or user.account.name, etc.). For instance:
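(Again, names are illustrative.)

```js
Polymer({
  is: 'x-profile',

  properties: {
    user: {
      type: Object,
      value: () => ({ username: '', account: { name: '' } })
    }
  },

  // observe a specific sub-property of `user`
  observers: [
    '_usernameChanged(user.username)'
  ],

  _usernameChanged(username) {
    console.log(`user.username is now ${username}`);
  }
});
```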

Try it

Deep Sub-Property Observers

As with explicit Sub-Property Observers, arbitrary (n-level deep) sub-properties can be observed using wildcard notation:
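(A sketch; note the wildcard in the observer path.)

```js
Polymer({
  is: 'x-account',

  properties: {
    user: {
      type: Object,
      value: () => ({ account: { name: '' } })
    }
  },

  // observe a change to any sub-property, at any depth, below `user`
  observers: [
    '_userChanged(user.*)'
  ],

  _userChanged(changeRecord) {
    // changeRecord.path  -> e.g. 'user.account.name'
    // changeRecord.value -> the new value at that path
    // changeRecord.base  -> the `user` object itself
    console.log(`${changeRecord.path} changed to ${changeRecord.value}`);
  }
});
```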

Try it.

Deep Sub-Property Observers differ from Single-Property Observers and explicit Sub-Property Observers in that a changeRecord is passed to the observer method as opposed to the updated value. A changeRecord is simply an object which contains the following properties (as per the Polymer Docs):

  • changeRecord.path: Path to the property that changed.
  • changeRecord.value: New value of the path that changed.
  • changeRecord.base: The object matching the non-wildcard portion of the path.

It’s important to keep in mind that Sub-Property and Deep Sub-Property changes can only be observed when they are made using either property bindings or the set method.
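For example (the paths and values here are illustrative):

```js
// from within an element's method: observers and bindings are
// notified when the change is made via set:
this.set('user.account.name', 'Eric');

// whereas direct assignment bypasses Polymer's property effects,
// so no observers (or bindings) are notified:
this.user.account.name = 'Eric';
```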

Array Mutation Observers

Complementary to Single, Multi, Sub, and Deep Property Observers, Polymer provides Array Mutation Observers, which allow for observing Array and Array element properties for changes.

This is where the API takes a little getting used to, IMHO, so I would recommend reading the Docs in detail.

That being said, Array Mutation Observers are quite powerful, for example:
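(A sketch; here the observer watches the array’s splices for additions and removals.)

```js
Polymer({
  is: 'x-list',

  properties: {
    items: {
      type: Array,
      value: () => []
    }
  },

  // observe additions to / removals from the `items` array
  observers: [
    '_itemsChanged(items.splices)'
  ],

  _itemsChanged(changeRecord) {
    if (!changeRecord) { return; }
    changeRecord.indexSplices.forEach((splice) => {
      console.log(`added ${splice.addedCount} item(s) at index ${splice.index}`);
      console.log(`removed ${splice.removed.length} item(s)`);
    });
  }
});
```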

Try it.

When observing Arrays, in order for bindings to reflect properly, Polymer’s Array Mutation Methods must be used. This is quite simple in that the API is the same as that of the corresponding native Array methods, with the only difference being that the first argument is the path to the array which is to be modified. For example, rather than this.items.splice(...), one would simply use this.splice('items', ...).
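A few illustrative examples:

```js
// from within an element's method:
// equivalent to this.items.push(item), but keeps observers and bindings in sync:
this.push('items', { name: 'New Item' });

// equivalent to this.items.splice(0, 1):
this.splice('items', 0, 1);

// pop, shift, and unshift are mirrored in the same way:
this.pop('items');
this.unshift('items', { name: 'First Item' });
this.shift('items');
```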

Conclusion

Hopefully this simple introduction to Polymer Observers has demonstrated some of the powerful capabilities they provide. Understanding how each can be implemented will certainly simplify the implementation of your custom elements; leveraging them where needed is almost always a good design decision.

Feel free to explore any of the accompanying examples.

Chuck Norris on Polymer

Wednesday, December 16th, 2015

For the past several months I have been evaluating potential frameworks which could facilitate the implementation of context aware Web Components such that each component can be assembled declaratively into recombinant features and higher-level applications. After a focused period of prototyping each candidate framework, Polymer 1.x has proven to be the most effective approach to satisfy these particular design goals, and many others as well.

While I have also been leveraging Angular 2 for implementing self-contained Web Applications which need not be composed outside the scope of the application’s root component / template, the requirement to provide elements which can be arbitrarily composed within an HTML document, declaratively or imperatively, and independently of any one particular framework, proves somewhat challenging; yet it can easily be satisfied by leveraging Web Components, and Polymer simplifies the process of doing so considerably.

Polymer 1.x

On a high level, Polymer provides some much welcomed sugaring over the four Web Component specifications: HTML Templates, Shadow DOM, HTML Imports, and Custom Elements. The higher-level abstraction and API offered by Polymer significantly simplifies the process of Web Component development, while reducing the time and effort required to meet both simple and complex use-cases alike.

Features provided out-of-the-box embrace that which developers have come to expect from a modern library or framework, such as one-way and two-way binding annotations, template helpers, a Local and Light DOM API, declarative and imperative event mappings, declared properties with observers and attribute reflection, and much more.

Add to this the growing Catalog of Elements provided by the Polymer Project, a complete Web Component testing solution via WCT, optimization features such as Vulcanization with tooling support for Gulp, API documentation components via Iron Component Page (though still pending ES6 support), and concise documentation with easy-to-use examples, and developers are afforded a rather elegant solution for building high quality, future facing Web Components, today.

In addition, Polymer comes in three specific layers, each of which builds upon the previous, lower-level implementation. This allows for a nice level of flexibility in choosing the most appropriate implementation based on your specific needs. Depending on what is required, one can choose from the micro implementation for basic custom element sugaring, the mini implementation for more advanced local DOM and life-cycle hooks, and the standard implementation which provides the full suite of Polymer features.

Given the capabilities Polymer has to offer, as well as the growing number of organizations using Polymer, and some rather interesting applications being built with Polymer, if you haven’t already, I highly recommend giving it a try.

Chuck Norris!

So what does any of this have to do with Chuck Norris?

Well, nothing actually.

Except, like Chuck Norris, Polymer is cool – very cool, and so after accidentally coming across the ICNDB service, I thought a simple Web Component which displays some comical facts about Chuck Norris could serve as a useful Polymer example.

And so, if you like to laugh a bit while learning something new, feel free to clone the repo over on GitHub to get familiar with a few of Polymer’s general capabilities, or simply check out the app here and have a few laughs.

Coming up, Chuck Norris on Angular 2 …

NET::ERR_CERT_INVALID Trick

Thursday, October 22nd, 2015

So here is an utterly ridiculous trick that may actually prove to be quite useful should you ever need it.

With recent Chrome updates, hosts which fail to provide a valid SSL certificate are blocked with a NET::ERR_CERT_INVALID error. When this occurs, Chrome displays a screen similar to the following:

NET::ERR_CERT_INVALID

Previously, one could circumvent this by clicking a link which allowed you to override the error; however, current builds of Chrome no longer provide such a link.

Interestingly, the workaround for this is simple, bordering on the ridiculous: just focus on the page and type “danger”. The page will then automatically refresh and, from that point on, load as if the certificate were valid.

Obviously, you will want to be mindful when using this workaround (e.g. only use it for known hosts, such as a dev environment, as was the case in my example).

Back from the Peripheral

Tuesday, October 20th, 2015

Hello again…

Indeed, a greeting may be appropriate here, as it has been quite some time since I have actively blogged, which, for all intents and purposes, has been for good reason.

After a decade of continuous blogging, with at least one (hopefully) informative article per month, I had made a conscious decision to take a step back, re-evaluate, and, on the peripheral, dedicate my otherwise allocated writing time towards focusing on the many aspects of future Web Development.

With that in mind, my time away has been put to good use, and so I’ve decided to begin sharing my experiences here more frequently once again; both trivial and non-trivial alike.

The fact is, I miss writing, and outside of day to day writing of general API docs and the occasional technical spec, I haven’t had a dedicated space to share my more subjective, creative thoughts about modern Web Development. Enough about me, though, I’ll get to the point…

Before I delve into the specific topics I plan to discuss, I think it makes sense to first outline an underlying theme which will permeate my future articles. Specifically, I plan to focus on topics and technologies which I believe will, to a certain extent, transcend the more immediate trends within the Web Development landscape that tend to come and go with the times. I strongly believe it is essential for our community to continue on a path of sustainable growth, and the path of least resistance will more than likely be in our ability to move from library to library, framework to framework, easily and with minimal transition. This is imperative as the rate at which technologies are changing has grown exponentially in the past few years, perhaps nowhere more so than in the Web Development space. As such, the ability to identify and focus on core underlying concepts, principles and technologies which will remain relevant for some time to come becomes ever more important.

The simple fact is, we need to pay particular attention to what we dedicate our time to learning, and ensure our learning is focused on key aspects that can carry over as each new technology increases in adoption, and then makes way for the next in line.

With that in mind, I plan to focus this space on my findings and most recent experiences with current Web Development technologies of interest, particularly, but not limited to, the recent work of the TC39 Committee, all things ES2015 / ES2016, Web Components, Polymer, Flux / React, Angular 2, and the many other complementary tools and technologies surrounding modern workflows.

So stay tuned, as always, there’s a lot to talk about! Good things to come …

BDD/TDD Mental Models

Thursday, February 13th, 2014

Recently, I shared a simple 8-step procedure with my team which outlines some of the general questions I tend to ask myself when writing tests, even if, perhaps, only subconsciously so.

While quite simple in form, and somewhat obvious in process, this procedure helps to develop a useful mental model from which practical steps can be applied to common testing scenarios; which, in turn, helps to provide clarity of general design considerations, while also helping to guide specific implementation decisions.

First things First

Arguably, the single most important aspect of testing (and software development in general, for that matter) is to acquire a solid understanding of the problem domain; for, without having (at minimum) a general understanding of the problem one is intending to solve, important details are likely to be omitted which would have otherwise been considered, and thus, covered by our tests. Spend time understanding exactly what problem your code is intended to solve, then begin thinking about what to test for. Understand the Problem.

Small Steps

Once confident that a good understanding of the problem has been reached, we can then get started on writing our initial tests. Consider this a first pass, if you will, wherein we are only concerned with getting our tests to pass in the simplest (typically, least elegant) way possible. The initial implementation code can be as raw (and ugly) as needed, as this can (and will) be addressed after our initial tests are passing. If we are writing tests against code that does not yet exist, then we will first write the implementation code (the code that is being tested) directly within the test case itself. Once the test passes, we can then refactor the code out from our test and into the SUT (the code we are testing). If the code already exists (we are writing new tests against existing code), we still need to understand and consider the implementation of the code itself, and not just simply write tests against it. Reviewing and critiquing existing code is an excellent way of gaining a quick understanding of a given system. Seize initial opportunities. Start off slow.

Clean Pass

Once we’ve written our initial tests and they are passing, we can then safely go back into our new or existing implementation code and refactor it to our heart’s content. If we break something, our tests will let us know. After all, one of the most rewarding aspects afforded by unit testing is the ability to refactor our code freely with little worry or concern that we will unintentionally break something without knowing. Tomorrow never comes in Software Development. Clean up as you go along.

Negative Tests

The most obvious tests to write are those which cover the things we expect the code to do. But what if the code is used incorrectly? What if an argument is required and it is not provided, or it is of an invalid type? Does our code throw an exception? Does it simply return undefined? What should it do? These are all questions we should be asking ourselves once our expected test cases are passing. After that, we need to start thinking about ways to have our code appropriately respond to negative cases – we don’t want the entire app to end up in an unpredictable state just because an uncaught exception was thrown due to some simple string formatting argument not being passed, etc. Test the exceptional; Test the unexpected.
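For instance, a minimal Jasmine-style sketch (the formatString function here is purely hypothetical):

```js
describe('formatString', function () {
  it('formats the expected case', function () {
    expect(formatString('Hello, {0}!', 'World')).toEqual('Hello, World!');
  });

  it('throws a meaningful error when no template is provided', function () {
    // the negative case: invalid usage should fail loudly and predictably
    expect(function () {
      formatString();
    }).toThrow();
  });
});
```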

Stateless Tests

One of the most important considerations to make, both during and especially after all of the above points have been considered, is the statelessness of the system while being tested. Always ask yourself, “Am I resetting the state of all my test’s dependencies back to an expected state?”. This is perhaps one of the most commonly overlooked, yet crucially important, considerations to make. A good example illustrating why this is important can be found in the common scenario of a test that invokes a method which triggers an event. If any previously executed test which handles the event has not been properly torn down (e.g. in afterEach), the handling object will still exist, and thus handle the event. This typically results in a change in state, more often than not causing an unexpected error to be thrown. Always use set-ups (e.g. beforeEach) to configure your test environment, fixtures, and any dependencies your tests require to operate properly. If you are setting values on anything outside the context of your tests, always use mocks, stubs and tear-down methods (e.g. afterEach) to reset them back to an expected state. Remember, while your tests are not part of your application’s source, they are certainly part of your project’s source; this, in effect, requires them to be viewed as first-class citizens, subject to the same quality of design and implementation as project source. Tests will need to evolve and be continually maintained. Treat the test environment with respect; ensure you return it in a predictable state. Leave it the way you found it.
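As a minimal Jasmine-style sketch (the UserView and its template are purely illustrative):

```js
describe('UserView', function () {
  var view;

  beforeEach(function () {
    // configure fixtures and dependencies to a known, expected state
    view = new UserView({ model: new Backbone.Model({ name: 'Eric' }) });
    view.render();
  });

  afterEach(function () {
    // tear down so no handlers or state leak into subsequent tests
    view.remove();
    view = null;
  });

  it('renders the user name', function () {
    expect(view.$('.name').text()).toEqual('Eric');
  });
});
```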

Continued Improvement

While the above description of Stateless Tests clearly states that the test environment should remain stateless, and thus “remain as we found it” prior to our tests, our actual implementation code should always be improved when improvements can be made; hence, The Broken Windows Theory is one we should all strive to live by. This especially holds true in the context of writing tests/specs against existing code. If the code is not up to par in any way – fix it. Ask yourself: “How easy was it for me to understand what this code does?”. “Is it documented in a meaningful way?”. “Would it be easier to understand if I added some quick examples?” (Often, adding examples is simply a matter of pointing to, or annotating the source with, the test cases themselves). We can have the greatest, most elegant framework and foundation on which to build the greatest apps in the world, but if we allow our code quality to degrade, our apps will gradually decay into chaos. Set a higher standard, and live by it. Leave the source better than you found it.

Meaningful Tests

It is quite easy to get caught up in the perceived quality of a system’s tests simply by measuring it against general Code Coverage metrics. This is a subject I have spoken to at length many times. While code coverage certainly has its purpose, and can be helpful, it is often not very reflective of reality. Judge your tests not by the number of test cases or units tested, but rather by the meaningfulness of each specific test case itself. Ask yourself “What is the overall value of this test?”, “Am I testing the obvious?” (such as a simple getter/setter). Focus on what’s important; test what’s of most value first. This affords one the satisfaction of knowing that if time constraints arise or focus needs to shift elsewhere, the most important test cases are covered. Focus on what’s important.

Know when you are done

It is quite possible for one to go on refactoring beyond what is essential. As such, it’s important to know when you’re done. Some questions to ask yourself are: “Does the code do what it needs to do?”, “Is the code clean and understandable, performant, efficient, etc.?”, “Does it have adequate coverage?” If these questions can be answered in the affirmative, then you’re most likely done. Many times, it’s tempting to continually refactor, as the more one refactors, the more opportunities for further abstraction begin to arise. When confident that your most important objectives have been met, you’re done. Know when to stop.

Concluding Thoughts

It is important to note that the above considerations are by no means exhaustive – and this is intentionally so; as each point is specifically intended to provide just enough guidance to sufficiently ask the right questions, and thus solve problems in a pragmatic manner.

Over the years, I have found that it can be particularly helpful for developers new to a specific domain, or new to TDD/BDD in general, to consider the steps listed above from time to time in a general, summarized form. After doing this regularly, it becomes second nature; engrained in one’s daily development process.

  1. Understand the Problem
  2. Start off slow
  3. Clean up as you go along
  4. Test the unexpected
  5. Leave the test environment the way you found it
  6. Leave the source better than you found it
  7. Focus on what’s important
  8. Know when to stop

Quick Tip: Backbone Collection Validation

Sunday, January 19th, 2014

Oftentimes I find the native Backbone Collection implementation to be lacking when compared to its Backbone.Model counterpart. In particular, Collections generally lack direct integration with a backend persistence layer, as well as the ability to validate models within the context of the collection as a whole.

Fortunately, such shortcomings can easily be circumvented due to the extensibility of Backbone’s design as a generalized framework. In fact, throughout my experience utilizing Backbone, I can assert that there has yet to be a problem I have come across which I was unable to easily solve by leveraging one of the many Backbone extensions, or, more often than not, by simply overriding Backbone’s default implementation of a given API.

Validating Collections

Perhaps a common use-case for validating a collection of Models can be found when implementing editors which allow for adding multiple entries of a given form section (implemented as separate Views), whereby each section has a one-to-one correlation with an individual model. Rather than invoke validation on models from each individual view, and manage which models are in an invalid state from the context of a composite view, it can be quite useful to simply validate the collection from the composite view, which, in turn, results in all models being validated and their associated views updating accordingly.

Assuming live validation is not being utilized, validation is likely to occur when the user submits the form. As such, it becomes necessary to validate each model after its view has updated it as a result of the form being submitted. This can be achieved quite easily by implementing an isValid method on the collection which simply invokes isValid on each model within the collection (or, optionally, against specific models within the collection). A basic isValid implementation for a Collection is as follows:
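```js
// A minimal sketch; the collection name is illustrative.
var FormSectionCollection = Backbone.Collection.extend({
  isValid: function (options) {
    // validate every model (intentionally not short-circuiting, so that each
    // invalid model triggers its own "invalid" event for its view to handle)
    return this.map(function (model) {
      return model.isValid(options);
    }).indexOf(false) === -1;
  }
});
```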

As can be seen in the above example, the Collection’s isValid method simply invokes isValid on its models. This causes each model to be re-validated which, in turn, results in any invalid models triggering their corresponding invalidation events, allowing for views to automatically display validation indicators, messages, and the like; particularly when leveraging the Backbone.Validation Plugin.

This example serves well to demonstrate that, while Backbone may not provide everything one could ever ask for “out of the box”, it does provide a design which affords developers the ability to quickly, easily, and effectively extend the native framework as needed.