Bill White's Blog

Angular Development Best Practices

by admin on January 11, 2020, no comments

I wanted to take a moment and write down some of the things I have learned working with Angular for the past four years. These are some practices that I have found very helpful when building large applications. My suggestions and recommendations are based on my experience of trying to make a scalable platform that allows multiple developers to contribute to the codebase without stepping all over each other or introducing lots of regressions. This applies only to the latest version(s) of Angular.

1. Update package.json dependencies often

A while back, a company I was working for based their entire UI on the Dojo library. This was a very viable choice at the time, but we were not diligent about keeping that library up-to-date with the latest revisions, even though it was only being updated every month or two. This was partially due to the extra workload, but also because of the risk that updates would introduce security flaws and bugs we would not have time to test for.

It was not really a problem until a couple of years later, when we realized we were not just a version or two behind: the API of the library had changed so significantly that even understanding the refactoring work involved became an almost insurmountable task. As a result, the codebase became stuck in a sort of hybrid state, with some parts being updated while others remained off-limits, locked down by old technology. It was eventually left to wither in the background as newer, more flexible supporting applications grew up around it.

At the time, I understood this reluctance to update the versions of that library. But the world has changed with the advent of the NPM ecosystem. I had seen first-hand how letting technical debt pile up left us with a hefty bill, and so I have adopted a new strategy: I firmly believe you should check for updates in your package.json at least once a week and do your best to keep it current. This comes with a few caveats and hiccups, but I assure you it is much better than the alternative.

My current approach is as follows: when it comes to dependencies in my package.json file, I keep them current with an almost fanatical attitude. I use the npm-check-updates library, which provides an ‘ncu’ executable that queries for updates to all of the packages in the project. Once I see which libraries have been updated, I examine the results. Since the versions are given in semantic versioning syntax, changes to minor and patch versions are applied immediately and then smoke tested. Major versions will usually prompt me to check out that project’s repository README to see what breaking changes have been made. In most cases, I will also implement and smoke-test those updates and see if there are any obvious problems. Obviously, the more extensive your testing facilities are, the safer this all becomes. However, I have found that 90%+ of the time, there is little to no impact on the code base.

The issues that do immediately become apparent can be handled directly, without having to spend countless hours trying to figure out which update just broke something (because by doing this so frequently, you will only have 2-5 updates a week to worry about). If I update, for example, my Immer package and the application immediately throws an error on startup, I know exactly what caused it and exactly where to look. Try doing that 6 months later, after you find 40 different version updates to your package.json file, and then try to find the time to update and test them one by one. Frequent checks break this down into manageable, bite-sized tasks that are simple to resolve. Plus, you can file bug reports quickly and find out if others have hit the same issue, since the conversation around that particular update will still be current. If you find a problem after 6 months have passed, the developer may be unmotivated to retroactively apply the fix you are requesting.

We have been keeping our package.json current on a weekly basis for nearly 4 years now and the number of problems caused by this approach has been surprisingly low. Every so often we will find a glitch that was caused by a package update that was missed by the smoke and unit tests, but it is usually easy to track down and helps us build up our suite of tests to prevent it from being missed again in the future.

2. Wrap external package implementations to make swapping easier

Related to the recommendation above, I was updating a logger library we used in a project. It turned out that the library had a major version release that changed its module structure, which meant it would no longer work with IE11 (big surprise). As much as I would love to join this movement and leave all things IE behind, project managers never want to let it go and demand that we support that decrepit old browser. I began to contemplate swapping out the logging package in favor of another. However, I realized that I had imported that particular library by name in at least 100 files in my application. This was not the end of the world, but with something as familiar and consistent as a logger, I figured the interface between my code and any particular logging package should be easy to abstract behind a standard wrapper. I created a Logger class of my own, gave it the common logging methods such as debug(), info(), error(), etc., and then used it to wrap the implementation of the actual logger provided by the NPM library, so the dependency now resides in one single location. After about an hour of updating those 100+ files to point to my internal Logger wrapper, I now have a codebase that allows me to swap out logger package implementations with only one touchpoint. This may not be feasible for every package you rely upon, but for things like loggers, toasters, export utilities, etc., it may be worth it to isolate those libraries behind a common wrapper class that will not change. This will make it much less painful to swap out NPM packages that provide the same function.
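
For illustration, here is a minimal sketch of that kind of wrapper. The underlying package and its method signatures are placeholders; substitute whatever logging library you actually depend on:

```typescript
import { Injectable } from '@angular/core';
// Placeholder import: this is the only file that knows which logging package is in use.
import * as externalLogger from 'some-npm-logger';

@Injectable({ providedIn: 'root' })
export class Logger {
  // The application only ever calls these methods; the external package
  // stays an implementation detail hidden inside this one class.
  debug(message: string, ...args: unknown[]): void {
    externalLogger.debug(message, ...args);
  }

  info(message: string, ...args: unknown[]): void {
    externalLogger.info(message, ...args);
  }

  error(message: string, ...args: unknown[]): void {
    externalLogger.error(message, ...args);
  }
}
```

Swapping logging packages now means editing this single file, since every other file imports Logger from an internal path.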

3. Make use of a redux-style store (either NGRX or NGXS)

I cannot stress this one enough: from the start you will want to have a reactive-style store as a “single source of truth” that you can use to display data in your components. While I have used NGRX for a couple of years now, I have a special place in my heart for NGXS. It is definitely more “angular-like” in its application and requires a lot less code to get the same results as NGRX. The only reason I cling to NGRX is its larger support base and higher popularity. However, if I were putting together a small-to-medium sized application today, I would almost certainly use NGXS.
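
To make that concrete, here is a minimal NGXS state sketch. The state name, action, and model are purely illustrative, not taken from a real application:

```typescript
import { Injectable } from '@angular/core';
import { Action, Selector, State, StateContext } from '@ngxs/store';

// Illustrative action: the payload rides on the constructor
export class AddTodo {
  static readonly type = '[Todos] Add';
  constructor(public readonly text: string) {}
}

export interface TodosModel {
  items: string[];
}

@State<TodosModel>({
  name: 'todos',
  defaults: { items: [] }
})
@Injectable()
export class TodosState {
  // Components subscribe to this selector; the state is the single source of truth
  @Selector()
  static items(state: TodosModel): string[] {
    return state.items;
  }

  @Action(AddTodo)
  add(ctx: StateContext<TodosModel>, action: AddTodo): void {
    const state = ctx.getState();
    ctx.setState({ ...state, items: [...state.items, action.text] });
  }
}
```

Compared with NGRX, there are no separate action, reducer, and selector files; the decorators keep the whole slice of state in one class.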

4. IntelliJ plugins for Angular are helpful

We have chosen to go with a separate-file structure for the code, template, and stylesheet (ts/html/css) of each component. I find that our classes and templates are usually large enough that keeping them separate reduces the clutter. The majority of our developers use IntelliJ to edit their code, and there is a great little plugin that will collapse those three files down into a single group in the project tree. This means you no longer see all three files representing a component, but instead a rolled-up node representing them all, which can be expanded to get access to a specific one. Check it out.

I hope you find these recommendations useful. If you have any questions or feedback, feel free to leave them in the comments section 🙂

Angular User Interfaces with NGRX – Dragging Things

by admin on January 2, 2020, no comments

I’ve been working with some highly interactive user interfaces lately and wanted to build them in a reactive style rather than the imperative way I have created such applications in the past. I found that Angular combined with NGRX (Angular’s redux pattern) makes this type of architecture a breeze. It is extremely scalable and performant because once you get the desired content into the store, you can reference it in various parts of the UI with very little effort. Having the ‘Single Source of Truth’ allows you to ditch all of the manual “update-then-retrieve-and-display” coding you would normally have to handle.

One thing that was a bit counter-intuitive was making the UI render based upon the contents of the NGRX store rather than using the traditional imperative instructions. In the case of an interactive diagram, when the user moves something on the screen, instead of updating the DOM elements in the UI, we instead update the NGRX store, which in turn applies the updates to the elements on the screen to reflect the current state.

The benefits of this are tremendous. First, you get massive scalability with a redux-style application. For example, if you have a detail section on the screen, the ‘title’ field will always show the same value as the shape that is selected in the diagram since that value is coming from a single place in the store.

You also get the ability to add application-wide features like undo/redo with very little effort. For this demo, I have implemented a simple “undo move” function that simply rolls back the initial move action. However, there exist several NGRX undo/redo libraries that can be added to an application to extend that capability beyond just a single action.

Below you can see a demo of what I am describing. At first glance, it looks fairly simple and generic. You move a shape and drag it around. You also see the current details in the form on the right. No big deal, right? However, if you take a look at the code, you will find that there is more going on here. The mouse listeners essentially determine the new x/y position of the selected shape being dragged, and then an NGRX action is created and dispatched (a sketch of that action/reducer pair follows below). This action updates those values in the NGRX store. The store determines what is displayed on the screen at any given time. Thus, updating the store will in turn update the shape’s position on the screen. It feels like you are moving the shape, but in reality you are just updating the NGRX store.
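
Here is a minimal sketch of that kind of action/reducer pair. The action name, state shape, and field names are illustrative, not the demo’s actual code:

```typescript
import { createAction, createReducer, on, props } from '@ngrx/store';

// Illustrative action: carries the shape id and its new coordinates
export const moveShape = createAction(
  '[Diagram] Move Shape',
  props<{ id: string; x: number; y: number }>()
);

export interface ShapePosition { x: number; y: number; }
export interface DiagramState { shapes: { [id: string]: ShapePosition }; }

export const initialState: DiagramState = { shapes: {} };

// The reducer is the only place a shape's position changes; the UI simply
// renders whatever positions are currently in the store.
export const diagramReducer = createReducer(
  initialState,
  on(moveShape, (state, { id, x, y }) => ({
    ...state,
    shapes: { ...state.shapes, [id]: { x, y } }
  }))
);
```

A mousemove listener then just computes the new coordinates and dispatches moveShape({ id, x, y }); nothing in the listener touches the DOM directly.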

This diagram shows the sequence of what is occurring:

Logic Flow – Mouse movement creates updates to NGRX store which are reflected in the UI

So why is this interesting? Well, first it opens up a whole lot of possibilities when it comes to your designs. You can see that the shape information form on the right is always in sync with the currently selected shape. That type of updating and syncing takes minimal effort once the information is in the store. Adding another diagram that mirrors the first is almost no work at all (a component sketch follows below). You can see below (and in the code) that I have added a read-only diagram that always reflects what you see in the main diagram. Both diagrams reflect the store, so both diagrams are always in sync:
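
A minimal sketch of such a read-only mirror component, with the feature key, selectors, and template details assumed for illustration:

```typescript
import { Component } from '@angular/core';
import { createFeatureSelector, createSelector, Store } from '@ngrx/store';
import { Observable } from 'rxjs';

interface Shape { x: number; y: number; }
interface DiagramState { shapes: { [id: string]: Shape }; }

// Hypothetical feature key and selectors
const selectDiagram = createFeatureSelector<DiagramState>('diagram');
const selectShapes = createSelector(selectDiagram, s => s.shapes);

@Component({
  selector: 'app-mirror-diagram',
  template: `
    <svg width="400" height="300">
      <!-- each rect simply reflects whatever position is in the store -->
      <rect *ngFor="let entry of shapes$ | async | keyvalue"
            [attr.x]="entry.value.x"
            [attr.y]="entry.value.y"
            width="50" height="50"></rect>
    </svg>
  `
})
export class MirrorDiagramComponent {
  shapes$: Observable<{ [id: string]: Shape }>;

  constructor(store: Store<{ diagram: DiagramState }>) {
    this.shapes$ = store.select(selectShapes);
  }
}
```

Because the main diagram and this mirror subscribe to the same selector, they can never drift out of sync.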

For those who are well versed in reactive programming, this is nothing new. I just wanted to share this technique for those who are interested in ways of using reactive design to handle intensive interaction with a graph, visualization, or editor-style UI. 🙂

Happy New Year – Welcome 2020

by admin on January 1, 2020, no comments

It has been a while since I have been able to blog about the things I’ve been working on for the past few years. While I still create interactive user interfaces, I have also spent the last few years focusing heavily on architecting extremely scalable and maintainable applications which are primarily constructed using Angular and NGRX (Angular’s reactive state management framework). The result has been applications that are performant and maintainable and the design allows for team members to add new features without stepping all over each other or breaking each other’s work.

By moving to this “reactive design”, we are finding that we can add new capabilities to the applications with amazing speed. One of my New Year’s resolutions is to get back to writing some blog entries describing what I have found along the way. I hope to be able to share how productive your platforms can become once you make the leap into reactive architectures 🙂

d3 Minimap v4 Update

by admin on September 12, 2017, 9 comments

Understanding d3 v4 ZoomBehavior in the Pan and Zoom Minimap

See the Pen d3 Minimap Pan and Zoom Demo by Bill White (@billdwhite) on CodePen.

It has been a while since I posted any new articles. I’ve been working non-stop with Angular 4 (formerly Angular 2), TypeScript, RxJS and d3 v4. I recently needed to add a minimap for a visualization and I decided to update my old minimap demo to make use of d3 version 4. While researching the updates to the zooming behavior, I came across this post on GitHub where another developer was trying to do the same thing (funny enough, also using my previous example). After reading Mike’s response about cross-linking the zoom listeners, I decided to give it a try. While implementing his suggestion, I learned quite a bit about how the new and improved zoom functionality works and I wanted to share that in this post.

My previous example did not make good use of the capabilities of d3’s ZoomBehavior. I was manually updating the transforms for the elements based on zoom events as well as directly inspecting the attributes of related elements. With the latest updates in d3 v4, creating this type of minimap functionality ended up being really simple and I was able to remove a lot of code from the old demo. I found that, while the release notes and the docs for the zoom feature are helpful, actually reading through the source is also enlightening. I was eventually able to boil it down to some basic bullet points.

Design

  1. There is a canvas component (the visualization) and a minimap component. I will refer to these as counterparts, with each making updates to the other. The canvas has a viewport (the innerwrapper element in my example) and a visualization (the pancanvas in my example). The minimap also has a viewport (the frame) and a miniature version of the visualization (the container). As the visualization is scaled and translated, the relative size and scale are portrayed in the minimap.
  2. The updated d3 v4 demo has two zoomHandler methods that each react separately to ‘local’ zoom events. One makes changes to the main visualization’s transform and scale. The other does the same to the minimap’s frame.
  3. There are also two ‘update’ methods. These are called by the zoomHandler in the counterpart component. Each update method will then call the local zoomHandler on behalf of the counterpart. This effectively conveys the ZoomBehavior changes made in one counterpart over to the other component.
  4. There will continue to be separate logic for updating the miniature version of the base visualization over to the minimap. (found in the minimap’s render() method)
  5. There will continue to be separate logic to sync the size of the visualization with the size of the frame. (also found in the minimap’s render() method)

Zoom Behavior notes

  1. The ZoomBehavior is applied to an element, but that does not mean it will zoom THAT element. It simply hooks up listeners on the element upon which it is called/applied and gives you a place to react to those events (the zoomHandler that is listening for the “zoom” event).
  2. The actual manipulation of the elements on the screen takes place in the zoomHandler, which receives a d3.event.transform value (the event is of type ZoomEvent for those of you using TypeScript). That event provides you with the details about what just happened (i.e. the transformation and scale that the user just performed on the ZoomBehavior’s target). At this point, you have to decide what to do with that information, such as applying that transformation/scaling to an element. Again, that element does not have to be the same element that the ZoomBehavior was originally applied to (as is the case here).
  3. We have to add a filtering if() check within each zoom handler to avoid creating an infinite loop. More on that in the next section.

Logic Flow

  1. We apply the ZoomBehavior on two elements (using <element>.call(zoom)): the InnerWrapper of the visualization’s canvas and the Container of the minimap. They will listen for user actions and report back using the zoomHandler.
  2. Once the zoomHandler is called, we take the d3.event.transform information it received and update some other element. In this demo, the InnerWrapper’s zoom events are applied to the PanCanvas, while the minimap Container’s events are used to transform the minimap’s Frame element.
  3. Once each zoom handler has transformed its own local target, it examines the event to see if it originated from its own local ZoomBehavior. If it did, the logic executes an ‘update()’ call over on its counterpart so that it can also be modified (a simplified sketch follows this list). So we get a sequence like this: “InnerWrapper(ZoomBehavior) –> zoomHandler(in Canvas component) –> updates PanCanvas element –> did zoom event occur locally? –> if so, update minimap”. And from the other side we have this: “minimap Container(ZoomBehavior) -> zoomHandler(in minimap) -> updates minimap Frame element -> did zoom event occur locally? -> if so, update visualization”. You can see how this could lead to an infinite loop, so the check to see if the event originated locally is vital.
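
Here is a simplified sketch of the canvas side of that sequence. The element ids and the counterpart’s update() method are assumptions for illustration, and the sourceEvent check stands in for the demo’s origin test (sourceEvent is only set when a real user gesture triggered the event):

```typescript
import * as d3 from 'd3'; // d3 v4-style API (d3.event)

// Hypothetical counterpart handle exposed by the minimap component
declare const minimap: { update(t: d3.ZoomTransform): void };

const innerWrapper = d3.select<SVGGElement, unknown>('#innerwrapper'); // id assumed
const panCanvas = d3.select<SVGGElement, unknown>('#pancanvas');       // id assumed

const canvasZoom = d3.zoom<SVGGElement, unknown>()
  .on('zoom', () => {
    const transform: d3.ZoomTransform = d3.event.transform;
    // The behavior listens on the InnerWrapper, but we transform the PanCanvas
    panCanvas.attr('transform', transform.toString());
    // Only forward the change when it originated locally from a user gesture;
    // updates pushed in from the minimap must not bounce back out.
    if (d3.event.sourceEvent) {
      minimap.update(transform);
    }
  });

innerWrapper.call(canvasZoom);
```

The minimap side mirrors this exactly, with the inversion logic applied before it forwards its own transforms.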

This diagram shows the general structure I’m describing:

So the end result is a “push me/pull you” type of action. Each side makes its own updates and, if necessary, tells the counterpart about those updates. A few other things to point out:

  1. Because the minimap’s frame (representing the visualization’s viewport) needs to move and resize inversely to the viewport, I modify the transform events within the minimap when they are received and before they are sent out. This encapsulates the inversion logic in one place and puts that burden solely on the minimap component.
  2. When modifying a transform, ordering matters. Mike Bostock mentions this in the docs, but I still got tripped up by this when my minimap was not quite in sync with the visualization. I had to scale first, then apply the transform.
  3. Rather than using the old getXYFromTranslate() method that parses the .attr(“transform”) string property off an element, it is much easier to use the method d3.zoomTransform(elementNode) to get this information (remember, that method works with nodes, not selections), as the short example below shows.
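
A quick sketch of reading that stashed transform (the element id is assumed):

```typescript
import * as d3 from 'd3';

// d3.zoomTransform() reads the transform stashed on a DOM node (its __zoom
// property), so pass it a node rather than a d3 selection.
const wrapperNode = d3.select('#innerwrapper').node() as Element; // id assumed
const transform = d3.zoomTransform(wrapperNode);
console.log(transform.x, transform.y, transform.k);
```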

At this point, the design works. However, there’s another problem waiting for us:

When the user moves an element on one side, the counterpart on the other gets updated. However, when the user goes to move the counterpart element, the element will “jump back” to the last position that its own ZoomBehavior triggered. This is because, when the ZoomBehavior on an element contacts its zoomHandler, it stashes the transform data for future reference so it can pick up where it left off on future zoom actions. This ‘stashing’ only happens when the ZoomBehavior is triggered from UI events (user zooming/dragging etc). So when we manually update the PanCanvas in response to something other than the ZoomBehavior’s ‘zoom’ event, the stashing does not occur and the state is lost. To fix this, we must manually stash the latest transform information ourselves when updating the element outside of the ZoomBehavior’s knowledge. There’s another subtle point here that briefly tripped me up: the ZoomBehavior stashes the zoom transform on the element to which it was applied, NOT the element upon which we are acting. So when the ZoomBehavior hears zoom events on the InnerWrapper, it updates the __zoom property on the InnerWrapper. Later on, when the minimap makes an update call back to the visualization, we have to manually update that property on the InnerWrapper, even though we are using that data to transform the PanCanvas in the zoomHandler.
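
In code, that manual stash is essentially a one-liner. A hedged sketch (element id assumed; __zoom is d3-zoom’s internal storage property, which d3.zoomTransform() reads back):

```typescript
import * as d3 from 'd3';

declare const newTransform: d3.ZoomTransform; // transform received from the minimap

// After programmatically transforming the PanCanvas (outside the ZoomBehavior),
// stash the transform on the element that OWNS the behavior: the InnerWrapper.
const innerWrapperNode = d3.select('#innerwrapper').node() as any; // id assumed
innerWrapperNode.__zoom = newTransform;
```

Without this, the next user gesture on the InnerWrapper starts from the stale transform and the canvas visibly jumps.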

So here is the final interaction:

  1. User moves the mouse over the PanCanvas
  2. The ZoomBehavior on the InnerWrapper hears those events and saves that transform data in the __zoom property on the InnerWrapper.
  3. The ZoomBehavior then emits the ‘zoom’ event which is handled by the local zoomHandler in the visualization (canvas component in the demo)
  4. The zoomHandler will apply the transform to the PanCanvas to visibly update its appearance in response to the mouse actions from step #1
  5. The zoomHandler looks at the event and if it determines that it originated locally, it makes an update call over to the minimap so it can be updated
  6. The minimap’s update handler inverts the transform event data and applies it to the Frame element
  7. Because the minimap’s updates to the Frame element occurred outside of the minimap’s own ZoomBehavior, we stash the latest transform data for future state reference. Note: we stash that state, not on the Frame element, but on the minimap’s Container element, because that is the element to which the minimap’s ZoomBehavior was applied and that is where it will look for previous state when subsequent ZoomBehavior events are fired as the user mouses over the minimap.
  8. The minimap’s zoomHandler is called by the update method which applies the matching minimap appearance update to the Frame element.
  9. The minimap’s zoomHandler determines the update event did not come from the local ZoomBehavior and therefore it does not call the update method on the visualization, thus preventing an infinite loop.

Hopefully this will save you some time and help you understand how the d3 v4 ZoomBehavior can be used for functionality such as this demo. 🙂

Book Review: Learning D3.js Mapping

by admin on February 24, 2015, one comment

I was recently asked to review a copy of PacktPub’s “Learning D3.js Mapping” by Thomas Newton and Oscar Villarreal. I’ve done a good deal of d3 development, but the mapping portion of the library has not received much of my attention until now. A fellow d3 developer and leader of our local d3 Meetup group, Andrew Thornton, has done considerable work with d3-driven mapping applications and it is something I’ve always wanted to delve into but never really had the chance. This book served as that motivation, and while it is relatively short, it provides a great introduction to generating maps in d3 and demonstrates just how powerful the library truly is for creating geographic visualizations.

The book starts by laying out the basic setup for running the provided code samples. The authors suggest running the code using a local Node.js http-server, but you can just as easily run the examples directly from the downloaded code samples folder, or use an online IDE such as CodePen or JSBin, although you will need access to the hosted sample JSON data files used in the examples. A basic foundation of SVG fundamentals is also provided, which can be useful for those just getting started with SVG-based data visualizations. While more seasoned d3 developers will be inclined to skip this section, the authors focus a good deal on SVG elements such as paths, which foreshadows their importance in creating maps later in the book. The third chapter is a d3 primer, providing some history behind the library itself. The enter/update/exit lifecycle is described, although novice readers will want to read further about this before attempting the more advanced d3 code samples.

The meat of the book starts with chapter 4, where you create your first map. The reader is introduced to projections, which handle the conversion of coordinates from one space to another. The authors do a good job of covering supporting topics such as scaling and translations, which may be new to some readers. Later in the chapter, an approach to creating choropleths (a fancy way of saying “color-filled maps”) is discussed, which introduces beginners to the use of d3 scales for mapping colors to values, handling click events and adding transitions to spruce up the user interaction. Again, nothing new for most d3 experts, but the authors are clearly trying not to leave anyone out with respect to programming backgrounds. Below is an example of one of the code samples discussed in chapter 4:

See the Pen d3 Mapping demo 1 by Bill White (@billdwhite) on CodePen.

Chapter 5 is where the book really hits its stride. Starting with the addition of tooltips and then diving into the use of pan and zoom (one of my personal favorites), this chapter takes a basic d3-generated map and turns it into a functional data visualization. The reader learns about orthographic projections (representing three dimensions in a two-dimensional format) such as this:

See the Pen d3 Mapping demo 2 by Bill White (@billdwhite) on CodePen.

Of course, maps are cool to display using d3, but a big piece of the puzzle is obtaining the mapping data to drive the visualization. Chapter 6 discusses the difference between shapefiles, GeoJSON and TopoJSON file types and even shows you where to get free shapefiles, although shapefiles are the largest and most inefficient of the three types. I found this chapter to be very useful, since understanding the mapping source data is key to creating solid mapping visualizations. Up to this point, I had never really understood the difference between these types, nor was I aware that Mike Bostock, the creator of d3, was also behind the TopoJSON project. (It is amazing how much code developers rely upon every day without knowing the names of the key contributors or understanding the motivation for its initial creation. This is yet further proof that open source software makes the world go ’round.)
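
To illustrate why TopoJSON is so compact: it stores shared topology once and is converted back to GeoJSON at runtime. A minimal sketch, assuming the current topojson-client package and a hypothetical ‘countries’ object key:

```typescript
import * as topojson from 'topojson-client';

declare const topology: any; // the parsed contents of a TopoJSON file

// Convert the named object back into a GeoJSON FeatureCollection;
// its features can then be bound to d3 paths like any other GeoJSON.
const geojson = topojson.feature(topology, topology.objects.countries);
```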

The final chapter reviews unit testing techniques, which seems a bit out of place, but is helpful and informative nonetheless. Unit testing is clearly an important aspect of writing good code. However, I cannot help but think that if what we want is more scalable, maintainable and testable code, then getting away from a weakly-typed, inconsistently-implemented, functional language such as Javascript would be a good first step. But since Javascript is really the only viable option we have these days, we have to make the best of it, and the techniques the authors discuss here are certainly worth considering.

Overall, I enjoyed the book and would certainly recommend it to anyone looking to create maps using d3. The d3 library itself is quite challenging in its own right, but this book gives you enough background that you can get some geographic visualizations up and running in very little time.

D3 in 3D: Combining d3.js and three.js

by admin on January 12, 2015, 6 comments

Along with d3, the three.js library is one of the most powerful javascript libraries available for creating slick visualizations. I wondered if it would be possible to create data visualizations in threejs as easily as I had done using d3. One thing that d3 does very well is take your data and apply a layout algorithm to it for use in drawing visualizations such as treemaps and piecharts. While I’m sure there are other ways to generate this layout data, the d3 approach is the one I’m most comfortable with, so I set out to use that layout data to render 3D versions of those same visualizations.

This first demo shows a WebGL (or Canvas, if you’ve got a crappy browser like IE) rendered treemap that was created using the d3 treemap layout. (I had to use JSBin instead of CodePen for the demos because the embedded CodePen snippets kept locking up a few seconds after loading; no idea why.)

See the Pen d3 3D Treemap by Bill White (@billdwhite) on CodePen.

The results work pretty well. The tweening is handled using TweenJS, which is what the threejs folks use. I wanted to do the updates using d3 transitions, but the Tween code from the examples I had seen worked straight away, so maybe I can switch over to d3 transitions in the future. Also, when you set the zscale to 0, you see a bit of color bleeding and I have no idea why. (Let me know if you have a solution.)

First, I must give credit to Steve Hall who has also investigated this concept. Last year I explored ways to create 3D visualizations with threejs using layout data from d3, but I hit a wall when I was unable to get the coordinates to translate between the two worlds correctly. Steve also worked on this idea and wrote some fantastic blog entries about it. His work focused more on creating mini-visualizations, attaching them to DOM nodes and then having threejs render those nodes in 3D. After reading his work, I contacted him to ask if he knew how to correctly translate the coordinates and, to my relief, he gladly provided the solution. So in the interest of full disclosure, my work here was greatly facilitated by his assistance. Thanks Steve!

The solution I landed on is relatively straightforward: Take a dataset, run it through d3 to apply layout positioning information to each data item and then render those items in threejs rather than the normal DOM-oriented CSS styling that we use with d3. In a typical d3 visualization, the data is bound to the DOM nodes and you use the enter/update/exit lifecycle to add, modify and remove those nodes. With threejs, we are no longer adding stuff to the DOM tree (unless we use the CSS3DRenderer, but more on that in a second) because the 3D world is rendered using WebGL (or Canvas when WebGL is unsupported).

So I am taking the dataset that has been processed by d3 and then adding each item to the threejs scene (sketched below). I add, update and remove items from the scene in reaction to the add/update/remove calls from the d3 general update lifecycle. Interestingly, I’m actually adding DOM nodes to the tree using d3, but they are empty and unstyled, so they end up serving only as placeholders for the current set of rendered data in the threejs scene. If the data changes, the d3-generated empty nodes are added/updated/removed and then, for each change, a corresponding change is made to the objects in the threejs scene.
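
Here is a condensed sketch of that idea, using the d3 v3 layout API that was current at the time plus three.js. The dimensions, color, and scene setup (including lighting) are assumptions for illustration:

```typescript
import * as d3 from 'd3';     // d3 v3-era API assumed (d3.layout.treemap)
import * as THREE from 'three';

declare const scene: THREE.Scene; // assumed to exist from the usual three.js setup
declare const rootData: any;      // hierarchical data for the treemap
const width = 800, height = 600, depth = 20; // illustrative dimensions

// Run the d3 layout purely for geometry; d3 draws nothing itself here.
const treemap = d3.layout.treemap().size([width, height]);
const leaves = treemap.nodes(rootData).filter((d: any) => !d.children);

leaves.forEach((node: any) => {
  const geometry = new THREE.BoxGeometry(node.dx, node.dy, depth);
  const material = new THREE.MeshLambertMaterial({ color: 0x4477aa }); // color illustrative
  const mesh = new THREE.Mesh(geometry, material);
  // d3 uses a top-left origin with y growing downward; three.js uses a centered
  // origin with y growing upward, so shift and flip to line the two worlds up.
  mesh.position.set(
    node.x + node.dx / 2 - width / 2,
    height / 2 - (node.y + node.dy / 2),
    depth / 2
  );
  scene.add(mesh);
});
```

The shift in position.set() reflects the corner-versus-center mismatch between the two coordinate systems: d3's x/y describe a box's top-left corner, while a three.js mesh is positioned by its center.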

Steve’s demos are implemented using the CSS3DRenderer which actually displays the 3D objects using only CSS3 transforms. Because of this, you are actually still in the d3 data-bound-to-the-DOM-node situation. It is really the best of both d3 and threejs because you can take full advantage of d3 selections and still have threejs display the 3D results using those same nodes. However, if you want your 3D visualization to look, well, three-dimensional, you have to bridge the gap between the d3 generated DOM nodes and the corresponding objects in the threejs scene yourself.
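
For reference, a minimal sketch of wrapping a DOM node for the CSS3DRenderer (modern module path shown; the node id is hypothetical):

```typescript
import * as THREE from 'three';
import { CSS3DRenderer, CSS3DObject } from 'three/examples/jsm/renderers/CSS3DRenderer.js';

declare const scene: THREE.Scene;              // assumed from the usual setup
declare const camera: THREE.PerspectiveCamera; // assumed from the usual setup

// Wrap a d3-created DOM element so three.js can position it with CSS3 transforms
const element = document.getElementById('cell-42') as HTMLElement; // hypothetical node
const cssObject = new CSS3DObject(element);
cssObject.position.set(0, 0, 0);
scene.add(cssObject);

const renderer = new CSS3DRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```

Because the element is still a real DOM node, d3 selections and data binding keep working on it even while three.js controls its placement.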

This second demo uses the CSS3DRenderer, with d3 creating the DOM elements that are then manipulated by threejs. You can see that while it is nice, it is not quite as fancy as the first demo:

See the Pen CSS3DRenderer Treemap by Bill White (@billdwhite) on CodePen.