Arcade Games – 2021

I’m a nostalgia junkie at heart. As a kid, I could never waste a quarter on an arcade game, no matter how badly I wanted to play. Having a full-sized arcade game in the house was something 99.9999% of kids could only dream of at the time. I always dreamed I would get one if I ever got the opportunity. Nowadays, it seems silly, since a phone can outperform any free-standing game ever made. But there was just something about being there in front of that big cabinet, grasping those custom-made controls, that really took you into the moment.

Back in 2008, the opportunity arrived. I bought a Spy Hunter (in rough shape, for about $900) and had it delivered to my one-bedroom apartment. I caught the fever, and over the next 13 years I slowly added games from friends and online sellers. I’ve become quite adept at diagnosing and repairing these machines. I’ve even fully restored several of them, including the PaperBoy, the Cheyenne, and the Spy Hunter, to like-new condition. Here is how it looks today:

The collection includes: Galaga, Star Wars (with the sought-after Amplifone monitor), a PaperBoy, Cheyenne (which I loved to play at Six Flags back in the day), Spy Hunter, Ikari Warriors, Asteroids Deluxe, Lunar Lander (the most pristine example in existence), Midway’s Gun Fight (the first arcade game to ever feature a microprocessor), a 1956 Chicago Coin Steam Shovel, a fairly rare Track and Field cocktail, a Haunted House pinball, and a 1989 Black Knight 2000 pinball machine, along with its 1980 predecessor, The Black Knight. The red and blue machines in the back are 1971 Computer Space machines. These are the first arcade video games ever made and sold to the public (by Nolan Bushnell, who would go on to create Pong and then found obscure companies such as Atari and Chuck E. Cheese). They are 2 of fewer than 100 surviving examples, and 2 of about 60 known to still function.

Classic Mac SE/30

Wanted to share my latest addition to the classic Mac collection: a vintage 1989 Mac SE/30. I loved this machine, and at the time it was released, it would have cost nearly $6000 fully loaded. This monster was considered the premium Mac from its release in 1989 well into the early 1990s. Even when the Macintosh Color Classic was released in 1993, the SE/30 could still run circles around it. It could (eventually) accommodate up to 128 MB of RAM (insane for that era), and it was considered a powerhouse even though it only sported a black-and-white monitor.

I grabbed this one off eBay and retrofitted it with a Mac SD card and a recapped motherboard. I even used my 3D printer to design and print a custom bracket to fit the Mac SD card, leaving the slot accessible in the back so I can swap out the SD card. I sent the Mac SD card creator the .stl files in case anyone needs them:

The finished result is a thing of beauty. 🙂

Mac Color Classics!

I wanted to take a moment to write up a post about my full set of Mac Color Classics. This was the machine that captivated me back in college. During my undergraduate degree at Texas, I had to write a paper and ventured into a small computer lab in Burdine Hall. I waited my turn and sat down to face a small, seemingly simple computer: a 1993 Macintosh Color Classic. Even though it was considered underpowered at the time, I credit this computer with changing my career. Right to left: the Mac Color Classic, the Mac Color Classic II, and the Mac Performa 275:

At that time, I wasn’t exactly sure what the future held for me. I was drifting from one potential career track to another and had settled on becoming a police officer or an FBI agent (it sounds silly in retrospect; kind of like wanting to be a ‘superhero’ for a career 🙂 ).

I grew up on the sales floor of my late father’s Radio Shack franchise in Denton, TX. From the time I was 8 years old, I was stocking shelves or selling VCRs and radar detectors to incredulous adults who considered it strange to be getting a sales pitch from a kid who couldn’t even drive a car. I always loved computers, particularly the TRS-80 Model 4 all-in-one that my dad would bring home for me to tinker with on the weekends. (Stores were closed on Sundays back then, so he didn’t need a floor model for about 36 hours, giving me time to recreate all the code scenes from WarGames on that beautiful Model 4 with dual 5 1/4″ floppies.)

By the time I got to UT, I was no longer enamored with PCs. They were good for games but not much else. My visit to Burdine Hall changed all that. I absolutely loved the way the UI looked and the way I could highlight and manipulate the text on the screen. The clean lines and smooth experience made me giddy to return for future papers. At the time, those systems ran around $2300 or so, way beyond the means of a kid who was paying his own way through college with Pell Grants and student loans. As much as I loved that computer, it would have to stay a dream for a while longer.

I finally managed to grab one off eBay a while back. In addition, I also snagged its big brother and big sister: the Mac Color Classic II and the Mac Performa 275. These were basically the Mac Color Classic “done right”. They had a faster data bus and more memory, and were truly performant for the time period. They were only sold in Canada and Japan (hence the alternate characters on the keyboards). At the time, I never even knew they existed. But I’m thrilled to have them in my collection today. 🙂

Blog Archive

I have been so busy these past few years that I have not had a chance to contribute to my blog. While I am not actively adding content, I plan to keep it open in case anyone finds the previous articles of interest. 🙂

Angular Development Best Practices

I wanted to take a moment and write down some of the things I have learned working with Angular for the past four years. These are some practices that I have found very helpful when building large applications. My suggestions and recommendations are based on my experience of trying to make a scalable platform that allows multiple developers to contribute to the codebase without stepping all over each other or introducing lots of regressions. This applies only to the latest version(s) of Angular.

1. Update package.json dependencies often

A while back, a company I was working for based their entire UI on the Dojo library. This was a very viable choice at the time, but we were not diligent about keeping that library up to date with the latest revisions, even though it was only being updated every month or two at the time. This was partially due to the extra workload, but also because of the risk of introducing security flaws and bugs without having time to test for either.

It was not really a problem until a couple of years later, when we realized we were not just a version or two behind: the API of the library had changed so significantly that understanding the refactoring work involved became an almost insurmountable task. As a result, the codebase became stuck in a sort of hybrid state, with some parts updated while others remained off-limits, locked down by old technology. It was eventually left to wither in the background as newer, more flexible supporting applications grew up around it.

At the time, I understood this reluctance to update the versions of that library. But now, the world has changed with the advent of the NPM ecosystem. I had seen first-hand how letting technical debt pile up left us with a hefty bill, and so I have adopted a new strategy: I firmly believe you should check for updates in your package.json at least once a week and do your best to keep it current. This comes with a few caveats and hiccups, but I assure you it is much better than the alternative.

My current approach is as follows: when it comes to the dependencies in my package.json file, I keep them current with an almost fanatical attitude. I use the npm-check-updates library, which provides an ‘ncu‘ executable that queries for updates to all of the packages in the project. Once I see which libraries have been updated, I examine the results. Since the versions are given in semantic versioning syntax, changes to minor and patch versions are applied immediately and then smoke-tested. Major versions will usually prompt me to check out that project’s README to see what breaking changes have been made. In most cases, I will also implement and smoke-test those updates and see if there are any obvious problems. Obviously, the more extensive your testing facilities are, the safer this all becomes. However, I have found that 90%+ of the time, there is little to no impact on the codebase.

The issues that do immediately become apparent can be handled directly, without having to spend countless hours trying to figure out which update just broke something (because by doing this so frequently, you will only have 2-5 updates a week to worry about). If I update, for example, my Immer package and the application immediately throws an error on startup, I know exactly what caused it and exactly where to look. Try doing that 6 months later, after you find 40 different version updates to your package.json file, and then try to find the time to update and test them one by one. Frequent checks break this down into a manageable, bite-sized task that is simple to resolve. Plus, you can file bug reports quickly and find out whether others have hit the same issue, since the conversation around that particular update will still be current. If you find a problem after 6 months have passed, the developer may be unmotivated to retroactively apply the fix you are requesting.
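To make the triage step concrete, here is a minimal sketch of the decision logic in TypeScript. The function name and shape are my own illustration of the process described above; they are not part of npm-check-updates:

```typescript
// Sketch of the weekly-update triage (names hypothetical): given a current
// and a latest semver string, decide how cautiously to apply the update.
type UpdateRisk = "patch" | "minor" | "major";

function classifyUpdate(current: string, latest: string): UpdateRisk {
  const [curMajor, curMinor] = current.split(".").map(Number);
  const [newMajor, newMinor] = latest.split(".").map(Number);
  if (newMajor !== curMajor) return "major"; // read the README for breaking changes
  if (newMinor !== curMinor) return "minor"; // update, then smoke-test
  return "patch";                            // update immediately
}

// Example: a bump from 9.0.6 to 10.0.0 is flagged as a major-version risk.
console.log(classifyUpdate("9.0.6", "10.0.0")); // "major"
console.log(classifyUpdate("9.0.6", "9.1.0"));  // "minor"
```

In practice the classification is done by eye when scanning the ncu output, but the rule of thumb is the same: majors get a README review, minors and patches get applied and smoke-tested.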

We have been keeping our package.json current on a weekly basis for nearly 4 years now and the number of problems caused by this approach has been surprisingly low. Every so often we will find a glitch that was caused by a package update that was missed by the smoke and unit tests, but it is usually easy to track down and helps us build up our suite of tests to prevent it from being missed again in the future.

2. Wrap external package implementations to make swapping easier

Related to the recommendation above, I was updating a logger library we used in a project. It turned out that the library had a major version release that changed its module structure, which meant it would no longer work with IE11 (big surprise). As much as I would love to join the movement and leave all things IE behind, project managers never want to let it go and demand that we support that decrepit old browser. I began to contemplate swapping the logging package for another. However, I realized that I had imported that particular library by name in at least 100 files in my application.

This was not the end of the world, but with something as familiar and consistent as a logger, I figured the interface between my code and any particular logging package should be easy to abstract behind a standard wrapper. I created a Logger class of my own, gave it the common logging methods such as debug(), info(), error(), etc., and then used it to wrap the implementation of the actual logger provided by the NPM library, so the dependency now resides in one single location. After about an hour of updating those 100+ files to point to my internal Logger wrapper, I now have a codebase that will allow me to swap out logger package implementations with only one touchpoint.

This may not be feasible for every package you rely upon, but for things like loggers, toasters, export utilities, etc., it may be worth trying to isolate those libraries behind a common wrapper class that will not change. This will make it far less painful to swap out NPM packages that provide the same function.
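A minimal sketch of what such a wrapper might look like in TypeScript follows. The interface and class names are illustrative, not the actual ones from my project:

```typescript
// The logging contract our application code depends on.
interface LogSink {
  debug(msg: string): void;
  info(msg: string): void;
  error(msg: string): void;
}

// The ONE place in the codebase that knows about the third-party package.
// Application code imports Logger; only this file touches the NPM library.
class Logger implements LogSink {
  constructor(private impl: LogSink) {}
  debug(msg: string): void { this.impl.debug(msg); }
  info(msg: string): void { this.impl.info(msg); }
  error(msg: string): void { this.impl.error(msg); }
}

// Swapping logging packages now means changing only this adapter object:
const consoleSink: LogSink = {
  debug: (m) => console.debug(m),
  info: (m) => console.info(m),
  error: (m) => console.error(m),
};

const log = new Logger(consoleSink);
log.info("application started");
```

The key design point is that the 100+ call sites only ever see the Logger class, so a breaking change in the underlying package is absorbed in one file.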

3. Make use of a redux style store (either NGRX or NGXS)

I cannot stress this one enough: from the start, you will want a reactive-style store as a “single source of truth” that you can use to display data in your components. While I have used NGRX for a couple of years now, I have a special place in my heart for NGXS. It is definitely more “Angular-like” in its application and requires a lot less code to get the same results as NGRX. The only reason I cling to NGRX is its larger support base and higher popularity. However, if I were putting together a small-to-medium-sized application today, I would almost certainly use NGXS.
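For readers unfamiliar with the pattern, here is a tiny, framework-free sketch of the “single source of truth” idea that both NGRX and NGXS implement. This is not the API of either library, just the underlying concept:

```typescript
// Minimal redux-style store: one state object, pure reducer, subscribers.
type Listener<S> = (state: S) => void;

class Store<S, A> {
  private listeners: Listener<S>[] = [];
  constructor(private state: S, private reducer: (s: S, a: A) => S) {}

  dispatch(action: A): void {
    this.state = this.reducer(this.state, action); // derive the new state
    this.listeners.forEach((l) => l(this.state));  // notify every view
  }

  select(listener: Listener<S>): void {
    this.listeners.push(listener);
    listener(this.state); // emit the current state immediately
  }
}

// Two "components" subscribing to the same store always agree:
const store = new Store({ count: 0 }, (s, _a: { type: "inc" }) => ({ count: s.count + 1 }));
store.select((s) => console.log("component A sees", s.count));
store.select((s) => console.log("component B sees", s.count));
store.dispatch({ type: "inc" }); // both components now see 1
```

Both libraries add a great deal on top of this (actions as classes or creators, selectors, effects, devtools), but the single-store-many-subscribers shape is the core of why the approach scales.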

4. IntelliJ Plugins for Angular are helpful

We have chosen to go with a separate-file structure for the code, template, and stylesheet (ts/html/css) of each component. I find that our classes and templates are usually large enough that keeping them separate reduces the clutter. The majority of our developers use IntelliJ to edit their code, and there is a great little plugin that will collapse those three files into a single group in the project tree. This means you no longer see all three files representing a component, but instead see a rolled-up node representing them all, which can be expanded to get access to a specific one. Check it out.

I hope you find these recommendations useful. If you have any questions or feedback, feel free to leave them in the comments section 🙂

Angular Drag and Drop the Reactive Way with NGRX

I’ve been working with some highly interactive user interfaces lately and wanted to build them in a reactive style rather than the imperative way I have created such applications in the past. I found that Angular combined with NGRX (Angular’s redux pattern) makes this type of architecture a breeze. It is extremely scalable and performant, because once you get the desired content into the store, you can reference it in various parts of the UI with very little effort. Having the ‘single source of truth‘ allows you to ditch all of the manual “update-then-retrieve-and-display” coding you would normally have to handle.

One thing that was a bit counter-intuitive was making the UI render based upon the contents of the NGRX store rather than using the traditional imperative instructions. In the case of an interactive diagram, when the user moves something on the screen, instead of updating the DOM elements in the UI, we instead update the NGRX store, which in turn applies the updates to the elements on the screen to reflect the current state.

The benefits of this are tremendous. First, you get massive scalability with a redux-style application. For example, if you have a detail section on the screen, the ‘title’ field will always show the same value as the shape that is selected in the diagram since that value is coming from a single place in the store.

You also get the ability to add application-wide features like undo/redo with very little effort. For this demo, I have implemented a simple “undo move” function that simply rolls back the initial move action. However, there exist several NGRX undo/redo libraries that can be added to an application to extend that capability beyond just a single action.

Below you can see a demo of what I am describing. At first glance, it looks fairly simple and generic: you select a shape and drag it around, and you see its current details in the form on the right. No big deal, right? However, if you take a look at the code, you will find that there is more going on here. The mouse listeners determine the new x/y position of the selected shape being dragged, and then an NGRX action is created and dispatched. This action updates those values in the NGRX store. The store determines what is displayed on the screen at any given time, so updating the store will in turn update the shape’s position on the screen. It feels like you are moving the shape, but in reality you are just updating the NGRX store.
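The reducer side of that flow can be sketched in plain TypeScript. This is a simplified, framework-free illustration of the pattern; the real demo uses NGRX actions and reducers, and all of these names are my own:

```typescript
// State shape for the diagram, plus a snapshot of previous positions
// so the "undo move" described above can roll the move back.
interface Shape { id: string; x: number; y: number; }
interface DiagramState { shapes: Shape[]; previous?: Shape[]; }

type DiagramAction =
  | { type: "MOVE_SHAPE"; id: string; x: number; y: number }
  | { type: "UNDO_MOVE" };

// Pure reducer: the mouse handler computes x/y and dispatches MOVE_SHAPE;
// the UI simply renders whatever state this function produces.
function diagramReducer(state: DiagramState, action: DiagramAction): DiagramState {
  switch (action.type) {
    case "MOVE_SHAPE":
      return {
        previous: state.shapes, // remember old positions for undo
        shapes: state.shapes.map((s) =>
          s.id === action.id ? { ...s, x: action.x, y: action.y } : s),
      };
    case "UNDO_MOVE":
      return state.previous ? { shapes: state.previous } : state;
    default:
      return state;
  }
}

let state: DiagramState = { shapes: [{ id: "rect1", x: 10, y: 10 }] };
state = diagramReducer(state, { type: "MOVE_SHAPE", id: "rect1", x: 50, y: 80 });
state = diagramReducer(state, { type: "UNDO_MOVE" });
console.log(state.shapes[0]); // back to x: 10, y: 10
```

Because the reducer is pure, every view of the store (the diagram, the detail form, the mirrored read-only diagram) derives from the same state object and can never disagree.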

This diagram shows the sequence of what is occurring:

Logic Flow – Mouse movement creates updates to NGRX store which are reflected in the UI

So why is this interesting? Well, first, it opens up a whole lot of possibilities when it comes to your designs. You can see that the shape information form on the right is always in sync with the currently selected shape. That type of updating and syncing takes minimal effort once the information is in the store. It is nothing to add another diagram that mirrors the first: you can see below (and in the code) that I have added a read-only diagram that always reflects what you see in the main diagram. Both diagrams render from the store, so both diagrams are always in sync:

For those who are well versed in reactive programming, this is nothing new. I just wanted to share this technique for those who are interested in ways of using reactive design to handle intensive interaction with a graph, visualization, or editor-style UI. 🙂

Happy New Year – Welcome 2020

It has been a while since I have been able to blog about the things I’ve been working on. While I still create interactive user interfaces, I have spent the last few years focusing heavily on architecting extremely scalable and maintainable applications, primarily constructed using Angular and NGRX (Angular’s reactive state management framework). The result has been applications that are performant and maintainable, with a design that allows team members to add new features without stepping all over each other or breaking each other’s work.

By moving to this “reactive design”, we are finding that we can add new capabilities to the applications with amazing speed. One of my New Year’s resolutions is to get back to writing some blog entries describing what I have found along the way. I hope to be able to share how productive your platforms can become once you make the leap into reactive architectures 🙂

d3 Minimap v4 Update

Understanding d3 v4 ZoomBehavior in the Pan and Zoom Minimap

See the Pen d3 Minimap Pan and Zoom Demo by Bill White (@billdwhite) on CodePen.

It has been a while since I posted any new articles.  I’ve been working non-stop with Angular 4 (formerly Angular 2), TypeScript, RxJS, and d3 v4.  I recently needed to add a minimap for a visualization, so I decided to update my old minimap demo to make use of d3 version 4.  While researching the updates to the zooming behavior, I came across this post on GitHub where another developer was trying to do the same thing (funny enough, also using my previous example).  After reading Mike’s response about cross-linking the zoom listeners, I decided to give it a try.  While implementing his suggestion, I learned quite a bit about how the new and improved zoom functionality works, and I wanted to share that in this post.

My previous example did not make good use of the capabilities of d3’s ZoomBehavior.  I was manually updating the transforms for the elements based on zoom events, as well as directly inspecting the attributes of related elements.  With the latest updates in d3 v4, creating this type of minimap functionality ended up being really simple, and I was able to remove a lot of code from the old demo.  I found that, while the release notes and the docs for the zoom feature are helpful, actually reading through the source is also enlightening.  I was eventually able to boil it down to some basic bullet points.



  1. There is a canvas component (the visualization) and a minimap component.  I will refer to these as counterparts, with each making updates to the other.  The canvas has a viewport (the innerwrapper element in my example) and a visualization (the pancanvas in my example).  The minimap also has a viewport (the frame) and a miniature version of the visualization (the container).  As the visualization is scaled and translated, the relative size and scale are portrayed in the minimap.
  2. The d3 v4 updated demo has 2 zoomHandler methods that each react separately to ‘local’ zoom events.  One makes changes to the main visualization’s transform and scale.  The other does the same to the minimap’s frame.
  3. There are also 2 ‘update’ methods.  These are called by the zoomHandler in the counterpart component.  Each update method will then call the local zoomHandler on behalf of the counterpart.  This effectively conveys the ZoomBehavior changes made in one counterpart over to the other component.
  4. There will continue to be separate logic for updating the miniature version of the base visualization over to the minimap. (found in the minimap’s render() method)
  5. There will continue to be separate logic to sync the size of the visualization with the size of the frame. (also found in the minimap’s render() method)


Zoom Behavior notes

  1. The ZoomBehavior is applied to an element, but that does not mean it will zoom THAT element.  It simply hooks up listeners to that element upon which it is called/applied and gives you a place to react to those events (the zoomHandler that is listening for the “zoom” event).
  2. The actual manipulation of the elements on the screen will take place in the zoomHandler which receives a d3.event.transform value (which is of type ZoomEvent for those of you using Typescript).  That event provides you with the details about what just happened (i.e. the transformation and scale that the user just performed on the ZoomBehavior’s target).  At this point, you have to decide what to do with that information, such as applying that transformation/scaling to an element.  Again, that element does not have to be the same element that the ZoomBehavior was originally applied to (as is the case here).
  3. We have to add a filtering if() check within each zoom handler to avoid creating an infinite loop.  More on that in the next section…


Logic Flow

  1. We apply the ZoomBehavior to two elements (using <element>.call(zoom)): the InnerWrapper of the visualization’s canvas and the Container of the minimap.  They will listen for user actions and report back via the zoomHandler.
  2. Once the zoomHandler is called, we will take the d3.event.transform information it received and update some other element.  In this demo, the InnerWrapper’s zoom events are applied to the PanCanvas, while the minimap’s Container events are used to transform the minimap’s Frame element.
  3. Once each zoom handler has transformed its own local target, it then examines the event to see if it originated from its own local ZoomBehavior.  If it did, the logic executes an ‘update()’ call over on its counterpart so that it can also be modified.  So we get a sequence like this:  “InnerWrapper (ZoomBehavior) –> zoomHandler (in canvas component) –> updates PanCanvas element –> did the zoom event occur locally? –> if so, update minimap”.  And from the other side we have this: “minimap Container (ZoomBehavior) -> zoomHandler (in minimap) -> updates minimap Frame element -> did the zoom event occur locally? -> if so, update visualization”.  You can see how this could lead to an infinite loop, so the check to see whether the event originated locally is vital.
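Here is a framework-free sketch of that two-sided flow, with plain objects standing in for d3’s ZoomBehavior, to show why the local-origin check prevents the loop (all names here are illustrative, not from the demo):

```typescript
// Each "side" (canvas or minimap) applies transforms locally and, only when
// the event originated locally, pushes the update to its counterpart.
interface Transform { x: number; y: number; k: number; }

class ZoomSide {
  counterpart?: ZoomSide;
  transform: Transform = { x: 0, y: 0, k: 1 };

  // Called both for local UI events and for updates pushed by the counterpart.
  zoomHandler(t: Transform, originatedLocally: boolean): void {
    this.transform = t; // apply to the local target element
    if (originatedLocally && this.counterpart) {
      // Pass `false` so the counterpart does NOT call back: no infinite loop.
      this.counterpart.zoomHandler(t, false);
    }
  }
}

const canvas = new ZoomSide();
const minimap = new ZoomSide();
canvas.counterpart = minimap;
minimap.counterpart = canvas;

canvas.zoomHandler({ x: 20, y: 30, k: 2 }, true); // user pans/zooms the canvas
console.log(minimap.transform); // minimap followed along: { x: 20, y: 30, k: 2 }
```

Without the `originatedLocally` flag (the if() check mentioned in the previous section), each side would keep re-notifying the other forever.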

This diagram shows the general structure I’m describing:

So the end result is a “push me/pull you” type of action.  Each side makes its own updates and, if necessary, tells the counterpart about those updates.  A few other things to point out:

  1. Because the minimap’s frame (representing the visualization’s viewport) needs to move and resize inversely to the viewport, I modify the transform events within the minimap when they are received and before they are sent out.  This encapsulates the inversion logic in one place and puts that burden solely on the minimap component.
  2. When modifying a transform, ordering matters.  Mike Bostock mentions this in the docs, but I still got tripped up by it when my minimap was not quite in sync with the visualization.  I had to scale first, then apply the translation.
  3. Rather than using the old getXYFromTranslate() method that parses the .attr(“transform”) string property off an element, it is much easier to use d3.zoomTransform(elementNode) to get this information.  (Remember, that method works with nodes, not selections.)

At this point, the design works.  However, there’s another problem waiting for us:

When the user moves an element on one side, the counterpart on the other gets updated.  However, when the user then goes to move the counterpart element, that element will “jump back” to the last position that its own ZoomBehavior recorded.  This is because, when the ZoomBehavior on an element contacts its zoomHandler, it stashes the transform data for future reference so it can pick up where it left off on subsequent zoom actions.  This ‘stashing’ only happens when the ZoomBehavior is triggered by UI events (user zooming/dragging, etc.).  So when we manually update the PanCanvas in response to something other than the ZoomBehavior’s ‘zoom’ event, the stashing does not occur and the state is lost.  To fix this, we must manually stash the latest transform information ourselves whenever we update the element outside of the ZoomBehavior’s knowledge.  There’s another subtle point here that briefly tripped me up:  the ZoomBehavior stashes the zoom transform on the element to which it was applied, NOT the element upon which we are acting.  So when the ZoomBehavior hears zoom events on the InnerWrapper, it updates the __zoom property on the InnerWrapper.  Later on, when the minimap makes an update call back to the visualization, we have to manually update that property on the InnerWrapper, even though we are using that data to transform the PanCanvas in the zoomHandler.


So here is the final interaction:

  1. User moves the mouse over the PanCanvas
  2. The ZoomBehavior on the InnerWrapper hears those events and saves that transform data in the __zoom property on the InnerWrapper.
  3. The ZoomBehavior then emits the ‘zoom’ event which is handled by the local zoomHandler in the visualization (canvas component in the demo)
  4. The zoomHandler will apply the transform to the PanCanvas to visibly update its appearance in response to the mouse actions from step #1
  5. The zoomHandler looks at the event and if it determines that it originated locally, it makes an update call over to the minimap so it can be updated
  6. The minimap’s update handler inverts the transform event data and applies it to the Frame element
  7. Because the minimap’s updates to the Frame element occurred outside of the minimap’s own ZoomBehavior, we stash the latest transform data ourselves for future state reference. Note: we stash that state not on the Frame element, but on the minimap’s Container element, because that is the element to which the minimap’s ZoomBehavior was applied and that is where it will look for previous state when subsequent ZoomBehavior events fire as the user mouses over the minimap.
  8. The minimap’s zoomHandler is called by the update method which applies the matching minimap appearance update to the Frame element.
  9. The minimap’s zoomHandler determines the update event did not come from the local ZoomBehavior and therefore it does not call the update method on the visualization, thus preventing an infinite loop.
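Step 7’s manual stash can be sketched as follows, with plain objects standing in for DOM nodes (in the real code, d3.zoomTransform(node) reads the node’s __zoom property, and the stash would be written to the Container node that the ZoomBehavior was applied to):

```typescript
// Simulation of d3 v4's transform stashing: d3 stores the current transform
// in a __zoom property on the node the ZoomBehavior was applied to.
interface ZoomTransform { x: number; y: number; k: number; }
const identity: ZoomTransform = { x: 0, y: 0, k: 1 };

// Mirrors d3.zoomTransform(node): read node.__zoom, fall back to identity.
function zoomTransform(node: { __zoom?: ZoomTransform }): ZoomTransform {
  return node.__zoom ?? identity;
}

// The minimap's ZoomBehavior was applied to the Container, so that is where
// d3 will look for previous state -- stash there, NOT on the Frame.
const container: { __zoom?: ZoomTransform } = {};
const frame = { transform: identity };

function updateFromVisualization(t: ZoomTransform): void {
  frame.transform = t;   // visible update to the Frame element (step 6)
  container.__zoom = t;  // manual stash so the next local zoom resumes here (step 7)
}

updateFromVisualization({ x: -40, y: -25, k: 0.5 });
console.log(zoomTransform(container)); // { x: -40, y: -25, k: 0.5 }
```

Without that second line in updateFromVisualization, the next user drag over the minimap would start from the stale stashed transform and the frame would “jump back”, exactly as described above.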

Hopefully this will save you some time and help you understand how the d3 v4 ZoomBehavior can be used for functionality such as this demo.  🙂

Cascading Treemap Links

Just wanted to post a couple of links to some updated treemap examples that also solve the issue of treemap headers that I discussed in this post.
