Modernizing a site with Netlify, CircleCI, Preact-CLI and AWS

Leaning into modern web tools to rebuild

Worth a Watch is a site of mine that tells me which NBA games from the night before are worth watching: a vital service for any Europe-based basketball fan!

I threw it together a couple of years ago while learning to use AWS Lambda, and the code was so hacky it was only ever going to be understandable to me. It was built using:

  • A static site hosted on S3
  • A lambda function responsible for returning the list of games
  • Dynamo DB for caching the upstream API responses
  • API Secrets kept out of source control by a .gitignore
  • A crude mustache implementation written in inline JS to render the UI
  • CSS written directly into a style tag in the head
  • Deployed by copying and pasting CLI commands on my laptop

Last summer I wanted to work on the site with a couple of friends but the logic to build and deploy it was impossible to share and explain. It needed a rebuild and doing so gave me an opportunity to lean more heavily into some of the modern tooling that’s now available. These are the steps I took along the way.

1. Move the static site to Netlify

If you haven’t heard of Netlify, it’s a platform for serving static sites. There are a lot of optimisations going on under the hood to make it efficient at doing this, but what really makes it stand out is how user friendly it is. Within minutes you can be up and running with a full continuous integration and deployment pipeline for your site. That means no more copying and pasting CLI commands!

It took about an hour from opening an account to having something set up, and most of that time was spent trying to understand how to update the DNS on my domain name.

I was able to configure deployments and give team members access rights so they could make changes and see them reflected on the site within seconds - way better than what I had before!

2. Move the UI code to Preact-CLI

The good part about my previous implementation was it required zero network requests and was very lightweight. The bad part was everything else.

It’s a very simple UI but even so I wanted to write it in (p)react just because it’s so pleasant to use. I chose Preact over React purely because it was lighter.

One thing I definitely didn’t want to manage was an elaborate build process. All I wanted was to be able to compile JS and CSS and serve them in an optimised format. So I picked up Preact-CLI, a zero-config build tool with all the right optimisations and server rendering built in. I could write modern JS, use CSS modules and drop in whatever other static assets I needed, and Preact-CLI would serve them up statically or via a hot-reloading dev server. It worked really nicely out of the box.
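For anyone following along, getting started looks roughly like this (the project name is just an example, and the script names come from the default template, so check the Preact-CLI docs if they’ve changed):

```shell
# scaffold a new project from the default template
npx preact-cli create default worth-a-watch
cd worth-a-watch

# hot-reloading dev server
npm run dev

# optimised production build
npm run build
```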

The only thing I opted out of here was service workers. It’s something I wanted to have total control of - partly because their power scares me and partly because it was a good opportunity to learn how they work myself. I added this functionality much later on.

3. Move the lambda to a Netlify function

Netlify can also host and deploy lambda functions for you, so this was an obvious choice for me because I could manage everything in one place. I decided to split the previous lambda function in two and have the part hosted on Netlify speak only to a database rather than the third-party API (for rate-limiting reasons). I’d get the scores into the database another way later.

Actually getting the code running on Netlify functions was a matter of one extra line of configuration. So easy. This was the only part where I ran into a Netlify gotcha though which cost me a decent amount of time and confusion:

Netlify has a nice UI where you can add environment variables so that your secrets don’t need to live in the code itself. I added my AWS credentials there so that Netlify could speak to DynamoDB. As soon as I had done this, all my builds started crashing with a deploy error. I’d changed quite a few things so it wasn’t immediately obvious that the env variables were the cause.

Eventually I realised that adding my credentials as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY actually overwrote Netlify’s own credentials. Yeahhh :/ I prefixed them with MY_ and everything started to work nicely again.
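For reference, the extra configuration amounts to pointing Netlify at a functions directory in netlify.toml. This is a sketch rather than my exact file, and the directory name is just an example:

```toml
[build]
  command = "npm run build"
  publish = "build"
  # Netlify bundles and deploys anything in this directory as a lambda function
  functions = "functions"
```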

4. Set up CircleCI for the import workflow

That was the “Site” work fully done: it worked well and was easy to manage. I then had to build the other side of the architecture (the import workflow) responsible for importing the scores.

The import workflow is made up of a third-party API, a serverless function running on a cron, and DynamoDB to store the API results. I split the import part out because I wanted to lean more heavily on the third-party API and I knew that would mean being rate limited at 10 requests per minute. I didn’t want users to have to wait a minute for the page to load.

It didn’t make any sense to use Netlify to manage deployment of this serverless function, and I already had it hosted in AWS, so I chose to keep it there. I still wanted a way to build and deploy it that didn’t directly involve my laptop though. Enter CircleCI, a free build and deployment platform that, again, you can get set up with in minutes.

I created an account and a CircleCI config file, and within 30 minutes of trial and error I had a workflow to deploy master and branch builds to prod and stage environments.
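The resulting .circleci/config.yml looked something like this sketch (the image version and deploy scripts are placeholders, not my exact setup):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8
    steps:
      - checkout
      - run: npm install
      - run: npm test
      - deploy:
          command: |
            # master goes to prod, branch builds go to stage
            if [ "${CIRCLE_BRANCH}" = "master" ]; then
              npm run deploy:prod
            else
              npm run deploy:stage
            fi
workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build
```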

5. Set up Secret Management using AWS Parameter Store

I needed a better way to manage API tokens now that the build wasn’t happening on my laptop. This turned out to be incredibly easy in the end because AWS has a service for providing just this: the Parameter Store.

You can set secrets via the CLI, or via the AWS Console, and then fetch them using a really simple promise API with the aws-param-store npm package.
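As a sketch, storing a secret from the CLI looks like this (the parameter name is made up):

```shell
# Store the token encrypted as a SecureString
aws ssm put-parameter \
  --name "/worth-a-watch/api-token" \
  --type "SecureString" \
  --value "my-secret-token"
```

On the lambda side, aws-param-store’s getParameter call returns a promise that resolves with the parameter, including its decrypted value.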

6. Retry requests when Rate Limited

I knew I was likely to be rate limited often so I wanted to be able to retry the request until it succeeded. This approach was possible because the requests were asynchronous to any actual user action.

I was tempted to write this logic myself but ultimately there was no need as this fetch-retried npm package did just the trick. It backs off exponentially between retries until the request has been fulfilled.
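To illustrate the idea, here’s a hand-rolled sketch of exponential backoff. This is my own illustrative helper, not fetch-retried’s actual API:

```javascript
// Retry a promise-returning function, doubling the wait between attempts.
// `fn` is any function that returns a promise (e.g. a fetch call).
function retryWithBackoff(fn, retries = 5, delayMs = 100) {
  return fn().catch(err => {
    if (retries <= 0) throw err;
    return new Promise(resolve => setTimeout(resolve, delayMs))
      .then(() => retryWithBackoff(fn, retries - 1, delayMs * 2));
  });
}
```

A request rejected with a 429 just gets retried 100ms later, then 200ms, then 400ms, and so on until it succeeds or we run out of attempts.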

7. Use AWS SES to remind myself when the API Token expires

At this point we had a fully working system. The last remaining itch I wanted to scratch was token expiration. The API I was working with didn’t have a way to automate token renewals which meant that each month I had to remember to go to the UI and generate a new one.

I decided that one thing I could do was send myself an email reminder just before it was about to expire. Accomplishing this with SES was again fairly straightforward just by following a few online guides.

I created a new lambda which ran daily and calculated the days remaining on the token. If it was close to expiring, it sent me an email using the SES client in the aws-sdk npm package. As everything was running in AWS I just had to grant my lambda function access to SES by extending the IAM role and updating my serverless.yml file.
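The serverless.yml change was a small IAM statement along these lines (a sketch, with the resource scoping loosened for brevity):

```yaml
provider:
  name: aws
  runtime: nodejs8.10
  iamRoleStatements:
    # allow the reminder lambda to send email via SES
    - Effect: Allow
      Action:
        - ses:SendEmail
      Resource: "*"
```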

And that’s it!

I essentially copy and pasted my way to a pretty robust architecture! I had rarely touched any of these tools before and was able to navigate them fairly easily by reading tutorials and blog posts. I was constantly impressed by how far the tooling has come, how intuitive most of it is, and how quickly you can get a system up and running.

In total it took about a week of off-and-on work to get to this point and it ended up looking like the below (not including the email system):

The code has simplified a little since the initial build (I eventually switched to an API without aggressive rate limiting) but it is all more or less still there and free to browse:

And of course, if you need to know which NBA games were worth watching (spoiler-free!) you can do so at :)

Radical Candor in Code Review

Recently I read Radical Candor by Kim Scott. It discusses how we can communicate more directly and effectively and it’s something BuzzFeed have integrated into our culture. I find myself looking for more opportunities to give direct feedback to my colleagues, both positive and negative, where previously I would have shied away. Kim defines radical candor as:

Radical Candor™ is the ability to Challenge Directly and show you Care Personally at the same time.

Both of these sides are crucial. If you just challenge directly but don’t care about the person then you come across as an asshole. If you care but aren’t prepared to offer any guidance then you’re not helping that person. To emphasise this point she designed the following diagram:

She explains the other three quadrants as follows:

Obnoxious Aggression™ is what happens when you challenge but don’t care. It’s praise that doesn’t feel sincere or criticism that isn’t delivered kindly.

Ruinous Empathy™ is what happens when you care but don’t challenge. It’s praise that isn’t specific enough to help the person understand what was good or criticism that is sugarcoated and unclear.

Manipulative Insincerity™ is what happens when you neither care nor challenge. It’s praise that is non-specific and insincere or criticism that is neither clear nor kind.

What does this have to do with code review?

The book is written for people in leadership positions but the lessons are universal. Fundamentally it’s about helping those around you be as successful as they can be.

As developers we rarely exist in a silo so communication is one of the most important tools we have. I’ve seen a lack of empathy for someone else’s opinion cause serious rifts within a team and this happens more than ever during code review, usually as comments on Pull Requests. So let’s look at how we can apply Radical Candor to code review.

Obnoxious Aggression

These are the type of PR comments that stick in our minds, go viral on Twitter, and are typically deemed as coming from ‘assholes’. A real one that sticks in my mind was a single comment on a large pull request:

Lodash templates blow.

It’s crass, it’s unconstructive. It was followed by the commenter rewriting the PR (without asking) to use a different templating language. It was also followed by the author of the original PR looking for a new job.

Manipulative Insincerity

This can transpire in lengthy PRs where someone only adds a single LGTM comment. Unless it’s been discussed offline it’s likely that the reviewer simply doesn’t care enough to invest their time in thoroughly reviewing the code. In this case their approval is both hollow and disrespectful. If you actually do come across a long PR with no faults whatsoever you should take the opportunity to offer some more constructive, positive feedback.

In its worst form this can also be represented by a lack of a comment. The reviewer sees something they know is risky but they keep quiet, perhaps thinking that the author will learn their lesson when it causes them problems later on.

Ruinous Empathy

You may be hitting this area if you try to sugarcoat some negative feedback for fear of hurting the feelings of the author. Pay attention to your use of “could”, “maybe”, “if you like”, “up to you”, particularly if your real feelings about it are stronger.

This can also be an issue when re-reviewing code: the author has improved their code in response to a number of your comments but it still doesn’t reach the standard it should. It’s tempting to look for ways to praise their improvements rather than give further direct feedback.

What does the radical candor version look like?

Give positive feedback

Take time to acknowledge the good parts and comment if you learned something. This is something we often overlook because we become trained to simply spot defects.

Take time to guide the person, not just the code

This is about explaining the “why”. What was it you experienced, read, or were told that gave you your different perspective? Share those resources.

Use facts/experience rather than opinions

“I don’t like this pattern” vs “This pattern actually caused us issues in a previous project because of X”.

Use non-threatening language

Some people might be ok receiving feedback that borders on obnoxious aggression but they are likely to be in the minority. It can also be intimidating for others who see those comments.

Understand your own personality and act accordingly

My natural tendency is to be overly polite and hold back if I don’t know the author that well. Because of that, if I doubt myself I typically err on the side of saying something. If your personality is the opposite, you might want to consider erring on the side of holding back.

Removing legacy globals with ES6 Proxies

I found a nice pattern today for getting rid of those global configuration variables that you’re pretty sure aren’t used anymore but you’re a bit too scared to delete. You know the ones, they look like this:


window.GLOBAL_CONFIG = {
  env: 'dev',
  // ...
};

They’re the cockroaches of large sites: they outlast developers, framework apocalypses, full rewrites. You know that something outside of your code base relies on them and it’s near impossible to figure out what. It’s easier to just leave them where they are and move on with your life.

Well, ES6 proxies actually make it a whole lot easier to find out which properties aren’t being used! Proxies allow you to put logic between someone trying to access a property and them actually receiving it. Here’s some actual code:

(function() {

  /* First we'll rename our `GLOBAL_CONFIG` object and make it private */
  var _config = {
    env: 'dev'
  };

  /* If we don't support proxies let's just give them what they want! */
  if (!('Proxy' in window)) {
    window.GLOBAL_CONFIG = _config;
    return;
  }

  /*
   * Alright, now we make a Proxy handler.
   * `get` is a function that will be called every time we access
   * a property.
   * At this point, all we're going to do is return the original value.
   */
  var myProxy = {
    get: function(target, name) {
      return _config[name];
    }
  };

  /* Finally, let's assign it back so there's no difference for the consuming code. */
  window.GLOBAL_CONFIG = new Proxy({}, myProxy);
})();

And we’re done! We’ve written some code which does absolutely nothing!

var x = window.GLOBAL_CONFIG.env;
console.log(x);
// log: "dev"

This is usually where I’d stop but in fact it gets a bit more fun when you add some logic to the myProxy object. For example we could log out which properties have been called:

var myProxy = {
  get: function(target, name) {
    console.log(`Someone tried to access GLOBAL_CONFIG.${name}!`);
    return _config[name];
  }
};

var x = window.GLOBAL_CONFIG.env;
// log: Someone tried to access GLOBAL_CONFIG.env!

console.log(x);
// log: "dev"

Reloading the page might give you some idea of who is accessing the object but, given that the calling code is probably outside of your own code base, it’s only going to get you so far.

Instead, let’s send that data somewhere! Your favourite analytics service will probably do the trick.

var myProxy = {
  get: function(target, name) {
    // Assume this method makes a http request somewhere
    track('global_config', name);

    return _config[name];
  }
};

Now just deploy it for a bit and let your users tell you which properties are still being accessed! You might end up with some graphs like this:

Charts showing that a property is never accessed

And now we can delete the base_url property, safe in the knowledge that no one is using it.

Destructuring, rest properties and object shorthand

Destructuring and rest/spread parameters for arrays are part of the ES6 specification. Support for their use with objects is, at the time of writing, a stage 2 proposal for future inclusion. Of course, you can use it today via a transpiler like Babel.

Object shorthand is already part of the ES6 specification, and with a combination of these three features you can start to use patterns which lead to more reliable, less error-prone code.

First, let’s dig in to how destructuring objects looks. We’ll take a simple config object and use destructuring to extract some values.

let config = {
    env: 'production',
    user: { name: 'ian' }
};

let { env } = config;

console.log(env); // 'production'

console.log(config); // { env: 'production', user: { name: 'ian' } }

The equivalent in es5 would be:

var config = {
    env: 'production',
    user: { name: 'ian' }
};

var env = config.env;

Note that the original config object never mutates. The benefits of immutability are well documented and, whilst this isn’t strictly immutable, starting to write in a way which maintains the original values allows you to reason about the code more easily.

We can also go one step further and destructure two levels deep:

let { env, user: { name } } = config;

console.log(name); // 'ian'

console.log(user); // err: user is not defined

Now let’s introduce rest properties:

let { env, ...newConfig } = config;

console.log(env); // 'production'

console.log(newConfig); // { user: { name: 'ian' } }

console.log(config); // { env: 'production', user: { name: 'ian' } }

Using those three dots creates a new object which represents everything that remains in the config after you have taken out the named variables.

Note that this will still create an empty object so you can rely on object methods working without knowing what the data might be:

let { env, user: { name }, ...newConfig } = config;

console.log(newConfig); // {}
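One caveat worth flagging (my note, not part of the proposal text): the object created by rest properties is a shallow copy, so nested objects are still shared with the original:

```javascript
let config = {
    env: 'production',
    user: { name: 'ian' }
};

let { env, ...newConfig } = config;

// newConfig is a new object, but newConfig.user is the same reference
newConfig.user.name = 'dan';

console.log(config.user.name); // 'dan'
```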

These are solid primitives which can be built up into useful patterns. One way in which they can help immediately is by removing connascence. Connascence relates to the relationship between two components where a change in one would require a change in the other to maintain functionality. A way in which this has often transpired in my code is with argument ordering in functions, particularly functions with high arity.

Let’s take a typical analytics function:

function trackAnalytics(label, category, dimension, username, email) {
    window.track(label, category, dimension, username, email);
}

trackAnalytics('login', 'user', 'app1', 'ian', 'ian@example.com');

Assuming that these functions are in different files, aside from the email address it’s pretty hard to tell from the call side what each parameter relates to. It also breaks if we get the order wrong.

trackAnalytics('user', 'login', 'app1', 'ian', 'ian@example.com'); // Tracking is broken

A way in which this is typically resolved is by switching to passing a single object as a parameter and naming the values within it. Now you no longer need to care about understanding their role or the order in which they’re included.

trackAnalytics({
    label: 'login',
    category: 'user',
    dimension: 'app1',
    username: 'ian',
    email: 'ian@example.com'
});

Which means you can satisfy your OCD by ordering them alphabetically or in pyramid style.

trackAnalytics({
    label: 'login',
    username: 'ian',
    category: 'user',
    dimension: 'app1',
    email: 'ian@example.com'
});

That’s better. So passing this object removes the connascence and improves the call side but it has suddenly got worse on the function side:

function trackAnalytics(data) {
    window.track(data.label, data.category, data.dimension, data.username, data.email);
}

We no longer know what’s inside data and in a function that was more complex we’d probably have to resort to documenting the function arguments in a jsdoc fashion. This can be pretty useful anyway but we can remove the need for it to some extent by using destructuring (note the braces within the arguments list).

function trackAnalytics({ label, category, dimension, username, email }) {
    window.track(label, category, dimension, username, email);
}

trackAnalytics({
    label: 'login',
    category: 'user',
    dimension: 'app1',
    username: 'ian',
    email: 'ian@example.com'
});

Now the order of the arguments no longer matters and we understand what the values represent on both sides of the function contract.

Often we may already have these values wrapped up in variables before we send them to the function. If that’s the case we can take advantage of another feature: object shorthand. Object shorthand allows you to replace a key-value pair with a single key when the variable name matches the key. For example:

// Assume these would already exist
let label = 'login';
let category = 'user';
let dimension = 'app1';
let username = 'ian';
let email = 'ian@example.com';


Which means we can go back to a simpler-looking call, without any concern about order.

function trackAnalytics({ label, category, dimension, username, email }) {
    window.track(label, category, dimension, username, email);
}

trackAnalytics({ category, label, dimension, username, email }); // ✔
trackAnalytics({ category, username, email, label, dimension }); // ✔

At this point we’ve already made the code much more robust and resistant to errors whilst keeping the code very simple and readable.

Another trick we have with these three features is to destructure within arguments themselves. In our example, we have the username and email in our original config object so let’s take them from there.

let config = {
    env: 'production',
    user: { name: 'ian', email: 'ian@example.com' }
};

function trackAnalytics(label, category, dimension, { env, user }) {
    if (env !== 'production') { return; }

    window.track(label, category, dimension, user.name, user.email);
}

trackAnalytics('login', 'user', 'app1', config);

We can even take it one step further and remove the user object:

function trackAnalytics(label, category, dimension, { env, user: { name, email } }) {
    if (env !== 'production') { return; }

    window.track(label, category, dimension, name, email);
}

Maybe that’s going a bit far… These are just tools for you to use however you see fit though.

Hopefully this serves as an example of how you can use these new features to write safer, simpler code. There’s a lot of syntactic sugar in the new JS features, but coupled together they achieve things which would be significantly harder, or at least more verbose, to write in ES5.

What even is Vanilla JS these days?

Originally it was non-jQuery, right? Or did it come before that? Anyway the term definitely got popular when people were eschewing jQuery in the quest for lighter pages at the expense of a few browser bugs.

Zero dependency libraries became a thing, which meant each library had their own tiny abstraction of DOM selection utilities and polyfills for Array methods. None of which could be extracted into shared dependencies and cached separately of course but, hey, they were 10x lighter and 20x faster than jQuery so what was there to worry about?

Then jQuery fell way off the radar due to a surge in browsers becoming evergreen, our eagerness to drop older, painful browsers, and the proliferation of browser-support reference sites. With jQuery out of the equation, these days vanilla is much more likely to refer to the absence of frameworks like Ember, Angular, React or Backbone, of which only the latter requires jQuery.

In Paul Lewis’ recent article on the performance comparisons of frameworks he highlighted a vanillajs implementation of TodoMVC which was 16kb: significantly smaller than the other frameworks but certainly not tiny. Primarily it’s smaller because it can be focused on this one specific purpose, allowing for greater optimisation but making it somewhat throwaway after the life of the project. And, of course, it still has to reimplement a bunch of the same features that are present in other libs.

What makes this vanilla? Sure, it doesn’t have any dependencies but what makes up that 16kb?

It includes tiny abstractions for querySelectorAll and DOM events, which you’d absolutely expect as developer conveniences. It includes its own implementation of a micro templating library which focuses only on the todo template but still covers non-trivial HTML escaping.

It registers model.js, controller.js and view.js (it is TodoMVC, after all) but it’s starting to look suspiciously like my-framework.js rather than vanillajs. In fact it’s really looking like a less-tested, less-jQuery snowflake version of Backbone. This isn’t hating on that particular example on the TodoMVC page; it just gets you wondering where the line is drawn between vanillajs and… flavoured JS?

Is it vanillajs if you don’t include a framework but you do include lots of tiny libs as dependencies? Is it vanillajs if it’s written in TypeScript? Is it wise to care about any of this? Is it a worthy goal?

Whilst your own implementation of these features can be smaller, more focused and certainly more performant, is chasing this title going to create a less buggy application? Will it be safer and more secure than a framework which has the benefit of a huge user base and collective intelligence? Are you going to have to reimplement features every time requirements change, and could this lack of manoeuvrability end up costing you and your users more than the perf differences gain?

Anyone I’ve worked with will surely attest that I’m not a fan of debating terminology. It gets in the way of doing actual work and, truthfully, everyone else is better than me at it anyway. Vanillajs is a term that is gathering so much momentum though, and conflating so many ambiguous combinations, that it either needs to be defined or it will descend into utter meaninglessness. And if it’s the latter we’ll need to go back and update thousands of blog posts and slide decks, so maybe it’s best to just nip it in the bud now.