Open Roles at BuzzFeed UK

It’s not often that we open up enough Tech roles in London to merit a blog post, but now that we have, I want to share them as broadly as I can! This post is my attempt to go beyond the Job Description and explain why I’m personally excited about each of these opportunities. I genuinely believe each of them presents a great career and growth opportunity.

We’re a team of nine in the UK: an Architect, 4 Staff Engineers, 2 Senior Engineers and 2 Software Engineers. That’s a little top heavy, right? But it gives us a fantastic environment in which to support new employees. People at BuzzFeed are incredibly giving with their time and their experience, far more than anywhere I’ve ever worked, and it’s something we take great pride in. We’d all be massively invested in your growth within the team.

I’d love to hear from you if any of the roles sound interesting. I’ve left links below where you can apply and if you have any further questions I’m happy to chat more about them. You can DM me via twitter (@ianfeather) or email me at ian.feather@buzzfeed.com.

These roles are correct as of Nov 2019. An up to date listing can always be found at https://www.buzzfeed.com/about/jobs.

Software Engineer (Frontend)

From the Job Description: BuzzFeed’s Site Group is looking for a frontend Software Engineer to help build a more accessible, performant and engaging web experience on buzzfeed.com. As a software engineer on the Site team you’ll be a key part of our ongoing project to build the next generation of buzzfeed.com using Next JS and React.

The role is perfect for someone fairly new in their career who wants to accelerate their learning whilst working on sites with significant scale. Five of the nine engineers in the UK have their background in Frontend development, and have built large portions of buzzfeed.com, so you’d be coming into an ideal environment for growth.

It’s also just an exciting time to be a Frontend Engineer at BuzzFeed! We’re in the process of rebuilding a large portion of the rendering services that power buzzfeed.com and we want your insight and perspective on how to do that. You might think from my description of the London team that we’re very hierarchical and you would be a small cog in this process but, because of the way we organise ourselves, every single engineer is able to have a significant impact on the way we write code and build products.

Here’s a link to the full Job Description.

Principal Engineer (Infrastructure)

From the Job Description: BuzzFeed’s Core Infrastructure Group is looking for a highly experienced engineer to help take the infrastructure that powers our sites and products to the next level. You will influence the direction of the core infra group, ensuring we continue to solve the most important challenges to BuzzFeed’s infrastructure.

This is a really exciting opportunity for someone to step straight into a leadership role and influence the direction of our infrastructure group. It’s also a rare role: we only have a handful of Principal Engineers at BuzzFeed and typically promote into the position. If you’ve formed strong opinions about how organisations should deploy, run and observe software, and want to put them into practice at scale, then this could be ideal for you!

Real talk, our infrastructure is in a pretty good place right now. We want someone who can raise the bar. The position is opening following one of our leads moving internally to stand up a new team, and we want someone who can come in and spot the opportunities that we haven’t seen.

You’d get to have broad impact and think strategically but you’d also still be getting pretty hands on with code. Hopefully that appeals to you. In fact, getting hands on with code would be important because two of our engineers in the UK work in the infrastructure group and would lean on you for mentorship in this area.

Here’s a link to the full Job Description.

Associate/Junior Software Engineer (Infrastructure)

From the Job Description: In your first week you will be guided through fixing a bug in one of our services and deploying the change to production. In your first month you will learn how to build and provision our infrastructure. In your first three months you will work with your team to deliver meaningful improvements to our infrastructure. In your first six months you will facilitate a blameless retrospective.

Our infrastructure team is one of the highest-performing teams we have in the organisation and one that cares deeply about engineering best practices. On the team you’ll be exposed directly to the challenges of scaling systems and engineering organisations, in an environment that really values correctness and diligence. I really can’t think of a better place to start your career.

Btw, you don’t need to be an “infrastructure engineer” to want to take this career path. You just need to be an inquisitive person who enjoys finding out how things work under the hood. If you’ve enjoyed doing that in your day to day work then you might find it interesting to pursue it full time.

Here’s a link to the full Job Description.

Senior Data Scientist / Data Scientist

From the Job Description: BuzzFeed is looking for data scientists to join its New York, London, and Minneapolis offices. We’re seeking passionate professionals who have a proven track record using data in a meaningful way - whether building product or supporting decision-making. You balance technical expertise, domain knowledge and a capacity to be deeply inquisitive.

All organisations these days say they’re “data driven”; it’s not something that really separates you from the crowd anymore. That doesn’t make it any less true though. If anything, BuzzFeed might be somewhat unusual in having always been data driven: it’s how the company knew what to pursue to drive the significant growth we’ve seen over the past 13 years. There’s a tremendous appetite for data across the organization. I see questions being asked by content creators and curators that push the data team to provide answers in ways that I haven’t seen elsewhere. On top of that, our data infrastructure is on solid footing, meaning that as a data scientist you are able to work on some of the most interesting problems in media, ranking and recommending content to audiences around the world.

We’ve never had someone from Data Science working out of the UK, so this role is particularly exciting for me: I think you’ll be able to spot opportunities that we haven’t yet seen, working closely with the tech team while also being exposed to the work of the content teams. You’ll also make us much more efficient as a London tech team by allowing us to move more quickly on data engineering products.

Here’s a link to the full Job Description.

Better Technical Architecture Proposals

Get your ideas across and be more impactful by avoiding these mistakes

Known by many names (Architecture Proposals, RFCs, Design Documents, Architecture Review docs, Architecture Decision Records…), these all have the same objective: to present your thoughts to a wider audience, to solicit feedback, and to persuade decision makers.

I’ve come to realise that their success depends heavily on how they are delivered, regardless of how good the proposal is. The reality is that being “right” often isn’t actually enough: even the most compelling and logical argument can be ruined by poor communication. To see your work lose its impact because of non-technical reasons is incredibly frustrating! In this article I’ll share some of the lessons I’ve learned along the way on how to ensure that it’s the content that does all the talking.

Firstly, let’s clarify what we mean by an Architecture Proposal. These documents are typically only written for significant architecture decisions or adoption of new technologies. Topics which could merit an architecture proposal might include “migrating to TypeScript” or “replacing our REST endpoints with GraphQL”.

It’s not guaranteed that an organisation will value these proposals, but they’re beneficial in a lot of ways. As well as being useful as a way to organise your own thoughts, they increase transparency throughout an organisation and a library of these documents enables new engineers to learn the context and thought process around how systems are architected.

Crafting a good proposal, getting consensus and seeing it realised can be both satisfying and impactful. On the other hand, being frustrated by the status quo whilst looking back at the failed proposals you wrote to address it leaves you feeling resigned to the idea that nothing will change. I’ve had my share of both. Here are the mistakes I’ve made and the lessons I’ve learned along the way:

Make sure the context is included within the proposal

You’ll know if you’ve failed to do this when you share it with someone for the first time. Your message to them should be as simple as “Hey, I’d love your thoughts on this [link to doc].”

A few times I’ve caught myself constructing a message like “Hey, X asked me to put together some thoughts on Y. It’s still WIP right now. The main reason we’re doing this now is because of the deadline later in the year. Let me know what you think.”

Needing to include this context in the message is a clear signal that your document is missing it and needs revisiting. This matters because the proposal is almost certainly going to be passed on to others who don’t have that additional context, and that could entirely change the way they perceive it. Put that context in the first paragraph!

You can’t contain how far your proposal will reach

If a proposal is interesting, and especially if it’s controversial, then people will share it further than you ever expect. Don’t expect to be able to tweak and refine it before it reaches them. And don’t expect access rights to fix this! People will read it over each other’s shoulders.

Instead, accept that you can’t contain it and make sure that the premise and goals of your proposal are accessible to everyone. If there’s a certain person or department that you think might be resistant to the changes you’re proposing then make sure you explicitly call out their concerns in the document and proactively share it with them.

Share it with people you trust before going public

Maybe you can’t contain a document’s reach once it’s “public public” but you should certainly be able to within your closest group of colleagues. This step is important for catching any obvious issues and making sure that the proposal is technically sound. I like to think of it like code review: an opportunity to find bugs before it gets to production.

This group should be made up of people whose critical abilities you trust, and who you can expect to get honest and direct feedback from. Try to ensure it’s not made up entirely of Yes People. If you’re only getting positive feedback then push them to be harder on you!

Explicitly call out non-goals of the proposal

Goals and non-goals set the boundaries around the areas that you’re agreeing to accept criticism on. They can be a great way of acknowledging and pre-empting the adjacent topics you might be expected to cover, and they make it easier for you to deal with comments such as “What about [insert unrelated hard problem here]??”

Non-goals are not a way of opting out of responsibility, though. Let’s say you’re writing a proposal to migrate all of your front end applications to React: a valid non-goal might be “This proposal does not aim to set coding standards for how we will write application code using React”. Those standards can be figured out later if you decide to take the proposal forward, so they’re not worth including as part of the core argument. An invalid non-goal might be “This proposal does not consider the impact on SEO”. In this case, presuming your site is public, being “SEO-friendly” is likely to be a constraint on your decision.

Use a shared vocabulary

Given that you can’t control how far the proposal will reach, you should anticipate that it will be read by people outside your technical domain (product managers, designers, even your CEO), as well as by people whose native language isn’t the same as yours. Where possible use language that can be understood by anyone. Be succinct and to the point, and skip any big words that don’t add value to the sentence. Avoid talking too much in the abstract; people are reading to get the specifics of your proposal.

You should aim to gear the language of sections towards their intended audience. Avoid using technical language in the introductory sections unless it’s otherwise impossible to convey your point. When you are in the purely technical sections you can use technical language unapologetically. If you think there’s any risk that certain concepts are familiar to only a small minority then link out to further reading on the subject.

I think that’s it!

I’ve definitely made more mistakes than the ones above but I may unfortunately have scrubbed them from my memory. Regardless, if you follow these you’ll be a step ahead of where I was. Remember that your number one goal is to connect your ideas with other people, so always keep your audience in mind and write the docs for them, not for yourself.

Modernizing a site with Netlify, CircleCI, Preact-CLI and AWS

Leaning into modern web tools to rebuild worthawatch.today

Worth a Watch is a site of mine that tells me which NBA games are worth watching from the night before: a vital service for any Europe-based basketball fan!

I threw it together a couple of years ago when learning to use AWS lambda and the code was so hacky it was only ever going to be understandable to me. It was built using:

  • A static site hosted on S3
  • A lambda function responsible for returning the list of games
  • DynamoDB for caching the upstream API responses
  • API Secrets kept out of source control by a .gitignore
  • A crude mustache implementation written in inline JS to render the UI
  • CSS written directly into a style tag in the head
  • Deployed by copying and pasting CLI commands on my laptop

Last summer I wanted to work on the site with a couple of friends but the logic to build and deploy it was impossible to share and explain. It needed a rebuild and doing so gave me an opportunity to lean more heavily into some of the modern tooling that’s now available. These are the steps I took along the way.

1. Move the static site to Netlify

If you haven’t heard of Netlify it’s a platform for serving static sites. There are a lot of optimisations going on under the hood to make it efficient at doing this but what really makes it stand out is how user friendly it is. Within minutes you can be up and running with a full continuous integration and deployment pipeline for your site. That means no more copy and pasting CLI commands!

It took about an hour from opening an account to having something set up, and most of that time was spent trying to understand how to update the DNS on my domain name.

I was able to configure deployments and give team members access rights so they could make changes and see them reflected on the site within seconds - way better than what I had before!

2. Move the UI code to Preact-CLI

The good part about my previous implementation was it required zero network requests and was very lightweight. The bad part was everything else.

It’s a very simple UI but even so I wanted to write it in (p)react just because it’s so pleasant to use. I chose Preact over React purely because it was lighter.

One thing I definitely didn’t want to manage was an elaborate build process. All I wanted was to be able to compile JS and CSS and serve it in an optimised format. So I picked up Preact-CLI, a zero-config build tool with all the right optimizations and server rendering built in. I could write modern JS, use CSS modules and drop in whatever other static assets I needed, and Preact-CLI would serve them up statically or via a hot-reloading dev server. It worked really nicely out of the box.
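To give a flavour, here’s roughly what a component might look like under that setup. This is a hypothetical sketch: the file names, props and class names are made up, and I’m assuming Preact-CLI’s default CSS modules handling.

// GameList.js: a hypothetical component; file names and props are illustrative only
import { h } from 'preact';
import styles from './GameList.css'; // CSS modules, compiled by Preact-CLI

const GameList = ({ games = [] }) => (
  <ul class={styles.list}>
    {games.map(game => (
      <li key={game.id} class={styles.game}>
        {game.homeTeam} vs {game.awayTeam}
      </li>
    ))}
  </ul>
);

export default GameList;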

The only thing I opted out of here was service workers. It’s something I wanted to have total control of - partly because their power scares me and partly because it was a good opportunity to learn how they work myself. I added this functionality much later on.

3. Move the lambda to a netlify function

Netlify can also host and deploy lambda functions for you, so this was an obvious choice for me because I could manage everything in one place. I decided to split the previous lambda function in two and have the part hosted on Netlify speak only to a database rather than the third-party API (for rate-limiting reasons). I’d get the scores into the database another way later.

Actually getting the code running on Netlify functions was a matter of one extra line of configuration. So easy. This was the only part where I ran into a Netlify gotcha though which cost me a decent amount of time and confusion:

Netlify has a nice UI where you can add environment variables so that your secrets don’t need to live in the code itself. I added my AWS credentials there so that Netlify could speak to DynamoDB. As soon as I had done this, all my builds started crashing with a deploy error. I’d changed quite a few things so it wasn’t immediately obvious that it was due to the env variables.

Eventually I realised that adding my credentials as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY actually caused Netlify’s own credentials to be overwritten. Yeahhh :/ I prefixed them with MY_ and everything started to work nicely again.
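For context, a Netlify function is just a module that exports a handler. Here’s a minimal sketch of what such a function can look like, assuming the MY_-prefixed variables above plus a made-up table name and region:

// functions/games.js: a sketch of a Netlify function handler
// (the table name and region are assumptions, not the real values)
const AWS = require('aws-sdk');

const dynamo = new AWS.DynamoDB.DocumentClient({
  region: 'eu-west-1',
  accessKeyId: process.env.MY_AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.MY_AWS_SECRET_ACCESS_KEY,
});

exports.handler = async () => {
  // Read the cached games out of DynamoDB rather than hitting the third-party API
  const result = await dynamo.scan({ TableName: 'games' }).promise();

  return {
    statusCode: 200,
    body: JSON.stringify(result.Items),
  };
};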

4. Set up CircleCI for the import workflow

That was the “Site” work fully done: it worked well and was easy to manage. I then had to build the other side of the architecture (the import workflow) responsible for importing the scores.

The import workflow is made up of a third-party API, a serverless function running on a cron, and DynamoDB to store the API results. I split the import part out because I wanted to lean more heavily on the third-party API and I knew that would mean being rate limited at 10 requests per minute. I didn’t want users to have to wait a minute for the page to load.

It didn’t make any sense to use Netlify to manage deployment of this serverless function, and I already had it hosted in AWS, so I chose to keep it there. I still wanted a way to build and deploy it that didn’t directly involve my laptop though. Enter CircleCI, a free build and deployment platform that, again, you can get set up with in minutes.

I created an account and a CircleCI config file, and within 30 minutes of trial and error I had a workflow to deploy both master and branch builds to stage and prod environments.

5. Set up Secret Management using AWS Parameter Store

I needed a better way to manage API tokens now that the build wasn’t happening on my laptop. This turned out to be incredibly easy in the end because AWS has a service for providing just this: the Parameter Store.

You can set secrets via the CLI, or via the AWS Console, and then fetch them using a really simple promise API with the aws-param-store npm package.
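As a rough sketch (the parameter name here is made up, and I’m assuming getParameter resolves to an object with a Value property), reading a secret looks something like this:

// A sketch of reading a secret with aws-param-store; the parameter name is hypothetical.
// The value itself would have been set beforehand, e.g. `aws ssm put-parameter --type SecureString ...`
const awsParamStore = require('aws-param-store');

async function getApiToken() {
  const parameter = await awsParamStore.getParameter('/worthawatch/api-token', { region: 'eu-west-1' });
  return parameter.Value;
}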

6. Retry requests when Rate Limited

I knew I was likely to be rate limited often so I wanted to be able to retry the request until it succeeded. This approach was possible because the requests were asynchronous to any actual user action.

I was tempted to write this logic myself but ultimately there was no need as this fetch-retried npm package did just the trick. It backs off exponentially between retries until the request has been fulfilled.
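If you did want to roll it yourself, the underlying idea is simple enough. This isn’t the fetch-retried API, just a sketch of exponential backoff around fetch:

// Not the fetch-retried API: just a sketch of backing off exponentially on 429 responses
const fetch = require('node-fetch');

async function fetchWithRetry(url, options = {}, attempt = 0) {
  const response = await fetch(url, options);

  // 429 means we hit the rate limit, so wait 1s, 2s, 4s, ... and try again
  if (response.status === 429 && attempt < 5) {
    const delay = Math.pow(2, attempt) * 1000;
    await new Promise(resolve => setTimeout(resolve, delay));
    return fetchWithRetry(url, options, attempt + 1);
  }

  return response;
}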

7. Use AWS SES to remind myself when the API Token expires

At this point we had a fully working system. The last remaining itch I wanted to scratch was token expiration. The API I was working with didn’t have a way to automate token renewals which meant that each month I had to remember to go to the UI and generate a new one.

I decided that one thing I could do was send myself an email reminder just before it was about to expire. Accomplishing this with SES was again fairly straightforward just by following a few online guides.

I created a new lambda which ran daily and calculated the remaining days left on the token. If it was close to expiring, it sent an email using the SES client in the aws-sdk npm package (my code). As everything was running in AWS I just had to grant my lambda function access to SES by extending the IAM role and updating my serverless.yml file.
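The core of that lambda is only a few lines. Here’s a rough sketch in which the expiry date, threshold and addresses are placeholders rather than the real values:

// A sketch of the daily reminder lambda; expiry date, threshold and addresses are placeholders
const AWS = require('aws-sdk');
const ses = new AWS.SES({ region: 'eu-west-1' });

const TOKEN_EXPIRES_AT = new Date('2019-12-01'); // in reality this would come from wherever the token lives

exports.handler = async () => {
  const msPerDay = 1000 * 60 * 60 * 24;
  const daysRemaining = Math.ceil((TOKEN_EXPIRES_AT - Date.now()) / msPerDay);

  if (daysRemaining <= 3) {
    await ses.sendEmail({
      Source: 'reminders@example.com',
      Destination: { ToAddresses: ['me@example.com'] },
      Message: {
        Subject: { Data: `API token expires in ${daysRemaining} days` },
        Body: { Text: { Data: 'Remember to generate a new token and update the Parameter Store entry.' } },
      },
    }).promise();
  }
};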

And that’s it!

I essentially copy and pasted my way to a pretty robust architecture! I had rarely touched any of these tools before and was able to navigate them fairly easily by reading tutorials and blog posts. I was constantly impressed by how far the tooling has come, how intuitive most of it is, and how quickly you can get a system up and running.

In total it took about a week of off-and-on work to get to this point and it ended up looking like the below (not including the email system):

The code has simplified a little since the initial build (I eventually switched to an API without aggressive rate limiting) but it’s all more or less still there and free to browse:

And of course, if you need to know which NBA games were worth watching (spoiler-free!) you can do so at https://www.worthawatch.today :)

Radical Candor in Code Review

Applying leadership lessons to give more useful feedback

Recently I read Radical Candor by Kim Scott. It discusses how we can communicate more directly and effectively and it’s something BuzzFeed have integrated into our culture. I find myself looking for more opportunities to give direct feedback to my colleagues, both positive and negative, where previously I would have shied away. Kim defines radical candor as:

Radical Candor™ is the ability to Challenge Directly and show you Care Personally at the same time.

Both of these sides are crucial. If you just challenge directly but don’t care about the person then you come across as an asshole. If you care but aren’t prepared to offer any guidance then you’re not helping that person. To emphasise this point she designed the following diagram:

She explains the other three quadrants (taken from radicalcandor.com):

Obnoxious Aggression™ is what happens when you challenge but don’t care. It’s praise that doesn’t feel sincere or criticism that isn’t delivered kindly.

Ruinous Empathy™ is what happens when you care but don’t challenge. It’s praise that isn’t specific enough to help the person understand what was good or criticism that is sugarcoated and unclear.

Manipulative Insincerity™ is what happens when you neither care nor challenge. It’s praise that is non-specific and insincere or criticism that is neither clear nor kind.

What does this have to do with code review?

The book is written for people in leadership positions but the lessons are universal. Fundamentally it’s about helping those around you be as successful as they can be.

As developers we rarely exist in a silo so communication is one of the most important tools we have. I’ve seen a lack of empathy for someone else’s opinion cause serious rifts within a team and this happens more than ever during code review, usually as comments on Pull Requests. So let’s look at how we can apply Radical Candor to code review.

Obnoxious Aggression

These are the type of PR comments that stick in our minds, go viral on twitter, and are typically deemed as being from ‘assholes’. A real one that sticks in my mind was a single comment on a large pull request:

Lodash templates blow.

It’s crass, it’s unconstructive. It was followed by the commenter rewriting the PR (without asking) to use a different templating language. It was also followed by the author of the original PR looking for a new job.

Manipulative Insincerity

This can show up in lengthy PRs where someone only adds a single LGTM comment. Unless it’s been discussed offline, it’s likely that the reviewer simply doesn’t care enough to invest their time in thoroughly reviewing the code. In this case their approval is both hollow and disrespectful. If you genuinely do come across a long PR with no faults whatsoever, take the opportunity to offer some more constructive, positive feedback.

In its worst form this can also be represented by a lack of a comment. The reviewer sees something they know is risky but they keep quiet, perhaps thinking that the author will learn their lesson when it causes them problems later on.

Ruinous Empathy

You may be hitting this area if you try to sugarcoat some negative feedback for fear of hurting the feelings of the author. Pay attention to your use of “could”, “maybe”, “if you like”, “up to you”, particularly if your real feelings about it are stronger.

This can also be an issue when re-reviewing code: the author has improved their code in response to a number of your comments, but it still doesn’t reach the standard it should. It’s tempting to look for ways to praise their improvements rather than give further direct feedback.

What does the radical candor version look like?

Give positive feedback

Take time to acknowledge the good parts and comment if you learned something. This is something we often overlook because we become trained to simply spot defects.

Take time to guide the person, not just the code

This is about explaining the “why”. What was it you experienced/read/were told that gave you a different perspective? Share those resources.

Use facts/experience rather than opinions

“I don’t like this pattern” vs “This pattern actually caused us issues in a previous project because of X”.

Use non-threatening language

Some people might be ok receiving feedback that borders on obnoxious aggression but they are likely to be in the minority. It can also be intimidating for others who see those comments.

Understand your own personality and act accordingly

My natural tendency is to be over-polite and hold back if I don’t know the author that well. Because of that, if I doubt myself I typically err on the side of saying something. If your personality is the opposite, you might want to consider erring on the side of holding back.

Removing legacy globals with ES6 Proxies

You actually can get rid of your legacy window objects

I found a nice pattern today for getting rid of those global configuration variables that you’re pretty sure aren’t used anymore but you’re a bit too scared to delete. You know the ones, they look like this:

/* DO NOT CHANGE! WILL BREAK SOMETHING SOMEWHERE. TRUST ME. THX. */

window.GLOBAL_CONFIG = {
  env: 'dev',
  ...
}

They’re the cockroaches of large sites: they outlast developers, framework apocalypses, full rewrites. You know that something outside of your code base relies on them and it’s near impossible to figure out what. It’s easier to just leave them where they are and move on with your life.

Well, ES6 proxies actually make it a whole lot easier to find out which properties aren’t being used! Proxies allow you to put logic between someone trying to access a property and them actually receiving it. Here’s some actual code:

(function() {

  /* First we'll rename our `GLOBAL_CONFIG` object and make it private */
  var _config = {
    env: 'dev'
  };

  /* If we don't support proxies let's just give them what they want! */
  if (!('Proxy' in window)) {
    window.GLOBAL_CONFIG = _config;
    return;
  }

  /*
   * Alright, now we make a Proxy object.
   *
   * get is a function that will be called every time we access
   * a property.
   *
   * At this point, all we're going to do is return the original value.
   */
  var myProxy = {
    get: function(target, name) {
      return _config[name];
    }
  };

  /* Finally, let's assign it back so there's no difference for the consuming code. */
  window.GLOBAL_CONFIG = new Proxy({}, myProxy);

}());

And we’re done! We’ve written some code which does absolutely nothing!

var x = window.GLOBAL_CONFIG.env;
console.log(x);
// log: "dev"

This is usually where I’d stop but in fact it gets a bit more fun when you add some logic to the myProxy object. For example we could log out which properties have been called:

var myProxy = {
  get: function(target, name) {
    console.log(`Someone tried to access GLOBAL_CONFIG.${name}!`);
    return _config[name];
  }
};
var x = window.GLOBAL_CONFIG.env;
// log: Someone tried to access GLOBAL_CONFIG.env!

console.log(x);
// log: "dev"

Reloading the page might give you some idea of who is accessing the object but, given that the calling code is probably outside of your own code base, it’s only going to get you so far.

Instead, let’s send that data somewhere! Your favourite analytics service will probably do the trick.

var myProxy = {
  get: function(target, name) {
    // Assume this method makes a http request somewhere
    track('global_config', name);

    return _config[name];
  }
};

Now just deploy it for a bit and let your users tell you which properties are still being accessed! You might end up with some graphs like this:

Charts showing that a property is never accessed

And now we can delete the base_url property, safe in the knowledge that no one is using it.

Destructuring, rest properties and object shorthand

How you can use these features to write more maintainable code

Destructuring and rest/spread parameters for Arrays are part of the ES6 specification. Support for their use with Objects is, at the time of writing, a stage 2 proposal for future inclusion. Of course, you can use it today via a transpiler like Babel.

Object shorthand is already part of the ES6 specification, and by combining these three features you can start to use patterns which lead to more reliable, less error-prone code.

First, let’s dig in to how destructuring objects looks. We’ll take a simple config object and use destructuring to extract some values.

let config = {
    env: 'production',
    user: { name: 'ian' }
};

let { env } = config;

console.log(env); // 'production'

console.log(config); // { env: 'production', user: { name: 'ian' } }

The equivalent in ES5 would be:

var config = {
    env: 'production',
    user: { name: 'ian' }
};

var env = config.env;

Note that the original config object never mutates. The benefits of immutability are well documented and, whilst this isn’t strictly immutable, starting to write in a way which maintains the original values allows you to reason about the code more easily.

We can also go one step further and destructure two levels deep:

let { env, user: { name } } = config;

console.log(name); // 'ian'

console.log(user); // err: user is not defined

Now let’s introduce rest properties:

let { env, ...newConfig } = config;

console.log(env); // 'production'

console.log(newConfig); // { user: { name: 'ian' } }

console.log(config); // { env: 'production', user: { name: 'ian' } }

Using those three dots creates a new object which represents everything that remains in the config after you have taken out the named variables.

Note that this will still create an empty object so you can rely on object methods working without knowing what the data might be:

let { env, user: { name }, ...newConfig } = config;

console.log(newConfig); // {}

These are solid primitives which can be built up into useful patterns. One way in which they can help immediately is by removing connascence. Connascence describes a relationship between two components where a change in one would require a change in the other to maintain functionality. A way in which this has often shown up in my code is with argument ordering in functions, particularly functions with high arity.

Let’s take a typical analytics function:

function trackAnalytics(label, category, dimension, username, email) {
    window.track(label, category, dimension, username, email);
}

trackAnalytics('login', 'user', 'app1', 'ian', 'test@test.com');

Assuming that these functions are in different files, aside from the email address it’s pretty hard to tell from the call side what each parameter relates to. It also breaks if we get the order wrong.

trackAnalytics('user', 'login', 'app1', 'ian', 'test@test.com'); // Tracking is broken

A way in which this is typically resolved is by switching to passing a single object as a parameter and naming the values within it. Now you no longer need to care about understanding their role or the order in which they’re included.

trackAnalytics({
    label: 'login',
    category: 'user',
    dimension: 'app1',
    username: 'ian',
    email: 'test@test.com'
});

Which means you can satisfy your OCD by ordering them alphabetically or in pyramid style.

trackAnalytics({
    label: 'login',
    username: 'ian',
    category: 'user',
    dimension: 'app1',
    email: 'test@test.com'
});

That’s better. So passing this object removes the connascence and improves the call side but it has suddenly got worse on the function side:

function trackAnalytics(data) {
    window.track(data.label, data.category, data.dimension, data.username, data.email);
}

We no longer know what’s inside data, and in a more complex function we’d probably have to resort to documenting the arguments in a JSDoc fashion. That can be pretty useful anyway, but we can remove the need for it to some extent by using destructuring (note the braces within the argument list).

function trackAnalytics({ label, category, dimension, username, email }) {
    window.track(label, category, dimension, username, email);
}

trackAnalytics({
    label: 'login',
    category: 'user',
    dimension: 'app1',
    username: 'ian',
    email: 'test@test.com'
});

Now the order of the arguments no longer matters and we understand what the values represent on both sides of the function contract.

Often we may already have these values wrapped up in variables before we send them to the function. If that is the case we can take advantage of another feature: object shorthand. Object shorthand allows you to replace a key/value pair with a single key when the variable name matches the key. For example:

// Assume these would already exist
let label = 'login';
let category = 'user';
let dimension = 'app1';
let username = 'ian';
let email = 'test@test.com';

trackAnalytics({
    label,
    category,
    dimension,
    username,
    email
});

Which means we can go back to a simpler-looking call, without any concern about order.

function trackAnalytics({ label, category, dimension, username, email }) {
    window.track(label, category, dimension, username, email);
}

trackAnalytics({ category, label, dimension, username, email }); // ✔
trackAnalytics({ category, username, email, label, dimension }); // ✔

At this point we’ve already made the code much more robust and resistant to errors whilst keeping the code very simple and readable.

Another trick we have with these three features is to destructure within the arguments themselves. In our example, we have the username and email in our original config object, so let’s take them from there.

let config = {
    env: 'production',
    user: { name: 'ian', email: 'test@test.com' }
};

function trackAnalytics(label, category, dimension, { env, user }) {
    if (env !== 'production') { return };

    window.track(label, category, dimension, user.name, user.email);
}

trackAnalytics('login', 'user', 'app1', config);

We can even take it one step further and remove the user object:

function trackAnalytics(label, category, dimension, { env, user: { name, email } }) {
    if (env !== 'production') { return };

    window.track(label, category, dimension, name, email);
}

Maybe that’s going a bit far… These are just tools for you to use however you see fit though.

Hopefully this serves as an example of how you can use these new features to write safer, simpler code. There’s a lot of syntactic sugar in the new JS features but coupled together they achieve things which would be significantly harder, or at least more verbose, to write in ES5.

What even is Vanilla JS these days?

Without a framework are we just writing our own framework?

Originally it was non-jQuery, right? Or did it come before that? Anyway the term definitely got popular when people were eschewing jQuery in the quest for lighter pages at the expense of a few browser bugs.

Zero-dependency libraries became a thing, which meant each library had its own tiny abstraction of DOM selection utilities and its own polyfills for Array methods. None of which could be extracted into shared dependencies and cached separately, of course, but hey, they were 10x lighter and 20x faster than jQuery so what was there to worry about?

Then jQuery fell way off the radar due to a surge in browsers becoming evergreen, our eagerness to drop older, painful, browsers, and the proliferation of sites like youmightnotneedjquery.com. With jQuery out of the equation these days vanilla is much more likely to refer to the absence of frameworks like Ember, Angular, React or Backbone, of which only the latter requires jQuery.

In Paul Lewis’ recent article on the performance comparisons of frameworks he highlighted a vanillajs implementation of TodoMVC which was 16kb: significantly smaller than the other frameworks but certainly not tiny. Primarily it’s smaller because it can be focused on this one specific purpose, allowing for greater optimisation but making it somewhat throwaway after the life of the project. And, of course, it still has to reimplement a bunch of the same features that are present in other libs.

What makes this vanilla? Sure, it doesn’t have any dependencies but what makes up that 16kb?

It includes tiny abstractions for querySelectorAll and DOM events, which you’d absolutely expect as developer conveniences. It includes its own implementation of a micro-templating library which focuses only on the todo template but still covers non-trivial HTML escaping.

It registers model.js, controller.js and view.js (it is TodoMVC, after all) but it’s starting to look suspiciously like my-framework.js rather than vanillajs. In fact it’s really looking like a less-tested, less-jQuery snowflake version of Backbone. This isn’t hating on the particular example on the TodoMVC page; it just gets you wondering where the line is drawn between vanillajs and… flavoured JS.

Is it vanillajs if you don’t include a framework but you do include lots of tiny libs as dependencies? Is it vanillajs if it’s written in TypeScript? Is it wise to care about any of this? Is it a worthy goal?

Whilst your own implementation of these features can be smaller and more focused, certainly more performant, is chasing this title going to create a less buggy application? Will it be safer and more secure than a framework which has the benefit of a huge user base and collective intelligence? Are you going to have to reimplement features every time requirements change and could this lack of manoeuvrability end up causing costs to you and your user greater than the extra perf differences?

Anyone I’ve worked with will surely attest that I’m not a fan of debating terminology. It gets in the way of doing actual work and, truthfully, everyone else is better than me at it anyway. Vanillajs is a term that is gathering so much momentum, though, and conflating so many ambiguous combinations, that it either needs to be defined or left to descend into utter meaninglessness. And if it’s the latter we’ll need to go back and update thousands of blog posts and slide decks, so maybe it’s best to just nip it in the bud now.