Understanding the costs of our abstractions

Since the beginning of February, there has been a big conversation about client-side JavaScript in the web development community.

There has been a salvo of blog posts discussing the performance implications of single-page applications:

These posts and the resources linked within are worth reading. This is an interesting conversation, and I think the ideas extend far beyond front-end web development. I spent a lot of time listening to podcasts in the car this past week, and a common thread has run through many of the discussions: abstractions have (sometimes hidden) costs.

How can we do a better job of choosing the right abstractions as an industry?

I could try to write a bunch about incentive structures, scope creep and rising complexity. But I think most of what I have to say boils down to this: we should aspire to understand the benefits and costs of the abstractions that we build on top of.

Understanding abstractions

Most software projects are built atop a perilously tall stack of abstractions.

Joel Spolsky wrote a post entitled The Law of Leaky Abstractions in 2002 claiming:

All non-trivial abstractions, to some degree, are leaky.

To build context for this claim, Joel uses TCP as an example, noting that under certain circumstances the unreliable nature of the network will leak through the reliability guarantees that TCP provides. Unfortunately, Joel omits an important detail: TCP is not free; it comes with a lot of overhead.

TCP provides a variety of features, including ordering: to the code that uses TCP, data appears to arrive in the order it was sent. Many applications don't require all of the features that TCP provides, yet many of those applications still use TCP (or even HTTP) under the hood. Check out these resources for more information about the tradeoffs that TCP makes in a couple of different applications:
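As a toy illustration of that overhead, here is a back-of-envelope comparison of the minimum per-packet header cost of TCP versus UDP over IPv4 (20-byte IPv4 header, 20-byte minimum TCP header without options, 8-byte UDP header; the 50-byte payload is an arbitrary assumption, meant to stand in for a small real-time message):

```python
# Back-of-envelope comparison of per-packet header overhead for TCP vs UDP.
# Header sizes are the protocol minimums (no IP options, no TCP options).
IPV4_HEADER = 20  # bytes
TCP_HEADER = 20   # bytes (minimum, without options)
UDP_HEADER = 8    # bytes

def overhead_fraction(payload_bytes: int, l4_header: int) -> float:
    """Fraction of each IPv4 packet spent on headers rather than payload."""
    total = IPV4_HEADER + l4_header + payload_bytes
    return (IPV4_HEADER + l4_header) / total

# For a hypothetical 50-byte payload (e.g. a small game-state update):
tcp = overhead_fraction(50, TCP_HEADER)  # 40/90, roughly 44%
udp = overhead_fraction(50, UDP_HEADER)  # 28/78, roughly 36%
print(f"TCP: {tcp:.0%} header overhead, UDP: {udp:.0%} header overhead")
```

Header bytes are only the most visible cost; for latency-sensitive applications, retransmission and head-of-line blocking from the ordering guarantee usually matter far more.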


We can apply the same thought process to React and similar client-side JavaScript abstractions. My oversimplified perspective: React aims to offer a declarative alternative to the imperative web APIs used to build real-time interactivity.

The web platform provides imperative APIs, like Element.append() or mutation of innerHTML, to directly modify the contents of a document. React instead lets programmers specify the desired state of a document fragment and promises to make the necessary adjustments to the document. In many ways, React with JSX imitates the style of many popular backend web frameworks (look at Rails, Django, or almost any PHP application).

There are several benefits to the declarative interface, but I think the most important is composability: since React components behave like pure functions, they can be reused in myriad ways. Those benefits are not free, though; React can balloon bundle sizes and increase the time it takes for an application to become interactive.
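The shape of that declarative abstraction can be sketched with a toy reconciler: the caller describes the desired children, and the reconciler computes the imperative mutations needed to get there. This Python analogy is my own and is not how React actually works internally (React diffs a virtual DOM tree), but it captures the idea:

```python
# Toy analogy for declarative rendering (not real React): the caller hands a
# reconciler the *desired* list of items, and the reconciler figures out which
# imperative mutations (append/remove) are needed to reach that state.

def reconcile(current: list[str], desired: list[str]) -> list[tuple[str, str]]:
    """Return the append/remove operations, as (op, item) pairs, that take
    `current` to `desired` (assumes items are unique)."""
    ops = []
    for item in current:
        if item not in desired:
            ops.append(("remove", item))
    for item in desired:
        if item not in current:
            ops.append(("append", item))
    return ops

def render(dom: list[str], desired: list[str]) -> list[str]:
    """Apply the computed operations, standing in for real document mutation."""
    for op, item in reconcile(dom, desired):
        if op == "append":
            dom.append(item)
        else:
            dom.remove(item)
    return dom

dom = ["<li>home</li>"]
render(dom, ["<li>home</li>", "<li>about</li>"])
print(dom)  # ['<li>home</li>', '<li>about</li>']
```

The caller only ever states what the list should look like; all of the imperative bookkeeping lives behind the abstraction, and that bookkeeping is exactly where the runtime cost hides.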

If your application does not need real-time interactivity, if the benefits of declarative components are low, or if your application does not leverage many of the features that React provides, you should probably seek a simpler alternative.

Eric Bailey describes their experience using a mental health portal that has an endless spinner due to a deadlock in client-side JavaScript. Can most users of a mental health portal wait for the latency of an HTTP request when they interact with the page? Probably. Maybe there is a chat feature in the portal where a client-side component library is valuable; but even still, it doesn't seem like a use case that necessitates React.

Python and high-level scripting languages

A few technologists have derided the overuse of high-level scripting languages like Python and Ruby. Many of these folks come from the game development space, where performance is paramount. Two of the loudest voices in the room are Jonathan Blow (see Preventing the Collapse of Civilization) and Casey Muratori (see How fast should an unoptimized terminal run?).

Casey recently started a "Performance-Aware Programming" series. So far, Casey has published a prologue that compares the performance of naively summing an array of integers in Python with a variety of implementations in C, ultimately getting close to optimal through the use of SIMD instructions and multi-threading. Casey achieves a staggering ~8,000x speedup over the naive Python implementation. He wraps up the prologue by demonstrating and benchmarking alternative summation implementations in Python.

During these videos, Casey describes the interpreter overhead of Python as "waste". I think this discounts the value that high-level interpreted languages provide. As a self-described static-typing fanatic, I still find Python significantly easier to use for rapid prototyping than languages like Rust, C++, or C. For getting up and running, Python and similar languages let you focus more on the problem you're trying to solve and less on the mechanics of the language you're using to solve it. Not to mention, Python has a huge batteries-included standard library and an enormous ecosystem of third-party libraries to lean on.

I believe that scripting languages are a good choice as long as you acknowledge the performance and maintainability tradeoffs that you are making. Often, this explicit acknowledgement or context is missing.

Leaving a paper trail

When we make decisions about the tools and technologies that we use to build applications in a professional setting, we should aim to leave a paper trail. Even if the rationale for using a particular technology is "we needed to get started and this was the most popular option," that is valuable context!

When we have that context, we can make informed decisions about how to move forward, whether that means forging ahead with the existing choices or making adjustments to set ourselves up for the future.
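One lightweight way to leave that trail is a short decision record committed alongside the code. Here is a minimal sketch; the format is loosely based on the Architecture Decision Record pattern, and the content is invented purely for illustration:

```markdown
# ADR 0001: Use React for the dashboard UI

Status: Accepted

## Context
We needed to get started quickly, and React was the option the team knew best.

## Decision
Build the dashboard as a React single-page application.

## Consequences
- Larger JavaScript bundle; time-to-interactive needs monitoring.
- Revisit this choice if the dashboard stays mostly read-only.
```

Even a few lines like these turn "why are we using this?" from archaeology into a quick lookup.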

Of course, outside of a professional setting or when writing software for yourself, use whatever you want! If you want to experiment with React, experiment with React. If you want to write C++ in the comfort of your own home, do it! But when writing software professionally or for a larger audience, it's significantly more important to think about the implications of the decisions we make. When that consideration is missing, we're doing a disservice to our colleagues and our users.

Weeks 2 and 3 at RC

It has been a great three weeks at RC so far. I'm not especially proud that I stopped writing for ~2 weeks, but it has been quite a whirlwind.

The most concrete thing that I've been working on is a project I'm calling Brainlove, which is basically just random exploration of Brainfuck tooling. It's been really cool to see how mature the compile-to-wasm ecosystem has become. I still don't think it would be my first choice in a professional setting, but the developer experience is much better than I expected. Deploying the debugger to Netlify as a static bundle was a breeze.

Beyond that, I've probably been spreading myself a little bit too thin in terms of what I'm working on:

  • Paired a lot with Andrew. Learning a lot about effective communication in these sessions. I've also really enjoyed reading Andrew's website and blog.
    • We reached a reasonable conclusion on the 2048 strategy project. We attempted a low-level implementation of board operations that turned out to be slower than the naive list-based implementation. A really interesting exercise.
    • Wrote a "compiler" for compiling a very small expression language to Brainfuck. This was a mind bender. We both avoided seeking out external resources so it was a lot of fun to solve these problems from scratch.
  • Paired on a bunch of other things (ranging from Leetcode to an Octave LSP).
    • I think pairing has been the biggest benefit of being at RC so far. There are so many things I could explore, but I really want to make the most of being at RC right now, so I hope to continue emphasizing pairing.
  • I put together a small Python script for generating LICENSE files. There are already a bunch of implementations of this, but I thought it would be a fun way to learn how to package Python libraries and scripts. There are a lot of outdated resources on Python packaging, so this was surprisingly painful.
  • I cobbled together a URL shortener that runs on fly.io but is only accessible within my Tailscale network. I reused an existing Flask URL shortener that I had worked on with Rana (my wife) a while ago. Once I had a reasonable understanding of how everything fit together, this wasn't too bad. But my mental model of networking could probably use some work.
    • repo for the router code (I didn't write this)
    • repo for the URL shortener
    • post discussing go links on the Tailscale blog
  • Thought a lot about how to set up an election website that enables users to choose the voting system they use.
  • Made some progress on Crafting Interpreters in C++.
    • repo
    • Most of garbage collection is in place
    • Spent a good amount of time comparing performance with clox across both x86-64 and Apple silicon
  • Read a few chapters of DDIA
    • There's a small reading group for the book at RC where we've had some great discussion, so far. Planning to coordinate reading some papers as well!
  • Finished the available Protohackers problems in Python. These are a blast. I've really been enjoying Amos' series covering Advent of Code in Rust. I think it would be fun to put together a similar style series for Protohackers describing my approach to building out solutions (first in Python, then translating to Rust).

Not a ton of structure to this post, but I want to get back into the swing of writing regularly, so I'm lowering the bar a little bit. If you've made it this far, thanks for reading!