Around ten years ago, businesses realized that building desktop applications was not a resource-effective approach. You have to create a separate version of the same software for every OS and hire teams to maintain and upgrade each one.
Users must install and update the software manually, which makes it harder to use across devices.
We needed a platform that allows us to build a product once and deliver it to all customers regardless of their platform of choice. The browser fits this description perfectly.
We can build an application once, and it will be available on every machine with a network connection. Updates are a matter of refreshing the page, and the team required to maintain it will be significantly smaller.
But there’s one problem.
We didn’t know how to build web applications.
We had no design patterns, architectures, or browser APIs good enough for large-scale development. We needed an abstraction over the browser. So a wave of tools came out that took existing concepts like MVC and translated them to the front-end.
Just as back-end controllers handled requests, the first front-end frameworks used controllers to manage pages. But when we put those frameworks to use, it turned out that this model was inefficient.
Pages were easy to handle when they were mostly static. But once we added more logic, they became unmanageable.
Performance wasn’t outstanding either because more logic meant more calls to the DOM’s APIs, forcing it to do slow calculations.
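The classic culprit here is layout thrashing: interleaving style writes with layout reads. A minimal browser-only sketch (assuming a page with `.box` elements; the class name is illustrative):

```
// Writing a style invalidates layout; reading a layout property right
// after forces the browser to recalculate it synchronously, on every
// single iteration -- the pattern known as "layout thrashing".
document.querySelectorAll(".box").forEach((box) => {
  box.style.width = "120px";     // write: invalidates layout
  console.log(box.offsetHeight); // read: forces a synchronous reflow
});
```

Batching all reads before all writes avoids the repeated reflows, and this is precisely the kind of bookkeeping the virtual DOM later automated.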
It took us a few more years until the component model and the virtual DOM became the de facto standard.
With them, we set out to write large-scale UI applications. The two front-enders who used to write Bootstrap classes and do responsive design turned into dozens of engineers working on complex features, design systems, and visual interactions.
But knowing how to write code doesn’t mean we know how to write software.
Creating a product requires more than a framework and quick fingers. It requires tooling. We needed tools for testing, enforcing code standards, static typing, and building. When you think about it, nowadays we have a library for each of these.
But they are built and maintained by different people with different visions of how front-end development should be done.
In other words, the JS toolchain is deeply fragmented. The thing I dread the most is having to set up an entire project from scratch and piece together the configuration puzzle to make everything work.
The simplest example of our perils as JavaScript engineers is that ESLint and Prettier, two tools used together all the time, require additional configuration just so they don't conflict with each other.
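The usual fix is the `eslint-config-prettier` package, which disables every ESLint rule that would fight with Prettier's formatting. A minimal `.eslintrc.json` sketch (assuming that package is installed; `"prettier"` must come last so it can override the rules before it):

```json
{
  "extends": ["eslint:recommended", "prettier"]
}
```

It says something about the ecosystem that a third package exists solely to keep two others out of each other's way.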
But tooling fragmentation can go much deeper.
There are inconsistencies all the way down to the different module systems used in the browser and Node: ES modules and CommonJS. Their differences go beyond mere syntax, and getting a project to work with native ES modules is a feat that can take weeks.
Between me, my colleagues, and my friends, we’ve probably burned an entire man-year fighting with the JS toolchain.
Compare this with an ecosystem like Go’s, where the formatter, build tool, and testing framework are built into the language and “just work” without installing external dependencies.
We will never realize our community’s full potential until we stop wrestling with our tools.
When it comes to them, we’re still back in the times of the first front-end frameworks. We have to build software but don’t know how to make consistent tools for it.
New projects are coming out in numbers reminiscent of the framework churn of the notorious JavaScript-fatigue era. SWC and esbuild are trying to replace Webpack, while Bun and Deno are racing to fix the flaws in Node.
But it won’t be enough until we get a tool like Vite or Rome that manages to encompass the whole toolchain and hide it behind a simple API.
Because much like we needed an abstraction over the DOM to work productively, we also need an abstraction over our tools.