You’ve got an initial grasp of the domain, you’ve settled on a tech stack, and now your fingers twitch to start building that complex feature you’ve been turning over in your head. You can already see the tRPC integration and the client-side state weaving together in front of your eyes.
But before we go there, there’s another prerequisite. First, we need to set up the project and make sure we can easily take it to production.
A Quick Aside
This is not just an article full of code examples, it’s a chapter of my upcoming book “The Full-Stack Tao” which I’m writing in public.
Here are all published chapters so far:
- 1. Start with the Domain
- 2. Picking a Tech Stack
- 3. Setting Up the Project
- 4. Clean Architecture in React
- 5. Building a Proper REST API
- 6. How to Style a React Application
Just Enough Accessory Tooling
To work productively on a codebase we need to be able to run it, test it, and deploy it in good condition. And depending on our choice of language and framework, we may not have these tools available out of the box.
The bare minimum we need is a linter, a formatter, a test tool, and a build tool.
Languages like Go have all of these available through a CLI that doesn’t require any external packages or configuration. But in JavaScript, for example, this is a non-trivial piece of work that’s easier to do when there’s little to no code in the repository.
If we leave this step as an afterthought, we’d end up with tech debt to pay off from day one.
Linting is important to avoid errors. Formatting ensures code consistency and readability. Testing (regardless of whether it’s unit testing, integration testing, or both) ensures stability. And building allows you to actually deploy the project.
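In a JavaScript project this usually boils down to a handful of scripts in package.json. The specific tools below - ESLint, Prettier, Vitest, and Vite - are just one plausible combination, not a prescription; what matters is that each concern is a single command:

```json
{
  "scripts": {
    "lint": "eslint .",
    "format": "prettier --write .",
    "format:check": "prettier --check .",
    "test": "vitest run",
    "build": "vite build"
  }
}
```

However you fill these in, having lint, test, and build as one-liners means you, your teammates, and the CI pipeline all run the project the same way.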
Simple Local Development
The overall idea of this whole chapter is to make the path to production as easy as possible. But a deployment starts way earlier than when you commit your code to a branch.
It starts on your own machine.
The harder it is to run a project, the harder it will be to automate building and deploying it. A codebase needs an up-to-date guide on how it’s set up, and ideally that setup should consist of two steps:
- Setting environment variables.
- Running a single command.
Anything beyond these two steps has to be absolutely warranted.
In a company I worked at, we developed the enterprisiest of enterprise software. Setting the project up from start to finish in four hours was considered an absolute miracle. That made people reluctant to deploy or make bigger changes, which, in turn, led to slower releases, long QA backlogs, and very slow iterations.
Additionally, you want to ensure that your program not only starts but starts in good condition. I can’t count how many times I’ve run a React application only to see broken pages because an API key was missing.
Use a schema validation library to check if you have all the environment variables you need, and either prevent the application from starting or make sure it won’t be in a broken state because of that missing data.
This is true for any application - SPAs, REST APIs, and anything in between.
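As a sketch of what that check can look like - here using zod, though any schema validation library works the same way, and the variable names are made up for the example:

```ts
// env.ts - validate environment variables once, at startup
import { z } from "zod";

const envSchema = z.object({
  // Hypothetical variables for this example
  DATABASE_URL: z.string().url(),
  PAYMENT_API_KEY: z.string().min(1),
  NODE_ENV: z.enum(["development", "test", "production"]).default("development"),
});

// parse() throws with a readable list of what's missing or malformed,
// so the application refuses to start in a broken state.
export const env = envSchema.parse(process.env);
```

Importing `env` everywhere instead of reaching for `process.env` directly turns a missing API key into a loud startup error instead of a silently broken page.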
Prepare a “Live” Environment
If the product you’re working on has to live outside your own machine, you should prepare a live environment the moment you set up the repository. Even if it’s only an experiment, as long as you need some production-like place to deploy it, it’s better to create that environment sooner rather than later.
A big problem for greenfield projects is that they can get built around the specifics of a local environment - environment variables, build process quirks, and runtime versions. By syncing your local setup (regardless of whether you’re using containers or not) with “production” you ensure you won’t have to tackle obscure problems down the road.
Imagine that your provider doesn’t support the latest Node version and you used a newer standard library function in your work. You will have to refactor. Or that you’re stuck with an older version of a library that can no longer run in newer environments. You will have to update, leading to even more refactoring, or you may have to reconsider your provider choice.
Not to mention the chance of weird bugs if your code is executing in two different environments. The sooner you get your local development’s state synced with prod, the better.
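One low-effort way to keep runtimes aligned is to pin the Node version in the repository itself, for example via the engines field in package.json (the range below is only an example), together with a matching .nvmrc file. By default npm only warns on a mismatch, but many hosting providers and version managers respect it:

```json
{
  "engines": {
    "node": ">=20 <21"
  }
}
```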
Make Deployments Easy
Being able to quickly iterate on features and release bug fixes is important for the quality of a software product. And this can only happen with a streamlined deployment process.
The harder it is, the less frequently you’ll want to go through it. Not only will it take a lot of time, but it also comes with a lot of emotional resistance. A complicated manual deployment becomes one of those dreaded time-consuming tasks you keep postponing.
Deploying frequently means you will be pushing smaller changes to production. There will be a smaller chance for problems, and even if there are any, rollbacks are going to be much easier. There will be less QA to do since you will be testing less functionality, and your PRs will get reviewed a lot faster if they don’t change 30 files.
You will be able to focus more on developing features and improving the quality of your product.
People arguing against frequent deployments usually bring up stability. You can’t put a half-baked feature out for users to see, they say, and sometimes you just have to change 30 files in one go to get it into working condition.
But remember that a feature being in production doesn’t mean that it has to be available to your users.
Most of the software products you use have implemented feature flags for exactly that purpose. They hide or turn off parts of the application from users, but the code is still there in the codebase. And even though feature flagging can be quite an extensive project on its own, a simple environment variable is enough to check whether a feature should be enabled or not.
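A minimal sketch of the environment-variable version - FEATURE_NEW_CHECKOUT is a hypothetical flag name, set at build or deploy time:

```ts
// featureFlags.ts - the simplest possible feature flag: an environment variable
export const featureFlags = {
  newCheckout: process.env.FEATURE_NEW_CHECKOUT === "true",
};

// Somewhere in the application:
// if (featureFlags.newCheckout) {
//   renderNewCheckout();   // hypothetical new code path, shipped but hidden
// } else {
//   renderOldCheckout();   // the current behaviour stays the default
// }
```

The feature lives in production from the first commit; flipping the variable (and later deleting the flag) is all it takes to release it.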
Ideally, you want a push to the repository’s main branch to trigger a deployment that’s either fully or at least semi-automated.
Reduce the Number of Environments
Our goal in this chapter is to simplify the deployment process as much as possible. A big factor in how fast we can ship things out into production is the number of environments we have.
Each environment requires some sort of maintenance - production is obvious, but staging has to represent it as closely as possible in order to be useful, and development needs to be kept in a stable state.
But that’s a lot of upkeep work.
My philosophy is to reduce the number of environments to the bare minimum that gives us the safety we need. In my ideal world, there would be only two - local and production. We should have a very good reason to add anything beyond that.
Short-lived environments to test a pull request are fine. But long-lived environments need to provide a safeguard or solve a problem.
Otherwise they add extra steps for no benefit. Companies working in critical fields should definitely have multiple environments. Organizations running complex systems that can’t possibly run on a single machine need a place to test their services in real-life conditions.
But small businesses get little to no benefit from the longer path to production.
Favor Multirepo Approaches
Monorepos are great when we want to share functionality between multiple related components. We can still deploy them separately, but letting them coexist together removes the need for shared libraries and makes sweeping changes easier.
But they will stretch your CI/CD skills to their limits.
You will have to deal with tagging and additional monorepo tooling that adds extra complexity so you can avoid duplicating logic.
This leads to more difficult deployments and that’s not a trade-off that I make lightly. Simpler, stable deployments provide more value to a team than the comfort of sharing types between projects.
Additionally, a multirepo approach forces you to think of your product in terms of separate services and components, which is closer to how they behave in production. This, in turn, makes you think more carefully about backwards compatibility, and you’ll end up making changes in a more natural way.
Is This All Worth It?
Three chapters in, we haven’t written a single line of code, and we’re caught in what feels like an endless setup phase where more and more things need to happen. But remember that software engineering is about building products, not writing code in an IDE.
Google says that programming is the act of making a computer execute a task you want - writing an algorithm is an example of programming. Software engineering is maintaining a programming solution over a long span of time.
We are in the software engineering business.
We will have to do all of the things mentioned above at some point. But if we start developing without these extra steps, we will only add more work to the refactoring backlog for the future. These things pile up and contribute to the bad condition of the project.
And codebases degrade because of many small mistakes, not a single big one.
To avoid this, we can start by asking ourselves - “How can I make sure this gets to production as quickly as possible?” And everything we’ve discussed applies to existing projects too.