Extracting Microservices from a Monolith

October 27, 2021 7 minute read

When I first started dabbling in distributed systems, I realized that some things I could only grasp theoretically. I understood how to outline the boundaries between microservices and the fundamental principles behind their development, deployment, and scalability.

I could read about consistency and high availability for days, going through various case studies and examples. But as someone who had worked predominantly on monolithic applications until then, one question remained.

How do I extract a service from a monolithic application?

Microservices are a high-level architectural pattern, but the specifics echo at the code level of the application. A tightly coupled monolith where logic is entwined and boundaries are non-existent won’t be easy to refactor to microservices.

Software architecture is more than deployment and structural patterns. Your architecture will dictate the way you think about design. Refactoring to microservices without a complete rewrite is impossible unless the fundamental principles are already present in the implementation.

The journey to a distributed system begins with small code changes. More specifically, establishing boundaries, one of the core concepts in microservices. Each service should “own” a part of the domain. It should have its own storage and communicate with other services via the endpoints (or another form of contract) they provide.

Extracting Tightly Coupled Logic to a Microservice

Extracting a microservice from a monolith is possible only if the logic you want to pull out has strictly defined boundaries. The typical example in this scenario is the logic for orders and deliveries in an e-commerce application. But let’s explore a scenario in which we want to extract logic deeply ingrained in multiple places in the application, like user preferences and analytics.

Often when a user interacts with a product, you will have to store and alter data about them. You will need to store information about the last viewed items in an online shop. You’ll update their delivery address when they make a new order or keep track of the last locations they’ve searched for on a travel website. The list is long.

The idea is that you will have a lot of places in your application where you will want to touch the user object. This functionality is often sprinkled all throughout the application, and you can find calls to the database in multiple controllers and modules.

The Tightly Coupled Monolith

On the code level, this looks something like the following example (it’s intentionally simplistic). We get search results based on some input and update the user’s last searched items in a fire-and-forget fashion.

import { Request, Response } from 'express'

// DynamoDBClient is a simplified stand-in for the real database client
const searchHandler = async (req: Request, res: Response) => {
    const { term } = req.query

    // Fire-and-forget: we don't wait for the user's recent searches to be updated
    DynamoDBClient.put({
        TableName: 'users',
        Item: {
            ...
            RecentlySearchedTerms: [term]
        }
    })

    const results = await DynamoDBClient.query({
        ...
    })

    res.json(results)
}

We have identified that this account-related logic is a good candidate for our first microservice. But before we even think about infrastructure, we need to decouple it in the monolith. We want to put all this functionality behind an abstraction.

By grouping it in a module we hide all the details around how accounts are updated (storage type, database info, etc.). We establish a contract and have control over the API that the account module exposes. This is purely a code-level change in our preparation.

Extracting a new module

On the code level, this means wrapping the functionality in an object and calling its functions.

import accountsService from '@modules/accounts/service'

const searchHandler = async (req: Request, res: Response) => {
    const { term } = req.query

    // Fire-and-forget call into the accounts module
    accountsService.updateRecentlySearched(term)

    const results = await DynamoDBClient.query({ ... })

    res.json(results)
}

// modules/accounts/service.ts
const accountsService = {
    updateRecentlySearched: (term) => {
        // All the storage details now live behind the module's API
        return DynamoDBClient.put({
            TableName: 'users',
            Item: {
                ...
                RecentlySearchedTerms: [term]
            }
        })
    }
}

export default accountsService

We replace the inlined logic with method calls. Now we have to split up the storage, which warrants an article or two on its own, so we’ll only touch on this topic at a high level.

One of my most pressing questions when I first started dealing with microservices was what happens with foreign keys and relations once we split the data across different databases. Sometimes there won’t be any relations to consider, but when there are, we have two options: duplicate the data or rely on globally unique identifiers.

The way we handle duplication depends on the storage we use. A NoSQL database means we need to store nested objects and update them whenever the data changes. In an SQL database, we can duplicate entire tables to keep the relations in check and manage the syncing ourselves.
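To make the NoSQL flavour of this concrete, here’s a rough sketch that reuses the simplified DynamoDBClient from the earlier snippets. The orders table, the Address shape, and the sync function are all made up for illustration.

// orders/service.ts (hypothetical) - each order keeps a denormalized copy
// of the delivery address instead of a foreign key into the accounts database
type Address = { street: string; city: string }

const createOrder = (accountId: string, deliveryAddress: Address) =>
    DynamoDBClient.put({
        TableName: 'orders',
        Item: { AccountId: accountId, DeliveryAddress: deliveryAddress }
    })

// Whenever the accounts service reports an address change, we have to update
// every duplicated copy ourselves to keep the two stores in sync
const syncDeliveryAddress = async (accountId: string, newAddress: Address) => {
    const orders = await DynamoDBClient.query({ TableName: 'orders' })
    // ...update each of the returned orders with newAddress
}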

But bear in mind that the more duplication you have to manage, the more complexity you will have to deal with. Duplicating too much information may mean that a service’s store has grown beyond its responsibilities.

Alternatively, we could use globally unique identifiers like UUIDs and rely on them to collect related data across services. This is my preferred approach, and the chance of generating a duplicate identifier is slim enough not to worry about.
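As a small sketch of what that looks like in practice (again using the simplified DynamoDBClient, with made-up tables and fields), the accounts service mints the UUID, and every other service stores only that identifier.

import { randomUUID } from 'crypto'

// In the accounts service: the identifier is minted once, when the account is created
const accountId = randomUUID()

DynamoDBClient.put({
    TableName: 'accounts',
    Item: { AccountId: accountId, Email: 'user@example.com' }
})

// In the orders service: we store only that identifier instead of a relational
// foreign key and ask the accounts service for details when we need them
DynamoDBClient.put({
    TableName: 'orders',
    Item: { OrderId: randomUUID(), AccountId: accountId, Total: 42 }
})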

Extracting the data store

You might be wondering if this doesn’t raise the complexity of our system too much. Until a moment ago, we just had some tightly coupled logic. A bit messy, maybe, but nothing worth reading whitepapers about. Now suddenly, we have to deal with data consistency and all the problems around that.

Imagine a failure that puts different data stores out of sync. The challenges of distribution are not trivial, so make sure that dealing with them is more productive than dealing with the problems of a monolith.

To gain independence in work, deployment, and scale, we have to sacrifice something. It’s all a trade-off, as the saying goes. That’s why I tell people to pick the problems they can deal with instead of the benefits they want to have.

Now we have a nice solid boundary that doesn’t leak any details. The next step is implementing this logic in a separate service and deploying it. Together with the logic, we need to add a transport layer - whether that’s HTTP or gRPC or something else. We need to add health checks and logging. This service will connect to the storage that we created in the previous step.

It’s important not to forget some form of monitoring. Up until now, we had a single monolith and we probably had ways to find out if it went down. Now that we have another service, we mustn’t allow it to crash silently.
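Here’s a bare-bones sketch of what such a service could look like, assuming Express for the HTTP transport. The route, port, and table are placeholders, and the storage call reuses the simplified client from the earlier snippets.

// accounts-service/index.ts (hypothetical, minimal sketch)
import express from 'express'

const app = express()
app.use(express.json())

// The endpoint the monolith's accounts module will call
app.post('/recent-searches', async (req, res) => {
    const { term } = req.body

    // The service now owns its storage
    await DynamoDBClient.put({
        TableName: 'accounts',
        Item: { RecentlySearchedTerms: [term] }
    })

    res.status(204).end()
})

// Health check so monitoring and orchestration can tell whether the service is up
app.get('/health', (_req, res) => res.json({ status: 'ok' }))

app.listen(3001, () => console.log('accounts service listening on port 3001'))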

Add the new service

Once the new microservice is running, we can update the logic in the module to make HTTP calls to our new service instead of executing the logic itself. Why not make the HTTP calls directly in the controller and get rid of this module, you might ask? Because there still might be specifics that we want to abstract from the rest of the application.

// modules/accounts/service.ts
import axios from 'axios'

// ACCOUNTS_SERVICE_URL comes from the application's configuration
const createAccountsService = () => {
    const client = axios.create({
        baseURL: ACCOUNTS_SERVICE_URL,
        timeout: 1000,
        headers: { ... }
    })

    return {
        updateRecentlySearched: (term) => {
            return client.post('/recent-searches', { term })
                .catch(err => {
                    // Handle the error
                    // e.g. log it using a structured logger
                })
        }
    }
}

export default createAccountsService()

We may need to construct headers, set up authentication, and unpack the returned response into a friendly format. Network requests always require error handling, and that handling doesn’t have to spill into the main application flow, especially if the call is of the “fire-and-forget” type.

This opens the door for a lot of interesting design decisions. We can just log the error and forget about it. We can choose to return an empty response object instead of null. We can also log something and throw a custom application error to be handled by the caller. But in all cases, we will encapsulate some complexity.
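If we go with the custom application error option, the module could look something like this. The AccountsServiceError class is made up for illustration, and the method body would replace the catch in the createAccountsService example above.

// modules/accounts/errors.ts (hypothetical)
export class AccountsServiceError extends Error {
    constructor(message: string) {
        super(message)
        this.name = 'AccountsServiceError'
    }
}

// modules/accounts/service.ts
updateRecentlySearched: (term) => {
    return client.post('/recent-searches', { term })
        .catch(err => {
            // Log with whatever structured logger the application uses
            console.error('Failed to update recent searches', err)
            // Callers deal with a domain error, not with axios or HTTP details
            throw new AccountsServiceError('Could not update recent searches')
        })
}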

Then if the accounts service makes a breaking change to one of its endpoints, we only need to update a single place.

Extracting Already Established Modules

In some other cases, you may have multiple domain responsibilities in the same monolith. They might even be owned by different developers or teams. Still, they all live in the same codebase and people keep stepping on each other’s toes.

The principle to extract such a module is the same - start with boundaries. Make sure the module you want to extract contains everything it needs to work - utility functions, business logic and other shared entities. Duplicate those that are shared between modules. Then decouple its storage and deploy it as a separate service.

If we want the new service to be called directly (not from the monolith like the previous example), then we need some sort of an API Gateway in front of both services. It can split traffic between them based on what the client is trying to access.

Extracting an established module

We don’t want to expose the complexity of our architecture to the outside world. The fact that we’re running microservices and we’ve split out a part of the functionality is ours to deal with. To everyone else, there should be a single entry-point they should call. Requests should then be routed to the right service to handle them.

An API Gateway also gives us centralized load balancing, rate limiting, authorization, and other benefits. As you may have sensed, we’ve once again stepped into high-complexity territory.
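To make the routing concrete, here’s a minimal sketch of a gateway built with Express and http-proxy-middleware. The paths and service URLs are placeholders.

// gateway/index.ts (hypothetical sketch)
import express from 'express'
import { createProxyMiddleware } from 'http-proxy-middleware'

const app = express()

// Requests for account functionality go to the extracted service
app.use('/accounts', createProxyMiddleware({
    target: 'http://accounts-service:3001',
    changeOrigin: true
}))

// Everything else is still handled by the monolith
app.use('/', createProxyMiddleware({
    target: 'http://monolith:3000',
    changeOrigin: true
}))

app.listen(8080, () => console.log('gateway listening on port 8080'))

A managed gateway or a reverse proxy like nginx can play the same role; what matters is that clients only ever talk to a single entry point.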

Operational & Organizational Complexity

You need to take into account the complexity of running a microservice architecture in production. To gain independent deployments, scaling, and development, you will acquire all the problems of distributed systems, and that’s not a decision to make lightly.

A non-technical consideration is whether your organization’s structure can handle such an architecture. In the end, Conway’s Law always wins. So if you’re a team of four developers, chances are running the monolith would be easier. If you’re a couple of teams trying to work on the same codebase, then decomposing it will probably yield better results.

As a general rule, the communication flow in your teams should be reflected in the communication flow of your systems.

Something else to keep in mind is the wall of complexity you will hit with the first couple of services. You will have to set up the underlying infrastructure, continuous delivery pipelines, alerts, and monitoring. That takes additional time. Once you pave the way, though, extracting the next few services will be much easier.
