Hundreds of microservices work together behind the scenes to power household names like Amazon, Netflix, and Uber, whose products serve hundreds of millions of users. But just because this design pattern works for those companies doesn’t mean microservices are the right way to refactor every codebase. To avoid pitfalls later on, it often pays to invest the time and effort to thoroughly architect a solution before software development work begins.
Software developers should first ask whether microservices are even an appropriate framework for a given product. Assuming the answer is yes, defining the right scope for each microservice is yet another important process, one that can sometimes be more art than science. A further decision then needs to be made about how microservices should communicate with each other and potentially with external callers. Then come other questions about data management, reliability, and much more that is relevant to any software architecture.
In this article, we’ll lay out the considerations around microservices architectures and some of the common paradigms for structuring such applications.
Microservices vs. Monoliths
Microservices could very well be the right fit for a development team; to make that call, however, it helps to contrast them with monolithic architectures, the most obvious alternative. In a monolithic application, all components are packaged and deployed as a single unit. Monoliths often still have internal modularity, but the key is that from the outside, they behave as a single entity. This makes them relatively easy to develop, test, and deploy because there’s just one package to deal with.
Monoliths, however, have their drawbacks. Combining everything in one deployment may sound nice, but this means that every little change requires the entire application to be redeployed, including parts that were unchanged. Reliability becomes another issue because a failure in one part of the system is likely to take down the entire application since it’s one bundle. As teams scale, development is also challenging since working in parallel may not always be possible. This can also make monoliths slow to adopt new technologies and frameworks because migrations are an all-or-nothing affair.
Microservices architectures address these drawbacks by relying on modular components that are deployed separately, which eases continuous delivery, increases scalability, and enables teams to independently focus on each service. Services can get updated to new technologies when it makes sense to do so, and if there are production issues, an entire application won’t necessarily go down because one service failed. Tools like AWS, Docker, and Kubernetes make it even easier to deploy and orchestrate a large number of services at once.
There are still trade-offs when opting for a microservices design pattern. Microservices constitute what is fundamentally a distributed system, which comes with challenges around inter-process communication, partial failures, and more. Microservices can have added complexities around testing, logging, and data management because of use cases that span different services. As a rule of thumb, simple, lightweight applications often suit a monolithic architecture, while more complex systems owned by large teams of developers tend to favor a microservices approach. Sometimes, as in the case of Netflix, applications start as monoliths until teams rearchitect them using microservices.
How Micro is a Microservice?
Even after committing to a microservices architecture, developers still need to choose the right scope for each service. Drawing the boundaries between microservices is not always obvious, but there are frameworks to help developers decompose applications. Some organizations subscribe to the single responsibility principle, an idea born out of object-oriented design that can also apply to microservices. Each service is meant to fulfill exactly one role and its functions narrowly align to that role. It’s also important that services are not redundant, as this increases the cost to maintain a system and introduces more ways for the system to break.
Other organizations practice what may be closer to “macroservices” than true microservices. Here, a single service may own several related responsibilities, typically grouped together by domain. For example, a single image service could combine responsibilities for image uploading, image compression, and image tagging. While it can be convenient to combine responsibilities this way, one risk of this approach is that the services turn into monoliths themselves, erasing many of the benefits of microservices architectures.
In decomposition by business capability, software architecture mimics the business architecture. Services correspond to individual business capabilities such as order management, inventory management, customer management, and so on in an e-commerce business. There are other methods for defining microservices such as domain-driven design (DDD) subdomains or decomposing a collection of user stories into “verbs” and “nouns.” Each of these methods could be an article of its own, but the key is that whatever the method, there is some consistency behind how an organization scopes its microservices. This makes it easier to reason about new functionalities and whether they ought to be a new service or folded into an existing one.
Architecting the Interfaces Between Microservices
With clear boundaries in place between microservices, each service is positioned to execute on its own responsibilities, but the question remains: how will potentially hundreds of services communicate with each other? This is an essential question for microservices architectures that monolith architectures do not need to deal with by virtue of being self-contained.
While there are many ways to solve the inter-service communication problem, we’ll focus on two major categories, each with its own pros and cons.
RESTful APIs
RESTful APIs are a synchronous communication pattern between microservices. Services generally use HTTP requests to pass information back and forth, often in the form of JSON. This is a relatively simple and standard interaction model with a clear control flow that developers can reason about. Each service exposes relevant API endpoints to other services, and it is also easy to externalize an API if third parties need access.
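To make the request/response pattern concrete, here is a minimal sketch using only Python’s standard library: a toy order service stands in for a real microservice, and a caller fetches JSON from it over HTTP. The endpoint path and payload are illustrative, not from any real system.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class OrderHandler(BaseHTTPRequestHandler):
    """A toy 'order service' exposing one JSON endpoint."""

    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for this demo

# Start the service on a free local port, in a background thread.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A caller (another service, or an external client) issues a blocking
# HTTP request and parses the JSON response.
with urlopen(f"http://127.0.0.1:{port}/orders/42") as resp:
    order = json.loads(resp.read())
server.shutdown()

print(order["status"])  # prints: shipped
```

Note how the caller blocks until the response arrives; this synchronous behavior is exactly the trade-off discussed below.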
Commonly, an API gateway serves as the orchestrator or aggregator for this kind of architecture. This is a single service that clients can call, after which the gateway routes each request to the appropriate microservices, all abstracted away from the caller. While the API gateway pattern is convenient for managing a host of microservices as if they were a monolith, this architectural style does introduce a single point of failure.
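The gateway’s routing role can be sketched in a few lines. This in-memory version swaps real network calls for plain functions; the route table and service names are purely illustrative, but the shape is the same: clients hit one entry point, and the gateway dispatches to whichever service owns the path.

```python
# Illustrative backing services; in a real system these would be
# separate deployments reached over the network.
def user_service(request):
    return {"user": request["id"], "name": "Ada"}

def order_service(request):
    return {"user": request["id"], "orders": ["A100", "A101"]}

# The gateway's route table maps public paths to internal services.
ROUTES = {
    "/users": user_service,
    "/orders": order_service,
}

def gateway(path, request):
    """Single entry point: dispatch to the service that owns the path."""
    service = ROUTES.get(path)
    if service is None:
        return {"error": 404}
    return service(request)

print(gateway("/orders", {"id": 7}))
```

The caller never learns which service handled the request, which is the abstraction the pattern provides; it also shows why the gateway is a single point of failure.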
While it doesn’t remove the dependency on the API gateway, the circuit breaker pattern can help make a system more resilient. Suppose a service makes a request to another service, but that second service is down for whatever reason. With a circuit breaker in place, the first service stops bombarding the second with requests once a threshold of failures is reached: the circuit “opens” and subsequent calls fail fast. After a cooldown period, the first service retries to test whether the second has recovered.
Linking services together with RESTful APIs carries the risk of coupling them tightly. Microservices architectures usually strive for loose coupling between services so that they stay fully modular, with the ability to mix and match as needed. Strict contracts between services, where one service expects a specific response from another, subvert this goal. For this reason, it’s crucial to think carefully about API schemas and avoid changing them haphazardly, because dependent services might break. Another disadvantage is that a synchronous communication model can be slower than asynchronous messaging, since the system blocks on each request while waiting for a response.
Event-Driven Architecture
Event-driven architecture, which may be most familiar from frontend programming and web applications, also has a place in backend services. In this architecture pattern, services asynchronously emit messages that others consume from an event stream. Apache Kafka is a popular event streaming platform that enables services to broadcast information globally and lets others react to each event. Event streams tend to follow a push model, though not always. It’s also possible to schedule jobs so that microservices periodically pull information from each other, although this can be risky if cadences between services are misaligned and updates arrive only after a long delay.
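The publish/subscribe shape of this pattern can be sketched with an in-memory event bus. This is a stand-in for a real broker such as Kafka, so there is no persistence or partitioning here; the topic name and the two consuming “services” are illustrative.

```python
from collections import defaultdict

class EventBus:
    """Toy in-memory event stream: publishers emit to a topic, and
    every subscribed service reacts independently."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)  # each consumer handles the event on its own

bus = EventBus()
shipped, notified = [], []

# Two independent services consume the same event: the producer does
# not know (or care) who is listening.
bus.subscribe("order.placed", lambda e: shipped.append(e["order_id"]))
bus.subscribe("order.placed", lambda e: notified.append(e["customer"]))

bus.publish("order.placed",
            {"order_id": "A100", "customer": "ada@example.com"})
```

Adding a new consumer is just another `subscribe` call, which mirrors how easily a new microservice can join a real event stream.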
An event stream makes it possible to maintain very loose coupling between services and keep clear domain boundaries. It’s arguably also easier to model a real-world domain with this approach versus the RESTful API pattern. Expanding the architecture is also relatively simple; a new microservice just has to listen for relevant events and react to them.
Event-driven architecture’s lack of a central orchestrator avoids what would otherwise be a single point of failure. That said, it can be harder to reason about or logically trace a particular event across a series of microservices. Also, because of the one-to-many nature of event consumption, if a single service has an error, many services that depend on it could be affected simultaneously and each has to deal with the consequences.
Data Management
Most microservices architectures follow the database-per-service pattern, wherein each microservice makes exclusive use of its own private data store to process its requests. Shared databases should be avoided; otherwise the architecture can inadvertently become a distributed monolith, since its components are no longer isolated from one another.
Even though each microservice is meant to have its own database, some transactions may depend on data from more than one of these databases. The saga pattern helps solve this problem. Saga is a method for sequencing transactions across services, such that they execute in a particular order and can be rolled back if any one transaction fails. Sagas are coordinated either through orchestration or choreography. Orchestration uses an aggregator service to instruct other services on which transactions to complete. Choreography fits into the event-driven pattern, as each local transaction publishes events triggering subsequent transactions.
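The orchestration variant can be sketched as a sequence of (transaction, compensation) pairs: the orchestrator runs each local transaction in order, and on failure replays the compensations in reverse. The step names below are illustrative, and a real saga would issue commands to remote services rather than call local functions.

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs.
    Returns True if every action succeeds; otherwise compensates
    completed steps in reverse order and returns False."""
    completed = []
    for action, compensation in steps:
        try:
            action()
            completed.append(compensation)
        except Exception:
            for undo in reversed(completed):
                undo()  # roll back in reverse order
            return False
    return True

log = []

def fail_shipping():
    raise RuntimeError("carrier unavailable")

# Illustrative e-commerce saga: the third step fails, so the first two
# local transactions get compensated.
steps = [
    (lambda: log.append("reserve inventory"),
     lambda: log.append("release inventory")),
    (lambda: log.append("charge payment"),
     lambda: log.append("refund payment")),
    (fail_shipping, lambda: None),
]

ok = run_saga(steps)
print(ok, log)
```

Note that compensations are new transactions that semantically undo earlier ones (a refund, a release), not database rollbacks; each service still commits locally.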
Command Query Responsibility Segregation (CQRS) is another microservices pattern for data management that is commonly paired with event-driven architectures. The key here is to separate the process of reading data from the process of updating data. This dovetails with the event-driven architecture described above because the event store serves as the command side of the data management system. In other words, data gets updated through an append-only log of events, and reading the data involves processing the event log to deduce the appropriate state.
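A tiny sketch makes the command/query split concrete: writes append events to a log, and reads fold the log into the current state. The account events below are made up for illustration; a real query side would maintain a materialized view instead of replaying the log on every read.

```python
events = []  # command side: an append-only event log

def record(event_type, **payload):
    """Command side: state changes only ever append an event."""
    events.append({"type": event_type, **payload})

def account_balance(account):
    """Query side: replay the log to derive the current balance."""
    balance = 0
    for e in events:
        if e.get("account") != account:
            continue
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

record("deposited", account="acct-1", amount=100)
record("withdrawn", account="acct-1", amount=30)

print(account_balance("acct-1"))  # prints: 70
```

Because the read model is derived rather than authoritative, it can lag behind the log, which is exactly the eventual consistency discussed next.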
One last point to mention is that, depending on the implementation, databases in a microservices architecture may be eventually consistent, meaning that given enough time, all the data will converge to the latest values. In asynchronous systems, because there are no strict guarantees on when services execute, it is possible that not enough time has passed and the data is not yet fully up to date. This relaxed consistency is one of the trade-offs made to keep database availability high.
Conclusion
Trade-offs have been a theme throughout this article, and they are par for the course when designing microservices architectures. Whether it’s first deciding to commit to microservices over monoliths, setting boundaries between microservices, or choosing an interaction model for services to communicate, there are lots of options, each with unique costs and benefits. While there’s no silver bullet, perhaps the best strategy is to thoughtfully evaluate the problem space ahead of time and choose the best architecture for the job.
There’s a whole lot more to microservices beyond aligning on a design pattern. Cortex offers multiple products that help engineering teams manage microservices architectures. You can read more about best practices for microservices, like logging and writing good documentation, over on the Cortex blog.