Node.js + Microservices: A Match Made in Tech Heaven? Maximize Your Impact with This Powerful Duo!

Maximize Impact with Node.js + Microservices!
Abhishek Founder & CFO cisin.com
❝ In the world of custom software development, our currency is not just in code, but in the commitment to craft solutions that transcend expectations. We believe that financial success is not measured solely in profits, but in the value we bring to our clients through innovation, reliability, and a relentless pursuit of excellence. ❞


Contact us anytime to know more - Abhishek P., Founder & CFO CISIN

 

Microservices allow us to separate an application's components and build them independently. A fault in one component will not affect the functionality of the rest of the software product.

This article will explore the concept of microservices. We'll learn how to create a microservice using Node.js. And we'll discuss how microservices have changed software development.

Let's get started!

You'll need the following:

  1. Node.js installed on your computer
  2. A working knowledge of Node.js and JavaScript to understand the code

If you're new to this, start at the beginning; if you prefer, you can jump straight to the practical section.

Where more background on this architecture is needed, I'll walk you through the concepts first. You can then develop the microservices in whatever technology you prefer.

You can use this to start your journey towards microservices.

Now let's start!


Monolithic application vs microservices


 


Monolithic Applications: Understanding the Difference

Monolithic applications are single-tiered applications in which the components all form one whole.

Imagine you are building a library system in which the various components, such as books and users, along with their services and databases, are all merged into one.

If any component fails, the whole system must be brought down to fix the problem.

Monolithic applications are neither scalable nor flexible. You can only build one feature at a time, and continuous deployment is out of reach.

Monolithic applications are also costly to manage and maintain. Developers recognized that they needed to design software systems in which a single faulty component would not affect the whole system.


Understanding microservices

The monolithic development pattern no longer meets the needs of modern users. In a microservices architecture, each software feature is isolated from the others.

In most cases, this separation occurs with the servers and databases. This architecture is loosely coupled and also known as a distributed application.

Imagine that we are building an online store. We'll need to create separate services for the payment, customer, order, admin and cart features.

Each of these features has its own server and database.

REST APIs will be used for communication between our ecommerce services. We'll develop each of the store's features independently, so if a problem occurs, we only have to fix that feature rather than bring down the entire system.
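To make this concrete, here is a minimal sketch of what one of these independent services could look like. It assumes the Express package is installed; the service name, port, and data are hypothetical and purely for illustration.

```javascript
// order-service.js - a hypothetical, minimal order service (assumes Express is installed).
const express = require('express');
const app = express();
app.use(express.json());

// In a real microservice this data would live in the service's own database.
const orders = [{ id: 1, item: 'book', quantity: 2 }];

// Other services (cart, payment, admin) would call this REST endpoint over the network.
app.get('/orders/:id', (req, res) => {
  const order = orders.find((o) => o.id === Number(req.params.id));
  if (!order) return res.status(404).json({ error: 'Order not found' });
  res.json(order);
});

app.listen(3001, () => console.log('Order service listening on port 3001'));
```

Each of the other features would get a similar, separately deployable service.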

Microservices allow for a more scalable application than monolithic ones. Microservices can be developed in any programming language.

Different services within a single application can even be written in different languages.

Microservices also improve the developer experience. New team members become productive faster because they only need to learn the part of the codebase that covers the features they're working on.

Microservices encourage unit testing as well: you can write a test for any individual piece of functionality.

Keep in mind that building microservices requires technical expertise. Integration and end-to-end testing can be complex.

Microservice systems can also grow very large, which makes maintenance expensive. It's challenging to convert software built with a monolithic architecture into microservices, and it can be difficult for services to find each other on the network.


Node.js is used for microservices


 

Node.js is a great choice for building microservices, and it offers several advantages.

Node.js is a runtime designed for real-time, event-driven applications. Its single-threaded, asynchronous model enables non-blocking I/O.

Node.js lets developers build microservices in a smooth, uninterrupted flow, while also benefiting from Node's scalability and speed.


Why Node.js for Microservices?


 

There are many top-notch programming languages you can use to build a microservice, and Node.js is among the best choices.

Why is Node.js so well suited to this task?

  1. Asynchronous & Single-threaded: NodeJS executes code using the event loop, which allows asynchronous code execution, and enables the server to utilize a non blocking response mechanism.
  2. Event driven: The NodeJS event-driven architecture is based on the publish-subscribe pattern, which has been used in software development for many years. This pattern allows powerful applications, including real-time ones.
  3. Highly Scalable & Fast: The Runtime Environment is Built on Top of One of The Most Powerful JavaScript Engines, V8 JavaScript engine. This allows the code to be executed rapidly and allows the server to process up to 10,000 simultaneous requests.

Easy development: Duplicating code can be caused by creating multiple microservices. Node.js's microservices framework is easy to build because it abstracts the majority of the underlying systems.

All of this makes Node.js an easy environment for creating a microservice.
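As a simple illustration of that non-blocking model, here is a minimal sketch using only Node's built-in http module; the port and the simulated delay are arbitrary choices for this example.

```javascript
// A minimal sketch of Node's non-blocking model, using only the built-in http module.
const http = require('http');

const server = http.createServer((req, res) => {
  // setTimeout stands in for a slow I/O operation (a database query, an API call, etc.).
  // While this request "waits", the single thread keeps serving other incoming requests.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'Handled without blocking the event loop' }));
  }, 100);
});

server.listen(3000, () => console.log('Listening on port 3000'));
```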


Traditional Way


 

For a very long time, we have followed the monolithic architectural style. The majority of legacy applications have been implemented as monoliths.

A monolithic application is designed, built and deployed as one unit. Its components share databases, business logic, memory, and computing power.

This is a large chunk of code with a tight coupling among services.

They are simpler to build, test, and understand.

The application is built as a single piece. The same application can be run in several instances behind a load balancer.

It is possible to scale the deployment horizontally using the same unit.

What are the significant problems with the monolithic approach?

  1. Fault tolerance - If one part of the monolith fails, the whole system fails. A small change in the application can break the entire system.
  2. Deployment time - The entire application must be rebuilt and redeployed after every change, no matter how small or large, and you have to wait for the rebuild to finish.
  3. Growth - As a monolithic app grows, it takes longer to understand. Dependencies and coupling pile up, and each little code change can have a significant impact on the system.
  4. Hard to parallelize and scale - Modules can't be scaled independently.
  5. High resource consumption - The memory footprint is large and more processing power is required.
  6. Single point of failure - The availability of the whole application depends on one deployable unit.
  7. Complex maintenance - which can leave the system outdated over time.

Microservices architecture is a solution to these issues.


What are Microservices?


 

Microservices are small standalone services that can be distributed. Together, they form an application.


Large applications can be created by combining small services. Each one is independent, so the other services keep working even if one fails.

This paradigm was first introduced around 2011, but it became popular with the publication of the article "Microservices" by Martin Fowler and his colleague James Lewis.

Netflix was one of the first companies to adopt this approach, even before it became popular.

Today, major corporations such as Amazon and Uber use it as well. Microservices are one of the most popular topics in software development.

Why has this become so popular? Above, I mentioned some of the main problems with monolithic architecture. This microservices-based architecture will solve these issues and provide many other benefits.


Benefits of Microservices

  1. Technology independence -- Microservices remove the technology barrier: each service can use any language, library, or technology stack. This is known as polyglot programming.
  2. The application is resilient -- If a microservice goes down, it doesn't affect the rest.
  3. Agile -- Development and deployment cycles become faster.
  4. Independent upgrades -- This architecture lets us upgrade one part of an application at a time. Tools such as Kubernetes allow services to be updated with no downtime.
  5. Easier scaling -- Isolated services can be scaled by running parallel copies of a microservice, so you can scale up or down quickly based on demand.
  6. Flexibility and rapid deployment -- Separate deployments are possible due to the loose coupling of processes, and you can allocate resources differently to different services.

Get a Free Estimation or Talk to Our Business Manager!


The Challenges of Microservices

  1. While an individual microservice may be simpler, the overall system becomes more complicated.
  2. Security is a primary concern in a microservices architecture. Authentication, authorization, session management, and inter-service communication are all complex and challenging.
  3. Latency can increase when multiple microservices communicate with one another over the network, which may result in slower response times.
  4. Monoliths can easily be tested and debugged as complete systems; with microservices, it is difficult to trace and piece together communication between services, which demands more infrastructure, tooling and complexity.
  5. A single relational database benefits from ACID guarantees. In a microservices architecture we use multiple databases, so that level of consistency is much more challenging to achieve.
  6. Implementation costs are higher: experienced professionals are needed, and DevOps, CI/CD, and infrastructure practices must be mature.

However, it is possible to mitigate some of these problems using the correct patterns and tools.

Okay, that covers the background on microservices. Let's explore the ecosystem's core concepts before moving on to the practical part.


The Microservice Architecture Concept


 

Every developer needs to understand the fundamentals of microservices architecture.


Communication

Communication is one of the most challenging aspects of building microservices, because many distributed services need to talk to each other.

In a microservices architecture, services can communicate in several different ways:

  1. API calls
  2. Message-based communication
  3. Service mesh

API Calls

API calls are a standard method of microservice communication. One service will send a request and then wait for the other to respond.

APIs are implemented with several technologies, such as REST, GraphQL and gRPC.
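As a rough sketch of a REST-style call between two services, the snippet below assumes Node.js 18+ (which ships a global fetch) and a hypothetical customer-service address.

```javascript
// The order service calling the customer service over REST.
// Requires Node.js 18+ for the global fetch; the URL is a hypothetical service address.
async function getCustomerForOrder(customerId) {
  const response = await fetch(`http://customer-service:4000/customers/${customerId}`);
  if (!response.ok) {
    throw new Error(`Customer service responded with ${response.status}`);
  }
  return response.json();
}

getCustomerForOrder(42)
  .then((customer) => console.log('Customer:', customer))
  .catch((err) => console.error('Call failed:', err.message));
```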

But how can you find the right service instance if several instances are running and auto-scaling is enabled?

In the microservices world, this is handled by a concept called Service Discovery: the process of finding the location of a microservice so that services can communicate with each other.

A central service, called the service registry, keeps track of all microservices within our system. This will allow a service that wants to connect over the network to find another service's IP address and location.

The registry should be updated whenever a microservice comes online or goes away. Service instances may also send heartbeats to the registry as a failsafe, to confirm they're still available.

Service discovery comes in two different types.

  1. Client-side service discovery -- The consumer service (client) queries the service registry, obtains the IPs, chooses an instance and sends the request. The client is responsible for load balancing before sending the request to the chosen microservice.
  2. Server-side service discovery -- The service discovery and load-balancing logic is decoupled from the client. A proxy (often built into the registry) receives the request and invokes the appropriate service instance; load balancing happens on the server side.

There are many service registries available, such as Netflix Eureka and HashiCorp Consul.
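The sketch below shows the general idea of self-registration and heartbeats. The registry endpoints and payload are made up for illustration; real registries such as Consul or Eureka have their own clients and APIs (and this again assumes Node.js 18+ for the global fetch).

```javascript
// Hypothetical self-registration with a service registry, plus periodic heartbeats.
const REGISTRY_URL = 'http://service-registry:8500'; // made-up registry address

async function register() {
  await fetch(`${REGISTRY_URL}/register`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'order-service', host: 'order-service', port: 3001 }),
  });
}

// Heartbeat every 10 seconds so the registry knows this instance is still alive.
function startHeartbeat() {
  setInterval(() => {
    fetch(`${REGISTRY_URL}/heartbeat/order-service`, { method: 'PUT' }).catch(() => {});
  }, 10000);
}

register().then(startHeartbeat).catch(console.error);
```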

Note that HTTP is a synchronous protocol: the caller blocks while waiting for the response, which prevents this architecture from being fully utilized.

Read the blog: Why Node.js is fast gaining popularity as a development platform for mobile apps?


Message Brokers

Message brokers, on the other hand, offer an asynchronous way for services to communicate.

Unlike with HTTP, the services do not talk to one another directly. Instead, services push messages to a broker, which other services can subscribe to.

The Message Broker is a middleman/third party that performs the communication.

The services communicate using messaging channels. It's non-blocking and asynchronous.

Numerous technologies support asynchronous message delivery; Apache Kafka and RabbitMQ are among the best known and most widely used.

To make the most of a message broker, we use a model called a message queue, which works much like the queue data structure.

The queue sits between two services that need to talk and handles messages on a first-in, first-out basis.

Most message brokers have more than one queue. Each queue can be given a unique name and used for service communications.

Messages can also be persisted in the queueing system. If a service fails, its messages are stored and forwarded once the service is restored.

This can ensure reliable communication as well as guaranteed delivery.

The two main distribution methods of message brokers are Publisher/Subscriber and Point-to-Point messaging.

In point-to-point messaging, the relationship between sender and recipient is one-to-one: each message in the queue is delivered to a single recipient.

Publisher/subscriber, also known as pub/sub, has a one-to-many relationship.

Publishers send messages and subscribers receive them. Publishers can post messages to the broker without knowing who the subscribers are.

Only the subscribed parties receive the messages, much like a radio broadcast.

We'll see this style of communication in the hands-on section.
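In the meantime, here is a hedged, minimal sketch of queue-based messaging through RabbitMQ using the amqplib package (assumed installed, with a broker running locally); the queue name and message shape are arbitrary.

```javascript
// Point-to-point messaging through RabbitMQ using the amqplib package (assumed installed).
const amqp = require('amqplib');

async function main() {
  const connection = await amqp.connect('amqp://localhost'); // assumes a local broker
  const channel = await connection.createChannel();
  await channel.assertQueue('order_created', { durable: true });

  // Producer side: the order service pushes a message and moves on (non-blocking).
  channel.sendToQueue(
    'order_created',
    Buffer.from(JSON.stringify({ orderId: 1, total: 49.99 })),
    { persistent: true }
  );

  // Consumer side: the payment service processes messages in first-in, first-out order.
  channel.consume('order_created', (msg) => {
    console.log('Processing order:', JSON.parse(msg.content.toString()));
    channel.ack(msg);
  });
}

main().catch(console.error);
```

For a true publish/subscribe setup you would publish to an exchange and bind one queue per subscriber, but the queue above already shows the asynchronous, broker-in-the-middle idea.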


Service Mesh

A service mesh is a dedicated, configurable infrastructure layer that manages inter-service communication. It is becoming increasingly popular in Kubernetes environments.

Kubernetes is a platform for deploying and orchestrating your microservices. The idea behind a service mesh is to add another container to each application pod that handles the infrastructure-related logic.

This container is called a sidecar or proxy.

Two elements make up a service mesh.

  1. Control plane -- Configuration and coordination.
  2. Data Plane -- This is the software that manages traffic forwarding.

The service mesh will take care of the entire communication logic, including features like load balancing and encryption.

It will also include authentication, authorization and monitoring.

You can choose from a few popular meshes, such as AWS App Mesh and Istio.

This will be an extremely complex subject, so I'll write another article to explain it in more detail.

Compared with in-memory calls, communication over the network adds latency to your system, so it's best to keep inter-service communication to a minimum.


API Gateway

That covers how microservices communicate with each other. But how does a client access these microservices?

Each microservice in the architecture exposes a set of endpoints that can be called independently, so a direct client-to-microservice communication architecture might be used.

That can work for small microservices-based applications, but it doesn't hold up well when building large, complex ones:

  1. Scaling the microservices takes a lot of extra work.
  2. It can be challenging to collect data from different services.
  3. Protocol mismatches, for example a client that speaks HTTP and a microservice that uses AMQP.
  4. More round trips increase latency.
  5. The attack surface is larger, which creates security issues.

An API Gateway centralizes the microservice routes and makes them all available through one endpoint, acting as a layer between the clients and the microservices.

It is similar to the Facade pattern.

An API Gateway can handle cross-cutting concerns such as SSL termination, authorization, logging, caching, rate limiting, throttling and more.

Instead of writing similar functionality into every microservice, these things can be dealt with centrally.
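As a very small gateway sketch, assuming the Express and http-proxy-middleware packages are installed and that the downstream service addresses are hypothetical:

```javascript
// A minimal API Gateway sketch using Express and http-proxy-middleware (both assumed installed).
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// A cross-cutting concern (request logging) handled once at the gateway
// instead of being repeated inside every microservice.
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// Route each path prefix to its (hypothetical) downstream microservice.
app.use('/orders', createProxyMiddleware({ target: 'http://order-service:3001', changeOrigin: true }));
app.use('/payments', createProxyMiddleware({ target: 'http://payment-service:3002', changeOrigin: true }));

app.listen(8080, () => console.log('API Gateway listening on port 8080'));
```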

The gateway can become a bottleneck and a single point of failure for the application, so you should scale it out to multiple API Gateway instances.

The BFF (Backends for Frontends) pattern is another solution: it segregates API Gateways so that each front-end application gets its own gateway.


Authentication and authorization mechanism

How to authenticate and authorize requests is a significant concern in a microservices architecture.

  1. Authentication (AuthN) - Identifies the user.
  2. Authorization (AuthZ) - Determines what a user is allowed to do.

When implementing auth, you should consider single responsibility, a single source of truth, scalability, and cost.

To keep the system stateless, token-based authentication is recommended. JSON Web Tokens (JWT), an industry standard defined in RFC 7519, are the most common choice.

There are many patterns for implementing an auth system in a microservice setup. Below are a few of the most popular.

  1. Create a global authentication and authorization service
  2. Global authentication with authorization delegated to each service

Global Authentication & Authorization Service

Implementing a central identity service is a widely used solution to this problem. A dedicated microservice or a third-party provider (an OpenID Connect or OAuth2-based server) handles authentication and authorization.

The user submits a request to our API Gateway, which calls the global identity provider/user service to verify the user's identity.

If verification succeeds, it issues a JWT with the necessary payload. This authentication request is generally executed without going through the API Gateway, to minimize latency.
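As a rough sketch of that token-issuing step, assuming the jsonwebtoken package is installed and using an illustrative secret and payload:

```javascript
// Inside a hypothetical identity service: issue a JWT after a successful credential check.
// Assumes the jsonwebtoken package is installed; the secret and payload are illustrative.
const jwt = require('jsonwebtoken');

function issueToken(user) {
  const payload = { sub: user.id, roles: user.roles };
  return jwt.sign(payload, process.env.JWT_SECRET, { expiresIn: '1h' });
}

// Example: a user that has just passed authentication
const token = issueToken({ id: 'u-123', roles: ['customer'] });
console.log(token);
```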

When the user later requests a resource, they include the stored token in the authorization header. The API Gateway, or the global auth service, validates the token.

The API Gateway then forwards our authenticated request, including any payload in the header and body, to the downstream microservice.

You can choose from several options: Keycloak (Java), WSO2 Identity Server, and OAuth2orize (Node.js). You can also use AWS Cognito for this.

There are some downsides to this method:

  1. Authorization is a business concern, so arguably it should not be handled by a global service.
  2. Adding authorization logic makes the API Gateway more complex.
  3. Request processing is delayed.

Global Authentication with Authorization Delegated Per Service

Another option is to delegate the authorization business logic to the microservices themselves.

We keep the central identity provider for authentication, while each microservice is responsible for its own authorization.

The API Gateway forwards the JWT with each request to the microservice.

The JWT's payload contains the roles assigned to the user, so the microservice can extract the claims and decide whether the request is allowed.

This is known as a pass-through model.
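Here is a hedged sketch of that pass-through check inside a single microservice, again assuming Express and jsonwebtoken are installed; the route and the 'admin' role are chosen purely for illustration.

```javascript
// Inside a downstream microservice: verify the forwarded JWT and authorize by role.
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();

function requireRole(role) {
  return (req, res, next) => {
    const header = req.headers.authorization || '';
    const token = header.replace('Bearer ', '');
    try {
      // The gateway forwarded the token; the microservice extracts the claims itself.
      const claims = jwt.verify(token, process.env.JWT_SECRET);
      if (!claims.roles || !claims.roles.includes(role)) {
        return res.status(403).json({ error: 'Forbidden' });
      }
      req.user = claims;
      next();
    } catch (err) {
      res.status(401).json({ error: 'Invalid token' });
    }
  };
}

app.get('/admin/orders', requireRole('admin'), (req, res) => {
  res.json({ message: `Hello ${req.user.sub}, here are all orders` });
});

app.listen(3003);
```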


Sidecar Authentication

You may remember the sidecar proxies discussed in the Service Mesh section. Sidecar proxies can also be used to offload authentication.

They can likewise secure service-to-service communication using mutual TLS, JWTs and zero-trust security principles.

It is a very complex subject to explain and discuss. This is why I plan to write a dedicated article covering the topic in depth.

You can tell us about your experience with Auth microservices in the comments section. You are always invited to contribute your ideas and experiences.


Fault Tolerance

What will happen if you lose one of your microservices?

Microservices are a good fit for continuous delivery, but we must have a suitable fallback strategy in place.

Common architectural and technical patterns also promote resilience and fault tolerance.

  1. Circuit breaker -- Handles fallback gracefully and prevents an application from repeatedly attempting an operation that is likely to fail. Polly and Hystrix are popular third-party libraries (a minimal sketch appears after this list).
  2. Bulkhead pattern -- Don't let a fault in a single part bring down the whole system. The elements of an application can be isolated and placed in pools so that even if one component fails, the rest keep functioning.
  3. Traffic limiters -- Smooth out peak volumes and protect critical transactions.
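Below is a deliberately simplified, hand-rolled circuit breaker that shows the core idea only; in a real Node.js project you would more likely reach for a library such as opossum, and the wrapped URL here is hypothetical.

```javascript
// A minimal circuit breaker sketch: fail fast while a dependency looks unhealthy.
class CircuitBreaker {
  constructor(action, { failureThreshold = 3, resetTimeoutMs = 10000 } = {}) {
    this.action = action;
    this.failureThreshold = failureThreshold;
    this.resetTimeoutMs = resetTimeoutMs;
    this.failures = 0;
    this.openedAt = null;
  }

  async call(...args) {
    // While "open", fail fast instead of hammering a service that is likely down.
    if (this.openedAt && Date.now() - this.openedAt < this.resetTimeoutMs) {
      throw new Error('Circuit open: failing fast');
    }
    try {
      const result = await this.action(...args);
      this.failures = 0;
      this.openedAt = null;
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.failureThreshold) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: wrap a flaky inter-service call (Node.js 18+ fetch; the URL is made up).
const breaker = new CircuitBreaker(() => fetch('http://payment-service:3002/health'));
breaker.call().catch((err) => console.error(err.message));
```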

Chaos testing can be used to uncover failure scenarios: we deliberately make microservices fail and then check that the system recovers.

You should also consider adding a dedicated logging microservice to your system so you can understand the cause of an error; the ELK stack provides distributed logging features.

We cannot achieve the full potential of the microservice architecture if we don't consider fault tolerance mechanisms.


Repository Architecture

We should use a version control system such as Git when developing an application. But how should the microservices be organized within the codebase?

Mono-repo and poly-repo are the two major repository architectures.

  1. Mono-repo -- The whole microservices project is stored in one repository, with each service in its own directory.
  2. Poly-repo (multi-repo) -- Each microservice has its own repository.

Both of these designs have advantages and limitations.

A poly-repo architecture benefits from fast IDE and Git performance thanks to each repository's relatively small codebase, and it makes it simple to set up a dedicated CI/CD pipeline for each repository.

However, it can also complicate development, since a Node.js developer might need to change code in several repositories.

Code duplication is also a problem because keeping track of the code in other repositories takes time and effort.

A mono-repo eases the development lifecycle and onboarding, and it makes code duplication easier to spot.

On the other hand, the IDE slows down as more modules are added.

The community is divided on which is the better option. You can choose either of them by considering these pros and cons.


Microservice Architecture: Best Practices


 

  1. Each microservice should have its own data storage layer. This keeps the services isolated from each other and reduces the risk of single-point failures.
  2. Keep all code within a microservice at a similar level of maturity and stability.
  3. Run a separate build for each microservice.
  4. Treat your services as stateless -- To scale horizontally, make sure the app shares nothing and holds no state. Any data that must persist should be saved to a database.
  5. Domain-driven design -- Microservices should be built around business capabilities, not technology, and should remain independent and isolated.
  6. Use Semantic Versioning (SemVer) so that updates to your services are meaningful.

Get a Free Estimation or Talk to Our Business Manager!


Conclusion

Microservices offer flexibility and performance benefits that are impossible with monolithic applications. Node.js' event-driven architecture makes it an ideal choice for microservices.

It is fast, highly scalable, and simple to maintain. We at CISIN, a web development company, are ready to help you.