Month: November 2025

Communication Patterns in Microservices – REST, gRPC, and Event-Driven Messaging

Introduction

In the last article, we discussed how microservices stay decoupled through strong boundaries and data ownership. But independence doesn’t mean isolation: services still need to talk to each other.

And how they talk defines everything: latency, reliability, scalability, and even user experience.

That’s where communication patterns come in: they form the architectural backbone connecting distributed systems.

We’ll explore three key approaches used across modern microservice ecosystems:

  1. REST (HTTP) — Simple request/response communication
  2. gRPC — Fast, strongly-typed, binary communication
  3. Event-Driven Messaging — Asynchronous, loosely coupled communication

1. REST — The Universal Language of Microservices

REST (Representational State Transfer) uses HTTP as its foundation. It’s simple, human-readable, and widely supported.


When to Use REST

  • Simplicity and familiarity matter most
  • You need request/response semantics
  • Perfect for user-facing APIs and CRUD operations
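To make the request/response shape concrete, here is a minimal sketch in Python using only the standard library (the `/orders/42` endpoint and its payload are illustrative, not part of any real service):

```python
# Minimal REST sketch: one service exposes an HTTP endpoint, another calls it
# and blocks until the response arrives. Endpoint and payload are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A CRUD-style read: return the order as human-readable JSON.
        body = json.dumps({"orderId": 42, "status": "CONFIRMED"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to port 0 so the OS picks any free port.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The caller: a plain HTTP GET, synchronous request/response.
url = f"http://127.0.0.1:{server.server_port}/orders/42"
with urlopen(url) as resp:
    order = json.load(resp)
print(order["status"])  # the caller waited for this answer
server.shutdown()
```

Note how the caller blocks until the server answers; that synchronous coupling is exactly the trade-off the limitations below describe.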

Limitations

  • Verbose (JSON over HTTP)
  • Slower under high load
  • Lacks streaming and strong typing
  • Each hop adds network latency

2. gRPC — High-Performance, Typed Communication

gRPC (Google Remote Procedure Call) is a modern alternative to REST that uses Protocol Buffers instead of JSON and HTTP/2 instead of HTTP/1.1.

This allows:

  • Smaller payloads (binary serialization)
  • Multiplexed requests over one connection
  • Strongly-typed service contracts

When to Use gRPC

  • Internal microservice communication within the same org or network
  • High throughput and low latency required
  • Need for streaming
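To illustrate what a strongly-typed service contract looks like, here is a hypothetical .proto sketch for a payment service (every name and field here is invented for illustration):

```protobuf
// payment.proto — hypothetical contract, proto3 syntax.
syntax = "proto3";

service PaymentService {
  // Unary RPC: one request, one response.
  rpc ConfirmPayment (PaymentRequest) returns (PaymentReply);
  // Server streaming: useful for live status updates.
  rpc WatchPayments (WatchRequest) returns (stream PaymentReply);
}

message PaymentRequest {
  string transaction_id = 1;
  int64 amount_minor_units = 2;  // e.g. paise/cents
}

message WatchRequest {
  string account_id = 1;
}

message PaymentReply {
  string transaction_id = 1;
  string status = 2;  // e.g. "CONFIRMED"
}
```

From this single file, gRPC tooling generates client and server stubs in each language, which is what gives gRPC its strong typing.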

Limitations

  • Not browser-friendly; it requires generated client libraries
  • Harder for external developers to consume directly
  • Requires schema management for the .proto files

Now let’s have a quick comparison of the two protocols.

Now let’s move on to event-driven messaging.

3. Event-Driven Messaging – Asynchronous Collaboration

While REST and gRPC are synchronous, Event-Driven Messaging is asynchronous: services communicate via events through a message broker (Kafka, RabbitMQ, Azure Service Bus, etc.).


In this setup:

  • Publishers emit events
  • Subscribers consume them independently
  • No one waits; everything happens in parallel

Example (MetroX Platform):

When the Payment Service publishes a PaymentConfirmed event, the Notification Service sends an SMS or email and the Operations Service updates its dashboards.
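The flow above can be sketched with a tiny in-memory publish/subscribe bus (Python, illustrative names only; a real deployment would use Kafka, RabbitMQ, or Azure Service Bus):

```python
# In-memory publish/subscribe sketch. The publisher does not wait for any
# subscriber's business logic; each subscriber reacts independently.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
sent = []
# Notification Service and Operations Service each subscribe independently.
broker.subscribe("PaymentConfirmed", lambda e: sent.append(f"SMS for {e['txId']}"))
broker.subscribe("PaymentConfirmed", lambda e: sent.append(f"dashboard update for {e['txId']}"))

# Payment Service emits the event and moves on.
broker.publish("PaymentConfirmed", {"txId": "T-101"})
print(sent)
```

The Payment Service never knows who is listening; adding a third subscriber requires no change to the publisher.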


Now let’s look at the different delivery semantics and how they affect reliability.

Different brokers and systems offer different guarantees:


That’s why idempotency is vital:

Processing the same event multiple times should yield the same result.
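A minimal sketch of an idempotent consumer, assuming at-least-once delivery (the event shape and in-memory store are illustrative; a real service would persist the processed-id set in its own database):

```python
# Idempotent event handler: remember processed event ids so that a
# redelivered event (at-least-once semantics) has no additional effect.
processed_ids = set()
balance = {"total": 0}

def handle_payment_confirmed(event):
    if event["eventId"] in processed_ids:
        return  # duplicate delivery: already applied, safely ignored
    processed_ids.add(event["eventId"])
    balance["total"] += event["amount"]

event = {"eventId": "evt-1", "amount": 250}
handle_payment_confirmed(event)
handle_payment_confirmed(event)  # the broker redelivers the same event
print(balance["total"])  # → 250, not 500
```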


Based on the requirements, we have to choose the right communication protocol; the patterns can also be combined to achieve different functionalities.


Each pattern serves its own purpose, yet all cooperate through clear contracts, message schemas, and delivery guarantees.

That’s all for today on communication patterns.


 

Coming Next

Next in the Microservices Essentials series: 👉 The API Gateway Pattern – The Front Door of Microservices. How to manage routing, versioning, and security across REST, gRPC, and event-driven systems.

Core Characteristics of a Microservice Architecture

In the previous article, we explored what microservices are — small, independent services built around business capabilities.

Today we are going to examine the characteristics that truly define a microservice-based system.

Simply breaking a monolith into smaller projects doesn’t make it microservices.

What defines a microservice architecture is how those services behave — their independence, boundaries, and resilience.

Let’s explore the core traits that make an architecture genuinely microservice-driven.

1. Independent Deployability

Each service should be developed, tested, and deployed without affecting others. That’s the foundation of speed and agility in microservice-based delivery.


e.g: Update the Transaction Service independently without redeploying Payment or Operations.


2. Loose Coupling and High Cohesion

Each service owns its logic, data, and domain rules (high cohesion). Services interact with others only through defined interfaces (APIs or events) — not direct DB calls (loose coupling).
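A small sketch of the idea, with services reduced to classes: the Operations Service depends only on the Payment Service’s public contract, never on its private storage (all names are illustrative):

```python
# Loose coupling / high cohesion sketch: each "service" owns its data
# privately and exposes only a narrow interface to the outside.
class PaymentService:
    def __init__(self):
        self._db = {"T-1": "CONFIRMED"}  # private store: no outside access

    def get_status(self, tx_id):         # the public contract (the "API")
        return self._db.get(tx_id, "UNKNOWN")

class OperationsService:
    def __init__(self, payment_api):
        self.payment_api = payment_api   # depends on the interface, not the DB

    def report(self, tx_id):
        return f"{tx_id}:{self.payment_api.get_status(tx_id)}"

ops = OperationsService(PaymentService())
print(ops.report("T-1"))  # → T-1:CONFIRMED
```

If the Payment Service later swaps its storage engine, the Operations Service is unaffected as long as `get_status` keeps its contract.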


3. Clear Domain Boundaries (Bounded Contexts)

Microservices should align with business domains — not arbitrary technical divisions. This concept, inspired by Domain-Driven Design (DDD), ensures every service speaks its own language and owns its model.


A bounded context keeps domain logic clear, avoiding shared dependencies or model confusion.


4. Decentralized Data Ownership

Every service manages its own database — even if that means using different storage technologies. This isolation enables autonomy and prevents data coupling.


This gives developers the freedom to evolve schemas, optimize queries, and maintain service integrity without external impact.


5. Scalability and Fault Isolation

Microservices scale individually based on workload. A Payment Service may run 5 replicas under heavy traffic, while the Operations Service stays small.

Failures are also contained — one service crashing shouldn’t bring others down.

Sequence Diagram

6. Observability and Monitoring

In a distributed world, visibility is everything. Each service should expose:

  • Logs (structured, centralized)
  • Metrics (performance, throughput)
  • Traces (end-to-end request paths)

Several tools support this, such as Prometheus, Grafana, the ELK Stack, Jaeger, and OpenTelemetry.
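As a small sketch of the logging part, one common convention is to emit one JSON object per log line with a correlation id, so a central store can stitch a request’s path across services (field names here are illustrative):

```python
# Structured-logging sketch: one JSON object per line. A shared
# correlationId ties together log lines from different services.
import json
import uuid

def log_event(service, message, correlation_id, **fields):
    line = json.dumps({"service": service, "msg": message,
                       "correlationId": correlation_id, **fields})
    print(line)  # stdout; a collector would ship this to a central store
    return line

cid = str(uuid.uuid4())  # generated once at the edge, then propagated
entry = log_event("payment-service", "PaymentConfirmed", cid, amountMinor=25000)
log_event("notification-service", "SmsSent", cid)
```

Searching the central store for that one correlation id then yields the full end-to-end path of the request.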


Summary at a Glance


Real-World Snapshot — From the MetroX Platform

Here is a real-world snapshot of the MetroX platform, which I am building specifically to demonstrate microservices.


These services operate independently, share no databases, and communicate via events — forming a cohesive yet decoupled ecosystem.


Coming Next

Next in the Microservices Essentials series: 👉 Communication Patterns in Microservices — REST, gRPC, and Event-Driven Messaging.

 

What Are Microservices?

Microservices turn a large, tightly coupled system into a collection of small, focused, and independently deployable services — each built around a specific business capability.

The Evolution — From Monoliths to Microservices

For years, applications were built as monoliths — a single deployable unit handling everything from user authentication to payments and reporting. That approach works well initially… until it doesn’t.

As features grow and teams expand, the monolith becomes harder to manage, deploy, and scale. Enter Microservices Architecture — a way to break that big block into smaller, self-contained services that work together through well-defined APIs.


Here’s a visual snapshot of the monolithic system:


In this model:

  • All features share the same database and deployment.
  • Any bug can impact unrelated modules.
  • Scalability and team agility are limited.

The Same System Reimagined as Microservices


Now, each service:

  • Has its own data, logic, and lifecycle
  • Can be deployed or scaled independently
  • Allows teams to work autonomously without conflicts

e.g: If the Payment Service needs an update for UPI integration, it can go live without redeploying the Transaction or Operations modules.


The Core Idea

A microservice is a self-contained unit that:

  • Solves one clear business problem (e.g., payments, operations, analytics)
  • Has its own data store and API boundary
  • Communicates with others using lightweight protocols (HTTP, gRPC, events)
  • Is independently deployable and scalable

Think of it as a collection of small, specialized teams — each owning one product within a bigger ecosystem.


Why Microservices Matter

  • Independent Deployments — Release one service without affecting others.
  • Faster Innovation — Small, isolated codebases mean rapid iteration.
  • Targeted Scalability — Scale only what’s under load (e.g., Payments on festive days).
  • Resilience — One service failure doesn’t bring down the entire system.
  • Technology Flexibility — Choose the best stack per domain (C#, Node, Go, etc.).

Monolith vs Microservices — A Quick Comparison



How Microservices Communicate

Let’s see how a client-facing app (like a mobile user app or ticketing app) interacts with these services:


Here’s the real-world flow:

  • A user initiates a transaction via the app.
  • The Transaction Service emits an event → TransactionCreated.
  • The Payment Service processes and emits → PaymentConfirmed.
  • The Operations Service updates internal logs or shifts.
  • The Notification Service sends confirmation to the user.

This event-driven flow keeps everything loosely coupled and highly scalable.
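The same flow can be sketched with a minimal in-memory event bus in Python (service logic is reduced to one-liners and all names are illustrative; a real deployment would use a broker such as Kafka or RabbitMQ):

```python
# In-memory event bus sketch: services react to events and may emit new ones,
# forming the chain TransactionCreated -> PaymentConfirmed -> reactions.
from collections import defaultdict

handlers = defaultdict(list)
log = []  # records everything that happened, in order

def subscribe(event_type, handler):
    handlers[event_type].append(handler)

def publish(event_type, payload):
    log.append(event_type)
    for handler in handlers[event_type]:
        handler(payload)

# Payment Service: reacts to TransactionCreated, emits PaymentConfirmed.
subscribe("TransactionCreated", lambda p: publish("PaymentConfirmed", p))
# Operations and Notification Services: react to PaymentConfirmed.
subscribe("PaymentConfirmed", lambda p: log.append(f"ops-update:{p['txId']}"))
subscribe("PaymentConfirmed", lambda p: log.append(f"notify:{p['txId']}"))

# The user's app triggers the first event; everything else cascades.
publish("TransactionCreated", {"txId": "T-7"})
print(log)
```

No service in the chain calls another directly; each only knows the events it consumes and emits, which is what keeps the system loosely coupled.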


When (Not) to Use Microservices


If you are unsure whether to adopt microservices, start with a well-structured monolith and gradually evolve into microservices once boundaries become clear.


Key Takeaways

  • Microservices = independent, domain-driven units working together via APIs or events.
  • They bring agility, scalability, and resilience but at the cost of added operational complexity.

Coming Next

Next in the Microservices Essentials series: Core Characteristics of a Microservice Architecture –> exploring autonomy, data ownership, observability, and domain boundaries.

Inter-Process Communication in .NET and .NET Framework

Recently I was working on a .NET Framework 4.8 application and wanted to implement a feature that is supported only in .NET Core 3.0 onwards.

To resolve this, I decided to use inter-process communication through a TCP listener and client.

Here is the code snippet for the TcpListener, written in .NET 8:

static async Task Main(string[] args)
{
    var listener = new TcpListener(IPAddress.Any, 12346);
    listener.Server.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    listener.Start();
    Console.WriteLine("Server started and listening on port 12346...");

    while (true)
    {
        var client = await listener.AcceptTcpClientAsync();
        Console.WriteLine("Client connected...");
        _ = HandleClientAsync(client);
    }
}

In the above code, I have initialized the TcpListener to listen for incoming messages on any IP address on port 12346.

The ReuseAddress option is then set at the socket level so that the address can be reused. This matters here because the listener and client run on the same machine and may be restarted frequently.

Once these options are set, the listener is started.
We then invoke the AcceptTcpClientAsync method on the listener so that it can accept a connection request from the TcpClient.

Once the connection is established, we have to handle the TcpClient’s request.
This is done in the HandleClientAsync method.

Here is the code for the handler:

private static async Task HandleClientAsync(TcpClient client)
{
    try
    {
        using var networkStream = client.GetStream();
        var buffer = new byte[4096];
        int bytesRead = await networkStream.ReadAsync(buffer, 0, buffer.Length);
        string receivedMessage = Encoding.UTF8.GetString(buffer, 0, bytesRead);
        Console.WriteLine($"Received message: {receivedMessage}");

        var response = string.Empty;
        //Specific logic for handling the receivedMessage goes here
        
        byte[] responseBytes = Encoding.UTF8.GetBytes(response);
        await networkStream.WriteAsync(responseBytes, 0, responseBytes.Length);
        Console.WriteLine("Response sent to client.");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception: {ex.Message}");
    }
    finally
    {
        client.Close();
        Console.WriteLine("Client disconnected.");
    }
}

A NetworkStream is attached to the TcpClient; the server writes data to this stream and reads from it.
So in the above code, we first get the stream from the TcpClient and read the incoming data into a byte array.
Once the data is read, it can be processed as required.
To send data back to the client, it is written to the same network stream, and the TcpClient reads it from there.

Here is the code for the client:

static void Main(string[] args)
{
    string message = "ENCRYPT:Your data to encrypt";
    string response = SendMessage("127.0.0.1", 12346, message);
    Console.WriteLine("Response from server: " + response);
}

static string SendMessage(string server, int port, string message)
{
	try
	{
		using (var client = new TcpClient(server, port))
		{
			using (var stream = client.GetStream())
			{
				byte[] data = Encoding.UTF8.GetBytes(message);
				stream.Write(data, 0, data.Length);

				byte[] responseData = new byte[4096];
				int bytes = stream.Read(responseData, 0, responseData.Length);
				return Encoding.UTF8.GetString(responseData, 0, bytes);
			}
		}
	}
	catch (Exception ex)
	{
		Console.WriteLine($"Exception: {ex.Message}");
		return string.Empty;
	}
}

Communication Process

 

  • Server Initialization: The server starts listening on the specified port (bound to IPAddress.Any in the snippet above).
  • Client Connection: The client initiates a connection to the server’s loopback address and port.
  • Message Sending: The client sends a message (e.g., “ENCRYPT:Your data to encrypt”; in my case the required encryption was not natively available in .NET Framework) to the server over the established TCP connection.
  • Server Processing: The server receives the message, processes it (encryption or decryption), and prepares a response.
  • Response Sending: The server sends the processed response back to the client over the same TCP connection.
  • Client Receiving: The client reads the server’s response and displays it.

 

Note that because the listener binds to IPAddress.Any, it will accept connections from any network interface, not just the local machine. To ensure local-only communication, bind the listener to IPAddress.Loopback instead.

That’s all for now. Happy coding!