Moving from JSON to gRPC for Faster Internal Communication

Debojyoti Singha · Jan 5, 2026

For a long time, JSON has been our default choice for API responses. It's simple, widely understood, and works well for many use cases.

However, while building Keplars Mail Service, latency quickly became a priority, especially as our architecture evolved into a semi-microservices setup. As we began benchmarking internal service-to-service communication using Apache JMeter, the results made one thing clear: JSON over REST was becoming a bottleneck.

When simplicity meets scale:

In a monolithic system, JSON-based APIs are often "good enough." But as services become more distributed, the cost of serialization, payload size, and network overhead starts to add up.
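To make the payload-size part of that cost concrete, here is a minimal, stdlib-only Python sketch. It compares a JSON encoding of a small record against a fixed binary layout of the same fields, which is roughly what a schema-based format like Protocol Buffers buys you: field names live in the shared schema, not in every payload. The record itself is illustrative, not an actual Keplars message.

```python
import json
import struct

# Hypothetical internal record: a mail delivery status update.
# Field names and layout are illustrative, not Keplars' actual schema.
status = {"message_id": 1024, "recipient_id": 77, "delivered": True, "latency_ms": 182}

# JSON over REST: self-describing text, so field names are repeated
# in every single payload on the wire.
json_payload = json.dumps(status).encode("utf-8")

# A fixed binary layout (the schema is agreed on out of band, as with
# Protocol Buffers): two unsigned 32-bit ints, a bool byte, another
# unsigned 32-bit int -- 13 bytes total, no field names.
binary_payload = struct.pack(
    "<IIBI",
    status["message_id"],
    status["recipient_id"],
    status["delivered"],
    status["latency_ms"],
)

print(len(json_payload), len(binary_payload))
```

Even on a four-field record the text encoding is several times larger, and the gap compounds with every hop between services.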

Our benchmarks showed higher-than-expected response times between internal services. While nothing was broken, the system didn't feel as fast or efficient as we wanted it to be, especially for an infrastructure product where predictable performance matters.

That pushed us to explore alternatives.


Experimenting with gRPC and Protobuf:

We started experimenting with gRPC and Protocol Buffers for internal APIs. The goal wasn't to replace everything overnight, but to validate whether a different communication model could meaningfully improve performance.
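For readers unfamiliar with the model: with gRPC you declare the service and its messages once in a `.proto` file, and code generation produces typed clients and servers from it. The sketch below is a hypothetical example of what such a contract looks like; the service and field names are invented for illustration and are not Keplars' actual API.

```protobuf
syntax = "proto3";

package keplars.internal;

// Hypothetical internal service; names are illustrative only.
service DeliveryStatus {
  rpc GetStatus (StatusRequest) returns (StatusReply);
}

message StatusRequest {
  uint64 message_id = 1;
}

message StatusReply {
  uint64 message_id = 1;
  bool delivered = 2;
  uint32 latency_ms = 3;
}
```

The numbered field tags are what travel on the wire instead of field names, which is where much of the payload saving comes from, and the shared schema is what makes the contracts "stricter" in the trade-off sense discussed below.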

The results were better than expected:

  • Latency dropped significantly
  • Payload sizes became much smaller
  • Serialization and deserialization overhead was reduced
  • Service-to-service communication felt noticeably faster and more consistent

Even with a partial migration, the improvement was clear.
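The serialization-overhead point can be reproduced in miniature without any gRPC tooling. This is not our JMeter benchmark, just a stdlib micro-illustration: it times a JSON encode/decode round-trip against a fixed binary round-trip for the same (hypothetical) record.

```python
import json
import struct
import timeit

# Illustrative record; not Keplars' actual wire format.
record = {"message_id": 1024, "recipient_id": 77, "delivered": True, "latency_ms": 182}

def json_roundtrip():
    # Text serialization: format, then parse back.
    return json.loads(json.dumps(record))

def binary_roundtrip():
    # Fixed-layout binary serialization: pack, then unpack.
    packed = struct.pack(
        "<IIBI",
        record["message_id"],
        record["recipient_id"],
        record["delivered"],
        record["latency_ms"],
    )
    return struct.unpack("<IIBI", packed)

n = 10_000
json_s = timeit.timeit(json_roundtrip, number=n)
bin_s = timeit.timeit(binary_roundtrip, number=n)
print(f"JSON: {json_s:.4f}s  binary: {bin_s:.4f}s for {n} round-trips")
```

Absolute numbers vary by machine, which is exactly why we measured against our own services rather than trusting microbenchmarks alone.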


A measured rollout:

Not all internal APIs have been migrated yet, and that's intentional. gRPC introduces its own trade-offs: heavier tooling (code generation, schema management), harder debugging since binary payloads can't be inspected with a quick curl, and stricter contracts between services.

For Keplars, the current approach is pragmatic:

  • Use gRPC where performance and efficiency matter most
  • Keep simpler interfaces where JSON remains sufficient
  • Avoid unnecessary complexity unless there's a clear benefit

This balance allows us to improve performance without sacrificing developer productivity.


What this means for Keplars users:

These changes happen entirely behind the scenes, but they directly support our core goals:

  • Faster internal processing
  • More predictable system behavior
  • Better performance as traffic grows

It's another step toward keeping Keplars fast, reliable, and boring in the best possible way.


Conclusion:

Switching from JSON to gRPC for internal communication wasn't something we initially planned, but it's turned out to be a meaningful improvement. The performance gains, even from a partial adoption, have already justified the change.

As we continue to evolve Keplars' architecture, we're staying focused on pragmatic improvements: migrate where it matters, keep things simple where possible, and always measure the impact.

gRPC won't solve every performance problem, and it's not the right choice for every API. But for high-frequency, low-latency internal communication between services, the benefits are real and measurable.

We'll continue expanding gRPC usage where it makes sense, while staying intentional about where complexity is introduced.

We'd also love to hear from others who've made a similar move from JSON/REST to gRPC for internal systems. Learning from real-world experiences is always valuable.

More engineering updates coming soon as we continue refining Keplars' core infrastructure.
