
Java developers have been writing reactive code for years to solve a concurrency problem that the platform itself could never handle well. That might finally be changing. With Java 21 and Project Loom, Virtual Threads bring lightweight concurrency natively to the JVM — and the question is no longer whether reactive works. It does. The real question is whether it should still be the default choice.
Traditional Java threads — known as platform threads — are mapped 1:1 to OS kernel threads. This design is simple but fundamentally limited at scale: each thread reserves a large fixed stack (roughly 1 MB by default), and creation, scheduling, and context switching all go through the OS.
The 1:1 mapping means your application's concurrency ceiling is directly tied to the OS's ability to manage threads — and that ceiling is surprisingly low.
In a typical I/O-heavy application, most threads spend the bulk of their time blocked, waiting on databases, downstream HTTP calls, or message queues, while still holding an OS thread and its stack.
This is the thread exhaustion problem — and it's exactly what drove the industry toward reactive programming.
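As a hedged illustration of the ceiling (the pool size and task count here are made up, with a pool of 2 standing in for a server's fixed worker pool), a fixed platform-thread pool caps how many blocking tasks can make progress at once:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadCapDemo {
    // 6 tasks stand in for incoming requests that each block on I/O.
    static int peakConcurrency() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(6);
        for (int i = 0; i < 6; i++) {
            pool.submit(() -> {
                peak.accumulateAndGet(running.incrementAndGet(), Math::max);
                try {
                    Thread.sleep(50); // simulated blocking I/O: the OS thread sits idle
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                running.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return peak.get(); // can never exceed the pool size, no matter the load
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("peak concurrency = " + peakConcurrency());
    }
}
```

However many requests queue up, at most two run at a time; the rest wait in the queue, which surfaces as latency rather than throughput.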
Reactive programming solved the scalability problem by using fewer threads, non-blocking I/O, and async pipelines. Frameworks like Spring WebFlux and Project Reactor became the standard answer. But that solution came with a real bill.
What should be simple sequential logic — get a user, get their orders, compute a total — turns into a pipeline of nested operators:
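A hedged sketch using Project Reactor (the service calls are inlined with `Mono.just`/`Flux.just` stand-ins; real code would hit a database or HTTP client):

```java
import java.math.BigDecimal;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactiveTotal {
    // Stand-ins for userService.findUser(..) and orderService.findOrders(..)
    static Mono<String> findUser(String id) {
        return Mono.just(id);
    }

    static Flux<BigDecimal> findOrders(String userId) {
        return Flux.just(new BigDecimal("10.00"), new BigDecimal("32.00"));
    }

    // "Get a user, get their orders, compute a total" as an operator pipeline.
    static Mono<BigDecimal> totalFor(String userId) {
        return findUser(userId)
            .flatMap(user -> findOrders(user)
                .reduce(BigDecimal.ZERO, BigDecimal::add));
    }

    public static void main(String[] args) {
        System.out.println(totalFor("user-42").block());
    }
}
```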
The code may be efficient, but it is rarely easy to follow. Business logic gets buried inside flatMap chains and lambda expressions.
When something goes wrong, the stack trace is almost useless: it is dominated by the framework's internal operator and scheduler frames, and your actual business code is absent. You're hunting for a bug that could be anywhere in the reactive pipeline.
Testing reactive code requires a different mental model and dedicated tooling. A simple assertion becomes a full subscription lifecycle:
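A hedged sketch using `reactor-test`'s StepVerifier (the `Mono` under test is a trivial stand-in for a real service call):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

public class TotalTest {
    public static void main(String[] args) {
        Mono<Integer> total = Mono.just(42); // stand-in for the pipeline under test

        // Even a trivial "result equals 42" check goes through the full
        // subscribe / emit / complete lifecycle, with an explicit timeout.
        StepVerifier.create(total)
            .expectNext(42)
            .expectComplete()
            .verify(Duration.ofSeconds(5));
    }
}
```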
Every test requires StepVerifier, explicit subscription management, and careful handling of the async timeline. That cost accumulates across a codebase — in maintenance, onboarding, debugging, and refactoring.
Project Loom introduces Virtual Threads: lightweight threads managed entirely by the JVM, not the OS. Previewed in Java 19 and production-ready since Java 21, they fundamentally change the cost model of concurrency.
A Virtual Thread still looks like a thread to the developer. You create it, start it, and block inside it the same way. The difference is in how the runtime schedules it.
When a Virtual Thread performs a blocking operation, the JVM parks it and frees the underlying carrier thread to handle other work. When the operation completes, the Virtual Thread is resumed — transparently, without any callback or operator.
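A minimal sketch (Java 21+): the code is the ordinary `Thread` API, and `sleep` parks only the virtual thread.

```java
public class VirtualHello {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(100); // parks the virtual thread; the carrier thread is freed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();
        System.out.println(vt.isVirtual()); // true
    }
}
```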
This is the key insight: in the old model, blocking was expensive because it tied up an OS thread. In the Virtual Thread model, blocking is cheap because the JVM decouples the logical unit of work from the physical thread running it.
| Feature | Platform Threads | Virtual Threads |
|---|---|---|
| Managed by | OS | JVM |
| Max quantity | Thousands | Millions |
| Memory per thread | ~1 MB | a few KB |
| Blocking I/O | Wastes the thread | JVM parks & resumes |
| Code style required | Synchronous | Synchronous ✅ |
| Stack traces | Readable | Readable ✅ |
No new paradigm. No operators to memorize. Just Java.
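The table's cost claims are cheap to sanity-check. A hedged demo (Java 21+, task count illustrative) that runs thousands of blocking tasks on virtual threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ManyVirtualThreads {
    static long runBlockingTasks(int count) throws InterruptedException {
        long start = System.nanoTime();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                executor.submit(() -> {
                    Thread.sleep(100); // each task blocks; the JVM parks it
                    return null;
                });
            }
        } // try-with-resources close() waits for every task to finish
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // Thousands of tasks, each sleeping 100 ms, finish in roughly the time
        // of a single sleep: blocked virtual threads cost almost nothing.
        System.out.println(runBlockingTasks(10_000) + " ms");
    }
}
```

Running the same loop on a fixed platform-thread pool of, say, 200 threads would take on the order of `count / 200` sleeps instead of one.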
The simplicity gap becomes obvious when comparing equivalent code side by side.
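A hedged sketch of the same "get a user, get their orders, compute a total" logic as plain blocking code (the lookup methods are local stand-ins for real remote calls):

```java
import java.math.BigDecimal;
import java.util.List;

public class SequentialTotal {
    record Order(BigDecimal amount) {}

    // Stand-ins for real blocking calls; on a virtual thread, the JVM would
    // park the thread while these wait on the network.
    static String findUser(String id) {
        return id;
    }

    static List<Order> findOrders(String userId) {
        return List.of(new Order(new BigDecimal("10.00")),
                       new Order(new BigDecimal("32.00")));
    }

    static BigDecimal totalFor(String userId) {
        String user = findUser(userId);            // step 1: get the user
        List<Order> orders = findOrders(user);     // step 2: get their orders
        return orders.stream()                     // step 3: compute the total
            .map(Order::amount)
            .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    public static void main(String[] args) {
        System.out.println(totalFor("user-42")); // 42.00
    }
}
```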
The Virtual Threads version reads like pseudocode. It's sequential, readable, and requires zero mental overhead to understand. Both handle I/O efficiently — the Virtual Threads version just does it without asking anything extra from the developer.
Performance is where the conversation gets nuanced. Three independent benchmark studies provide a clear picture.
The benchmark story is not "Virtual Threads always win." The real takeaway is that for I/O-bound workloads, Virtual Threads deliver throughput comparable to reactive stacks while keeping plain synchronous code; reactive keeps an edge mainly in streaming and backpressure-heavy scenarios. For most teams, that changes the default choice.
The most important thing Project Loom does is not just improve performance — it restores developer ergonomics without sacrificing scalability.
For years, teams accepted reactive complexity because the alternative was poor scalability with platform threads. Now there is a third path: keep the simple, synchronous programming model and still handle high concurrency efficiently.
Reactive is not dead. But it is no longer the forced default for every concurrency problem in Java. That is a healthier place for the ecosystem.
Building scalable Java applications no longer requires adopting a completely different programming model. Virtual Threads give you the performance of async with the simplicity of sync — and for a large class of applications, that is the right trade-off.
If your system is mostly I/O-bound and your goal is software that is both scalable and maintainable, Virtual Threads are now the better default.
Not because reactive failed. But because Java finally learned how to make simple code concurrent again.