
Building network testing tools often reveals the complexity hidden beneath simple requirements. This article documents the creation of an interactive TCP proxy in Rust that allows real-time simulation of various network conditions—from latency injection to complete connection blocking.
Testing application resilience under adverse network conditions traditionally requires either modifying production infrastructure or using complex simulation tools. What if you could simply place a proxy between your application and any service, then dynamically inject failures, latency, or bandwidth constraints?
That's the goal: a lightweight TCP proxy with an interactive menu that lets you switch between different network behaviors on the fly.
The proxy is built using:

- Tokio for asynchronous network I/O
- Crossterm for terminal control

These weren't arbitrary choices. Tokio provides mature async I/O primitives, Crossterm handles terminal control across platforms, and Rust's type system prevents the subtle bugs that plague network code.
The system consists of three concurrent components: the connection handlers that forward traffic, the interactive menu running on its own blocking thread, and a shutdown task listening for Ctrl+C.
All components share access to the current proxy mode through Arc<Mutex<ProxyMode>>, allowing real-time behavior changes without restarting.
The initial implementation seemed logical—read from client, forward to server, read from server, forward to client:
This code compiles. It looks reasonable. It doesn't work.
When you test it with a simple HTTP request, the connection hangs indefinitely. The client sends a request and waits for a response, but the proxy is stuck waiting to read MORE from the client instead of reading the server's response.
This is half-duplex communication—only one direction operates at a time. TCP requires full-duplex communication where both directions operate independently and simultaneously.
Think of it like a phone call. If you could only listen OR speak (but not both at the same time), conversations would be impossible. TCP connections work the same way—data must flow in both directions concurrently.
The fix requires splitting each connection into independent read and write halves, then running two concurrent tasks:
Now both directions run simultaneously. When a client sends an HTTP request, the client→server task forwards it immediately while the server→client task waits for the response. True bidirectional forwarding.
The proxy supports five operational modes:

Allow: normal operation—traffic flows transparently without modification.

Block: blocks all incoming connections immediately. Use case: testing application behavior when services are completely unavailable.

Latency: adds artificial delay to each packet in both directions. Use case: simulating geographic distance or slow networks (e.g., 200ms for intercontinental connections).

Timeout: causes connections to time out after a specified duration. Use case: testing retry logic and timeout handling in applications.

Throttle: limits bandwidth by calculating the required sleep time based on bytes transferred. Use case: simulating slow mobile connections (e.g., 10KB/s for 3G).
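The five modes map naturally onto an enum. The names, payload types, and helper below are illustrative, not the article's actual definitions:

```rust
use std::time::Duration;

/// Hypothetical representation of the five operational modes.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ProxyMode {
    /// Traffic flows transparently.
    Allow,
    /// All incoming connections are rejected immediately.
    Block,
    /// Artificial delay added to each chunk, in both directions.
    Latency(Duration),
    /// Connections time out after this duration.
    Timeout(Duration),
    /// Bandwidth cap in bytes per second.
    Throttle(u64),
}

impl ProxyMode {
    /// Delay to apply before forwarding a chunk of n bytes, if any.
    fn delay_for(&self, n: usize) -> Option<Duration> {
        match self {
            ProxyMode::Latency(d) => Some(*d),
            // Sleep long enough that n / elapsed approximates the cap.
            ProxyMode::Throttle(bps) => {
                Some(Duration::from_secs_f64(n as f64 / *bps as f64))
            }
            _ => None,
        }
    }
}
```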
The menu requires blocking I/O for keyboard input, while the proxy requires async I/O for network operations. These don't mix.
Solution: Isolate the menu in a dedicated blocking thread using tokio::task::spawn_blocking:
This prevents blocking operations from interfering with async network I/O.
The menu uses Crossterm's raw mode for real-time keyboard input:
Raw mode is enabled when showing the menu and disabled when prompting for input values, preventing terminal state corruption.
Multiple connection handlers need to read the current mode while the menu updates it. This requires thread-safe shared state:
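A minimal sketch of that shared state, abbreviated to two modes (the real enum has five):

```rust
use std::sync::{Arc, Mutex};

// Abbreviated mode type for the sketch.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ProxyMode {
    Allow,
    Block,
}

// Shared handle: cheap to clone, safe to read and write from any thread.
type SharedMode = Arc<Mutex<ProxyMode>>;

fn new_shared_mode() -> SharedMode {
    Arc::new(Mutex::new(ProxyMode::Allow))
}

// Menu side: swap the mode in place; the lock is held only for the write.
fn set_mode(shared: &SharedMode, mode: ProxyMode) {
    *shared.lock().unwrap() = mode;
}

// Handler side: copy the current mode out before each transfer, so the
// lock is never held across an .await point.
fn current_mode(shared: &SharedMode) -> ProxyMode {
    *shared.lock().unwrap()
}
```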
Each connection handler reads the current mode before each transfer.
When the user changes the mode via the menu, all handlers immediately see the update on their next read operation. This enables dynamic behavior changes without dropping existing connections.
Ctrl+C handling runs in a separate async task to ensure clean shutdown:
This works regardless of whether the user is in the menu or the proxy is handling connections.
The 8KB buffer size balances throughput and memory usage. Larger buffers improve throughput for high-bandwidth transfers but increase memory consumption.
Each connection spawns 2 async tasks. Tokio's work-stealing scheduler efficiently handles thousands of concurrent connections on modern hardware.
Modes apply to new read operations, not mid-transfer. If a large file transfer is in progress when you switch from Allow to Latency mode, the current transfer completes at full speed. The next read operation will apply the new latency.
Throttling uses sleep delays rather than token bucket or leaky bucket algorithms. This provides approximate bandwidth limiting suitable for testing but not precise rate control.
The proxy operates at the TCP level. It cannot inspect or modify anything above the raw byte stream: HTTP messages, TLS records, and other application-layer data pass through opaquely.
This is both a limitation and a feature—protocol independence means the proxy works with any TCP-based protocol.
The proxy uses fixed 8KB buffers. Very large messages are automatically chunked by TCP, but this isn't optimized for any specific protocol's message boundaries.
The proxy was validated by driving real traffic through it, starting with the simple HTTP requests that exposed the half-duplex bug.
The complete implementation fits in approximately 250 lines of Rust code.
The critical insight was recognizing that TCP proxying requires truly concurrent bidirectional forwarding, not sequential request-response handling. The initial half-duplex implementation would work for simple ping-pong protocols but fails for real-world TCP traffic patterns.
The interactive menu approach, while requiring careful thread management between blocking and async contexts, provides significant usability benefits over configuration files when manually testing application resilience.
Building this proxy revealed how seemingly simple requirements—"forward TCP traffic"—hide significant complexity. The bidirectional forwarding challenge demonstrates why understanding the underlying protocol semantics matters more than writing code that compiles.
The result is a practical tool for testing network resilience that runs anywhere Rust does, requires no configuration, and lets you inject failures interactively. Whether you're testing timeout handling, simulating geographic latency, or practicing chaos engineering, having a programmable proxy in your toolkit makes these tasks straightforward.