Designing Real-time Systems: WebSockets, SSE, and Knowing Which to Use

Real-time features are everywhere — chat, notifications, live dashboards. Here's the architecture behind Localstreet's real-time systems and the decision framework we use.

Real-time features feel magical to users and terrifying to engineers. The fear is understandable — you’re maintaining persistent connections, dealing with reconnection logic, scaling across servers, and debugging issues that only happen under load.

At Localstreet, we run several real-time systems: vendor order notifications, in-app buyer-vendor chat, live delivery tracking, and a dashboard that shows vendors their store activity as it happens. Each uses a different technology, for good reasons.

The Three Options and When to Use Each

Before writing any code, pick the right primitive.

HTTP Polling

Not actually real-time, but valid for some use cases.

Use when: Updates happen infrequently (every few minutes), simplicity matters more than latency, and you don’t control the infrastructure.

Cost: Unnecessary requests when there’s no new data. Adds latency equal to your poll interval.
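The pattern is simple enough to sketch. A minimal client-side cursor merge, with a hypothetical Order shape and endpoint (not Localstreet's actual API):

```typescript
// Hypothetical shapes: illustrative only, not the real Localstreet API.
type Order = { id: number; createdAt: string };

// Merge a poll response into what we've already seen, dropping duplicates,
// so a poll that overlaps the previous one doesn't re-deliver old orders.
function mergePoll(seen: Order[], incoming: Order[]): Order[] {
  const ids = new Set(seen.map((o) => o.id));
  return [...seen, ...incoming.filter((o) => !ids.has(o.id))];
}

// In the browser this runs on a timer; the interval is your worst-case latency:
// setInterval(async () => {
//   const res = await fetch('/api/orders?sinceId=' + lastSeenId);
//   orders = mergePoll(orders, await res.json());
// }, 60_000);
```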

Server-Sent Events (SSE)

HTTP-based one-directional push from server to client. Underused and underrated.

Use when: Server pushes updates to client, but client doesn’t need to send data in the same connection. Order notifications, activity feeds, live scores.

Advantages: Works over plain HTTP/2 (no protocol upgrade), automatic reconnection built into browsers, simpler to implement and scale than WebSockets.

WebSockets

Full-duplex, persistent connection. Both client and server can send messages at any time.

Use when: Client and server need to communicate bidirectionally in real-time. Chat, collaborative editing, multiplayer features.

Cost: More complex to scale (connection affinity), requires handling reconnections, heavier infrastructure.

Our Architecture: Order Notifications with SSE

Vendor order notifications are one-directional: the server tells the vendor about a new order. SSE is perfect here.

// server: order notification stream
import { EventEmitter } from 'events';
export const orderBus = new EventEmitter();

app.get('/api/vendor/:vendorId/notifications/stream', authenticate, (req, res) => {
  const { vendorId } = req.params;

  // SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('X-Accel-Buffering', 'no'); // disable Nginx buffering
  res.flushHeaders(); // send headers now so the client's open event fires immediately

  // Send a comment every 25s as a keepalive (prevents proxy timeouts)
  const keepalive = setInterval(() => res.write(': keepalive\n\n'), 25000);

  const sendOrder = (order: Order) => {
    if (order.vendorId === vendorId) {
      res.write(`event: new-order\n`);
      res.write(`data: ${JSON.stringify(order)}\n\n`);
    }
  };

  orderBus.on('order', sendOrder);

  req.on('close', () => {
    clearInterval(keepalive);
    orderBus.off('order', sendOrder);
  });
});

// client: consuming the SSE stream
function useOrderStream(vendorId: string) {
  const [orders, setOrders] = useState<Order[]>([]);

  useEffect(() => {
    const source = new EventSource(`/api/vendor/${vendorId}/notifications/stream`);

    source.addEventListener('new-order', (e) => {
      const order = JSON.parse((e as MessageEvent).data) as Order;
      setOrders((prev) => [order, ...prev]);
      playNotificationSound();
    });

    // EventSource reconnects automatically on error — no manual retry needed
    return () => source.close();
  }, [vendorId]);

  return orders;
}

The key operational detail: the X-Accel-Buffering: no header. Without it, Nginx buffers SSE events and your “real-time” updates arrive in batches. This burned us in production.
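The same fix can live in the Nginx config itself. A sketch of a location block for the stream endpoint, assuming Nginx proxies to the Node app (the upstream name and timeout values are illustrative, not our exact config):

```nginx
location /api/vendor/ {
    proxy_pass http://node_backend;   # upstream name is an assumption
    proxy_http_version 1.1;
    proxy_set_header Connection '';   # keep the upstream connection open
    proxy_buffering off;              # same effect as X-Accel-Buffering: no
    proxy_read_timeout 1h;            # must outlive the 25s keepalive comments
}
```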

Scaling SSE and WebSockets Across Multiple Servers

Single-server SSE/WebSocket implementations are easy. Multi-server is where most tutorials stop.

The problem: when an order is placed, it triggers on Server A, but the vendor’s SSE connection is open on Server B. Server A emits to its local EventEmitter, so Server B’s listener never fires and the vendor never sees the order.

Solution: Redis Pub/Sub as the cross-server event bus

import { createClient } from 'redis';

const publisher = createClient({ url: process.env.REDIS_URL });
const subscriber = publisher.duplicate();

await Promise.all([publisher.connect(), subscriber.connect()]);

// When an order is placed (any server):
export const publishOrder = async (order: Order) => {
  await publisher.publish('orders', JSON.stringify(order));
};

// Each server subscribes and emits to its local EventEmitter:
await subscriber.subscribe('orders', (message) => {
  const order = JSON.parse(message) as Order;
  orderBus.emit('order', order); // local connections on this server receive it
});

Now every server receives every order event via Redis and forwards it to any SSE/WebSocket clients connected locally.
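The important subtlety: publishOrder never emits to the local bus directly; every server, including the publisher, receives the event through its Redis subscription, so each connection gets it exactly once. The fan-out can be demonstrated with a plain EventEmitter standing in for the Redis channel (a sketch, not our production code):

```typescript
import { EventEmitter } from 'node:events';

// Stand-in for the Redis channel: every subscriber sees every publish,
// including the server that published.
const channel = new EventEmitter();

type Order = { id: number; vendorId: string };

// Each "server" has a local bus feeding its own SSE connections.
function makeServer(name: string, delivered: string[]) {
  const localBus = new EventEmitter();
  localBus.on('order', (o: Order) => delivered.push(`${name}:${o.id}`));
  // subscribe: forward channel events to this server's local connections
  channel.on('orders', (msg: string) => localBus.emit('order', JSON.parse(msg)));
}

// publish once; never emit locally, since the subscription handles local delivery too
const publishOrder = (o: Order) => channel.emit('orders', JSON.stringify(o));

const delivered: string[] = [];
makeServer('A', delivered);
makeServer('B', delivered);
publishOrder({ id: 1, vendorId: 'v1' });
// each server delivers exactly once: ['A:1', 'B:1']
```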

In-App Chat with Socket.io

Vendor-buyer chat is bidirectional: both parties send and receive. That calls for WebSockets, which we run through Socket.io.

// Socket.io with Redis adapter for horizontal scaling
import { Server } from 'socket.io';
import { createAdapter } from '@socket.io/redis-adapter';

const io = new Server(httpServer, {
  cors: { origin: process.env.ALLOWED_ORIGINS?.split(',') },
  transports: ['websocket', 'polling'], // polling fallback for proxied environments
});

io.adapter(createAdapter(publisher, subscriber));

io.on('connection', (socket) => {
  const userId = socket.handshake.auth.userId; // verify server-side (e.g. in io.use middleware) before trusting

  // Join user-specific room
  socket.join(`user:${userId}`);

  socket.on('send-message', async (data: MessagePayload) => {
    // Persist to DB
    const msg = await Message.create({
      senderId: userId,
      receiverId: data.to,
      content: data.content,
      orderId: data.orderId,
    });

    // Deliver to recipient (works across servers via Redis adapter)
    io.to(`user:${data.to}`).emit('message', msg);

    // Deliver to sender's other devices too
    socket.to(`user:${userId}`).emit('message', msg);
  });
});

The socket.join(`user:${userId}`) pattern handles multiple devices: if the same user has the app open on two devices, both sockets join the room and both receive the message.
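Because a user's room can contain several sockets and reconnects can replay recent messages, the client-side insert should be idempotent. A sketch, with a hypothetical ChatMessage shape (the real schema lives server-side):

```typescript
// Hypothetical message shape, assumed for illustration.
type ChatMessage = { id: string; senderId: string; content: string; sentAt: number };

// Insert a message at most once, keeping the thread ordered by send time.
function upsertMessage(thread: ChatMessage[], msg: ChatMessage): ChatMessage[] {
  if (thread.some((m) => m.id === msg.id)) return thread; // already delivered
  return [...thread, msg].sort((a, b) => a.sentAt - b.sentAt);
}
```

Wired into the Socket.io client, this becomes the body of the 'message' listener.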

Delivery Tracking: Blending WebSockets and HTTP

Live delivery tracking has an interesting read pattern: many customers watching one driver. Broadcasting driver location to thousands of individual WebSocket connections is inefficient.

Our approach: the driver app sends GPS coordinates to the server via HTTP POST every 5 seconds (mobile battery optimisation — persistent WS connection drains battery). The server stores latest position in Redis and broadcasts to all watchers via Socket.io room:

// Driver location update (HTTP POST, every 5s from mobile)
app.post('/api/driver/:driverId/location', authenticate, async (req, res) => {
  const { lat, lng } = req.body;
  const driverId = req.params.driverId;

  // Store in Redis with 30s TTL (detect offline drivers)
  await redis.setEx(`driver:${driverId}:location`, 30, JSON.stringify({ lat, lng }));

  // Broadcast to all customers watching this driver
  io.to(`tracking:${driverId}`).emit('location-update', { lat, lng, timestamp: Date.now() });

  res.sendStatus(204);
});

// Customer connects to watch a delivery
socket.on('watch-delivery', (driverId: string) => {
  socket.join(`tracking:${driverId}`);
});

The HTTP → Redis → WebSocket bridge gives us driver location updates without the overhead of a persistent WebSocket connection on the mobile side.
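A side benefit of the 30-second TTL: it doubles as an offline detector. If no fix has arrived within the window, the tracking UI can flip to "driver offline". A client-side sketch, assuming a payload like the one the POST handler broadcasts:

```typescript
// Assumed payload shape, mirroring the location-update broadcast above.
type LocationFix = { lat: number; lng: number; timestamp: number };

// Mirror the Redis TTL on the client: a fix older than 30s means the driver
// has stopped reporting (app closed, tunnel, dead battery).
function isDriverOnline(fix: LocationFix | null, now: number, ttlMs = 30_000): boolean {
  return fix !== null && now - fix.timestamp <= ttlMs;
}
```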

Lessons Learned

  1. SSE before WebSockets. If your use case is server→client only, use SSE. It’s simpler, HTTP/2-native, and auto-reconnects. You’ll thank yourself when you’re debugging at 2am.

  2. Always Redis Pub/Sub for multi-server. Don’t try to guarantee that a user’s connection and the event source land on the same server; that’s a scaling nightmare. Redis Pub/Sub is the standard solution and it works.

  3. Handle reconnections gracefully on the client. Networks drop. On reconnect, fetch the last N events from the DB and apply them — don’t assume you received everything. For chat, we fetch messages from the last messageId the client acknowledged.

  4. Rate-limit your real-time endpoints. A malicious client can exhaust your event emitter listeners or flood your Redis channel. Limit connections per user and messages per second.

  5. Test with realistic concurrency. SSE/WebSocket bugs almost never show in development with one connection. Use a load testing tool (k6, Artillery) with hundreds of concurrent connections before going to production.
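Lesson 3 in code: after a reconnect, fetch everything past the last acknowledged id, then fold in whatever arrived live while the fetch was in flight. A sketch with a hypothetical message shape:

```typescript
// Hypothetical minimal message shape for the catch-up merge.
type Msg = { id: number; content: string };

// Merge the fetched backlog with the live buffer, dropping duplicates and
// anything at or before the last id the client acknowledged.
function reconcile(lastAckId: number, fetched: Msg[], liveBuffer: Msg[]): Msg[] {
  const byId = new Map<number, Msg>();
  for (const m of [...fetched, ...liveBuffer]) {
    if (m.id > lastAckId) byId.set(m.id, m);
  }
  return [...byId.values()].sort((a, b) => a.id - b.id);
}
```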
