Clustering

Parallelizing a Kito application using multiple processes

Kito allows you to run multiple instances of your application across different system processes using Node.js’ native cluster module. This enables you to fully utilize all CPU cores and significantly improve throughput and scalability. Clustering is particularly helpful in production environments where you need to handle large amounts of concurrent traffic without modifying your application logic.

How Clustering Works

When clustering is enabled, the primary (master) process spawns multiple worker processes. Each worker runs the exact same Kito application, and all of them serve traffic on the same port. By default, the primary process accepts incoming connections and distributes them across the workers round-robin (the default scheduling policy on most platforms), so no extra load balancer is needed for local scaling.

Key benefits:

  • Local horizontal scaling: spreads load across all CPU cores.
  • Higher throughput: more concurrent requests served in parallel.
  • Fault isolation: if a worker crashes, the master can restart it.
  • Simple model: requires minimal changes to your server code.

Clustering Setup

This example follows a common pattern: the primary process spawns one worker per CPU core, and each worker simply imports the Kito server.

Important Notes

Why reusePort: true?

In cluster mode, workers must not each try to claim the port for themselves. Kito automatically detects that clustering is in use; with reusePort: true set, workers no longer conflict over binding the same port, and Node.js manages shared access to it internally.
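As an illustration, a worker entry point might look like the sketch below. Only the reusePort flag comes from the text above; the rest of the API shape (the Kito import, app.get, the listen options object) is an assumption modeled loosely on common Node.js frameworks, so check the Kito API reference for the real signatures:

```typescript
// Hypothetical worker entry point ("./server"). Everything except
// the reusePort flag is an assumed API shape.
import { Kito } from "kito";

const app = new Kito();

app.get("/", () => `hello from worker ${process.pid}`);

// reusePort lets all workers share the port safely in cluster mode.
app.listen({ port: 3000, reusePort: true });
```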

Automatic Worker Restart

Adding a listener for cluster.on("exit") lets you automatically restart any worker that crashes. This increases resilience without needing external tools like PM2 or systemd.
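In the primary process, such a listener can be as small as the sketch below. The restart policy shown (always fork a replacement) is minimal; real deployments usually also guard against tight crash loops, for example by rate-limiting restarts:

```typescript
import cluster from "node:cluster";

// Primary-side listener: replace any worker that dies so the pool
// stays at full size. A production version would also rate-limit
// restarts to avoid spinning on a worker that crashes at boot.
cluster.on("exit", (worker, code, signal) => {
  console.log(
    `worker ${worker.process.pid} exited (code=${code}, signal=${signal}); spawning replacement`,
  );
  cluster.fork();
});
```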

Stateful vs Stateless Design

For best results, avoid keeping application state in memory inside workers; each worker is a separate process, so in-memory state is never shared between them. Use external stores such as Redis or a database instead.
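The isolation is easy to demonstrate. In the sketch below, two workers each bump a local counter; the primary collects the values and sees two independent counts of 1 rather than a shared total of 2:

```typescript
import cluster from "node:cluster";

let hits = 0; // in-memory state: every worker gets its own copy

if (cluster.isPrimary) {
  const counts: number[] = [];
  const workers = [cluster.fork(), cluster.fork()];
  for (const w of workers) {
    w.on("message", (n: number) => {
      counts.push(n);
      if (counts.length === 2) {
        console.log("per-worker counts:", counts); // [ 1, 1 ], not 2
        workers.forEach((worker) => worker.kill());
      }
    });
  }
} else {
  hits += 1;            // visible only inside this worker process
  process.send?.(hits); // report the local value to the primary
}
```

A shared store gives every worker the same view; with Redis, for instance, an atomic INCR on a shared key replaces the local counter.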

Production Environments

Although clustering can be managed manually, production deployments often combine it with:

  • PM2 for clustering + monitoring
  • Docker using multiple container replicas
  • Orchestrators like Kubernetes, Nomad, ECS
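For the PM2 route, a minimal ecosystem file lets PM2 handle the forking itself. The app name and entry path below are illustrative, not defined by Kito:

```javascript
// ecosystem.config.js -- PM2 manages the cluster, so no manual
// cluster.fork() bootstrap is needed in the application code.
module.exports = {
  apps: [
    {
      name: "kito-app",        // illustrative app name
      script: "./server.js",   // illustrative entry point
      exec_mode: "cluster",    // PM2's built-in cluster mode
      instances: "max",        // one process per CPU core
    },
  ],
};
```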

Kito remains lightweight and fully compatible with all these strategies.