Ever built a feature that users love so much they're all trying to use it at once? Maybe it's an AI-powered image generator, a data export tool, or a real-time collaboration feature. The problem: these operations are expensive, and if too many users hit them simultaneously, your servers might buckle under the load.
Traditional rate limiting simply rejects requests once a limit is hit. A more user-friendly option is to let users queue up and wait their turn instead of showing them a "try again later" message.
Enter the semaphore: a classic synchronization primitive that limits concurrent access to resources without denying access. Today, we'll explore three different approaches to implementing semaphores, starting simple and evolving to a robust distributed solution using Redis.
Approach 1: The simple lock
Let's start with a basic TypeScript semaphore that works within a single process:
class SimpleSemaphore {
  private permits: number;
  private waitingQueue: Array<() => void> = [];

  constructor(permits: number) {
    this.permits = permits;
  }

  async acquire(): Promise<void> {
    if (this.permits > 0) {
      this.permits--;
      return;
    }
    // No permits available, wait in queue
    return new Promise<void>((resolve) => {
      this.waitingQueue.push(resolve);
    });
  }

  release(): void {
    const next = this.waitingQueue.shift();
    if (next) {
      next(); // Hand the permit directly to the next waiter
    } else {
      this.permits++;
    }
  }
}
// Usage
const semaphore = new SimpleSemaphore(3);

async function expensiveOperation() {
  await semaphore.acquire();
  try {
    console.log("Doing expensive work...");
  } finally {
    semaphore.release();
  }
}
This works great for single-process applications, but what happens when you scale horizontally? Each instance of your application will have its own semaphore, completely unaware of the others. If you deploy 5 instances with a semaphore limit of 3, you could end up with 15 concurrent operations instead of 3.
Approach 2: Counting semaphore with polling
Time to go distributed! Our first Redis approach uses a simple counter and polling strategy:
import Redis from "ioredis";

class CountingSemaphore {
  private redis: Redis;
  private key: string;
  private maxPermits: number;

  constructor(redis: Redis, key: string, maxPermits: number) {
    this.redis = redis;
    this.key = key;
    this.maxPermits = maxPermits;
  }

  async acquire(): Promise<void> {
    while (true) {
      const current = await this.redis.get(this.key);
      const count = current ? parseInt(current, 10) : 0;
      if (count < this.maxPermits) {
        // Try to acquire a permit
        const newCount = await this.redis.incr(this.key);
        if (newCount <= this.maxPermits) {
          return; // Successfully acquired
        }
        // Oops, we went over the limit, decrement back
        await this.redis.decr(this.key);
      }
      // No permits available, wait and try again
      await new Promise((resolve) => setTimeout(resolve, 100));
    }
  }

  async release(): Promise<void> {
    await this.redis.decr(this.key);
  }
}
// Usage
const redis = new Redis();
const semaphore = new CountingSemaphore(redis, "my-semaphore", 3);

async function expensiveOperation() {
  await semaphore.acquire();
  try {
    console.log("Doing expensive work...");
  } finally {
    await semaphore.release();
  }
}
This approach works across multiple processes, but it has a significant drawback: the polling loop. Even with a 100ms delay, it creates unnecessary Redis traffic and CPU usage, and adds up to 100ms of latency to every acquisition. There's also a check-then-act race: between the GET and the INCR, another process can grab the last permit, so the counter briefly overshoots the limit before the compensating DECR rolls it back.
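One way to close that check-then-act gap, sketched below, is to move the check and the increment into a single Lua script, which Redis executes atomically. This is my own illustration, not part of the article's code; the `canAcquire` helper simply mirrors the script's decision in plain TypeScript.

```typescript
// Sketch: an atomic acquire for the counting semaphore. Redis runs a Lua
// script as one atomic operation, so no other client can slip in between
// the GET and the INCR.
const ACQUIRE_SCRIPT = `
local count = tonumber(redis.call('GET', KEYS[1]) or '0')
if count < tonumber(ARGV[1]) then
  redis.call('INCR', KEYS[1])
  return 1
end
return 0
`;

// The same decision the script makes, mirrored in TypeScript:
// acquire succeeds only while the counter is below the limit.
function canAcquire(count: number, maxPermits: number): boolean {
  return count < maxPermits;
}

// Inside acquire(), the polling loop would then call something like
// (assuming ioredis):
//   const ok = await this.redis.eval(ACQUIRE_SCRIPT, 1, this.key, this.maxPermits);
//   if (ok === 1) return; // acquired
```

This removes the overshoot-and-roll-back dance, though the loop still polls while the semaphore is full.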
Approach 3: List-based semaphore with BLPOP
Let's eliminate the polling with Redis's blocking list operations. This approach uses a list to represent available permits and BLPOP to block until a permit becomes available:
import Redis from "ioredis";

class ListSemaphore {
  private redis: Redis;
  private key: string;
  private maxPermits: number;
  private ready: Promise<void>;

  constructor(redis: Redis, key: string, maxPermits: number) {
    this.redis = redis;
    this.key = key;
    this.maxPermits = maxPermits;
    // The constructor can't await, so keep the initialization promise
    // around and await it on first use
    this.ready = this.initializePermits();
  }

  private async initializePermits(): Promise<void> {
    // Initialize the list with permits if it doesn't exist.
    // Note: EXISTS followed by LPUSH is itself a small race if several
    // instances start at exactly the same moment.
    const exists = await this.redis.exists(this.key);
    if (!exists) {
      const permits = Array.from(
        { length: this.maxPermits },
        (_, i) => `permit-${i}`
      );
      await this.redis.lpush(this.key, ...permits);
    }
  }

  async acquire(): Promise<void> {
    await this.ready;
    // BLPOP blocks until an element is available in the list
    await this.redis.blpop(this.key, 0);
  }

  async release(): Promise<void> {
    // Return a permit to the list
    await this.redis.lpush(this.key, `permit-${Date.now()}`);
  }
}
// Usage
const redis = new Redis();
const semaphore = new ListSemaphore(redis, "my-semaphore", 3);

async function expensiveOperation() {
  await semaphore.acquire();
  try {
    console.log("Doing expensive work...");
  } finally {
    await semaphore.release();
  }
}
This is the most elegant solution! BLPOP blocks until a permit becomes available, eliminating the need for polling. When a permit is released, it's immediately available to the next waiting process. One caveat: BLPOP ties up the entire Redis connection while it waits, so give the semaphore a dedicated connection rather than sharing one with the rest of your application.
Conclusion
We've journeyed from a simple local semaphore to a sophisticated distributed solution:
Local semaphores work great for single-process applications but don't scale horizontally
Redis counting semaphores solve the distribution problem but introduce polling overhead
Redis list semaphores provide the best of both worlds: distributed coordination with efficient blocking
The list-based approach with BLPOP is the clear winner in terms of simplicity and effectiveness. It's efficient, fair (FIFO), and scales beautifully across multiple application instances. Your users get a smooth experience - no more "service unavailable" errors, just a brief wait in line for their turn to use that expensive feature they love.
Homework
Timeout support
If your expensive operations take a long time, you might not want to wait forever! Consider implementing a timeout in acquire.
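As a starting point, here's one possible shape for the local semaphore (my own sketch, not the article's solution): acquire races against a timer, and crucially the waiter removes itself from the queue on timeout, so a later release can't hand a permit to a caller that already gave up.

```typescript
// A minimal local semaphore with timeout support, following the
// SimpleSemaphore design from earlier. acquire() resolves to true if a
// permit was obtained within timeoutMs, false otherwise.
class TimeoutSemaphore {
  private permits: number;
  private waitingQueue: Array<() => void> = [];

  constructor(permits: number) {
    this.permits = permits;
  }

  async acquire(timeoutMs: number): Promise<boolean> {
    if (this.permits > 0) {
      this.permits--;
      return true;
    }
    return new Promise<boolean>((resolve) => {
      const waiter = () => {
        clearTimeout(timer);
        resolve(true); // Got a permit before the deadline
      };
      const timer = setTimeout(() => {
        // Give up: remove ourselves so release() skips us
        const i = this.waitingQueue.indexOf(waiter);
        if (i !== -1) this.waitingQueue.splice(i, 1);
        resolve(false);
      }, timeoutMs);
      this.waitingQueue.push(waiter);
    });
  }

  release(): void {
    const next = this.waitingQueue.shift();
    if (next) {
      next(); // Hand the permit to the next waiter
    } else {
      this.permits++;
    }
  }
}
```

For the Redis list semaphore, you get this almost for free: BLPOP's second argument is already a timeout in seconds, and the command resolves to null when it expires.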
Deadlock prevention
Personally I only write bug-free code that works every time and never crashes, but just in case you don't, consider adding a cleanup job that regularly reclaims lost permits (held by processes that crashed before calling release).
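One possible design (my own sketch, not from the article): have each holder register itself in a Redis sorted set scored by acquisition time, and run a periodic job that treats entries older than some TTL as crashed, removes them, and pushes a permit back for each. The reclaim decision itself is a pure function, shown here in isolation; the names `Holder`, `findStaleHolders`, and `ttlMs` are illustrative.

```typescript
// Sketch of the reclaim decision for a cleanup job. Each active holder is
// assumed to be recorded as { id, acquiredAt } - in Redis this could be a
// sorted set: ZADD holders <timestamp> <id> on acquire, ZREM on release.
interface Holder {
  id: string;
  acquiredAt: number; // ms since epoch
}

// Holders older than ttlMs are presumed crashed and eligible for reclaim
function findStaleHolders(
  holders: Holder[],
  nowMs: number,
  ttlMs: number
): Holder[] {
  return holders.filter((h) => nowMs - h.acquiredAt > ttlMs);
}

// The periodic job would then do, roughly (assuming ioredis):
//   const stale = await redis.zrangebyscore("holders", 0, Date.now() - ttlMs);
//   for (const id of stale) {
//     await redis.zrem("holders", id);
//     await redis.lpush("my-semaphore", `permit-reclaimed-${id}`);
//   }
```

Pick the TTL well above your slowest legitimate operation, or the job will "reclaim" permits that are still in use and quietly raise your concurrency limit.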
Do you need help with your application development or data platform? Feel free to send me a message!