Drinking Philosophers: Generalized Resource Contention

    The Drinking Philosophers Problem is a generalization of the classic Dining Philosophers Problem, proposed by K. M. Chandy and J. Misra in 1984. Unlike the dining version where philosophers share forks with immediate neighbors in a circle, drinking philosophers share bottles with arbitrary neighbors based on a conflict graph. This makes it much more realistic for modeling real-world resource allocation. The Scenario The drinking party has: ...
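    Not Chandy and Misra's full algorithm (which uses clean/dirty bottles and request messages), but a minimal Go sketch of the setup described here, with illustrative names only: each edge of the conflict graph is a shared bottle, and deadlock is avoided in this sketch by acquiring bottles in a fixed global order.

        package main

        import (
            "fmt"
            "sort"
            "sync"
        )

        // Bottle is a shared resource on one edge of the conflict graph.
        type Bottle struct {
            ID int
            mu sync.Mutex
        }

        // Philosopher drinks only after locking every bottle it shares with a neighbor.
        type Philosopher struct {
            Name    string
            Bottles []*Bottle
        }

        // Drink acquires bottles in ascending ID order (a global ordering) to avoid deadlock.
        func (p *Philosopher) Drink(wg *sync.WaitGroup) {
            defer wg.Done()
            bottles := append([]*Bottle(nil), p.Bottles...)
            sort.Slice(bottles, func(i, j int) bool { return bottles[i].ID < bottles[j].ID })
            for _, b := range bottles {
                b.mu.Lock()
            }
            fmt.Printf("%s is drinking\n", p.Name)
            for i := len(bottles) - 1; i >= 0; i-- {
                bottles[i].mu.Unlock()
            }
        }

        func main() {
            // Conflict graph: A-B share bottle 1, B-C share bottle 2, A-C share bottle 3.
            b1, b2, b3 := &Bottle{ID: 1}, &Bottle{ID: 2}, &Bottle{ID: 3}
            philosophers := []*Philosopher{
                {Name: "A", Bottles: []*Bottle{b1, b3}},
                {Name: "B", Bottles: []*Bottle{b1, b2}},
                {Name: "C", Bottles: []*Bottle{b2, b3}},
            }
            var wg sync.WaitGroup
            for _, p := range philosophers {
                wg.Add(1)
                go p.Drink(&wg)
            }
            wg.Wait()
        }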

    August 12, 2025 · 12 min · Rafiul Alam

    Go Concurrency Pattern: The Ticket Seller

    ← Bank Account Drama | Series Overview | Login Counter → The Problem: Selling Tickets That Don’t Exist A concert has 1,000 tickets. Multiple goroutines handle sales concurrently. Each seller checks if tickets remain, and if yes, decrements the count and sells one. Sounds simple, right? Two sellers check simultaneously. Both see “1 ticket remaining.” Both sell. You’ve just sold ticket number -1. Congratulations, you’ve discovered the check-then-act race condition, one of the most common concurrency bugs in the wild. ...
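    A minimal sketch of the fix the excerpt hints at (TicketBooth and its methods are illustrative names, not the post's code): holding one mutex across both the check and the decrement makes check-then-act atomic, so the count can never go negative.

        package main

        import (
            "fmt"
            "sync"
        )

        // TicketBooth guards the remaining count so the check and the decrement
        // happen under the same lock.
        type TicketBooth struct {
            mu        sync.Mutex
            remaining int
        }

        // Sell returns true only if a ticket was actually available at the moment of sale.
        func (t *TicketBooth) Sell() bool {
            t.mu.Lock()
            defer t.mu.Unlock()
            if t.remaining == 0 { // check...
                return false
            }
            t.remaining-- // ...and act, under the same lock
            return true
        }

        func main() {
            booth := &TicketBooth{remaining: 1000}
            var wg sync.WaitGroup
            sold := make(chan int, 2000)
            for i := 0; i < 2000; i++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    if booth.Sell() {
                        sold <- 1
                    }
                }()
            }
            wg.Wait()
            close(sold)
            fmt.Println("tickets sold:", len(sold)) // always 1000, never more
        }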

    February 5, 2025 · 9 min · Rafiul Alam

    Go Concurrency Pattern: The Login Counter

    ← Ticket Seller | Series Overview | Monte Carlo Pi → The Problem: Counting What You Can’t See Count concurrent users. On login: increment. On logout: decrement. Simple, right? Now add reality: the counter is distributed across multiple servers; a user’s session times out on one server while they’re active on another; the increment message arrives after the decrement message; network partitions split your cluster; servers crash mid-operation. Suddenly, this “simple” counter becomes a distributed systems nightmare. Welcome to distributed counting, where even addition is hard. ...
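    The post is about the distributed case, but the single-node baseline is worth seeing first: sync/atomic already makes login/logout counting race-free on one server. A hedged sketch with illustrative names:

        package main

        import (
            "fmt"
            "sync"
            "sync/atomic"
        )

        // SessionCounter tracks active logins on a single node; the distributed
        // version discussed in the post is where the hard parts begin.
        type SessionCounter struct {
            active int64
        }

        func (c *SessionCounter) Login()        { atomic.AddInt64(&c.active, 1) }
        func (c *SessionCounter) Logout()       { atomic.AddInt64(&c.active, -1) }
        func (c *SessionCounter) Active() int64 { return atomic.LoadInt64(&c.active) }

        func main() {
            var c SessionCounter
            var wg sync.WaitGroup
            for i := 0; i < 500; i++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    c.Login()
                    c.Logout()
                }()
            }
            wg.Wait()
            fmt.Println("active sessions:", c.Active()) // 0: every login had a matching logout
        }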

    January 24, 2025 · 9 min · Rafiul Alam

    Go Concurrency Pattern: The Bank Account Drama

    ← Login Counter | Series Overview | Ticket Seller → The Problem: Money Vanishing Into Thin Air Two people share a bank account with $100. Both check the balance at the same time, see $100, and both withdraw $100. The bank just lost $100. This isn’t hypothetical: race conditions in financial systems have caused real monetary losses. The bank account drama illustrates the fundamental challenge of concurrent programming: read-modify-write operations are not atomic. What seems like a simple operation actually involves multiple steps, and when multiple goroutines execute these steps concurrently, chaos ensues. ...
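    A minimal sketch of the usual fix (illustrative names, not the post's code): wrap the read-modify-write in a mutex so two concurrent withdrawals cannot both observe the same $100 balance.

        package main

        import (
            "errors"
            "fmt"
            "sync"
        )

        // Account serializes read-modify-write so the balance check and the
        // deduction are a single atomic step.
        type Account struct {
            mu      sync.Mutex
            balance int
        }

        func (a *Account) Withdraw(amount int) error {
            a.mu.Lock()
            defer a.mu.Unlock()
            if amount > a.balance {
                return errors.New("insufficient funds")
            }
            a.balance -= amount
            return nil
        }

        func main() {
            acct := &Account{balance: 100}
            var wg sync.WaitGroup
            for i := 1; i <= 2; i++ {
                wg.Add(1)
                go func(id int) {
                    defer wg.Done()
                    if err := acct.Withdraw(100); err != nil {
                        fmt.Printf("withdrawal %d rejected: %v\n", id, err)
                    } else {
                        fmt.Printf("withdrawal %d succeeded\n", id)
                    }
                }(i)
            }
            wg.Wait()
            fmt.Println("final balance:", acct.balance) // exactly one withdrawal succeeds
        }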

    January 18, 2025 · 7 min · Rafiul Alam

    Mastering Go Concurrency: The Coffee Shop Guide to Goroutines

    Go Concurrency Patterns Series: Series Overview | Goroutine Basics | Channel Fundamentals Introduction: Welcome to Go Coffee Shop Imagine running a busy coffee shop. You have customers placing orders, baristas making drinks, shared equipment like espresso machines and milk steamers, and the constant challenge of managing it all efficiently. This is exactly what concurrent programming in Go is like - and goroutines are your baristas! In this comprehensive guide, we’ll explore Go’s concurrency patterns through the lens of running a coffee shop. By the end, you’ll understand not just how to write concurrent Go code, but why these patterns work and when to use them. ...
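    A taste of the coffee-shop analogy in code (a hedged sketch with made-up names, not taken from the guide): baristas are goroutines, the order queue is a channel, and closing the channel sends everyone home.

        package main

        import (
            "fmt"
            "sync"
        )

        // Each barista is a goroutine pulling orders from a shared channel,
        // like workers sharing one order queue at the counter.
        func barista(name string, orders <-chan string, wg *sync.WaitGroup) {
            defer wg.Done()
            for order := range orders {
                fmt.Printf("%s is making a %s\n", name, order)
            }
        }

        func main() {
            orders := make(chan string)
            var wg sync.WaitGroup
            for _, name := range []string{"Ana", "Ben", "Cleo"} {
                wg.Add(1)
                go barista(name, orders, &wg)
            }
            for _, o := range []string{"latte", "espresso", "mocha", "flat white", "cappuccino"} {
                orders <- o
            }
            close(orders) // no more customers; baristas finish up and go home
            wg.Wait()
        }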

    December 15, 2024 · 30 min · Rafiul Alam

    Worker Pool Pattern in Go

    Go Concurrency Patterns Series: ← Request/Response | Series Overview | Mutex Patterns → What is the Worker Pool Pattern? The Worker Pool pattern manages a fixed number of worker goroutines that process jobs from a shared queue. This pattern is essential for controlling resource usage, preventing system overload, and ensuring predictable performance under varying loads. Key Components: Job Queue: Channel containing work to be processed Worker Pool: Fixed number of worker goroutines Result Channel: Optional channel for collecting results Dispatcher: Coordinates job distribution to workers Real-World Use Cases Image Processing: Resize/compress images with limited CPU cores Database Operations: Limit concurrent database connections API Rate Limiting: Control outbound API call rates File Processing: Process files with bounded I/O operations Web Scraping: Limit concurrent HTTP requests Background Jobs: Process queued tasks with resource limits Basic Worker Pool Implementation package main import ( "fmt" "math/rand" "sync" "time" ) // Job represents work to be processed type Job struct { ID int Data interface{} } // Result represents the outcome of processing a job type Result struct { JobID int Output interface{} Error error } // WorkerPool manages a pool of workers type WorkerPool struct { workerCount int jobQueue chan Job resultQueue chan Result quit chan bool wg sync.WaitGroup } // NewWorkerPool creates a new worker pool func NewWorkerPool(workerCount, jobQueueSize int) *WorkerPool { return &WorkerPool{ workerCount: workerCount, jobQueue: make(chan Job, jobQueueSize), resultQueue: make(chan Result, jobQueueSize), quit: make(chan bool), } } // Start initializes and starts all workers func (wp *WorkerPool) Start() { for i := 0; i < wp.workerCount; i++ { wp.wg.Add(1) go wp.worker(i) } } // worker processes jobs from the job queue func (wp *WorkerPool) worker(id int) { defer wp.wg.Done() for { select { case job := <-wp.jobQueue: fmt.Printf("Worker %d processing job %d\n", id, job.ID) result := wp.processJob(job) wp.resultQueue <- result case <-wp.quit: fmt.Printf("Worker %d stopping\n", id) return } } } // processJob simulates job processing func (wp *WorkerPool) processJob(job Job) Result { // Simulate work time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // Process the job (example: square the number) if num, ok := job.Data.(int); ok { return Result{ JobID: job.ID, Output: num * num, } } return Result{ JobID: job.ID, Error: fmt.Errorf("invalid job data"), } } // Submit adds a job to the queue func (wp *WorkerPool) Submit(job Job) { wp.jobQueue <- job } // Results returns the result channel func (wp *WorkerPool) Results() <-chan Result { return wp.resultQueue } // Stop gracefully shuts down the worker pool func (wp *WorkerPool) Stop() { close(wp.quit) wp.wg.Wait() close(wp.jobQueue) close(wp.resultQueue) } func main() { // Create worker pool with 3 workers pool := NewWorkerPool(3, 10) pool.Start() defer pool.Stop() // Submit jobs go func() { for i := 1; i <= 10; i++ { job := Job{ ID: i, Data: i * 10, } pool.Submit(job) } }() // Collect results for i := 0; i < 10; i++ { result := <-pool.Results() if result.Error != nil { fmt.Printf("Job %d failed: %v\n", result.JobID, result.Error) } else { fmt.Printf("Job %d result: %v\n", result.JobID, result.Output) } } } Advanced Worker Pool with Context package main import ( "context" "fmt" "sync" "time" ) // ContextJob includes context for cancellation type ContextJob struct { ID string Data interface{} Context context.Context } // 
ContextResult includes timing and context information type ContextResult struct { JobID string Output interface{} Error error Duration time.Duration WorkerID int } // AdvancedWorkerPool supports context cancellation and monitoring type AdvancedWorkerPool struct { workerCount int jobQueue chan ContextJob resultQueue chan ContextResult ctx context.Context cancel context.CancelFunc wg sync.WaitGroup metrics *PoolMetrics } // PoolMetrics tracks worker pool performance type PoolMetrics struct { mu sync.RWMutex jobsProcessed int64 jobsFailed int64 totalDuration time.Duration activeWorkers int } func (pm *PoolMetrics) RecordJob(duration time.Duration, success bool) { pm.mu.Lock() defer pm.mu.Unlock() if success { pm.jobsProcessed++ } else { pm.jobsFailed++ } pm.totalDuration += duration } func (pm *PoolMetrics) SetActiveWorkers(count int) { pm.mu.Lock() defer pm.mu.Unlock() pm.activeWorkers = count } func (pm *PoolMetrics) GetStats() (processed, failed int64, avgDuration time.Duration, active int) { pm.mu.RLock() defer pm.mu.RUnlock() processed = pm.jobsProcessed failed = pm.jobsFailed active = pm.activeWorkers if pm.jobsProcessed > 0 { avgDuration = pm.totalDuration / time.Duration(pm.jobsProcessed) } return } // NewAdvancedWorkerPool creates a new advanced worker pool func NewAdvancedWorkerPool(ctx context.Context, workerCount, queueSize int) *AdvancedWorkerPool { poolCtx, cancel := context.WithCancel(ctx) return &AdvancedWorkerPool{ workerCount: workerCount, jobQueue: make(chan ContextJob, queueSize), resultQueue: make(chan ContextResult, queueSize), ctx: poolCtx, cancel: cancel, metrics: &PoolMetrics{}, } } // Start begins processing with all workers func (awp *AdvancedWorkerPool) Start() { awp.metrics.SetActiveWorkers(awp.workerCount) for i := 0; i < awp.workerCount; i++ { awp.wg.Add(1) go awp.worker(i) } // Start metrics reporter go awp.reportMetrics() } // worker processes jobs with context support func (awp *AdvancedWorkerPool) worker(id int) { defer awp.wg.Done() for { select { case job := <-awp.jobQueue: start := time.Now() result := awp.processContextJob(job, id) duration := time.Since(start) awp.metrics.RecordJob(duration, result.Error == nil) select { case awp.resultQueue <- result: case <-awp.ctx.Done(): return } case <-awp.ctx.Done(): fmt.Printf("Worker %d shutting down\n", id) return } } } // processContextJob handles job processing with context func (awp *AdvancedWorkerPool) processContextJob(job ContextJob, workerID int) ContextResult { start := time.Now() // Check if job context is already cancelled select { case <-job.Context.Done(): return ContextResult{ JobID: job.ID, Error: job.Context.Err(), Duration: time.Since(start), WorkerID: workerID, } default: } // Simulate work that respects context cancellation workDone := make(chan interface{}, 1) workErr := make(chan error, 1) go func() { // Simulate processing time time.Sleep(time.Duration(50+rand.Intn(100)) * time.Millisecond) if num, ok := job.Data.(int); ok { workDone <- num * num } else { workErr <- fmt.Errorf("invalid data type") } }() select { case result := <-workDone: return ContextResult{ JobID: job.ID, Output: result, Duration: time.Since(start), WorkerID: workerID, } case err := <-workErr: return ContextResult{ JobID: job.ID, Error: err, Duration: time.Since(start), WorkerID: workerID, } case <-job.Context.Done(): return ContextResult{ JobID: job.ID, Error: job.Context.Err(), Duration: time.Since(start), WorkerID: workerID, } case <-awp.ctx.Done(): return ContextResult{ JobID: job.ID, Error: awp.ctx.Err(), Duration: 
time.Since(start), WorkerID: workerID, } } } // Submit adds a job to the queue func (awp *AdvancedWorkerPool) Submit(job ContextJob) error { select { case awp.jobQueue <- job: return nil case <-awp.ctx.Done(): return awp.ctx.Err() } } // Results returns the result channel func (awp *AdvancedWorkerPool) Results() <-chan ContextResult { return awp.resultQueue } // reportMetrics periodically reports pool statistics func (awp *AdvancedWorkerPool) reportMetrics() { ticker := time.NewTicker(2 * time.Second) defer ticker.Stop() for { select { case <-ticker.C: processed, failed, avgDuration, active := awp.metrics.GetStats() fmt.Printf("Pool Stats - Processed: %d, Failed: %d, Avg Duration: %v, Active Workers: %d\n", processed, failed, avgDuration, active) case <-awp.ctx.Done(): return } } } // Stop gracefully shuts down the worker pool func (awp *AdvancedWorkerPool) Stop() { awp.cancel() awp.wg.Wait() close(awp.jobQueue) close(awp.resultQueue) } func main() { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() pool := NewAdvancedWorkerPool(ctx, 4, 20) pool.Start() defer pool.Stop() // Submit jobs with individual timeouts go func() { for i := 1; i <= 15; i++ { jobCtx, jobCancel := context.WithTimeout(ctx, 200*time.Millisecond) job := ContextJob{ ID: fmt.Sprintf("job-%d", i), Data: i * 5, Context: jobCtx, } if err := pool.Submit(job); err != nil { fmt.Printf("Failed to submit job %d: %v\n", i, err) jobCancel() break } // Cancel some jobs early to demonstrate cancellation if i%5 == 0 { go func() { time.Sleep(50 * time.Millisecond) jobCancel() }() } else { defer jobCancel() } } }() // Collect results resultCount := 0 for result := range pool.Results() { resultCount++ if result.Error != nil { fmt.Printf("Job %s failed (worker %d): %v (took %v)\n", result.JobID, result.WorkerID, result.Error, result.Duration) } else { fmt.Printf("Job %s completed (worker %d): %v (took %v)\n", result.JobID, result.WorkerID, result.Output, result.Duration) } if resultCount >= 15 { break } } } Dynamic Worker Pool package main import ( "context" "fmt" "sync" "sync/atomic" "time" ) // DynamicWorkerPool can scale workers up and down based on load type DynamicWorkerPool struct { minWorkers int maxWorkers int currentWorkers int64 jobQueue chan Job resultQueue chan Result ctx context.Context cancel context.CancelFunc wg sync.WaitGroup workerControl chan int // +1 to add worker, -1 to remove worker metrics *DynamicMetrics } // DynamicMetrics tracks load and performance for scaling decisions type DynamicMetrics struct { mu sync.RWMutex queueLength int64 avgProcessingTime time.Duration lastScaleTime time.Time scaleUpThreshold int scaleDownThreshold int } func (dm *DynamicMetrics) UpdateQueueLength(length int) { atomic.StoreInt64(&dm.queueLength, int64(length)) } func (dm *DynamicMetrics) GetQueueLength() int { return int(atomic.LoadInt64(&dm.queueLength)) } func (dm *DynamicMetrics) ShouldScaleUp(currentWorkers int, maxWorkers int) bool { dm.mu.RLock() defer dm.mu.RUnlock() return currentWorkers < maxWorkers && dm.GetQueueLength() > dm.scaleUpThreshold && time.Since(dm.lastScaleTime) > 5*time.Second } func (dm *DynamicMetrics) ShouldScaleDown(currentWorkers int, minWorkers int) bool { dm.mu.RLock() defer dm.mu.RUnlock() return currentWorkers > minWorkers && dm.GetQueueLength() < dm.scaleDownThreshold && time.Since(dm.lastScaleTime) > 10*time.Second } func (dm *DynamicMetrics) RecordScale() { dm.mu.Lock() defer dm.mu.Unlock() dm.lastScaleTime = time.Now() } // NewDynamicWorkerPool creates a new 
dynamic worker pool func NewDynamicWorkerPool(ctx context.Context, minWorkers, maxWorkers, queueSize int) *DynamicWorkerPool { poolCtx, cancel := context.WithCancel(ctx) return &DynamicWorkerPool{ minWorkers: minWorkers, maxWorkers: maxWorkers, currentWorkers: 0, jobQueue: make(chan Job, queueSize), resultQueue: make(chan Result, queueSize), ctx: poolCtx, cancel: cancel, workerControl: make(chan int, maxWorkers), metrics: &DynamicMetrics{ scaleUpThreshold: queueSize / 2, scaleDownThreshold: queueSize / 4, }, } } // Start initializes the pool with minimum workers func (dwp *DynamicWorkerPool) Start() { // Start with minimum workers for i := 0; i < dwp.minWorkers; i++ { dwp.addWorker() } // Start the scaler go dwp.scaler() // Start queue monitor go dwp.queueMonitor() } // addWorker creates and starts a new worker func (dwp *DynamicWorkerPool) addWorker() { workerID := atomic.AddInt64(&dwp.currentWorkers, 1) dwp.wg.Add(1) go func(id int64) { defer dwp.wg.Done() defer atomic.AddInt64(&dwp.currentWorkers, -1) fmt.Printf("Worker %d started\n", id) for { select { case job := <-dwp.jobQueue: start := time.Now() result := dwp.processJob(job) duration := time.Since(start) fmt.Printf("Worker %d processed job %d in %v\n", id, job.ID, duration) select { case dwp.resultQueue <- result: case <-dwp.ctx.Done(): return } case <-dwp.ctx.Done(): fmt.Printf("Worker %d stopping\n", id) return } } }(workerID) } // processJob simulates job processing func (dwp *DynamicWorkerPool) processJob(job Job) Result { // Simulate variable processing time time.Sleep(time.Duration(50+rand.Intn(200)) * time.Millisecond) if num, ok := job.Data.(int); ok { return Result{ JobID: job.ID, Output: num * 2, } } return Result{ JobID: job.ID, Error: fmt.Errorf("invalid job data"), } } // scaler monitors load and adjusts worker count func (dwp *DynamicWorkerPool) scaler() { ticker := time.NewTicker(3 * time.Second) defer ticker.Stop() for { select { case <-ticker.C: currentWorkers := int(atomic.LoadInt64(&dwp.currentWorkers)) queueLength := dwp.metrics.GetQueueLength() fmt.Printf("Scaler check - Workers: %d, Queue: %d\n", currentWorkers, queueLength) if dwp.metrics.ShouldScaleUp(currentWorkers, dwp.maxWorkers) { fmt.Printf("Scaling up: adding worker (current: %d)\n", currentWorkers) dwp.addWorker() dwp.metrics.RecordScale() } else if dwp.metrics.ShouldScaleDown(currentWorkers, dwp.minWorkers) { fmt.Printf("Scaling down: removing worker (current: %d)\n", currentWorkers) // Signal one worker to stop by closing context // In a real implementation, you might use a more sophisticated approach dwp.metrics.RecordScale() } case <-dwp.ctx.Done(): return } } } // queueMonitor tracks queue length for scaling decisions func (dwp *DynamicWorkerPool) queueMonitor() { ticker := time.NewTicker(1 * time.Second) defer ticker.Stop() for { select { case <-ticker.C: queueLength := len(dwp.jobQueue) dwp.metrics.UpdateQueueLength(queueLength) case <-dwp.ctx.Done(): return } } } // Submit adds a job to the queue func (dwp *DynamicWorkerPool) Submit(job Job) error { select { case dwp.jobQueue <- job: return nil case <-dwp.ctx.Done(): return dwp.ctx.Err() } } // Results returns the result channel func (dwp *DynamicWorkerPool) Results() <-chan Result { return dwp.resultQueue } // Stop gracefully shuts down the pool func (dwp *DynamicWorkerPool) Stop() { dwp.cancel() dwp.wg.Wait() close(dwp.jobQueue) close(dwp.resultQueue) } func main() { ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel() pool := NewDynamicWorkerPool(ctx, 2, 
6, 20) pool.Start() defer pool.Stop() // Submit jobs in bursts to trigger scaling go func() { // Initial burst for i := 1; i <= 10; i++ { job := Job{ID: i, Data: i * 10} if err := pool.Submit(job); err != nil { fmt.Printf("Failed to submit job %d: %v\n", i, err) break } } time.Sleep(8 * time.Second) // Second burst for i := 11; i <= 25; i++ { job := Job{ID: i, Data: i * 10} if err := pool.Submit(job); err != nil { fmt.Printf("Failed to submit job %d: %v\n", i, err) break } } time.Sleep(5 * time.Second) // Final smaller batch for i := 26; i <= 30; i++ { job := Job{ID: i, Data: i * 10} if err := pool.Submit(job); err != nil { fmt.Printf("Failed to submit job %d: %v\n", i, err) break } } }() // Collect results resultCount := 0 for result := range pool.Results() { resultCount++ if result.Error != nil { fmt.Printf("Job %d failed: %v\n", result.JobID, result.Error) } else { fmt.Printf("Job %d completed: %v\n", result.JobID, result.Output) } if resultCount >= 30 { break } } } Best Practices Right-Size the Pool: Match worker count to available resources Monitor Performance: Track queue length, processing times, and throughput Handle Backpressure: Implement proper queue management Graceful Shutdown: Ensure all workers complete current jobs Error Handling: Isolate worker failures from the pool Resource Cleanup: Properly close channels and cancel contexts Load Balancing: Distribute work evenly across workers Common Pitfalls Too Many Workers: Creating more workers than CPU cores for CPU-bound tasks Unbounded Queues: Memory issues with unlimited job queues Worker Leaks: Not properly shutting down workers Blocking Operations: Long-running jobs blocking other work No Backpressure: Not handling queue overflow situations Testing Worker Pools package main import ( "context" "testing" "time" ) func TestWorkerPool(t *testing.T) { ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() pool := NewAdvancedWorkerPool(ctx, 2, 5) pool.Start() defer pool.Stop() // Submit test jobs jobCount := 5 for i := 1; i <= jobCount; i++ { job := ContextJob{ ID: fmt.Sprintf("test-%d", i), Data: i, Context: ctx, } if err := pool.Submit(job); err != nil { t.Fatalf("Failed to submit job: %v", err) } } // Collect results results := make(map[string]ContextResult) for i := 0; i < jobCount; i++ { select { case result := <-pool.Results(): results[result.JobID] = result case <-time.After(2 * time.Second): t.Fatal("Timeout waiting for results") } } // Verify all jobs completed if len(results) != jobCount { t.Errorf("Expected %d results, got %d", jobCount, len(results)) } // Verify results are correct for i := 1; i <= jobCount; i++ { jobID := fmt.Sprintf("test-%d", i) result, exists := results[jobID] if !exists { t.Errorf("Missing result for job %s", jobID) continue } if result.Error != nil { t.Errorf("Job %s failed: %v", jobID, result.Error) continue } expected := i * i if result.Output != expected { t.Errorf("Job %s: expected %d, got %v", jobID, expected, result.Output) } } } The Worker Pool pattern is essential for building scalable, resource-efficient concurrent applications in Go. It provides controlled concurrency, predictable resource usage, and excellent performance characteristics for both CPU-bound and I/O-bound workloads. ...
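    One pitfall listed above, missing backpressure, is easy to sketch: a non-blocking submit that reports when the bounded queue is full instead of blocking the caller. TrySubmit and ErrQueueFull are illustrative helpers, not part of the post's WorkerPool API.

        package main

        import (
            "errors"
            "fmt"
        )

        // Job mirrors the post's Job type.
        type Job struct {
            ID   int
            Data interface{}
        }

        var ErrQueueFull = errors.New("job queue full")

        // TrySubmit applies backpressure: when the bounded queue is full it returns
        // an error so the caller can retry, shed load, or slow down.
        func TrySubmit(jobQueue chan Job, job Job) error {
            select {
            case jobQueue <- job:
                return nil
            default:
                return ErrQueueFull
            }
        }

        func main() {
            queue := make(chan Job, 2)
            for i := 1; i <= 4; i++ {
                if err := TrySubmit(queue, Job{ID: i}); err != nil {
                    fmt.Printf("job %d rejected: %v\n", i, err)
                } else {
                    fmt.Printf("job %d queued\n", i)
                }
            }
        }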

    August 21, 2024 · 12 min · Rafiul Alam

    WaitGroup Pattern in Go

    Go Concurrency Patterns Series: ← Mutex Patterns | Series Overview | Once Pattern → What is the WaitGroup Pattern? The WaitGroup pattern uses sync.WaitGroup to coordinate the completion of multiple goroutines. It acts as a counter that blocks until all registered goroutines have finished executing, making it perfect for implementing barriers and waiting for parallel tasks to complete. Key Operations: Add(n): Increment the counter by n Done(): Decrement the counter by 1 (usually called with defer) Wait(): Block until counter reaches zero Real-World Use Cases Parallel Processing: Wait for all workers to complete Batch Operations: Process multiple items concurrently Service Initialization: Wait for all services to start Data Collection: Gather results from multiple sources Cleanup Operations: Ensure all cleanup tasks finish Testing: Coordinate test goroutines Basic WaitGroup Usage package main import ( "fmt" "math/rand" "sync" "time" ) // Task represents work to be done type Task struct { ID int Name string } // processTask simulates processing a task func processTask(task Task, wg *sync.WaitGroup) { defer wg.Done() // Always call Done when goroutine finishes fmt.Printf("Starting task %d: %s\n", task.ID, task.Name) // Simulate work duration := time.Duration(rand.Intn(1000)) * time.Millisecond time.Sleep(duration) fmt.Printf("Completed task %d: %s (took %v)\n", task.ID, task.Name, duration) } func main() { tasks := []Task{ {1, "Process images"}, {2, "Send emails"}, {3, "Update database"}, {4, "Generate reports"}, {5, "Backup files"}, } var wg sync.WaitGroup fmt.Println("Starting parallel task processing...") // Start all tasks for _, task := range tasks { wg.Add(1) // Increment counter for each goroutine go processTask(task, &wg) } // Wait for all tasks to complete wg.Wait() fmt.Println("All tasks completed!") } WaitGroup with Error Handling package main import ( "fmt" "math/rand" "sync" "time" ) // Result represents the outcome of a task type Result struct { TaskID int Data interface{} Error error } // TaskProcessor handles tasks with error collection type TaskProcessor struct { wg sync.WaitGroup results chan Result errors []error mu sync.Mutex } // NewTaskProcessor creates a new task processor func NewTaskProcessor(bufferSize int) *TaskProcessor { return &TaskProcessor{ results: make(chan Result, bufferSize), } } // processTaskWithError simulates task processing that might fail func (tp *TaskProcessor) processTaskWithError(taskID int, data interface{}) { defer tp.wg.Done() fmt.Printf("Processing task %d\n", taskID) // Simulate work time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond) // Simulate random failures if rand.Float32() < 0.3 { err := fmt.Errorf("task %d failed", taskID) tp.results <- Result{TaskID: taskID, Error: err} // Collect error tp.mu.Lock() tp.errors = append(tp.errors, err) tp.mu.Unlock() fmt.Printf("Task %d failed\n", taskID) return } // Success result := fmt.Sprintf("Result from task %d", taskID) tp.results <- Result{TaskID: taskID, Data: result} fmt.Printf("Task %d completed successfully\n", taskID) } // ProcessTasks processes multiple tasks and collects results func (tp *TaskProcessor) ProcessTasks(taskCount int) ([]Result, []error) { // Start all tasks for i := 1; i <= taskCount; i++ { tp.wg.Add(1) go tp.processTaskWithError(i, fmt.Sprintf("data-%d", i)) } // Close results channel when all tasks complete go func() { tp.wg.Wait() close(tp.results) }() // Collect results var results []Result for result := range tp.results { results = append(results, result) } 
tp.mu.Lock() errors := make([]error, len(tp.errors)) copy(errors, tp.errors) tp.mu.Unlock() return results, errors } func main() { processor := NewTaskProcessor(10) fmt.Println("Starting task processing with error handling...") results, errors := processor.ProcessTasks(8) fmt.Printf("\nProcessing complete!\n") fmt.Printf("Successful tasks: %d\n", len(results)-len(errors)) fmt.Printf("Failed tasks: %d\n", len(errors)) if len(errors) > 0 { fmt.Println("\nErrors:") for _, err := range errors { fmt.Printf(" - %v\n", err) } } fmt.Println("\nResults:") for _, result := range results { if result.Error == nil { fmt.Printf(" Task %d: %v\n", result.TaskID, result.Data) } } } Nested WaitGroups for Hierarchical Tasks package main import ( "fmt" "sync" "time" ) // Department represents a department with multiple teams type Department struct { Name string Teams []Team } // Team represents a team with multiple workers type Team struct { Name string Workers []string } // processDepartment processes all teams in a department func processDepartment(dept Department, wg *sync.WaitGroup) { defer wg.Done() fmt.Printf("Department %s starting work\n", dept.Name) var teamWG sync.WaitGroup // Process all teams in parallel for _, team := range dept.Teams { teamWG.Add(1) go processTeam(team, &teamWG) } // Wait for all teams to complete teamWG.Wait() fmt.Printf("Department %s completed all work\n", dept.Name) } // processTeam processes all workers in a team func processTeam(team Team, wg *sync.WaitGroup) { defer wg.Done() fmt.Printf(" Team %s starting work\n", team.Name) var workerWG sync.WaitGroup // Process all workers in parallel for _, worker := range team.Workers { workerWG.Add(1) go processWorker(worker, &workerWG) } // Wait for all workers to complete workerWG.Wait() fmt.Printf(" Team %s completed all work\n", team.Name) } // processWorker simulates worker processing func processWorker(worker string, wg *sync.WaitGroup) { defer wg.Done() fmt.Printf(" Worker %s working...\n", worker) time.Sleep(time.Duration(100+rand.Intn(200)) * time.Millisecond) fmt.Printf(" Worker %s finished\n", worker) } func main() { departments := []Department{ { Name: "Engineering", Teams: []Team{ { Name: "Backend", Workers: []string{"Alice", "Bob", "Charlie"}, }, { Name: "Frontend", Workers: []string{"Diana", "Eve"}, }, }, }, { Name: "Marketing", Teams: []Team{ { Name: "Digital", Workers: []string{"Frank", "Grace"}, }, { Name: "Content", Workers: []string{"Henry", "Ivy", "Jack"}, }, }, }, } var deptWG sync.WaitGroup fmt.Println("Starting company-wide project...") // Process all departments in parallel for _, dept := range departments { deptWG.Add(1) go processDepartment(dept, &deptWG) } // Wait for all departments to complete deptWG.Wait() fmt.Println("Company-wide project completed!") } WaitGroup with Timeout package main import ( "context" "fmt" "sync" "time" ) // TimedTaskRunner runs tasks with timeout support type TimedTaskRunner struct { timeout time.Duration } // NewTimedTaskRunner creates a new timed task runner func NewTimedTaskRunner(timeout time.Duration) *TimedTaskRunner { return &TimedTaskRunner{timeout: timeout} } // RunWithTimeout runs tasks with a timeout func (ttr *TimedTaskRunner) RunWithTimeout(tasks []func()) error { ctx, cancel := context.WithTimeout(context.Background(), ttr.timeout) defer cancel() var wg sync.WaitGroup done := make(chan struct{}) // Start all tasks for i, task := range tasks { wg.Add(1) go func(taskID int, taskFunc func()) { defer wg.Done() fmt.Printf("Starting task %d\n", taskID) taskFunc() 
fmt.Printf("Completed task %d\n", taskID) }(i+1, task) } // Wait for completion in separate goroutine go func() { wg.Wait() close(done) }() // Wait for either completion or timeout select { case <-done: fmt.Println("All tasks completed successfully") return nil case <-ctx.Done(): fmt.Println("Tasks timed out") return ctx.Err() } } // simulateTask creates a task that takes a specific duration func simulateTask(duration time.Duration, name string) func() { return func() { fmt.Printf(" %s working for %v\n", name, duration) time.Sleep(duration) fmt.Printf(" %s finished\n", name) } } func main() { runner := NewTimedTaskRunner(2 * time.Second) // Test with tasks that complete within timeout fmt.Println("Test 1: Tasks that complete within timeout") tasks1 := []func(){ simulateTask(300*time.Millisecond, "Quick task 1"), simulateTask(500*time.Millisecond, "Quick task 2"), simulateTask(400*time.Millisecond, "Quick task 3"), } if err := runner.RunWithTimeout(tasks1); err != nil { fmt.Printf("Error: %v\n", err) } fmt.Println("\nTest 2: Tasks that exceed timeout") tasks2 := []func(){ simulateTask(800*time.Millisecond, "Slow task 1"), simulateTask(1500*time.Millisecond, "Slow task 2"), simulateTask(2000*time.Millisecond, "Very slow task"), } if err := runner.RunWithTimeout(tasks2); err != nil { fmt.Printf("Error: %v\n", err) } } Dynamic WaitGroup Management package main import ( "fmt" "sync" "time" ) // DynamicTaskManager manages tasks that can spawn other tasks type DynamicTaskManager struct { wg sync.WaitGroup taskChan chan func() quit chan struct{} active sync.WaitGroup } // NewDynamicTaskManager creates a new dynamic task manager func NewDynamicTaskManager() *DynamicTaskManager { return &DynamicTaskManager{ taskChan: make(chan func(), 100), quit: make(chan struct{}), } } // Start begins processing tasks func (dtm *DynamicTaskManager) Start() { go dtm.taskProcessor() } // taskProcessor processes tasks from the channel func (dtm *DynamicTaskManager) taskProcessor() { for { select { case task := <-dtm.taskChan: dtm.active.Add(1) go func() { defer dtm.active.Done() task() }() case <-dtm.quit: return } } } // AddTask adds a task to be processed func (dtm *DynamicTaskManager) AddTask(task func()) { select { case dtm.taskChan <- task: case <-dtm.quit: } } // Wait waits for all active tasks to complete func (dtm *DynamicTaskManager) Wait() { dtm.active.Wait() } // Stop stops the task manager func (dtm *DynamicTaskManager) Stop() { close(dtm.quit) dtm.Wait() } // recursiveTask demonstrates a task that spawns other tasks func recursiveTask(manager *DynamicTaskManager, depth int, maxDepth int, id string) func() { return func() { fmt.Printf("Task %s (depth %d) starting\n", id, depth) time.Sleep(100 * time.Millisecond) if depth < maxDepth { // Spawn child tasks for i := 0; i < 2; i++ { childID := fmt.Sprintf("%s.%d", id, i+1) manager.AddTask(recursiveTask(manager, depth+1, maxDepth, childID)) } } fmt.Printf("Task %s (depth %d) completed\n", id, depth) } } func main() { manager := NewDynamicTaskManager() manager.Start() defer manager.Stop() fmt.Println("Starting dynamic task processing...") // Add initial tasks that will spawn more tasks for i := 0; i < 3; i++ { taskID := fmt.Sprintf("root-%d", i+1) manager.AddTask(recursiveTask(manager, 0, 2, taskID)) } // Wait for all tasks (including dynamically created ones) to complete manager.Wait() fmt.Println("All tasks completed!") } Best Practices Always Use defer: Call Done() with defer to ensure it’s called even if panic occurs Add Before Starting: Call Add() before 
starting goroutines to avoid race conditions Don’t Reuse WaitGroups: Create new WaitGroup for each batch of operations Handle Panics: Use recover in goroutines to prevent panic from affecting WaitGroup Avoid Negative Counters: Don’t call Done() more times than Add() Use Timeouts: Combine with context for timeout handling Consider Alternatives: Use channels for complex coordination scenarios Common Pitfalls 1. Race Condition with Add/Done // Bad: Race condition func badExample() { var wg sync.WaitGroup for i := 0; i < 5; i++ { go func() { wg.Add(1) // Race: might be called after Wait() defer wg.Done() // do work }() } wg.Wait() // Might not wait for all goroutines } // Good: Add before starting goroutines func goodExample() { var wg sync.WaitGroup for i := 0; i < 5; i++ { wg.Add(1) // Add before starting goroutine go func() { defer wg.Done() // do work }() } wg.Wait() } 2. Forgetting to Call Done // Bad: Missing Done() call func badTask(wg *sync.WaitGroup) { // do work if someCondition { return // Forgot to call Done()! } wg.Done() } // Good: Always use defer func goodTask(wg *sync.WaitGroup) { defer wg.Done() // Always called // do work if someCondition { return // Done() still called } } Testing WaitGroup Patterns package main import ( "sync" "testing" "time" ) func TestWaitGroupCompletion(t *testing.T) { var wg sync.WaitGroup completed := make([]bool, 5) for i := 0; i < 5; i++ { wg.Add(1) go func(index int) { defer wg.Done() time.Sleep(10 * time.Millisecond) completed[index] = true }(i) } wg.Wait() // Verify all tasks completed for i, done := range completed { if !done { t.Errorf("Task %d did not complete", i) } } } func TestWaitGroupWithTimeout(t *testing.T) { var wg sync.WaitGroup done := make(chan struct{}) wg.Add(1) go func() { defer wg.Done() time.Sleep(50 * time.Millisecond) }() go func() { wg.Wait() close(done) }() select { case <-done: // Success case <-time.After(100 * time.Millisecond): t.Error("WaitGroup did not complete within timeout") } } The WaitGroup pattern is essential for coordinating goroutines in Go. It provides a simple yet powerful way to wait for multiple concurrent operations to complete, making it perfect for parallel processing, batch operations, and synchronization barriers. ...
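    One alternative worth noting alongside the best practices above: the golang.org/x/sync/errgroup package wraps the WaitGroup pattern with error propagation and context cancellation. A hedged sketch, assuming the x/sync module is available:

        package main

        import (
            "context"
            "fmt"

            "golang.org/x/sync/errgroup"
        )

        func main() {
            // errgroup behaves like a WaitGroup whose Wait returns the first error;
            // the first failing task also cancels ctx for the remaining tasks.
            g, ctx := errgroup.WithContext(context.Background())

            for i := 1; i <= 5; i++ {
                id := i
                g.Go(func() error {
                    select {
                    case <-ctx.Done():
                        return ctx.Err()
                    default:
                    }
                    if id == 3 {
                        return fmt.Errorf("task %d failed", id)
                    }
                    fmt.Printf("task %d done\n", id)
                    return nil
                })
            }

            if err := g.Wait(); err != nil {
                fmt.Println("group error:", err)
            }
        }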

    August 14, 2024 · 9 min · Rafiul Alam

    Semaphore Pattern in Go

    Go Concurrency Patterns Series: ← Rate Limiter | Series Overview | Actor Model → What is the Semaphore Pattern? A semaphore is a synchronization primitive that maintains a count of available resources and controls access to them. It allows a specified number of goroutines to access a resource concurrently while blocking others until resources become available. Types: Binary Semaphore: Acts like a mutex (0 or 1) Counting Semaphore: Allows N concurrent accesses Weighted Semaphore: Resources have different weights/costs Real-World Use Cases Connection Pools: Limit database/HTTP connections Resource Management: Control access to limited resources Download Managers: Limit concurrent downloads API Rate Limiting: Control concurrent API calls Worker Pools: Limit concurrent workers Memory Management: Control memory-intensive operations Basic Semaphore Implementation package main import ( "context" "fmt" "sync" "time" ) // Semaphore implements a counting semaphore type Semaphore struct { ch chan struct{} } // NewSemaphore creates a new semaphore with given capacity func NewSemaphore(capacity int) *Semaphore { return &Semaphore{ ch: make(chan struct{}, capacity), } } // Acquire acquires a resource from the semaphore func (s *Semaphore) Acquire() { s.ch <- struct{}{} } // TryAcquire tries to acquire a resource without blocking func (s *Semaphore) TryAcquire() bool { select { case s.ch <- struct{}{}: return true default: return false } } // AcquireWithContext acquires a resource with context cancellation func (s *Semaphore) AcquireWithContext(ctx context.Context) error { select { case s.ch <- struct{}{}: return nil case <-ctx.Done(): return ctx.Err() } } // Release releases a resource back to the semaphore func (s *Semaphore) Release() { <-s.ch } // Available returns the number of available resources func (s *Semaphore) Available() int { return cap(s.ch) - len(s.ch) } // Used returns the number of used resources func (s *Semaphore) Used() int { return len(s.ch) } // Capacity returns the total capacity func (s *Semaphore) Capacity() int { return cap(s.ch) } // simulateWork simulates work that requires a resource func simulateWork(id int, duration time.Duration, sem *Semaphore) { fmt.Printf("Worker %d: Requesting resource...\n", id) sem.Acquire() fmt.Printf("Worker %d: Acquired resource (available: %d/%d)\n", id, sem.Available(), sem.Capacity()) // Simulate work time.Sleep(duration) sem.Release() fmt.Printf("Worker %d: Released resource (available: %d/%d)\n", id, sem.Available(), sem.Capacity()) } func main() { // Create semaphore with capacity of 3 sem := NewSemaphore(3) fmt.Println("=== Basic Semaphore Demo ===") fmt.Printf("Semaphore capacity: %d\n\n", sem.Capacity()) var wg sync.WaitGroup // Start 6 workers, but only 3 can work concurrently for i := 1; i <= 6; i++ { wg.Add(1) go func(id int) { defer wg.Done() simulateWork(id, time.Duration(1+id%3)*time.Second, sem) }(i) time.Sleep(200 * time.Millisecond) // Stagger starts } wg.Wait() fmt.Printf("\nFinal state - Available: %d/%d\n", sem.Available(), sem.Capacity()) } Advanced Semaphore with Timeout and Context package main import ( "context" "fmt" "sync" "sync/atomic" "time" ) // AdvancedSemaphore provides additional features like metrics and timeouts type AdvancedSemaphore struct { ch chan struct{} capacity int // Metrics totalAcquires int64 totalReleases int64 timeouts int64 cancellations int64 // Monitoring mu sync.RWMutex waitingGoroutines int } // NewAdvancedSemaphore creates a new advanced semaphore func NewAdvancedSemaphore(capacity int) 
*AdvancedSemaphore { return &AdvancedSemaphore{ ch: make(chan struct{}, capacity), capacity: capacity, } } // Acquire acquires a resource (blocking) func (as *AdvancedSemaphore) Acquire() { as.incrementWaiting() defer as.decrementWaiting() as.ch <- struct{}{} atomic.AddInt64(&as.totalAcquires, 1) } // TryAcquire tries to acquire without blocking func (as *AdvancedSemaphore) TryAcquire() bool { select { case as.ch <- struct{}{}: atomic.AddInt64(&as.totalAcquires, 1) return true default: return false } } // AcquireWithTimeout acquires with a timeout func (as *AdvancedSemaphore) AcquireWithTimeout(timeout time.Duration) error { as.incrementWaiting() defer as.decrementWaiting() select { case as.ch <- struct{}{}: atomic.AddInt64(&as.totalAcquires, 1) return nil case <-time.After(timeout): atomic.AddInt64(&as.timeouts, 1) return fmt.Errorf("timeout after %v", timeout) } } // AcquireWithContext acquires with context cancellation func (as *AdvancedSemaphore) AcquireWithContext(ctx context.Context) error { as.incrementWaiting() defer as.decrementWaiting() select { case as.ch <- struct{}{}: atomic.AddInt64(&as.totalAcquires, 1) return nil case <-ctx.Done(): atomic.AddInt64(&as.cancellations, 1) return ctx.Err() } } // Release releases a resource func (as *AdvancedSemaphore) Release() { <-as.ch atomic.AddInt64(&as.totalReleases, 1) } // incrementWaiting increments waiting goroutines counter func (as *AdvancedSemaphore) incrementWaiting() { as.mu.Lock() as.waitingGoroutines++ as.mu.Unlock() } // decrementWaiting decrements waiting goroutines counter func (as *AdvancedSemaphore) decrementWaiting() { as.mu.Lock() as.waitingGoroutines-- as.mu.Unlock() } // GetStats returns semaphore statistics func (as *AdvancedSemaphore) GetStats() map[string]interface{} { as.mu.RLock() waiting := as.waitingGoroutines as.mu.RUnlock() return map[string]interface{}{ "capacity": as.capacity, "available": as.Available(), "used": as.Used(), "waiting": waiting, "total_acquires": atomic.LoadInt64(&as.totalAcquires), "total_releases": atomic.LoadInt64(&as.totalReleases), "timeouts": atomic.LoadInt64(&as.timeouts), "cancellations": atomic.LoadInt64(&as.cancellations), } } // Available returns available resources func (as *AdvancedSemaphore) Available() int { return cap(as.ch) - len(as.ch) } // Used returns used resources func (as *AdvancedSemaphore) Used() int { return len(as.ch) } // Capacity returns total capacity func (as *AdvancedSemaphore) Capacity() int { return as.capacity } // ResourceManager demonstrates semaphore usage for resource management type ResourceManager struct { semaphore *AdvancedSemaphore resources []string } // NewResourceManager creates a new resource manager func NewResourceManager(resources []string) *ResourceManager { return &ResourceManager{ semaphore: NewAdvancedSemaphore(len(resources)), resources: resources, } } // UseResource uses a resource with timeout func (rm *ResourceManager) UseResource(ctx context.Context, userID string, timeout time.Duration) error { fmt.Printf("User %s: Requesting resource...\n", userID) // Try to acquire with timeout if err := rm.semaphore.AcquireWithTimeout(timeout); err != nil { fmt.Printf("User %s: Failed to acquire resource: %v\n", userID, err) return err } defer rm.semaphore.Release() resourceIndex := rm.semaphore.Used() - 1 resourceName := rm.resources[resourceIndex] fmt.Printf("User %s: Using resource '%s'\n", userID, resourceName) // Simulate resource usage select { case <-time.After(time.Duration(1+len(userID)%3) * time.Second): fmt.Printf("User %s: Finished 
using resource '%s'\n", userID, resourceName) return nil case <-ctx.Done(): fmt.Printf("User %s: Resource usage cancelled\n", userID) return ctx.Err() } } // GetStats returns resource manager statistics func (rm *ResourceManager) GetStats() map[string]interface{} { return rm.semaphore.GetStats() } func main() { resources := []string{"Database-1", "Database-2", "API-Gateway"} manager := NewResourceManager(resources) fmt.Println("=== Advanced Semaphore Demo ===") fmt.Printf("Available resources: %v\n\n", resources) // Start monitoring go func() { ticker := time.NewTicker(1 * time.Second) defer ticker.Stop() for range ticker.C { stats := manager.GetStats() fmt.Printf(" Stats: Used=%d/%d, Waiting=%d, Timeouts=%d\n", stats["used"], stats["capacity"], stats["waiting"], stats["timeouts"]) } }() var wg sync.WaitGroup // Simulate users requesting resources users := []string{"Alice", "Bob", "Charlie", "Diana", "Eve", "Frank"} for i, user := range users { wg.Add(1) go func(userID string, delay time.Duration) { defer wg.Done() time.Sleep(delay) // Stagger requests ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() // Some users have shorter timeouts timeout := 3 * time.Second if len(userID)%2 == 0 { timeout = 1 * time.Second } err := manager.UseResource(ctx, userID, timeout) if err != nil { fmt.Printf(" User %s failed: %v\n", userID, err) } }(user, time.Duration(i*300)*time.Millisecond) } wg.Wait() // Final statistics fmt.Println("\n=== Final Statistics ===") stats := manager.GetStats() for key, value := range stats { fmt.Printf("%s: %v\n", key, value) } } Weighted Semaphore Implementation package main import ( "context" "fmt" "sync" "time" ) // WeightedSemaphore allows acquiring resources with different weights type WeightedSemaphore struct { mu sync.Mutex capacity int64 current int64 waiters []waiter } // waiter represents a goroutine waiting for resources type waiter struct { weight int64 ready chan struct{} } // NewWeightedSemaphore creates a new weighted semaphore func NewWeightedSemaphore(capacity int64) *WeightedSemaphore { return &WeightedSemaphore{ capacity: capacity, waiters: make([]waiter, 0), } } // Acquire acquires resources with given weight func (ws *WeightedSemaphore) Acquire(weight int64) { ws.mu.Lock() if ws.current+weight <= ws.capacity && len(ws.waiters) == 0 { // Can acquire immediately ws.current += weight ws.mu.Unlock() return } // Need to wait ready := make(chan struct{}) ws.waiters = append(ws.waiters, waiter{weight: weight, ready: ready}) ws.mu.Unlock() <-ready } // TryAcquire tries to acquire resources without blocking func (ws *WeightedSemaphore) TryAcquire(weight int64) bool { ws.mu.Lock() defer ws.mu.Unlock() if ws.current+weight <= ws.capacity && len(ws.waiters) == 0 { ws.current += weight return true } return false } // AcquireWithContext acquires resources with context cancellation func (ws *WeightedSemaphore) AcquireWithContext(ctx context.Context, weight int64) error { ws.mu.Lock() if ws.current+weight <= ws.capacity && len(ws.waiters) == 0 { // Can acquire immediately ws.current += weight ws.mu.Unlock() return nil } // Need to wait ready := make(chan struct{}) ws.waiters = append(ws.waiters, waiter{weight: weight, ready: ready}) ws.mu.Unlock() select { case <-ready: return nil case <-ctx.Done(): // Remove from waiters list ws.mu.Lock() for i, w := range ws.waiters { if w.ready == ready { ws.waiters = append(ws.waiters[:i], ws.waiters[i+1:]...) 
break } } ws.mu.Unlock() return ctx.Err() } } // Release releases resources with given weight func (ws *WeightedSemaphore) Release(weight int64) { ws.mu.Lock() defer ws.mu.Unlock() ws.current -= weight ws.notifyWaiters() } // notifyWaiters notifies waiting goroutines that can now proceed func (ws *WeightedSemaphore) notifyWaiters() { for i := 0; i < len(ws.waiters); { w := ws.waiters[i] if ws.current+w.weight <= ws.capacity { // This waiter can proceed ws.current += w.weight close(w.ready) // Remove from waiters ws.waiters = append(ws.waiters[:i], ws.waiters[i+1:]...) } else { i++ } } } // GetStats returns current statistics func (ws *WeightedSemaphore) GetStats() map[string]interface{} { ws.mu.Lock() defer ws.mu.Unlock() return map[string]interface{}{ "capacity": ws.capacity, "current": ws.current, "available": ws.capacity - ws.current, "waiters": len(ws.waiters), } } // Task represents a task with resource requirements type Task struct { ID string Weight int64 Duration time.Duration } // TaskProcessor processes tasks using weighted semaphore type TaskProcessor struct { semaphore *WeightedSemaphore } // NewTaskProcessor creates a new task processor func NewTaskProcessor(capacity int64) *TaskProcessor { return &TaskProcessor{ semaphore: NewWeightedSemaphore(capacity), } } // ProcessTask processes a task func (tp *TaskProcessor) ProcessTask(ctx context.Context, task Task) error { fmt.Printf("Task %s: Requesting %d units of resource...\n", task.ID, task.Weight) if err := tp.semaphore.AcquireWithContext(ctx, task.Weight); err != nil { fmt.Printf("Task %s: Failed to acquire resources: %v\n", task.ID, err) return err } defer tp.semaphore.Release(task.Weight) stats := tp.semaphore.GetStats() fmt.Printf("Task %s: Acquired %d units (available: %d/%d)\n", task.ID, task.Weight, stats["available"], stats["capacity"]) // Simulate task processing select { case <-time.After(task.Duration): fmt.Printf("Task %s: Completed\n", task.ID) return nil case <-ctx.Done(): fmt.Printf("Task %s: Cancelled\n", task.ID) return ctx.Err() } } // GetStats returns processor statistics func (tp *TaskProcessor) GetStats() map[string]interface{} { return tp.semaphore.GetStats() } func main() { // Create weighted semaphore with capacity of 10 units processor := NewTaskProcessor(10) fmt.Println("=== Weighted Semaphore Demo ===") fmt.Println("Total capacity: 10 units") // Define tasks with different resource requirements tasks := []Task{ {"Small-1", 2, 2 * time.Second}, {"Medium-1", 4, 3 * time.Second}, {"Large-1", 6, 4 * time.Second}, {"Small-2", 1, 1 * time.Second}, {"Small-3", 2, 2 * time.Second}, {"Medium-2", 5, 3 * time.Second}, {"Large-2", 8, 5 * time.Second}, } // Start monitoring go func() { ticker := time.NewTicker(500 * time.Millisecond) defer ticker.Stop() for range ticker.C { stats := processor.GetStats() fmt.Printf(" Resources: %d/%d used, %d waiters\n", stats["current"], stats["capacity"], stats["waiters"]) } }() var wg sync.WaitGroup // Process tasks concurrently for i, task := range tasks { wg.Add(1) go func(t Task, delay time.Duration) { defer wg.Done() time.Sleep(delay) // Stagger task starts ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() err := processor.ProcessTask(ctx, t) if err != nil { fmt.Printf(" Task %s failed: %v\n", t.ID, err) } }(task, time.Duration(i*200)*time.Millisecond) } wg.Wait() // Final statistics fmt.Println("\n=== Final Statistics ===") stats := processor.GetStats() for key, value := range stats { fmt.Printf("%s: %v\n", key, value) } } 
Semaphore-based Connection Pool package main import ( "context" "fmt" "sync" "time" ) // Connection represents a database connection type Connection struct { ID int InUse bool LastUsed time.Time } // ConnectionPool manages database connections using semaphore type ConnectionPool struct { connections []*Connection semaphore *AdvancedSemaphore mu sync.Mutex } // NewConnectionPool creates a new connection pool func NewConnectionPool(size int) *ConnectionPool { connections := make([]*Connection, size) for i := 0; i < size; i++ { connections[i] = &Connection{ ID: i + 1, InUse: false, LastUsed: time.Now(), } } return &ConnectionPool{ connections: connections, semaphore: NewAdvancedSemaphore(size), } } // GetConnection acquires a connection from the pool func (cp *ConnectionPool) GetConnection(ctx context.Context) (*Connection, error) { if err := cp.semaphore.AcquireWithContext(ctx); err != nil { return nil, err } cp.mu.Lock() defer cp.mu.Unlock() // Find an available connection for _, conn := range cp.connections { if !conn.InUse { conn.InUse = true conn.LastUsed = time.Now() return conn, nil } } // This shouldn't happen if semaphore is working correctly cp.semaphore.Release() return nil, fmt.Errorf("no available connections") } // ReturnConnection returns a connection to the pool func (cp *ConnectionPool) ReturnConnection(conn *Connection) { cp.mu.Lock() conn.InUse = false conn.LastUsed = time.Now() cp.mu.Unlock() cp.semaphore.Release() } // GetStats returns pool statistics func (cp *ConnectionPool) GetStats() map[string]interface{} { cp.mu.Lock() defer cp.mu.Unlock() inUse := 0 for _, conn := range cp.connections { if conn.InUse { inUse++ } } semStats := cp.semaphore.GetStats() return map[string]interface{}{ "total_connections": len(cp.connections), "in_use": inUse, "available": len(cp.connections) - inUse, "semaphore_stats": semStats, } } // DatabaseService simulates a service using the connection pool type DatabaseService struct { pool *ConnectionPool } // NewDatabaseService creates a new database service func NewDatabaseService(poolSize int) *DatabaseService { return &DatabaseService{ pool: NewConnectionPool(poolSize), } } // ExecuteQuery simulates executing a database query func (ds *DatabaseService) ExecuteQuery(ctx context.Context, userID string, query string) error { fmt.Printf("User %s: Requesting database connection for query: %s\n", userID, query) conn, err := ds.pool.GetConnection(ctx) if err != nil { fmt.Printf("User %s: Failed to get connection: %v\n", userID, err) return err } defer ds.pool.ReturnConnection(conn) fmt.Printf("User %s: Using connection %d\n", userID, conn.ID) // Simulate query execution queryDuration := time.Duration(500+len(query)*10) * time.Millisecond select { case <-time.After(queryDuration): fmt.Printf("User %s: Query completed on connection %d\n", userID, conn.ID) return nil case <-ctx.Done(): fmt.Printf("User %s: Query cancelled on connection %d\n", userID, conn.ID) return ctx.Err() } } // GetStats returns service statistics func (ds *DatabaseService) GetStats() map[string]interface{} { return ds.pool.GetStats() } func main() { // Create database service with 3 connections service := NewDatabaseService(3) fmt.Println("=== Connection Pool Demo ===") fmt.Println("Pool size: 3 connections") // Start monitoring go func() { ticker := time.NewTicker(1 * time.Second) defer ticker.Stop() for range ticker.C { stats := service.GetStats() fmt.Printf(" Pool: %d/%d in use, %d available\n", stats["in_use"], stats["total_connections"], stats["available"]) } }() var wg 
sync.WaitGroup // Simulate multiple users making database queries users := []struct { id string query string }{ {"Alice", "SELECT * FROM users"}, {"Bob", "SELECT * FROM orders WHERE user_id = 123"}, {"Charlie", "UPDATE users SET last_login = NOW()"}, {"Diana", "SELECT COUNT(*) FROM products"}, {"Eve", "INSERT INTO logs (message) VALUES ('test')"}, {"Frank", "SELECT * FROM analytics WHERE date > '2024-01-01'"}, } for i, user := range users { wg.Add(1) go func(userID, query string, delay time.Duration) { defer wg.Done() time.Sleep(delay) // Stagger requests ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() err := service.ExecuteQuery(ctx, userID, query) if err != nil { fmt.Printf(" User %s query failed: %v\n", userID, err) } }(user.id, user.query, time.Duration(i*300)*time.Millisecond) } wg.Wait() // Final statistics fmt.Println("\n=== Final Statistics ===") stats := service.GetStats() for key, value := range stats { if key == "semaphore_stats" { fmt.Printf("%s:\n", key) semStats := value.(map[string]interface{}) for k, v := range semStats { fmt.Printf(" %s: %v\n", k, v) } } else { fmt.Printf("%s: %v\n", key, value) } } } Best Practices Choose Right Capacity: Set semaphore capacity based on available resources Always Release: Use defer to ensure resources are released Handle Context: Support cancellation in long-running operations Monitor Usage: Track semaphore statistics and resource utilization Avoid Deadlocks: Don’t acquire multiple semaphores in different orders Use Timeouts: Prevent indefinite blocking with timeouts Consider Weighted: Use weighted semaphores for resources with different costs Common Pitfalls Resource Leaks: Forgetting to release acquired resources Deadlocks: Circular dependencies between semaphores Starvation: Large requests blocking smaller ones indefinitely Over-allocation: Setting capacity higher than actual resources Under-utilization: Setting capacity too low for available resources The Semaphore pattern is essential for managing limited resources in concurrent applications. It provides controlled access to resources, prevents overload, and ensures fair resource distribution among competing goroutines. ...
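    For completeness, the Go team's extended library ships a weighted semaphore (golang.org/x/sync/semaphore) that covers the same ground as the hand-rolled versions in the post. A brief sketch, assuming that module is available:

        package main

        import (
            "context"
            "fmt"
            "sync"

            "golang.org/x/sync/semaphore"
        )

        func main() {
            // Acquire blocks (and honors ctx cancellation); Release frees the weight.
            sem := semaphore.NewWeighted(3)
            ctx := context.Background()

            var wg sync.WaitGroup
            for i := 1; i <= 6; i++ {
                wg.Add(1)
                go func(id int) {
                    defer wg.Done()
                    if err := sem.Acquire(ctx, 1); err != nil {
                        fmt.Printf("worker %d: %v\n", id, err)
                        return
                    }
                    defer sem.Release(1)
                    fmt.Printf("worker %d holds a slot\n", id)
                }(i)
            }
            wg.Wait()
        }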

    August 7, 2024 · 12 min · Rafiul Alam

    Request/Response Pattern in Go

    Go Concurrency Patterns Series: ← Pub/Sub Pattern | Series Overview | Worker Pool → What is the Request/Response Pattern? The Request/Response pattern enables synchronous communication between goroutines, where a sender waits for a response from a receiver. This pattern is essential for RPC-style communication, database queries, API calls, and any scenario where you need to get a result back from an operation. Key Components: Request: Contains data and a response channel Response: Contains result data and/or error information Requester: Sends request and waits for response Responder: Processes request and sends response Real-World Use Cases Database Operations: Query execution with results API Gateways: Forwarding requests to microservices Cache Systems: Get/Set operations with confirmation File Operations: Read/Write with status feedback Validation Services: Input validation with results Authentication: Login requests with tokens Basic Request/Response Implementation package main import ( "fmt" "math/rand" "time" ) // Request represents a request with a response channel type Request struct { ID string Data interface{} Response chan Response } // Response represents the response to a request type Response struct { ID string Result interface{} Error error } // Server processes requests type Server struct { requests chan Request quit chan bool } // NewServer creates a new server func NewServer() *Server { return &Server{ requests: make(chan Request), quit: make(chan bool), } } // Start begins processing requests func (s *Server) Start() { go func() { for { select { case req := <-s.requests: s.processRequest(req) case <-s.quit: return } } }() } // processRequest handles a single request func (s *Server) processRequest(req Request) { // Simulate processing time time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond) // Process the request (example: double the number) var response Response response.ID = req.ID if num, ok := req.Data.(int); ok { response.Result = num * 2 } else { response.Error = fmt.Errorf("invalid data type") } // Send response back req.Response <- response } // SendRequest sends a request and waits for response func (s *Server) SendRequest(id string, data interface{}) (interface{}, error) { responseChan := make(chan Response, 1) request := Request{ ID: id, Data: data, Response: responseChan, } s.requests <- request // Wait for response response := <-responseChan return response.Result, response.Error } // Stop shuts down the server func (s *Server) Stop() { close(s.quit) } func main() { server := NewServer() server.Start() defer server.Stop() // Send multiple requests for i := 1; i <= 5; i++ { result, err := server.SendRequest(fmt.Sprintf("req-%d", i), i*10) if err != nil { fmt.Printf("Request %d failed: %v\n", i, err) } else { fmt.Printf("Request %d result: %v\n", i, result) } } } Request/Response with Timeout package main import ( "context" "fmt" "math/rand" "time" ) // TimedRequest includes context for timeout handling type TimedRequest struct { ID string Data interface{} Response chan TimedResponse Context context.Context } // TimedResponse includes timing information type TimedResponse struct { ID string Result interface{} Error error Duration time.Duration Timestamp time.Time } // TimedServer processes requests with timeout support type TimedServer struct { requests chan TimedRequest quit chan bool } func NewTimedServer() *TimedServer { return &TimedServer{ requests: make(chan TimedRequest, 10), quit: make(chan bool), } } func (ts *TimedServer) Start() { go 
func() { for { select { case req := <-ts.requests: go ts.processTimedRequest(req) case <-ts.quit: return } } }() } func (ts *TimedServer) processTimedRequest(req TimedRequest) { start := time.Now() // Check if context is already cancelled select { case <-req.Context.Done(): ts.sendResponse(req, nil, req.Context.Err(), start) return default: } // Simulate work with random duration workDuration := time.Duration(rand.Intn(200)) * time.Millisecond select { case <-time.After(workDuration): // Work completed if num, ok := req.Data.(int); ok { ts.sendResponse(req, num*2, nil, start) } else { ts.sendResponse(req, nil, fmt.Errorf("invalid data type"), start) } case <-req.Context.Done(): // Context cancelled during work ts.sendResponse(req, nil, req.Context.Err(), start) } } func (ts *TimedServer) sendResponse(req TimedRequest, result interface{}, err error, start time.Time) { response := TimedResponse{ ID: req.ID, Result: result, Error: err, Duration: time.Since(start), Timestamp: time.Now(), } select { case req.Response <- response: case <-req.Context.Done(): // Client no longer waiting } } // SendRequestWithTimeout sends a request with a timeout func (ts *TimedServer) SendRequestWithTimeout(id string, data interface{}, timeout time.Duration) (interface{}, error) { ctx, cancel := context.WithTimeout(context.Background(), timeout) defer cancel() responseChan := make(chan TimedResponse, 1) request := TimedRequest{ ID: id, Data: data, Response: responseChan, Context: ctx, } select { case ts.requests <- request: case <-ctx.Done(): return nil, ctx.Err() } select { case response := <-responseChan: fmt.Printf("Request %s completed in %v\n", response.ID, response.Duration) return response.Result, response.Error case <-ctx.Done(): return nil, ctx.Err() } } func (ts *TimedServer) Stop() { close(ts.quit) } func main() { server := NewTimedServer() server.Start() defer server.Stop() // Send requests with different timeouts requests := []struct { id string data int timeout time.Duration }{ {"fast", 10, 300 * time.Millisecond}, {"medium", 20, 150 * time.Millisecond}, {"slow", 30, 50 * time.Millisecond}, // This might timeout } for _, req := range requests { result, err := server.SendRequestWithTimeout(req.id, req.data, req.timeout) if err != nil { fmt.Printf("Request %s failed: %v\n", req.id, err) } else { fmt.Printf("Request %s result: %v\n", req.id, result) } } } Future/Promise Pattern package main import ( "context" "fmt" "math/rand" "sync" "time" ) // Future represents a value that will be available in the future type Future struct { mu sync.Mutex done chan struct{} result interface{} err error computed bool } // NewFuture creates a new future func NewFuture() *Future { return &Future{ done: make(chan struct{}), } } // Set sets the future's value func (f *Future) Set(result interface{}, err error) { f.mu.Lock() defer f.mu.Unlock() if f.computed { return // Already set } f.result = result f.err = err f.computed = true close(f.done) } // Get waits for and returns the future's value func (f *Future) Get() (interface{}, error) { <-f.done return f.result, f.err } // GetWithTimeout waits for the value with a timeout func (f *Future) GetWithTimeout(timeout time.Duration) (interface{}, error) { select { case <-f.done: return f.result, f.err case <-time.After(timeout): return nil, fmt.Errorf("timeout waiting for future") } } // GetWithContext waits for the value with context cancellation func (f *Future) GetWithContext(ctx context.Context) (interface{}, error) { select { case <-f.done: return f.result, f.err case <-ctx.Done():
return nil, ctx.Err() } } // IsReady returns true if the future has been computed func (f *Future) IsReady() bool { f.mu.Lock() defer f.mu.Unlock() return f.computed } // AsyncService demonstrates async operations with futures type AsyncService struct { workers chan struct{} } func NewAsyncService(maxWorkers int) *AsyncService { return &AsyncService{ workers: make(chan struct{}, maxWorkers), } } // ProcessAsync starts async processing and returns a future func (as *AsyncService) ProcessAsync(data interface{}) *Future { future := NewFuture() go func() { // Acquire worker slot as.workers <- struct{}{} defer func() { <-as.workers }() // Simulate processing time.Sleep(time.Duration(100+rand.Intn(200)) * time.Millisecond) // Process data if num, ok := data.(int); ok { future.Set(num*num, nil) } else { future.Set(nil, fmt.Errorf("invalid data type")) } }() return future } func main() { service := NewAsyncService(3) // Start multiple async operations futures := make([]*Future, 5) for i := 0; i < 5; i++ { fmt.Printf("Starting async operation %d\n", i+1) futures[i] = service.ProcessAsync((i + 1) * 10) } // Wait for all results fmt.Println("\nWaiting for results...") for i, future := range futures { result, err := future.Get() if err != nil { fmt.Printf("Operation %d failed: %v\n", i+1, err) } else { fmt.Printf("Operation %d result: %v\n", i+1, result) } } // Example with timeout fmt.Println("\nTesting timeout...") timeoutFuture := service.ProcessAsync(100) result, err := timeoutFuture.GetWithTimeout(50 * time.Millisecond) if err != nil { fmt.Printf("Timeout example failed: %v\n", err) } else { fmt.Printf("Timeout example result: %v\n", result) } } Batch Request/Response package main import ( "fmt" "sync" "time" ) // BatchRequest represents multiple requests processed together type BatchRequest struct { ID string Items []interface{} Response chan BatchResponse } // BatchResponse contains results for all items in a batch type BatchResponse struct { ID string Results []BatchResult Error error } // BatchResult represents the result of processing one item type BatchResult struct { Index int Result interface{} Error error } // BatchProcessor processes requests in batches for efficiency type BatchProcessor struct { requests chan BatchRequest batchSize int batchWindow time.Duration quit chan bool } func NewBatchProcessor(batchSize int, batchWindow time.Duration) *BatchProcessor { return &BatchProcessor{ requests: make(chan BatchRequest, 100), batchSize: batchSize, batchWindow: batchWindow, quit: make(chan bool), } } func (bp *BatchProcessor) Start() { go func() { batch := make([]BatchRequest, 0, bp.batchSize) timer := time.NewTimer(bp.batchWindow) timer.Stop() for { select { case req := <-bp.requests: batch = append(batch, req) if len(batch) == 1 { timer.Reset(bp.batchWindow) } if len(batch) >= bp.batchSize { bp.processBatch(batch) batch = batch[:0] timer.Stop() } case <-timer.C: if len(batch) > 0 { bp.processBatch(batch) batch = batch[:0] } case <-bp.quit: if len(batch) > 0 { bp.processBatch(batch) } return } } }() } func (bp *BatchProcessor) processBatch(batch []BatchRequest) { fmt.Printf("Processing batch of %d requests\n", len(batch)) var wg sync.WaitGroup for _, req := range batch { wg.Add(1) go func(r BatchRequest) { defer wg.Done() bp.processRequest(r) }(req) } wg.Wait() } func (bp *BatchProcessor) processRequest(req BatchRequest) { results := make([]BatchResult, len(req.Items)) for i, item := range req.Items { // Simulate processing each item time.Sleep(10 * time.Millisecond) if num, ok := 
item.(int); ok { results[i] = BatchResult{ Index: i, Result: num * 3, } } else { results[i] = BatchResult{ Index: i, Error: fmt.Errorf("invalid item type at index %d", i), } } } response := BatchResponse{ ID: req.ID, Results: results, } req.Response <- response } // SendBatchRequest sends a batch request and waits for response func (bp *BatchProcessor) SendBatchRequest(id string, items []interface{}) ([]BatchResult, error) { responseChan := make(chan BatchResponse, 1) request := BatchRequest{ ID: id, Items: items, Response: responseChan, } bp.requests <- request response := <-responseChan return response.Results, response.Error } func (bp *BatchProcessor) Stop() { close(bp.quit) } func main() { processor := NewBatchProcessor(3, 100*time.Millisecond) processor.Start() defer processor.Stop() // Send individual batch requests go func() { results, err := processor.SendBatchRequest("batch1", []interface{}{1, 2, 3, 4, 5}) if err != nil { fmt.Printf("Batch 1 failed: %v\n", err) return } fmt.Println("Batch 1 results:") for _, result := range results { if result.Error != nil { fmt.Printf(" Item %d error: %v\n", result.Index, result.Error) } else { fmt.Printf(" Item %d result: %v\n", result.Index, result.Result) } } }() go func() { results, err := processor.SendBatchRequest("batch2", []interface{}{10, 20, 30}) if err != nil { fmt.Printf("Batch 2 failed: %v\n", err) return } fmt.Println("Batch 2 results:") for _, result := range results { if result.Error != nil { fmt.Printf(" Item %d error: %v\n", result.Index, result.Error) } else { fmt.Printf(" Item %d result: %v\n", result.Index, result.Result) } } }() // Wait for processing time.Sleep(500 * time.Millisecond) } Best Practices Always Use Timeouts: Prevent indefinite blocking Handle Context Cancellation: Support graceful cancellation Buffer Response Channels: Avoid blocking responders Error Handling: Always include error information in responses Resource Cleanup: Ensure channels and goroutines are cleaned up Monitoring: Track request/response times and success rates Backpressure: Handle situations when responders are overwhelmed Common Pitfalls Deadlocks: Not buffering response channels (see the sketch below) Goroutine Leaks: Not handling context cancellation Memory Leaks: Not closing channels properly Blocking Operations: Long-running operations without timeouts Lost Responses: Not handling channel closure Testing Request/Response package main import ( "testing" "time" ) func TestRequestResponse(t *testing.T) { server := NewTimedServer() server.Start() defer server.Stop() // Test successful request result, err := server.SendRequestWithTimeout("test1", 42, 200*time.Millisecond) if err != nil { t.Fatalf("Request failed: %v", err) } if result != 84 { t.Errorf("Expected 84, got %v", result) } // Test timeout _, err = server.SendRequestWithTimeout("test2", 42, 10*time.Millisecond) if err == nil { t.Error("Expected timeout error") } } func TestFuture(t *testing.T) { future := NewFuture() // Test that future is not ready initially if future.IsReady() { t.Error("Future should not be ready initially") } // Set value in goroutine go func() { time.Sleep(50 * time.Millisecond) future.Set("test result", nil) }() // Get value result, err := future.Get() if err != nil { t.Fatalf("Future failed: %v", err) } if result != "test result" { t.Errorf("Expected 'test result', got %v", result) } // Test that future is ready after setting if !future.IsReady() { t.Error("Future should be ready after setting") } } The Request/Response pattern is essential for building synchronous
communication systems in Go. It provides the foundation for RPC systems, database operations, and any scenario where you need to wait for a result from an asynchronous operation. ...
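A quick illustration of the "Buffer Response Channels" advice and the deadlock pitfall listed above: the snippet below is a minimal sketch, not part of the original article (the askReq type and ask helper are made-up names). It shows that a one-slot buffer on the reply channel lets the responder complete its send even after the requester has already timed out; with an unbuffered reply channel, the responder would block forever and leak.

package main

import (
	"errors"
	"fmt"
	"time"
)

// askReq is a single request; the reply channel has capacity 1 so the
// responder can always complete its send, even if the caller gave up.
type askReq struct {
	data  int
	reply chan int
}

// ask sends one request and waits up to timeout for the answer.
func ask(requests chan<- askReq, data int, timeout time.Duration) (int, error) {
	req := askReq{data: data, reply: make(chan int, 1)}
	requests <- req
	select {
	case v := <-req.reply:
		return v, nil
	case <-time.After(timeout):
		// With an unbuffered reply channel the responder would now block
		// forever on its send; the buffer lets it finish and move on.
		return 0, errors.New("timed out waiting for response")
	}
}

func main() {
	requests := make(chan askReq)

	// Slow responder: roughly 50ms per request.
	go func() {
		for req := range requests {
			time.Sleep(50 * time.Millisecond)
			req.reply <- req.data * 2 // never blocks thanks to the buffer
		}
	}()

	if v, err := ask(requests, 21, 200*time.Millisecond); err == nil {
		fmt.Println("fast enough:", v)
	}
	if _, err := ask(requests, 21, 10*time.Millisecond); err != nil {
		fmt.Println("gave up:", err) // responder still completes its send
	}
}

The same idea is why the article's SendRequest and SendRequestWithTimeout both create their response channels with make(chan ..., 1).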

    July 31, 2024 · 10 min · Rafiul Alam

    Rate Limiter Pattern in Go

    Go Concurrency Patterns Series: ← Circuit Breaker | Series Overview | Semaphore Pattern → What is the Rate Limiter Pattern? Rate limiting controls the rate at which operations are performed, preventing system overload and ensuring fair resource usage. It’s essential for protecting services from abuse, managing resource consumption, and maintaining system stability under load. Common Algorithms: Token Bucket: Allows bursts up to bucket capacity Fixed Window: Fixed number of requests per time window Sliding Window: Smooth rate limiting over time Leaky Bucket: Constant output rate regardless of input Real-World Use Cases API Rate Limiting: Prevent API abuse and ensure fair usage Database Throttling: Control database query rates File Processing: Limit file processing rate Network Operations: Control bandwidth usage Background Jobs: Throttle job processing User Actions: Prevent spam and abuse Token Bucket Rate Limiter package main import ( "context" "fmt" "sync" "time" ) // TokenBucket implements the token bucket rate limiting algorithm type TokenBucket struct { mu sync.Mutex capacity int // Maximum number of tokens tokens int // Current number of tokens refillRate int // Tokens added per second lastRefill time.Time // Last refill time } // NewTokenBucket creates a new token bucket rate limiter func NewTokenBucket(capacity, refillRate int) *TokenBucket { return &TokenBucket{ capacity: capacity, tokens: capacity, // Start with full bucket refillRate: refillRate, lastRefill: time.Now(), } } // Allow checks if a request should be allowed func (tb *TokenBucket) Allow() bool { tb.mu.Lock() defer tb.mu.Unlock() tb.refill() if tb.tokens > 0 { tb.tokens-- return true } return false } // AllowN checks if n requests should be allowed func (tb *TokenBucket) AllowN(n int) bool { tb.mu.Lock() defer tb.mu.Unlock() tb.refill() if tb.tokens >= n { tb.tokens -= n return true } return false } // Wait waits until a token is available func (tb *TokenBucket) Wait(ctx context.Context) error { for { if tb.Allow() { return nil } select { case <-time.After(time.Millisecond * 10): continue case <-ctx.Done(): return ctx.Err() } } } // refill adds tokens based on elapsed time func (tb *TokenBucket) refill() { now := time.Now() elapsed := now.Sub(tb.lastRefill) tokensToAdd := int(elapsed.Seconds() * float64(tb.refillRate)) if tokensToAdd > 0 { tb.tokens += tokensToAdd if tb.tokens > tb.capacity { tb.tokens = tb.capacity } tb.lastRefill = now } } // GetStats returns current bucket statistics func (tb *TokenBucket) GetStats() (tokens, capacity int) { tb.mu.Lock() defer tb.mu.Unlock() tb.refill() return tb.tokens, tb.capacity } func main() { // Create a token bucket: 5 tokens capacity, 2 tokens per second refill rate limiter := NewTokenBucket(5, 2) fmt.Println("=== Token Bucket Rate Limiter Demo ===") // Test burst capability fmt.Println("\n--- Testing Burst Capability ---") for i := 1; i <= 7; i++ { allowed := limiter.Allow() tokens, capacity := limiter.GetStats() fmt.Printf("Request %d: %s (tokens: %d/%d)\n", i, allowedStatus(allowed), tokens, capacity) } // Wait for refill fmt.Println("\n--- Waiting 3 seconds for refill ---") time.Sleep(3 * time.Second) // Test after refill fmt.Println("\n--- Testing After Refill ---") for i := 1; i <= 4; i++ { allowed := limiter.Allow() tokens, capacity := limiter.GetStats() fmt.Printf("Request %d: %s (tokens: %d/%d)\n", i, allowedStatus(allowed), tokens, capacity) } // Test AllowN fmt.Println("\n--- Testing AllowN (requesting 3 tokens) ---") allowed := limiter.AllowN(3) tokens, 
capacity := limiter.GetStats() fmt.Printf("Bulk request: %s (tokens: %d/%d)\n", allowedStatus(allowed), tokens, capacity) } func allowedStatus(allowed bool) string { if allowed { return " ALLOWED" } return " DENIED" } Sliding Window Rate Limiter package main import ( "fmt" "sync" "time" ) // SlidingWindow implements sliding window rate limiting type SlidingWindow struct { mu sync.Mutex requests []time.Time limit int // Maximum requests per window window time.Duration // Time window duration } // NewSlidingWindow creates a new sliding window rate limiter func NewSlidingWindow(limit int, window time.Duration) *SlidingWindow { return &SlidingWindow{ requests: make([]time.Time, 0), limit: limit, window: window, } } // Allow checks if a request should be allowed func (sw *SlidingWindow) Allow() bool { sw.mu.Lock() defer sw.mu.Unlock() now := time.Now() sw.cleanOldRequests(now) if len(sw.requests) < sw.limit { sw.requests = append(sw.requests, now) return true } return false } // cleanOldRequests removes requests outside the current window func (sw *SlidingWindow) cleanOldRequests(now time.Time) { cutoff := now.Add(-sw.window) // Find first request within window start := 0 for i, req := range sw.requests { if req.After(cutoff) { start = i break } start = len(sw.requests) // All requests are old } // Keep only recent requests if start > 0 { copy(sw.requests, sw.requests[start:]) sw.requests = sw.requests[:len(sw.requests)-start] } } // GetStats returns current window statistics func (sw *SlidingWindow) GetStats() (current, limit int, window time.Duration) { sw.mu.Lock() defer sw.mu.Unlock() sw.cleanOldRequests(time.Now()) return len(sw.requests), sw.limit, sw.window } // GetRequestTimes returns timestamps of requests in current window func (sw *SlidingWindow) GetRequestTimes() []time.Time { sw.mu.Lock() defer sw.mu.Unlock() sw.cleanOldRequests(time.Now()) result := make([]time.Time, len(sw.requests)) copy(result, sw.requests) return result } func main() { // Create sliding window: 3 requests per 2 seconds limiter := NewSlidingWindow(3, 2*time.Second) fmt.Println("=== Sliding Window Rate Limiter Demo ===") fmt.Println("Limit: 3 requests per 2 seconds") // Test requests over time for i := 1; i <= 8; i++ { allowed := limiter.Allow() current, limit, window := limiter.GetStats() fmt.Printf("Request %d: %s (current: %d/%d in %v window)\n", i, allowedStatus(allowed), current, limit, window) if i == 4 { fmt.Println("--- Waiting 1 second ---") time.Sleep(1 * time.Second) } else if i == 6 { fmt.Println("--- Waiting 1.5 seconds ---") time.Sleep(1500 * time.Millisecond) } else { time.Sleep(200 * time.Millisecond) } } // Show request timeline fmt.Println("\n--- Request Timeline ---") requests := limiter.GetRequestTimes() now := time.Now() for i, req := range requests { age := now.Sub(req) fmt.Printf("Request %d: %v ago\n", i+1, age.Round(time.Millisecond)) } } Fixed Window Rate Limiter package main import ( "fmt" "sync" "time" ) // FixedWindow implements fixed window rate limiting type FixedWindow struct { mu sync.Mutex limit int // Maximum requests per window window time.Duration // Window duration currentCount int // Current window request count windowStart time.Time // Current window start time } // NewFixedWindow creates a new fixed window rate limiter func NewFixedWindow(limit int, window time.Duration) *FixedWindow { return &FixedWindow{ limit: limit, window: window, windowStart: time.Now(), } } // Allow checks if a request should be allowed func (fw *FixedWindow) Allow() bool { fw.mu.Lock() defer 
fw.mu.Unlock() now := time.Now() // Check if we need to start a new window if now.Sub(fw.windowStart) >= fw.window { fw.currentCount = 0 fw.windowStart = now } if fw.currentCount < fw.limit { fw.currentCount++ return true } return false } // GetStats returns current window statistics func (fw *FixedWindow) GetStats() (current, limit int, windowRemaining time.Duration) { fw.mu.Lock() defer fw.mu.Unlock() now := time.Now() elapsed := now.Sub(fw.windowStart) if elapsed >= fw.window { return 0, fw.limit, fw.window } return fw.currentCount, fw.limit, fw.window - elapsed } func main() { // Create fixed window: 3 requests per 2 seconds limiter := NewFixedWindow(3, 2*time.Second) fmt.Println("=== Fixed Window Rate Limiter Demo ===") fmt.Println("Limit: 3 requests per 2 seconds") // Test requests over time for i := 1; i <= 10; i++ { allowed := limiter.Allow() current, limit, remaining := limiter.GetStats() fmt.Printf("Request %d: %s (current: %d/%d, window resets in: %v)\n", i, allowedStatus(allowed), current, limit, remaining.Round(time.Millisecond)) time.Sleep(400 * time.Millisecond) } } Advanced Rate Limiter with Multiple Algorithms package main import ( "context" "fmt" "sync" "time" ) // RateLimiterType represents different rate limiting algorithms type RateLimiterType int const ( TokenBucketType RateLimiterType = iota SlidingWindowType FixedWindowType ) // RateLimiter interface for different rate limiting algorithms type RateLimiter interface { Allow() bool Wait(ctx context.Context) error GetStats() map[string]interface{} } // MultiRateLimiter combines multiple rate limiters type MultiRateLimiter struct { limiters []RateLimiter names []string } // NewMultiRateLimiter creates a new multi-algorithm rate limiter func NewMultiRateLimiter() *MultiRateLimiter { return &MultiRateLimiter{ limiters: make([]RateLimiter, 0), names: make([]string, 0), } } // AddLimiter adds a rate limiter with a name func (mrl *MultiRateLimiter) AddLimiter(name string, limiter RateLimiter) { mrl.limiters = append(mrl.limiters, limiter) mrl.names = append(mrl.names, name) } // Allow checks if request is allowed by all limiters func (mrl *MultiRateLimiter) Allow() bool { for _, limiter := range mrl.limiters { if !limiter.Allow() { return false } } return true } // Wait waits until all limiters allow the request func (mrl *MultiRateLimiter) Wait(ctx context.Context) error { for _, limiter := range mrl.limiters { if err := limiter.Wait(ctx); err != nil { return err } } return nil } // GetStats returns stats from all limiters func (mrl *MultiRateLimiter) GetStats() map[string]interface{} { stats := make(map[string]interface{}) for i, limiter := range mrl.limiters { stats[mrl.names[i]] = limiter.GetStats() } return stats } // Enhanced TokenBucket with RateLimiter interface type EnhancedTokenBucket struct { *TokenBucket } func (etb *EnhancedTokenBucket) GetStats() map[string]interface{} { tokens, capacity := etb.TokenBucket.GetStats() return map[string]interface{}{ "type": "token_bucket", "tokens": tokens, "capacity": capacity, "rate": etb.refillRate, } } // Enhanced SlidingWindow with RateLimiter interface type EnhancedSlidingWindow struct { *SlidingWindow } func (esw *EnhancedSlidingWindow) Wait(ctx context.Context) error { for { if esw.Allow() { return nil } select { case <-time.After(time.Millisecond * 10): continue case <-ctx.Done(): return ctx.Err() } } } func (esw *EnhancedSlidingWindow) GetStats() map[string]interface{} { current, limit, window := esw.SlidingWindow.GetStats() return map[string]interface{}{ "type": 
"sliding_window", "current": current, "limit": limit, "window": window.String(), } } // Enhanced FixedWindow with RateLimiter interface type EnhancedFixedWindow struct { *FixedWindow } func (efw *EnhancedFixedWindow) Wait(ctx context.Context) error { for { if efw.Allow() { return nil } select { case <-time.After(time.Millisecond * 10): continue case <-ctx.Done(): return ctx.Err() } } } func (efw *EnhancedFixedWindow) GetStats() map[string]interface{} { current, limit, remaining := efw.FixedWindow.GetStats() return map[string]interface{}{ "type": "fixed_window", "current": current, "limit": limit, "remaining": remaining.String(), } } // RateLimitedService demonstrates rate limiting in a service type RateLimitedService struct { limiter RateLimiter mu sync.Mutex stats struct { totalRequests int allowedRequests int deniedRequests int } } // NewRateLimitedService creates a new rate limited service func NewRateLimitedService(limiter RateLimiter) *RateLimitedService { return &RateLimitedService{ limiter: limiter, } } // ProcessRequest processes a request with rate limiting func (rls *RateLimitedService) ProcessRequest(ctx context.Context, requestID string) error { rls.mu.Lock() rls.stats.totalRequests++ rls.mu.Unlock() if !rls.limiter.Allow() { rls.mu.Lock() rls.stats.deniedRequests++ rls.mu.Unlock() return fmt.Errorf("request %s denied by rate limiter", requestID) } rls.mu.Lock() rls.stats.allowedRequests++ rls.mu.Unlock() // Simulate processing time.Sleep(50 * time.Millisecond) fmt.Printf(" Processed request %s\n", requestID) return nil } // GetServiceStats returns service statistics func (rls *RateLimitedService) GetServiceStats() map[string]interface{} { rls.mu.Lock() defer rls.mu.Unlock() return map[string]interface{}{ "total_requests": rls.stats.totalRequests, "allowed_requests": rls.stats.allowedRequests, "denied_requests": rls.stats.deniedRequests, "rate_limiter": rls.limiter.GetStats(), } } func main() { // Create multi-algorithm rate limiter multiLimiter := NewMultiRateLimiter() // Add different rate limiters multiLimiter.AddLimiter("token_bucket", &EnhancedTokenBucket{ TokenBucket: NewTokenBucket(5, 2), // 5 tokens, 2 per second }) multiLimiter.AddLimiter("sliding_window", &EnhancedSlidingWindow{ SlidingWindow: NewSlidingWindow(3, 2*time.Second), // 3 requests per 2 seconds }) multiLimiter.AddLimiter("fixed_window", &EnhancedFixedWindow{ FixedWindow: NewFixedWindow(4, 3*time.Second), // 4 requests per 3 seconds }) service := NewRateLimitedService(multiLimiter) fmt.Println("=== Multi-Algorithm Rate Limiter Demo ===") fmt.Println("Using Token Bucket (5 tokens, 2/sec) + Sliding Window (3/2sec) + Fixed Window (4/3sec)") // Simulate concurrent requests var wg sync.WaitGroup for i := 1; i <= 15; i++ { wg.Add(1) go func(id int) { defer wg.Done() ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second) defer cancel() requestID := fmt.Sprintf("req-%d", id) err := service.ProcessRequest(ctx, requestID) if err != nil { fmt.Printf(" %v\n", err) } }(i) time.Sleep(200 * time.Millisecond) } wg.Wait() // Print final statistics fmt.Println("\n=== Final Statistics ===") stats := service.GetServiceStats() fmt.Printf("Total Requests: %d\n", stats["total_requests"]) fmt.Printf("Allowed Requests: %d\n", stats["allowed_requests"]) fmt.Printf("Denied Requests: %d\n", stats["denied_requests"]) fmt.Println("\nRate Limiter Details:") rateLimiterStats := stats["rate_limiter"].(map[string]interface{}) for name, limiterStats := range rateLimiterStats { fmt.Printf(" %s: %+v\n", name, limiterStats) } } 
Best Practices Choose Right Algorithm: Select based on your specific requirements Token Bucket: Allow bursts, good for APIs Sliding Window: Smooth rate limiting Fixed Window: Simple, memory efficient Configure Appropriately: Set limits based on system capacity Handle Rejections Gracefully: Provide meaningful error messages Monitor Metrics: Track allowed/denied requests and adjust limits Use Context: Support cancellation in Wait operations Consider Distributed Systems: Use Redis or similar for distributed rate limiting Implement Backoff: Add exponential backoff for denied requests Common Pitfalls Too Restrictive: Setting limits too low affects user experience Too Permissive: High limits don’t protect against abuse Memory Leaks: Not cleaning old requests in sliding window Race Conditions: Not properly synchronizing access to counters Ignoring Bursts: Fixed windows can allow double the limit at boundaries Rate limiting is essential for protecting services from overload and ensuring fair resource usage. Choose the right algorithm based on your requirements and always monitor the effectiveness of your rate limiting strategy. ...
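The algorithm list above also names the leaky bucket, which this excerpt does not implement. As a rough sketch under stated assumptions (the LeakyBucket type and its fields are illustrative, not from the article), here is the "meter" variant: the bucket's level drains at a fixed rate and a request is admitted while the level is below capacity, which caps the sustained rate at the leak rate while tolerating a burst up to capacity. A traffic-shaping variant would instead delay requests until they leak out rather than rejecting them.

package main

import (
	"fmt"
	"sync"
	"time"
)

// LeakyBucket (meter variant): the level drains at a constant rate; a
// request is admitted only if adding it does not overflow the bucket.
type LeakyBucket struct {
	mu       sync.Mutex
	level    int           // requests currently "in" the bucket
	capacity int           // maximum level before rejecting
	interval time.Duration // time between leaks (1 / rate)
	lastLeak time.Time
}

func NewLeakyBucket(capacity, ratePerSec int) *LeakyBucket {
	return &LeakyBucket{
		capacity: capacity,
		interval: time.Second / time.Duration(ratePerSec),
		lastLeak: time.Now(),
	}
}

// Allow drains the bucket lazily based on elapsed time, then admits the
// request if the level is still below capacity.
func (lb *LeakyBucket) Allow() bool {
	lb.mu.Lock()
	defer lb.mu.Unlock()

	now := time.Now()
	leaked := int(now.Sub(lb.lastLeak) / lb.interval)
	if leaked > 0 {
		lb.level -= leaked
		if lb.level < 0 {
			lb.level = 0
		}
		lb.lastLeak = lb.lastLeak.Add(time.Duration(leaked) * lb.interval)
	}

	if lb.level < lb.capacity {
		lb.level++
		return true
	}
	return false
}

func main() {
	// Sustained rate of 2 requests/second, burst of at most 3.
	lb := NewLeakyBucket(3, 2)
	for i := 1; i <= 8; i++ {
		fmt.Printf("request %d allowed=%v\n", i, lb.Allow())
		time.Sleep(150 * time.Millisecond)
	}
}

Like the fixed window limiter above, this keeps only a counter and a timestamp, so memory stays constant no matter how bursty the traffic is.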

    July 24, 2024 · 10 min · Rafiul Alam