The Readers-Writers Problem
The Readers-Writers problem is fundamental in concurrent systems where data is shared between multiple threads:
- Readers: Can access data simultaneously (read-only, no conflicts)
- Writers: Need exclusive access (modifications can’t overlap)
The challenge: Maximize concurrency while maintaining data integrity.
Real-World Applications
This pattern is everywhere:
- Databases: SELECT queries (readers) vs UPDATE/INSERT (writers)
- Caching: Cache reads vs cache updates
- Configuration: Reading config vs reloading config (sketched below)
- File systems: Multiple readers, exclusive writes
- Web servers: Read sessions vs update sessions
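To make one of these concrete, here is a minimal sketch of the configuration case (the Config and ConfigStore types and their fields are illustrative, not from any library): many goroutines read the current settings concurrently, while an occasional reload takes the write lock.

package main

import (
	"fmt"
	"sync"
)

// Config holds hypothetical application settings.
type Config struct {
	LogLevel string
	Timeout  int
}

// ConfigStore guards a Config with an RWMutex:
// many concurrent readers, one exclusive reloader.
type ConfigStore struct {
	mu  sync.RWMutex
	cfg Config
}

// Get returns a copy of the current config under a read lock.
func (s *ConfigStore) Get() Config {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.cfg
}

// Reload swaps in a new config under the write lock.
func (s *ConfigStore) Reload(cfg Config) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.cfg = cfg
}

func main() {
	store := &ConfigStore{cfg: Config{LogLevel: "info", Timeout: 30}}
	fmt.Println(store.Get().LogLevel) // readers can call Get concurrently
	store.Reload(Config{LogLevel: "debug", Timeout: 60})
	fmt.Println(store.Get().LogLevel)
}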
Readers Preference Solution
In this variant, readers get priority:
- Readers never wait if only other readers are active
- Writers must wait for ALL readers to finish
- New readers can “jump the queue” ahead of waiting writers
Advantage: Maximum read concurrency.
Disadvantage: Writers can starve if readers keep arriving.
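Before reaching for the standard library, it helps to see how this variant is classically built. The sketch below is a minimal, illustrative implementation (the RPLock type and its names are mine): a reader counter plus two mutexes, where the first reader locks writers out and the last reader lets them back in. As long as new readers keep arriving, writeMu is never released, which is exactly the starvation risk described above.

package main

import (
	"fmt"
	"sync"
)

// RPLock is a textbook readers-preference lock built from a reader
// count and two mutexes (illustrative, not a stdlib type).
type RPLock struct {
	readers int
	countMu sync.Mutex // guards the readers counter
	writeMu sync.Mutex // held by a writer, or by the reader group
}

func (l *RPLock) RLock() {
	l.countMu.Lock()
	l.readers++
	if l.readers == 1 {
		l.writeMu.Lock() // first reader locks writers out
	}
	l.countMu.Unlock()
}

func (l *RPLock) RUnlock() {
	l.countMu.Lock()
	l.readers--
	if l.readers == 0 {
		l.writeMu.Unlock() // last reader lets writers back in
	}
	l.countMu.Unlock()
}

func (l *RPLock) Lock()   { l.writeMu.Lock() }
func (l *RPLock) Unlock() { l.writeMu.Unlock() }

func main() {
	var lock RPLock
	var wg sync.WaitGroup
	shared := 0
	wg.Add(2)
	go func() { // writer
		defer wg.Done()
		lock.Lock()
		shared = 42
		lock.Unlock()
	}()
	go func() { // reader
		defer wg.Done()
		lock.RLock()
		fmt.Println("read:", shared) // output depends on scheduling
		lock.RUnlock()
	}()
	wg.Wait()
}

In practice you’d rarely hand-roll this in Go; the standard library covers it, as the next section shows.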
Go’s Built-in Solution: sync.RWMutex
Go provides sync.RWMutex specifically for this pattern: any number of readers can hold the lock at once, while a writer gets exclusive access. One caveat worth knowing: Go’s RWMutex is not a pure readers-preference lock. Once a writer is blocked in Lock, new RLock calls wait behind it, so writer starvation is bounded in practice.
var mu sync.RWMutex
// Reader
mu.RLock()
// ... read data ...
mu.RUnlock()
// Writer
mu.Lock()
// ... modify data ...
mu.Unlock()
Basic Implementation
package main
import (
"fmt"
"math/rand"
"sync"
"time"
)
// SharedData represents data protected by RWMutex
type SharedData struct {
value int
mu sync.RWMutex
}
// Read acquires a read lock
func (d *SharedData) Read(readerID int) int {
d.mu.RLock()
defer d.mu.RUnlock()
fmt.Printf("[Reader %d] 🔍 Reading... value=%d\n", readerID, d.value)
time.Sleep(time.Duration(50+rand.Intn(50)) * time.Millisecond)
return d.value
}
// Write acquires a write lock
func (d *SharedData) Write(writerID, newValue int) {
d.mu.Lock()
defer d.mu.Unlock()
oldValue := d.value
d.value = newValue
fmt.Printf("[Writer %d] ✏️ Writing... %d → %d\n", writerID, oldValue, newValue)
time.Sleep(time.Duration(100+rand.Intn(100)) * time.Millisecond)
}
// Reader goroutine
func Reader(id int, data *SharedData, reads int, wg *sync.WaitGroup) {
defer wg.Done()
for i := 0; i < reads; i++ {
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
data.Read(id)
}
fmt.Printf("[Reader %d] Finished %d reads\n", id, reads)
}
// Writer goroutine
func Writer(id int, data *SharedData, writes int, wg *sync.WaitGroup) {
defer wg.Done()
for i := 0; i < writes; i++ {
time.Sleep(time.Duration(rand.Intn(200)) * time.Millisecond)
data.Write(id, id*100+i)
}
fmt.Printf("[Writer %d] Finished %d writes\n", id, writes)
}
func main() {
fmt.Println("=== Readers-Writers: Readers Preference ===")
fmt.Println("Using sync.RWMutex (readers have priority)\n")
data := &SharedData{value: 0}
numReaders := 5
numWriters := 2
readsPerReader := 3
writesPerWriter := 3
var wg sync.WaitGroup
// Start readers
fmt.Println("Starting readers...")
for i := 0; i < numReaders; i++ {
wg.Add(1)
go Reader(i, data, readsPerReader, &wg)
}
// Start writers
fmt.Println("Starting writers...\n")
for i := 0; i < numWriters; i++ {
wg.Add(1)
go Writer(i, data, writesPerWriter, &wg)
}
wg.Wait()
fmt.Printf("\n✓ All operations complete! Final value: %d\n", data.value)
}
Demonstrating Writer Delay
Let’s measure how long a writer waits while readers hold the lock. With a pure readers-preference lock the writer could starve indefinitely; with sync.RWMutex it only waits for the readers already holding the lock, because new readers queue behind the pending writer:
package main
import (
"fmt"
"sync"
"sync/atomic"
"time"
)
type MonitoredRWData struct {
value int
mu sync.RWMutex
readCount atomic.Int64
writeCount atomic.Int64
writerBlocked atomic.Bool
}
func (d *MonitoredRWData) Read(id int) {
d.mu.RLock()
defer d.mu.RUnlock()
reads := d.readCount.Add(1)
fmt.Printf("[Reader %d] Reading (total reads: %d, writer blocked: %v)\n",
id, reads, d.writerBlocked.Load())
time.Sleep(50 * time.Millisecond)
}
func (d *MonitoredRWData) Write(id int, value int) {
fmt.Printf("[Writer %d] ⏳ Waiting for write lock...\n", id)
d.writerBlocked.Store(true)
start := time.Now()
d.mu.Lock()
waitTime := time.Since(start)
d.writerBlocked.Store(false)
defer d.mu.Unlock()
fmt.Printf("[Writer %d] ✏️ Got lock after %v, writing...\n", id, waitTime)
d.value = value
d.writeCount.Add(1)
time.Sleep(100 * time.Millisecond)
}
func DemonstrateWriterDelay() {
	fmt.Print("=== Demonstrating Writer Delay ===\n\n")
data := &MonitoredRWData{}
var wg sync.WaitGroup
// Start continuous readers
for i := 0; i < 5; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
for j := 0; j < 10; j++ {
data.Read(id)
time.Sleep(20 * time.Millisecond)
}
}(i)
}
// Start a writer after readers begin
time.Sleep(100 * time.Millisecond)
wg.Add(1)
go func() {
defer wg.Done()
data.Write(0, 42)
}()
wg.Wait()
fmt.Printf("\nReads: %d, Writes: %d\n", data.readCount.Load(), data.writeCount.Load())
fmt.Println("Notice how the writer had to wait for all readers!")
}
func main() {
DemonstrateWriterDelay()
}
Advanced: Cache with RWMutex
Real-world example - a thread-safe cache:
package main
import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)
type CacheEntry struct {
Value interface{}
Expiration time.Time
}
type Cache struct {
data map[string]CacheEntry
mu sync.RWMutex
hits atomic.Int64
misses atomic.Int64
}
func NewCache() *Cache {
return &Cache{
data: make(map[string]CacheEntry),
}
}
// Get uses RLock for concurrent reads
func (c *Cache) Get(key string) (interface{}, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
entry, exists := c.data[key]
if !exists {
c.misses.Add(1)
return nil, false
}
if time.Now().After(entry.Expiration) {
c.misses.Add(1)
return nil, false
}
c.hits.Add(1)
return entry.Value, true
}
// Set uses Lock for exclusive writes
func (c *Cache) Set(key string, value interface{}, ttl time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
c.data[key] = CacheEntry{
Value: value,
Expiration: time.Now().Add(ttl),
}
}
// Stats reads the atomic hit/miss counters; no lock is needed
func (c *Cache) Stats() (hits, misses int64) {
return c.hits.Load(), c.misses.Load()
}
// Cleanup uses Lock for exclusive writes
func (c *Cache) Cleanup() {
c.mu.Lock()
defer c.mu.Unlock()
now := time.Now()
removed := 0
for key, entry := range c.data {
if now.After(entry.Expiration) {
delete(c.data, key)
removed++
}
}
fmt.Printf("Cleanup: removed %d expired entries\n", removed)
}
func main() {
cache := NewCache()
// Multiple readers
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
key := fmt.Sprintf("key_%d", id%3)
for j := 0; j < 5; j++ {
if val, ok := cache.Get(key); ok {
fmt.Printf("[Reader %d] Hit: %s = %v\n", id, key, val)
} else {
fmt.Printf("[Reader %d] Miss: %s\n", id, key)
cache.Set(key, fmt.Sprintf("value_%d", id), 1*time.Second)
}
time.Sleep(100 * time.Millisecond)
}
}(i)
}
// Periodic cleanup (writer)
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 3; i++ {
time.Sleep(500 * time.Millisecond)
cache.Cleanup()
}
}()
wg.Wait()
hits, misses := cache.Stats()
fmt.Printf("\nCache hits: %d, misses: %d\n", hits, misses)
}
Performance: RWMutex vs Mutex
package main
import (
"sync"
"testing"
)
func BenchmarkMutex(b *testing.B) {
var mu sync.Mutex
var value int
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
mu.Lock()
_ = value
mu.Unlock()
}
})
}
func BenchmarkRWMutexRead(b *testing.B) {
var mu sync.RWMutex
var value int
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
mu.RLock()
_ = value
mu.RUnlock()
}
})
}
func BenchmarkRWMutexWrite(b *testing.B) {
var mu sync.RWMutex
var value int
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
mu.Lock()
value++
mu.Unlock()
}
})
}
Results (approximate; the numbers vary with core count, GOMAXPROCS, and critical-section length, so run go test -bench=. -cpu=1,4,8 and see for yourself):
- RWMutex reads: scale far better than Mutex under parallel read-heavy load, often around 10x on many cores
- RWMutex writes: slightly slower than Mutex because of the extra bookkeeping
- Break-even: roughly 70% reads / 30% writes, but measure your own workload (see the mixed-ratio benchmark below)
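To find the break-even point for your own workload, a mixed-ratio benchmark helps. Here is a sketch (BenchmarkRWMutexMixed and its readRatio constant are mine; drop it into a _test.go file next to the benchmarks above and vary the ratio):

package main

import (
	"math/rand"
	"sync"
	"testing"
)

// BenchmarkRWMutexMixed interleaves reads and writes at a fixed ratio.
// readRatio is an assumption; tune it to match your workload.
func BenchmarkRWMutexMixed(b *testing.B) {
	const readRatio = 0.9 // 90% reads, 10% writes
	var mu sync.RWMutex
	var value int
	b.RunParallel(func(pb *testing.PB) {
		// Per-goroutine source avoids contending on the global rand lock.
		r := rand.New(rand.NewSource(rand.Int63()))
		for pb.Next() {
			if r.Float64() < readRatio {
				mu.RLock()
				_ = value
				mu.RUnlock()
			} else {
				mu.Lock()
				value++
				mu.Unlock()
			}
		}
	})
}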
When to Use sync.RWMutex
✓ Use when:
- Read operations greatly outnumber writes (80%+ reads)
- Read operations take significant time
- You need maximum read concurrency
- Data is shared across many goroutines
✗ Use regular Mutex when:
- Writes are frequent (>30%)
- Critical sections are very short
- Contention is low
- Simplicity is preferred
Common Pitfalls
1. Lock Upgrade (Don’t Do This!)
// ✗ Wrong: Can't upgrade RLock to Lock
mu.RLock()
if needsWrite {
mu.Lock() // DEADLOCK: Lock waits for all readers to leave, including us
}
// ✓ Correct: Release RLock first
mu.RLock()
needsWrite := checkCondition()
mu.RUnlock()
if needsWrite {
mu.Lock()
// recheck condition, it might have changed!
mu.Unlock()
}
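Applied to the Cache type from the earlier example, that release-then-recheck pattern looks like this (GetOrSet is a hypothetical helper, not part of the code above):

// GetOrSet returns the cached value for key, computing and storing it
// if absent. It rechecks under the write lock because another goroutine
// may have filled the entry between RUnlock and Lock.
func (c *Cache) GetOrSet(key string, ttl time.Duration, compute func() interface{}) interface{} {
	// Fast path: shared read lock.
	c.mu.RLock()
	entry, ok := c.data[key]
	c.mu.RUnlock()
	if ok && time.Now().Before(entry.Expiration) {
		return entry.Value
	}
	// Slow path: exclusive lock, then recheck.
	c.mu.Lock()
	defer c.mu.Unlock()
	if entry, ok := c.data[key]; ok && time.Now().Before(entry.Expiration) {
		return entry.Value
	}
	value := compute()
	c.data[key] = CacheEntry{Value: value, Expiration: time.Now().Add(ttl)}
	return value
}

The second lookup under the write lock is essential: another goroutine may have filled the entry in the window between RUnlock and Lock.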
2. Forgetting Defer
// ✗ Risky: Early return leaks lock
mu.RLock()
if err != nil {
return err // Lock not released!
}
mu.RUnlock()
// ✓ Safe: Always use defer
mu.RLock()
defer mu.RUnlock()
Advantages of Readers Preference
✓ Maximum read throughput - All readers can proceed simultaneously
✓ Simple to implement - sync.RWMutex handles everything
✓ Low latency for reads - Readers rarely wait
✓ Built into Go - Well-tested, optimized
Disadvantages
✗ Writer starvation - Writers can wait indefinitely (in a pure readers-preference lock; Go’s RWMutex bounds this, as noted earlier)
✗ Unfair to writers - Continuous reads block writes
✗ Not suitable for write-heavy workloads - Writers become the bottleneck
Next Up: Writers Preference
In the next article, we’ll flip the priority and give writers preference, preventing writer starvation!
Try It Yourself
- Measure read/write ratio - Find your workload’s characteristics
- Benchmark - Compare RWMutex vs Mutex for your use case
- Add metrics - Track lock wait times (see the sketch after this list)
- Stress test - Many readers, few writers
- Profile - Use go tool pprof to find lock contention
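For the metrics exercise, here is one possible starting point: a small wrapper (TimedMutex is a hypothetical name, not a stdlib type) that records how long Lock callers spend blocked.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// TimedMutex wraps sync.Mutex and records how long each Lock call waits.
type TimedMutex struct {
	mu        sync.Mutex
	totalWait atomic.Int64 // cumulative wait in nanoseconds
	locks     atomic.Int64 // number of Lock calls
}

func (m *TimedMutex) Lock() {
	start := time.Now()
	m.mu.Lock()
	m.totalWait.Add(int64(time.Since(start)))
	m.locks.Add(1)
}

func (m *TimedMutex) Unlock() { m.mu.Unlock() }

// AvgWait reports the mean time Lock callers spent blocked.
func (m *TimedMutex) AvgWait() time.Duration {
	n := m.locks.Load()
	if n == 0 {
		return 0
	}
	return time.Duration(m.totalWait.Load() / n)
}

func main() {
	var tm TimedMutex
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				tm.Lock()
				time.Sleep(time.Millisecond) // simulate work under the lock
				tm.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("average lock wait:", tm.AvgWait())
}

The same idea extends to RWMutex by timing RLock and Lock separately.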
This is part 7 of “Golang Experiments: Classic Concurrency Problems”