The Roller Coaster Problem
The Roller Coaster Problem, introduced by Allen Downey in “The Little Book of Semaphores,” demonstrates multi-phase synchronization and cyclic barriers. It’s a perfect model for batch processing systems where work happens in coordinated phases.
The Scenario
A roller coaster ride has:
- A car with capacity C passengers
- Passengers continuously arriving and queuing
- The car cycles through: board → ride → unboard → repeat
The rules:
- Car waits until exactly C passengers are ready
- Passengers can’t board until car is empty
- Car can’t run until full
- Passengers must unboard before new passengers board
- Process repeats indefinitely
The Challenge: Three-Phase Coordination
Each ride has three synchronized phases: boarding, riding, and unboarding, and each phase must fully complete before the next begins.
Coordination challenges (a channel-based sketch of the classic semaphore-style solution follows this list):
- Car must wait for C passengers (not C-1, not C+1)
- Passengers must wait for empty car
- All passengers must board before ride starts
- All passengers must unboard before next boarding
- Multiple cars compound the complexity
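For reference, here is a minimal sketch of the classic semaphore-style solution from Downey's book, translated to Go with buffered channels standing in for counting semaphores (sending a token is "signal", receiving is "wait"). The names loosely follow the book's presentation, and the boarding and ride actions themselves are elided; the rest of this article builds a fuller, channel-oriented version.

// Sketch of the semaphore formulation: buffered channels as counting semaphores.
package main

import "sync"

const capacity = 4

var (
    boardQueue   = make(chan struct{}, capacity) // car -> passengers: you may board
    unboardQueue = make(chan struct{}, capacity) // car -> passengers: you may exit
    allAboard    = make(chan struct{}, 1)        // last passenger -> car: we're full
    allAshore    = make(chan struct{}, 1)        // last passenger -> car: we're empty
    mu           sync.Mutex
    boarders     int
    unboarders   int
)

func car() {
    for {
        for i := 0; i < capacity; i++ {
            boardQueue <- struct{}{} // admit exactly capacity passengers
        }
        <-allAboard // wait until the car is full
        // ... run the ride ...
        for i := 0; i < capacity; i++ {
            unboardQueue <- struct{}{} // release exactly capacity passengers
        }
        <-allAshore // wait until the car is empty
    }
}

func passenger() {
    <-boardQueue // wait to be admitted
    mu.Lock()
    boarders++
    if boarders == capacity {
        boarders = 0
        allAboard <- struct{}{} // last one aboard starts the ride
    }
    mu.Unlock()

    <-unboardQueue // wait for the ride to end
    mu.Lock()
    unboarders++
    if unboarders == capacity {
        unboarders = 0
        allAshore <- struct{}{} // last one off lets the next group in
    }
    mu.Unlock()
}

func main() {
    go car()
    var wg sync.WaitGroup
    for i := 0; i < 3*capacity; i++ { // three full rides' worth of passengers
        wg.Add(1)
        go func() {
            defer wg.Done()
            passenger()
        }()
    }
    wg.Wait()
}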
Real-World Applications
This pattern is everywhere:
- Batch processing systems: Collect N records, process batch, write results
- Database transactions: Gather updates, commit batch, release locks
- MapReduce: Wait for all mappers, run reduce, emit results
- Test runners: Collect tests, run suite, report results
- Network packet buffers: Fill buffer, send batch, clear buffer
Implementation in Go
package main
import (
"fmt"
"math/rand"
"sync"
"sync/atomic"
"time"
)
type RollerCoaster struct {
capacity int
passengerQueue chan int
boardingComplete chan struct{}
rideComplete chan struct{}
unboardingComplete chan struct{}
passengersOnboard int
mu sync.Mutex
// Statistics
ridesCompleted atomic.Int64
totalRiders atomic.Int64
}
func NewRollerCoaster(capacity int) *RollerCoaster {
return &RollerCoaster{
capacity: capacity,
passengerQueue: make(chan int), // unbuffered: passengers wait in line until the car admits them during a boarding phase
boardingComplete: make(chan struct{}),
rideComplete: make(chan struct{}),
unboardingComplete: make(chan struct{}),
}
}
// Run operates the roller coaster in an infinite loop
func (rc *RollerCoaster) Run() {
for {
rc.boardingPhase()
rc.ridingPhase()
rc.unboardingPhase()
}
}
// boardingPhase waits for car to fill
func (rc *RollerCoaster) boardingPhase() {
fmt.Println("\n🎢 [Car] Boarding phase - waiting for passengers...")
riders := make([]int, 0, rc.capacity)
// Collect exactly capacity passengers
for i := 0; i < rc.capacity; i++ {
passengerID := <-rc.passengerQueue
riders = append(riders, passengerID)
rc.mu.Lock()
rc.passengersOnboard++
rc.mu.Unlock()
fmt.Printf(" [Car] Passenger %d boarded (%d/%d)\n",
passengerID, i+1, rc.capacity)
// Last passenger? Signal boarding complete
if i == rc.capacity-1 {
fmt.Println(" [Car] ✓ Car full! Boarding complete")
close(rc.boardingComplete)
}
}
}
// ridingPhase runs the ride
func (rc *RollerCoaster) ridingPhase() {
fmt.Println("🎢 [Car] Riding phase - WHOOSH! 🎉")
// Simulate ride duration
time.Sleep(500 * time.Millisecond)
rc.ridesCompleted.Add(1)
fmt.Printf(" [Car] Ride #%d complete!\n", rc.ridesCompleted.Load())
// Signal ride complete
close(rc.rideComplete)
}
// unboardingPhase waits for all passengers to exit
func (rc *RollerCoaster) unboardingPhase() {
fmt.Println("🎢 [Car] Unboarding phase - please exit...")
// Wait for all passengers to unboard
<-rc.unboardingComplete
rc.mu.Lock()
rc.passengersOnboard = 0
rc.mu.Unlock()
fmt.Print(" [Car] ✓ All passengers off, car empty\n\n")
// Reset channels for the next cycle. Passengers only read these fields after
// the car admits them through the unbuffered queue, so each passenger always
// sees the channels that belong to its own cycle.
rc.boardingComplete = make(chan struct{})
rc.rideComplete = make(chan struct{})
rc.unboardingComplete = make(chan struct{})
}
// Passenger simulates a passenger's lifecycle
func (rc *RollerCoaster) Passenger(id int) {
// Random arrival time
time.Sleep(time.Duration(rand.Intn(200)) * time.Millisecond)
fmt.Printf("👤 [Passenger %d] Arrived, waiting in line\n", id)
// Join the line: this send blocks until the car admits the passenger
// during a boarding phase, because the queue is unbuffered
rc.passengerQueue <- id
// Wait for boarding to complete
<-rc.boardingComplete
fmt.Printf("👤 [Passenger %d] 🎉 Riding!\n", id)
// Wait for ride to complete
<-rc.rideComplete
fmt.Printf("👤 [Passenger %d] 🎉 Ride finished, exiting...\n", id)
// Unboard
rc.mu.Lock()
rc.passengersOnboard--
remaining := rc.passengersOnboard
rc.mu.Unlock()
rc.totalRiders.Add(1)
fmt.Printf("👤 [Passenger %d] ✓ Exited (remaining: %d)\n", id, remaining)
// Last to exit? Signal unboarding complete
if remaining == 0 {
fmt.Println(" 👤 Last passenger signals unboarding complete")
close(rc.unboardingComplete)
}
}
// Stats prints statistics
func (rc *RollerCoaster) Stats() {
fmt.Println("\n📊 Roller Coaster Statistics:")
fmt.Printf(" Capacity per ride: %d\n", rc.capacity)
fmt.Printf(" Completed rides: %d\n", rc.ridesCompleted.Load())
fmt.Printf(" Total riders: %d\n", rc.totalRiders.Load())
fmt.Printf(" Average per ride: %.1f\n",
float64(rc.totalRiders.Load())/float64(rc.ridesCompleted.Load()))
}
func main() {
fmt.Println("=== The Roller Coaster Problem ===")
fmt.Print("Multi-phase synchronization demonstration\n\n")
const (
capacity = 4
numPassengers = 12 // Exactly 3 rides worth
)
coaster := NewRollerCoaster(capacity)
// Start the roller coaster
go coaster.Run()
// Simulate passengers arriving
var wg sync.WaitGroup
for i := 0; i < numPassengers; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
coaster.Passenger(id)
}(i)
}
// Wait for all passengers to complete their ride
wg.Wait()
coaster.Stats()
fmt.Println("\n✓ Simulation complete!")
}
How It Works
1. Three Synchronization Channels
boardingComplete chan struct{} // Signals car is full
rideComplete chan struct{} // Signals ride finished
unboardingComplete chan struct{} // Signals car is empty
Each phase has a dedicated channel that is closed when that phase completes.
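Closing a channel acts as a broadcast: every goroutine blocked receiving from it, and any future receiver, is released at once, which is why a single close can end a phase for all passengers. A tiny standalone illustration, separate from the coaster program:

package main

import (
    "fmt"
    "sync"
)

func main() {
    phaseComplete := make(chan struct{})
    var wg sync.WaitGroup

    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-phaseComplete // every waiter blocks here...
            fmt.Printf("waiter %d released\n", id)
        }(i)
    }

    close(phaseComplete) // ...and one close releases all of them
    wg.Wait()
}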
2. Phase Transitions
// Car's perspective
for {
boardingPhase() // Wait for C passengers
ridingPhase() // Run the ride
unboardingPhase() // Wait for all to exit
}
// Passenger's perspective
joinQueue()
<-boardingComplete // Wait for car to fill
<-rideComplete // Wait for ride to finish
exit()
3. Last-Actor Pattern
// Last passenger to board
if i == capacity-1 {
close(boardingComplete) // Trigger next phase
}
// Last passenger to exit
if remaining == 0 {
close(unboardingComplete) // Trigger next phase
}
The last actor in each phase triggers the transition!
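When the participant count is known up front, the same handoff can also be packaged with a sync.WaitGroup. The phaseBarrier helper below is a hypothetical sketch, not something the program above uses:

// phaseBarrier returns a done function and a channel that is closed once n
// participants have called done(); this is a WaitGroup spelling of the
// counter-and-close pattern above (assumes "sync" is imported).
func phaseBarrier(n int) (done func(), complete <-chan struct{}) {
    var wg sync.WaitGroup
    wg.Add(n)
    ch := make(chan struct{})
    go func() {
        wg.Wait() // the n-th done() unblocks this goroutine...
        close(ch) // ...which releases every receiver of the channel
    }()
    return wg.Done, ch
}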
4. Channel Reset Between Cycles
// After each cycle, create fresh channels
rc.boardingComplete = make(chan struct{})
rc.rideComplete = make(chan struct{})
rc.unboardingComplete = make(chan struct{})
This is what makes the barrier cyclic: the same coaster object is reused, but every cycle gets fresh channels. It works only because passengers read these fields after being admitted through the unbuffered queue, which orders their reads after the reset; resetting shared channel fields without such a synchronization point would be a data race.
Advanced: Multiple Cars
Let’s extend to multiple cars operating independently:
package main
import (
"fmt"
"sync"
"sync/atomic"
"time"
)
type MultiCarCoaster struct {
capacity int
numCars int
passengerQueue chan int
totalRides atomic.Int64
}
type Car struct {
id int
capacity int
onboard []int
}
func NewCar(id, capacity int) *Car {
return &Car{
id: id,
capacity: capacity,
onboard: make([]int, 0, capacity),
}
}
func (c *Car) Run(queue chan int, totalRides *atomic.Int64) {
for {
// Phase 1: Board
c.board(queue)
// Phase 2: Ride
c.ride(totalRides)
// Phase 3: Unboard
c.unboard()
}
}
// board collects passengers until the car is full. All of a car's phases run
// in its own goroutine, so no locking is needed around the onboard slice.
func (c *Car) board(queue chan int) {
fmt.Printf("\n🚗 [Car %d] Boarding...\n", c.id)
for len(c.onboard) < c.capacity {
passengerID := <-queue // next passenger in line
c.onboard = append(c.onboard, passengerID)
fmt.Printf(" [Car %d] Passenger %d boarded (%d/%d)\n",
c.id, passengerID, len(c.onboard), c.capacity)
}
fmt.Printf(" [Car %d] ✓ Full!\n", c.id)
}
func (c *Car) ride(totalRides *atomic.Int64) {
fmt.Printf("🎢 [Car %d] RIDING with passengers: %v\n", c.id, c.onboard)
time.Sleep(300 * time.Millisecond)
totalRides.Add(1)
fmt.Printf(" [Car %d] ✓ Ride complete\n", c.id)
}
func (c *Car) unboard() {
fmt.Printf("🚪 [Car %d] Unboarding %d passengers...\n", c.id, len(c.onboard))
c.onboard = c.onboard[:0] // clear the passenger list for the next cycle
fmt.Printf(" [Car %d] ✓ Empty\n", c.id)
}
func NewMultiCarCoaster(capacity, numCars int) *MultiCarCoaster {
return &MultiCarCoaster{
capacity: capacity,
numCars: numCars,
passengerQueue: make(chan int, capacity*numCars*2),
}
}
func (mcc *MultiCarCoaster) Run() {
// Start all cars
for i := 0; i < mcc.numCars; i++ {
car := NewCar(i, mcc.capacity)
go car.Run(mcc.passengerQueue, &mcc.totalRides)
}
}
// Passenger joins the shared queue. In this variant passengers are
// fire-and-forget: they do not wait for their own ride to finish, which is
// why RunMultiCar sleeps rather than waiting on ride completion.
func (mcc *MultiCarCoaster) Passenger(id int) {
fmt.Printf("👤 [Passenger %d] Joining queue\n", id)
mcc.passengerQueue <- id
}
func RunMultiCar() {
fmt.Print("\n=== Multiple Cars Variant ===\n\n")
const (
capacity = 3
numCars = 2
numPassengers = 12 // 2 rides per car
)
coaster := NewMultiCarCoaster(capacity, numCars)
coaster.Run()
var wg sync.WaitGroup
for i := 0; i < numPassengers; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
time.Sleep(time.Duration(id*100) * time.Millisecond)
coaster.Passenger(id)
}(i)
}
// Give time for rides to complete
time.Sleep(3 * time.Second)
fmt.Printf("\n📊 Total rides: %d\n", coaster.totalRides.Load())
}
func main() {
RunMultiCar()
}
Real-World Example: Batch Processing Pipeline
package main
import (
"fmt"
"sync"
"sync/atomic"
"time"
)
type BatchProcessor struct {
batchSize int
inputQueue chan Task
batchReady chan []Task
batchProcessed chan []Result
totalBatches atomic.Int64
}
type Task struct {
ID int
Data string
}
type Result struct {
TaskID int
Output string
Success bool
}
func NewBatchProcessor(batchSize int) *BatchProcessor {
return &BatchProcessor{
batchSize: batchSize,
inputQueue: make(chan Task, batchSize*3),
batchReady: make(chan []Task),
batchProcessed: make(chan []Result),
}
}
// Collector gathers tasks into batches (like boarding)
func (bp *BatchProcessor) Collector() {
for {
batch := make([]Task, 0, bp.batchSize)
fmt.Printf("\n📦 [Collector] Gathering batch...\n")
// Collect batch
for i := 0; i < bp.batchSize; i++ {
task := <-bp.inputQueue
batch = append(batch, task)
fmt.Printf(" [Collector] Task %d added (%d/%d)\n",
task.ID, i+1, bp.batchSize)
}
fmt.Printf(" [Collector] ✓ Batch ready! Sending for processing\n")
// Send batch for processing (like starting ride)
bp.batchReady <- batch
}
}
// Processor processes batches (like the ride)
func (bp *BatchProcessor) Processor() {
for batch := range bp.batchReady {
fmt.Printf("\n⚙️ [Processor] Processing batch of %d tasks...\n", len(batch))
// Process all tasks in batch
results := make([]Result, len(batch))
for i, task := range batch {
// Simulate processing
time.Sleep(50 * time.Millisecond)
results[i] = Result{
TaskID: task.ID,
Output: fmt.Sprintf("Processed: %s", task.Data),
Success: true,
}
}
bp.totalBatches.Add(1)
fmt.Printf(" [Processor] ✓ Batch #%d complete\n", bp.totalBatches.Load())
// Send results (like unboarding)
bp.batchProcessed <- results
}
}
// ResultHandler handles processed results (like passengers exiting)
func (bp *BatchProcessor) ResultHandler() {
for results := range bp.batchProcessed {
fmt.Printf("\n📤 [ResultHandler] Emitting %d results...\n", len(results))
for _, result := range results {
fmt.Printf(" [ResultHandler] Task %d: %s (%v)\n",
result.TaskID, result.Output, result.Success)
}
fmt.Printf(" [ResultHandler] ✓ Batch results emitted\n")
}
}
func (bp *BatchProcessor) Start() {
go bp.Collector()
go bp.Processor()
go bp.ResultHandler()
}
func (bp *BatchProcessor) SubmitTask(task Task) {
bp.inputQueue <- task
}
func RunBatchProcessor() {
fmt.Print("=== Batch Processing Pipeline ===\n\n")
const (
batchSize = 4
numTasks = 12
)
processor := NewBatchProcessor(batchSize)
processor.Start()
// Submit tasks
var wg sync.WaitGroup
for i := 0; i < numTasks; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
time.Sleep(time.Duration(id*50) * time.Millisecond)
task := Task{
ID: id,
Data: fmt.Sprintf("data-%d", id),
}
fmt.Printf("📥 [Task %d] Submitted\n", id)
processor.SubmitTask(task)
}(i)
}
wg.Wait()
// All tasks have been submitted; allow the pipeline to drain the final batch
time.Sleep(2 * time.Second)
fmt.Printf("\n📊 Total batches processed: %d\n", processor.totalBatches.Load())
}
func main() {
RunBatchProcessor()
}
Coordination Pattern Diagram
Boarding --(car full: close boardingComplete)--> Riding
Riding --(ride over: close rideComplete)--> Unboarding
Unboarding --(car empty: close unboardingComplete)--> reset channels --> back to Boarding
Key Go Patterns
1. Cyclic Barriers with Channels
// Create new channel for each cycle
boardingComplete = make(chan struct{})
// Last actor closes to release all waiters
if last {
close(boardingComplete)
}
// All waiters block here
<-boardingComplete
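Go's standard library does not ship a reusable barrier type, so if you would rather hide this behavior behind an API than wire up ad-hoc channels, a small sync.Cond-based sketch (illustrative only, not from any library) looks like this:

// Barrier is a minimal reusable cyclic barrier. Await blocks until size
// goroutines have arrived, releases them all, and resets for the next cycle;
// the generation counter keeps a fast goroutine from slipping into the wrong cycle.
package coaster

import "sync"

type Barrier struct {
    mu         sync.Mutex
    cond       *sync.Cond
    size       int
    waiting    int
    generation int
}

func NewBarrier(size int) *Barrier {
    b := &Barrier{size: size}
    b.cond = sync.NewCond(&b.mu)
    return b
}

// Await returns true for exactly one caller per cycle: the last arrival.
func (b *Barrier) Await() bool {
    b.mu.Lock()
    defer b.mu.Unlock()

    gen := b.generation
    b.waiting++
    if b.waiting == b.size {
        b.waiting = 0
        b.generation++
        b.cond.Broadcast() // last arrival wakes everyone and starts a new cycle
        return true
    }
    for gen == b.generation {
        b.cond.Wait() // releases the lock while waiting, reacquires before returning
    }
    return false
}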
2. Last-Actor Responsibility
counter-- // decremented under a mutex in the full program
if counter == 0 {
close(phaseComplete) // last one triggers the transition
}
3. Multiple Phase Coordination
// Three phases, three channels
<-phase1Complete
doPhase2Work()
<-phase2Complete
doPhase3Work()
<-phase3Complete
4. Atomic Counters for Statistics
var ridesCompleted atomic.Int64
ridesCompleted.Add(1)
count := ridesCompleted.Load()
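Note: the typed atomic counters (atomic.Int64, atomic.Bool, and friends) were added in Go 1.19; on older toolchains, use atomic.AddInt64 and atomic.LoadInt64 on a plain int64 field instead.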
Performance Considerations
Batch Size Selection
Batch_Size ≈ Arrival_Rate × Batch_Processing_Time
In other words, size the batch so that filling it takes about as long as processing it (a worked example follows the list below).
- Too small: Overhead dominates, poor throughput
- Too large: High latency, idle time
- Sweet spot: Balance latency and throughput
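For example, if tasks arrive at roughly 100 per second and a batch takes about 50 ms to process, Batch_Size ≈ 100 tasks/s × 0.05 s = 5, so batches of around five keep the collector and the processor equally busy. (These numbers are illustrative, not measurements from the programs above.)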
Multiple Cars
Throughput = (Num_Cars * Capacity) / Ride_Duration
More cars mean higher throughput but more coordination complexity; the formula assumes the cars ride independently and that boarding and unboarding time is negligible.
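With the multi-car settings used above (2 cars, capacity 3, 300 ms rides), the ceiling is (2 × 3) / 0.3 s = 20 passengers per second, ignoring boarding and unboarding time.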
Variations
1. Minimum Capacity
// Start the ride once a minimum load is reached, not necessarily full capacity
minCapacity := capacity * 3 / 4 // 75% of capacity, using integer arithmetic
if len(passengers) >= minCapacity {
startRide()
}
2. Timeout-Based Start
// Start after timeout even if not full
select {
case <-filledChannel:
// Full capacity
case <-time.After(30 * time.Second):
// Start anyway
}
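A slightly fuller sketch as a standalone helper: boardWithTimeout below is hypothetical (not part of the programs above) and assumes the usual time import. It collects up to capacity passengers but departs once maxWait has elapsed with at least one rider aboard.

func boardWithTimeout(queue <-chan int, capacity int, maxWait time.Duration) []int {
    riders := make([]int, 0, capacity)
    timeout := time.After(maxWait)
    for len(riders) < capacity {
        select {
        case id := <-queue:
            riders = append(riders, id)
        case <-timeout:
            if len(riders) > 0 {
                return riders // depart partially full
            }
            timeout = time.After(maxWait) // empty car: keep waiting
        }
    }
    return riders
}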
3. Priority Boarding
// A bare select does NOT give VIPs priority: when both queues are ready,
// Go picks a case at random. Poll the VIP queue first, then fall back to
// waiting on both:
select {
case p := <-vipQueue:
board(p)
default:
select {
case p := <-vipQueue:
board(p)
case p := <-normalQueue:
board(p)
}
}
Advantages of This Pattern
✓ Batching efficiency - Process groups together
✓ Clear phase separation - Easy to reason about
✓ Resource utilization - Fill capacity before processing
✓ Natural coordination - Phases enforce ordering
When to Use
✓ Use when:
- Work should be batched
- Processing has distinct phases
- Multiple actors must coordinate
- Cyclic operation required
✗ Avoid when:
- Single-item processing preferred
- No natural batching
- Phases can overlap
- Latency is critical
Try It Yourself
- Multiple cars - Run 3 cars simultaneously
- Variable capacity - Random capacity per ride
- VIP passengers - Priority boarding
- Timeout boarding - Start if waiting too long
- Performance metrics - Track wait times, utilization
This is part 14 of “Golang Experiments: Classic Concurrency Problems”