The Banker’s Algorithm
The Banker’s Algorithm is a deadlock avoidance algorithm developed by Edsger Dijkstra in 1965. It models a bank with limited cash and customers who have declared credit limits and request loans in chunks. The bank grants a loan only if the system stays in a “safe state”, meaning there is still some order in which every customer’s remaining maximum demand can eventually be satisfied.
The Scenario
A bank has:
- Limited cash available
- Multiple customers with credit limits
- Customers request loans in chunks over time
The rules:
- Each customer declares their maximum credit limit upfront
- Customers request loans incrementally
- Bank grants a loan only if the system remains in a safe state
- Bank can deny requests temporarily (customer must wait)
- Customers eventually return borrowed money
- A “safe state” means there exists a sequence where all customers can complete
The Safe State Concept
A state is safe if there exists an ordering of all customers in which each one can:
- Obtain its full remaining need from the pool that is currently available (the free cash plus everything returned by customers earlier in the sequence)
- Complete its work and return everything it holds
- Thereby grow the pool for the next customer in the sequence
Example Safe State:
Available: $10
Customer A: Has $3, Needs max $7 (remaining need: $4)
Customer B: Has $2, Needs max $5 (remaining need: $3)
Safe sequence: B → A
1. Give B $3 (available becomes $7) → B completes, returns $5 → available becomes $12
2. Give A $4 (available is $12) → A completes, returns $7 → all done!
Example Unsafe State:
Available: $2
Customer A: Has $5, Needs max $10 (remaining need: $5)
Customer B: Has $4, Needs max $10 (remaining need: $6)
No safe sequence exists!
- Can't satisfy A (needs $5, have $2)
- Can't satisfy B (needs $6, have $2)
- Deadlock risk!
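These checks can be run mechanically. Below is a minimal single-resource sketch (hypothetical helper names, not part of the Go implementation later in the article) that reproduces both verdicts:
package main

import "fmt"

// isSafeDollars reports whether some ordering lets every customer borrow up to
// their limit and then repay. allocated[i] is what customer i already holds,
// remaining[i] is their credit limit minus allocated[i].
func isSafeDollars(available int, allocated, remaining []int) bool {
	done := make([]bool, len(allocated))
	for finished := 0; finished < len(allocated); {
		progress := false
		for i := range allocated {
			if !done[i] && remaining[i] <= available {
				available += allocated[i] + remaining[i] // customer completes and repays everything
				done[i], progress = true, true
				finished++
			}
		}
		if !progress {
			return false // nobody can finish with what is available: unsafe
		}
	}
	return true
}

func main() {
	fmt.Println(isSafeDollars(10, []int{3, 2}, []int{4, 3})) // true: the safe state above
	fmt.Println(isSafeDollars(2, []int{5, 4}, []int{5, 6}))  // false: the unsafe state above
}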
Real-World Applications
The Banker’s Algorithm pattern appears in many systems:
- Operating systems: Memory and resource allocation
- Database systems: Lock management for transactions
- Cloud computing: VM and resource provisioning
- Manufacturing: Raw material allocation to production lines
- Network bandwidth: Quality of Service (QoS) guarantees
- Project management: Resource allocation across projects
Implementation in Go
package main
import (
"fmt"
"math/rand"
"sync"
"time"
)
// BankersAlgorithm manages resource allocation with deadlock avoidance
type BankersAlgorithm struct {
available []int // Available resources per type
maximum [][]int // Maximum demand of each process
allocation [][]int // Currently allocated to each process
need [][]int // Remaining need of each process
numProcess int
numResource int
mu sync.Mutex
condition *sync.Cond
// Statistics
granted int
denied int
safetyChecks int
}
// NewBankersAlgorithm creates a new banker's algorithm instance
func NewBankersAlgorithm(available []int, maximum [][]int) *BankersAlgorithm {
numProcess := len(maximum)
numResource := len(available)
ba := &BankersAlgorithm{
available: make([]int, numResource),
maximum: make([][]int, numProcess),
allocation: make([][]int, numProcess),
need: make([][]int, numProcess),
numProcess: numProcess,
numResource: numResource,
}
// Initialize available resources
copy(ba.available, available)
// Initialize maximum, allocation, and need
for i := 0; i < numProcess; i++ {
ba.maximum[i] = make([]int, numResource)
ba.allocation[i] = make([]int, numResource)
ba.need[i] = make([]int, numResource)
copy(ba.maximum[i], maximum[i])
copy(ba.need[i], maximum[i]) // Initially need = maximum
}
ba.condition = sync.NewCond(&ba.mu)
return ba
}
// RequestResources attempts to allocate resources to a process
func (ba *BankersAlgorithm) RequestResources(processID int, request []int) bool {
ba.mu.Lock()
defer ba.mu.Unlock()
fmt.Printf("[Process %d] 📝 Requesting resources: %v\n", processID, request)
// Check if request exceeds need
for i := 0; i < ba.numResource; i++ {
if request[i] > ba.need[processID][i] {
fmt.Printf("[Process %d] ❌ Request exceeds declared need!\n", processID)
return false
}
}
// Check if request exceeds available
for i := 0; i < ba.numResource; i++ {
if request[i] > ba.available[i] {
fmt.Printf("[Process %d] ⏳ Resources not available, waiting...\n", processID)
ba.denied++
return false
}
}
// Pretend to allocate (trial allocation)
ba.allocateResources(processID, request)
// Check if the state is safe
ba.safetyChecks++
if ba.isSafe() {
fmt.Printf("[Process %d] ✅ Request granted (safe state maintained)\n", processID)
fmt.Printf(" Available now: %v\n", ba.available)
ba.granted++
return true
}
// Unsafe state - rollback allocation
ba.rollbackAllocation(processID, request)
fmt.Printf("[Process %d] 🚫 Request denied (would create unsafe state)\n", processID)
ba.denied++
return false
}
// allocateResources performs the actual allocation
func (ba *BankersAlgorithm) allocateResources(processID int, request []int) {
for i := 0; i < ba.numResource; i++ {
ba.available[i] -= request[i]
ba.allocation[processID][i] += request[i]
ba.need[processID][i] -= request[i]
}
}
// rollbackAllocation undoes an allocation
func (ba *BankersAlgorithm) rollbackAllocation(processID int, request []int) {
for i := 0; i < ba.numResource; i++ {
ba.available[i] += request[i]
ba.allocation[processID][i] -= request[i]
ba.need[processID][i] += request[i]
}
}
// isSafe checks if the current state is safe using the safety algorithm
func (ba *BankersAlgorithm) isSafe() bool {
work := make([]int, ba.numResource)
copy(work, ba.available)
finish := make([]bool, ba.numProcess)
safeSequence := make([]int, 0, ba.numProcess)
// Find a safe sequence
for {
found := false
for i := 0; i < ba.numProcess; i++ {
if finish[i] {
continue
}
// Check if process i can finish with available work
canFinish := true
for j := 0; j < ba.numResource; j++ {
if ba.need[i][j] > work[j] {
canFinish = false
break
}
}
if canFinish {
// Process i can finish - add its resources to work
for j := 0; j < ba.numResource; j++ {
work[j] += ba.allocation[i][j]
}
finish[i] = true
safeSequence = append(safeSequence, i)
found = true
}
}
if !found {
break
}
}
// Check if all processes can finish
for i := 0; i < ba.numProcess; i++ {
if !finish[i] {
return false
}
}
fmt.Printf(" 🛡️ Safe sequence found: %v\n", safeSequence)
return true
}
// ReleaseResources returns resources from a process
func (ba *BankersAlgorithm) ReleaseResources(processID int, release []int) {
ba.mu.Lock()
defer ba.mu.Unlock()
fmt.Printf("[Process %d] 🔙 Releasing resources: %v\n", processID, release)
for i := 0; i < ba.numResource; i++ {
ba.available[i] += release[i]
ba.allocation[processID][i] -= release[i]
ba.need[processID][i] += release[i]
}
fmt.Printf(" Available now: %v\n", ba.available)
ba.condition.Broadcast() // Wake up waiting processes
}
// GetState returns current state for visualization
func (ba *BankersAlgorithm) GetState() ([]int, [][]int, [][]int, [][]int) {
ba.mu.Lock()
defer ba.mu.Unlock()
return ba.available, ba.allocation, ba.need, ba.maximum // internal slices: callers must treat them as read-only
}
// Stats prints statistics
func (ba *BankersAlgorithm) Stats() {
ba.mu.Lock()
defer ba.mu.Unlock()
fmt.Println("\n📊 Banker's Algorithm Statistics:")
fmt.Printf(" Total requests granted: %d\n", ba.granted)
fmt.Printf(" Total requests denied: %d\n", ba.denied)
fmt.Printf(" Safety checks run: %d\n", ba.safetyChecks)
fmt.Printf(" Success rate: %.1f%%\n",
float64(ba.granted)/float64(ba.granted+ba.denied)*100)
}
// Process simulates a process requesting and using resources
func (ba *BankersAlgorithm) Process(id int, requests [][]int) {
for i, request := range requests {
time.Sleep(time.Duration(rand.Intn(200)) * time.Millisecond)
// Keep trying until request is granted
for !ba.RequestResources(id, request) {
time.Sleep(time.Duration(100+rand.Intn(200)) * time.Millisecond)
}
// Use resources for some time
fmt.Printf("[Process %d] 💼 Working with resources (request %d/%d)...\n",
id, i+1, len(requests))
time.Sleep(time.Duration(200+rand.Intn(300)) * time.Millisecond)
// Release resources when done
if i == len(requests)-1 {
// Final release - return everything
ba.mu.Lock()
allocation := make([]int, ba.numResource)
copy(allocation, ba.allocation[id])
ba.mu.Unlock()
if hasResources(allocation) {
ba.ReleaseResources(id, allocation)
}
}
}
fmt.Printf("[Process %d] ✅ Completed all work\n", id)
}
func hasResources(resources []int) bool {
for _, r := range resources {
if r > 0 {
return true
}
}
return false
}
func main() {
fmt.Println("=== Banker's Algorithm: Deadlock Avoidance ===")
fmt.Println("Simulating resource allocation with safety checks\n")
// Initialize system
// Resources: [Type A, Type B, Type C]
available := []int{10, 5, 7}
// Maximum demand: [Process][Resource Type]
maximum := [][]int{
{7, 5, 3}, // Process 0
{3, 2, 2}, // Process 1
{9, 0, 2}, // Process 2
{2, 2, 2}, // Process 3
{4, 3, 3}, // Process 4
}
fmt.Println("Initial State:")
fmt.Printf(" Available resources: %v\n", available)
fmt.Println(" Maximum demands:")
for i, max := range maximum {
fmt.Printf(" Process %d: %v\n", i, max)
}
fmt.Println()
ba := NewBankersAlgorithm(available, maximum)
// Define request sequences for each process
requests := [][][]int{
// Process 0
{
{0, 1, 0},
{2, 0, 0},
{3, 0, 2},
},
// Process 1
{
{2, 0, 0},
{1, 1, 1},
},
// Process 2
{
{3, 0, 2},
{4, 0, 0},
},
// Process 3
{
{2, 1, 1},
},
// Process 4
{
{0, 0, 2},
{2, 1, 0},
},
}
var wg sync.WaitGroup
// Start all processes
for i := 0; i < len(maximum); i++ {
wg.Add(1)
go func(processID int) {
defer wg.Done()
ba.Process(processID, requests[processID])
}(i)
}
wg.Wait()
ba.Stats()
fmt.Println("\n✓ All processes completed without deadlock!")
}
How It Works
1. Data Structures
available []int // Available resources per type
maximum [][]int // Maximum demand of each process
allocation [][]int // Currently allocated to each process
need [][]int // Remaining need = maximum - allocation
Each process declares its maximum upfront. The algorithm tracks what’s available, allocated, and still needed.
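The invariant that ties these matrices together is need = maximum - allocation, element by element. A small hypothetical helper (not part of the listing above) makes the relationship explicit:
// computeNeed derives the need matrix from maximum and allocation.
func computeNeed(maximum, allocation [][]int) [][]int {
	need := make([][]int, len(maximum))
	for i := range maximum {
		need[i] = make([]int, len(maximum[i]))
		for j := range maximum[i] {
			need[i][j] = maximum[i][j] - allocation[i][j]
		}
	}
	return need
}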
2. Request Handling
func (ba *BankersAlgorithm) RequestResources(processID int, request []int) bool {
// 1. Validate request <= need
// 2. Check if request <= available
// 3. Pretend to allocate (trial)
// 4. Run safety algorithm
// 5. If safe → keep allocation, else → rollback
}
The key: never grant a request that would lead to an unsafe state.
3. Safety Algorithm
// Safety algorithm, in pseudocode:
func (ba *BankersAlgorithm) isSafe() bool {
    work   := copy of available
    finish := [false, false, ...]      // one flag per process
    repeat:
        find a process i where:
            - finish[i] == false
            - need[i] <= work          // element-wise
        if found:
            work += allocation[i]      // i completes and returns what it holds
            finish[i] = true
            goto repeat
    return all(finish)                 // safe iff every process can finish
}
Tries to find a sequence where each process can complete.
4. The Safety Check Visualization
This is the classic textbook state; it is also exactly where the listing above lands once each process's first request has been granted (and before any second requests):
Initial: Available = [3, 3, 2]
Process 0: Need [7,4,3], Allocated [0,1,0]
Process 1: Need [1,2,2], Allocated [2,0,0] ← Can finish! (need <= available)
Process 2: Need [6,0,0], Allocated [3,0,2]
Process 3: Need [0,1,1], Allocated [2,1,1]
Process 4: Need [4,3,1], Allocated [0,0,2]
Simulate P1 finishing: Available becomes [3+2, 3+0, 2+0] = [5,3,2]
Now P3 can finish: Available becomes [5+2, 3+1, 2+1] = [7,4,3]
Now P4 can finish: Available becomes [7+0, 4+0, 3+2] = [7,4,5]
Now P2 can finish: Available becomes [7+3, 4+0, 5+2] = [10,4,7]
Now P0 can finish: Available becomes [10+0, 4+1, 7+0] = [10,5,7]
Safe sequence: [1, 3, 4, 2, 0] ✅
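To run the same arithmetic through the code, the sketch below (a hypothetical RunTextbookExample helper, assuming it is added to the same package as the main listing and called from main) recreates this allocation table and then replays two classic follow-up requests: P1 asking for [1, 0, 2] is granted, while P0 asking for [0, 2, 0] is denied because the resulting state would be unsafe.
// RunTextbookExample replays the state from the visualization above.
func RunTextbookExample() {
	maximum := [][]int{
		{7, 5, 3}, {3, 2, 2}, {9, 0, 2}, {2, 2, 2}, {4, 3, 3},
	}
	ba := NewBankersAlgorithm([]int{10, 5, 7}, maximum)
	// Each process's first request recreates the allocation table;
	// Available ends up at [3 3 2].
	initial := [][]int{
		{0, 1, 0}, {2, 0, 0}, {3, 0, 2}, {2, 1, 1}, {0, 0, 2},
	}
	for pid, alloc := range initial {
		ba.RequestResources(pid, alloc)
	}
	ba.RequestResources(1, []int{1, 0, 2}) // granted: a safe sequence still exists
	ba.RequestResources(0, []int{0, 2, 0}) // denied: the resulting state is unsafe
}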
Advanced: Dynamic Process Addition
package main
import (
"fmt"
"sync"
"sync/atomic"
)
type DynamicBanker struct {
available []int
processes map[int]*ProcessInfo
nextID atomic.Int32
mu sync.Mutex
numResources int
}
type ProcessInfo struct {
id int
maximum []int
allocation []int
need []int
}
func NewDynamicBanker(available []int) *DynamicBanker {
return &DynamicBanker{
available: available,
processes: make(map[int]*ProcessInfo),
numResources: len(available),
}
}
// RegisterProcess adds a new process dynamically
func (db *DynamicBanker) RegisterProcess(maximum []int) int {
db.mu.Lock()
defer db.mu.Unlock()
id := int(db.nextID.Add(1))
process := &ProcessInfo{
id: id,
maximum: make([]int, db.numResources),
allocation: make([]int, db.numResources),
need: make([]int, db.numResources),
}
copy(process.maximum, maximum)
copy(process.need, maximum)
db.processes[id] = process
fmt.Printf("[Dynamic] 📝 Registered process %d with max demand %v\n", id, maximum)
return id
}
// UnregisterProcess removes a completed process
func (db *DynamicBanker) UnregisterProcess(id int) {
db.mu.Lock()
defer db.mu.Unlock()
process, exists := db.processes[id]
if !exists {
return
}
// Return any allocated resources
for i := 0; i < db.numResources; i++ {
db.available[i] += process.allocation[i]
}
delete(db.processes, id)
fmt.Printf("[Dynamic] 🗑️ Unregistered process %d, returned resources %v\n",
id, process.allocation)
}
func (db *DynamicBanker) RequestResources(id int, request []int) bool {
db.mu.Lock()
defer db.mu.Unlock()
process := db.processes[id]
if process == nil {
return false
}
// Validation and safety check (same as before)
for i := 0; i < db.numResources; i++ {
if request[i] > process.need[i] || request[i] > db.available[i] {
return false
}
}
// Trial allocation
for i := 0; i < db.numResources; i++ {
db.available[i] -= request[i]
process.allocation[i] += request[i]
process.need[i] -= request[i]
}
// Safety check
if db.isSafeLocked() {
fmt.Printf("[Process %d] ✅ Request %v granted\n", id, request)
return true
}
// Rollback
for i := 0; i < db.numResources; i++ {
db.available[i] += request[i]
process.allocation[i] -= request[i]
process.need[i] += request[i]
}
return false
}
func (db *DynamicBanker) isSafeLocked() bool {
work := make([]int, db.numResources)
copy(work, db.available)
finished := make(map[int]bool)
for {
found := false
for id, process := range db.processes {
if finished[id] {
continue
}
canFinish := true
for i := 0; i < db.numResources; i++ {
if process.need[i] > work[i] {
canFinish = false
break
}
}
if canFinish {
for i := 0; i < db.numResources; i++ {
work[i] += process.allocation[i]
}
finished[id] = true
found = true
}
}
if !found {
break
}
}
return len(finished) == len(db.processes)
}
func RunDynamicExample() {
fmt.Println("\n=== Dynamic Process Registration ===\n")
banker := NewDynamicBanker([]int{15, 10, 10})
// Processes join over time
p1 := banker.RegisterProcess([]int{7, 5, 3})
p2 := banker.RegisterProcess([]int{3, 2, 2})
banker.RequestResources(p1, []int{2, 1, 1})
banker.RequestResources(p2, []int{1, 1, 1})
// New process joins mid-execution
p3 := banker.RegisterProcess([]int{9, 4, 4})
banker.RequestResources(p3, []int{3, 1, 1})
banker.RequestResources(p1, []int{2, 2, 1})
// Process completes and leaves
banker.UnregisterProcess(p2)
fmt.Println("\n✓ Dynamic example complete!")
}
func main() {
RunDynamicExample()
}
Real-World Example: Cloud Resource Allocator
package main
import (
"fmt"
"sync"
"time"
)
type ResourceType int
const (
CPU ResourceType = iota
Memory
Storage
)
func (rt ResourceType) String() string {
return []string{"CPU", "Memory", "Storage"}[rt]
}
type CloudAllocator struct {
	banker *BankersAlgorithm // constructed once every VM's maximum demand is known
	totals []int             // cluster capacity, recorded until the banker exists
	vms    map[int]*VM
	mu     sync.Mutex
}
type VM struct {
id int
name string
maxCPU int
maxMem int
maxStore int
}
func NewCloudAllocator(totalCPU, totalMem, totalStore int) *CloudAllocator {
	// The Banker's Algorithm needs every VM's maximum demand upfront, so the
	// banker itself is built after provisioning; until then we only record
	// the cluster capacity.
	return &CloudAllocator{
		totals: []int{totalCPU, totalMem, totalStore},
		vms:    make(map[int]*VM),
	}
}
func (ca *CloudAllocator) ProvisionVM(name string, maxCPU, maxMem, maxStore int) int {
ca.mu.Lock()
defer ca.mu.Unlock()
id := len(ca.vms)
vm := &VM{
id: id,
name: name,
maxCPU: maxCPU,
maxMem: maxMem,
maxStore: maxStore,
}
ca.vms[id] = vm
fmt.Printf("[Cloud] ☁️ Provisioned VM '%s' (max: CPU=%d, Mem=%d, Store=%d)\n",
name, maxCPU, maxMem, maxStore)
return id
}
func (ca *CloudAllocator) AllocateResources(vmID int, cpu, mem, store int) bool {
request := []int{cpu, mem, store}
if ca.banker.RequestResources(vmID, request) {
ca.mu.Lock()
vm := ca.vms[vmID]
ca.mu.Unlock()
fmt.Printf("[VM %s] 🎯 Allocated: CPU=%d, Mem=%d, Store=%d\n",
vm.name, cpu, mem, store)
return true
}
return false
}
func (ca *CloudAllocator) ReleaseResources(vmID int, cpu, mem, store int) {
release := []int{cpu, mem, store}
ca.banker.ReleaseResources(vmID, release)
ca.mu.Lock()
vm := ca.vms[vmID]
ca.mu.Unlock()
fmt.Printf("[VM %s] 💨 Released: CPU=%d, Mem=%d, Store=%d\n",
vm.name, cpu, mem, store)
}
func RunCloudExample() {
fmt.Println("\n=== Cloud Resource Allocation ===\n")
// Total resources
available := []int{20, 32, 100} // 20 CPUs, 32GB RAM, 100GB Storage
maximum := [][]int{
{10, 16, 40}, // VM 0
{5, 8, 20}, // VM 1
{8, 12, 50}, // VM 2
}
banker := NewBankersAlgorithm(available, maximum) // reuses the type from the main listing
fmt.Println("Starting cloud resource allocation simulation...")
fmt.Printf("Total resources: CPU=%d, Memory=%dGB, Storage=%dGB\n\n",
available[0], available[1], available[2])
// Simulate VM resource requests
var wg sync.WaitGroup
vmRequests := [][][]int{
{{3, 4, 10}, {2, 2, 5}}, // VM 0 requests
{{2, 3, 8}, {1, 2, 5}}, // VM 1 requests
{{4, 6, 20}, {2, 3, 10}}, // VM 2 requests
}
for i := 0; i < 3; i++ {
wg.Add(1)
go func(vmID int) {
defer wg.Done()
banker.Process(vmID, vmRequests[vmID])
}(i)
}
wg.Wait()
fmt.Println("\n✓ Cloud allocation complete!")
}
func main() {
RunCloudExample()
}
Key Go Patterns
1. Multi-Dimensional Slices for Resource Tracking
allocation := make([][]int, numProcess)
for i := range allocation {
allocation[i] = make([]int, numResource)
}
Clean way to track per-process, per-resource state.
2. Mutex Protection for Complex State
func (ba *BankersAlgorithm) RequestResources(...) bool {
ba.mu.Lock()
defer ba.mu.Unlock()
// Trial allocation
// Safety check
// Commit or rollback
}
The whole request (validation, trial allocation, safety check, and commit or rollback) runs under a single lock, so concurrent requests cannot race with each other.
3. Condition Variables for Waiting
ba.condition = sync.NewCond(&ba.mu)
// In request handler
for !resourcesAvailable() {
ba.condition.Wait()
}
// In release handler
ba.condition.Broadcast() // Wake all waiting processes
Note that the main listing creates the condition variable and broadcasts on release, but its callers simply retry with a short sleep rather than blocking in Wait; the snippet above shows the fully blocking form of the pattern.
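A sketch of that blocking form, written against the BankersAlgorithm type from the main listing (the method and helper names here are hypothetical additions, not part of the original code):
// RequestResourcesBlocking waits on the condition variable instead of
// returning false when a request cannot be granted yet.
func (ba *BankersAlgorithm) RequestResourcesBlocking(processID int, request []int) {
	ba.mu.Lock()
	defer ba.mu.Unlock()
	for !ba.tryGrantLocked(processID, request) {
		// Wait releases ba.mu while asleep and re-acquires it on wake-up;
		// ReleaseResources broadcasts, which re-triggers the check.
		// (A request exceeding the declared need would wait forever; a real
		// implementation should reject it outright.)
		ba.condition.Wait()
	}
}

// tryGrantLocked performs the availability check, trial allocation, safety
// check, and rollback on failure. The caller must hold ba.mu.
func (ba *BankersAlgorithm) tryGrantLocked(processID int, request []int) bool {
	for i := 0; i < ba.numResource; i++ {
		if request[i] > ba.need[processID][i] || request[i] > ba.available[i] {
			return false
		}
	}
	ba.allocateResources(processID, request)
	if ba.isSafe() {
		return true
	}
	ba.rollbackAllocation(processID, request)
	return false
}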
4. Trial-and-Rollback Pattern
allocate(request)
if !isSafe() {
rollback(request)
return false
}
return true
Performance Considerations
Time Complexity
- Safety check: O(m × n²) where m = resources, n = processes
- Request: O(m × n²) due to safety check
- Release: O(m) bookkeeping plus a broadcast to waiting processes
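For a sense of scale: with n = 100 processes and m = 10 resource types, a single safety check is on the order of 10 × 100² = 100,000 comparisons, which is why grants should not sit on a latency-critical hot path.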
Space Complexity
- O(m × n) for allocation, maximum, need matrices
- Scales linearly with processes and resource types
Optimization Strategies
- Cache safety sequences - If state unchanged, reuse last sequence (sketched after this list)
- Incremental safety checks - Only check affected processes
- Batch requests - Reduce safety check frequency
- Priority queues - Serve requests most likely to succeed first
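As a rough illustration of the caching strategy, the hypothetical sketch below bumps a version counter on every allocation and release and reuses the previous verdict when nothing has changed (none of these names exist in the listings above):
// cachedSafety memoizes the outcome of the last safety check.
type cachedSafety struct {
	version     uint64 // incremented by the owner on every allocate/release
	lastChecked uint64
	lastSafe    bool
	valid       bool
}

// isSafeCached runs check only if the state has changed since the last call.
func (cs *cachedSafety) isSafeCached(check func() bool) bool {
	if cs.valid && cs.lastChecked == cs.version {
		return cs.lastSafe // state unchanged: reuse the previous answer
	}
	cs.lastSafe = check()
	cs.lastChecked = cs.version
	cs.valid = true
	return cs.lastSafe
}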
Advantages of This Pattern
✓ Deadlock avoidance - Guaranteed never to enter an unsafe state
✓ Good resource utilization - Grants requests whenever the resulting state is safe
✓ Predictable behavior - Well-defined, well-studied algorithm
✓ Flexible - Works with any number of resource types
Limitations
✗ Requires max needs upfront - Not always known
✗ Conservative - May deny safe requests in some cases
✗ Computational overhead - Safety checks are expensive
✗ Fixed process count - Difficult with dynamic processes
When to Use
✓ Use when:
- Maximum resource needs are known upfront
- Deadlock is unacceptable
- System has limited resources
- Process count is manageable (at most a few hundred)
✗ Avoid when:
- Resource needs are unpredictable
- High throughput required (safety checks expensive)
- Processes are very dynamic
- Deadlock detection/recovery is acceptable
Try It Yourself
- Modify resource counts - See how availability affects granting
- Add resource types - Extend to 4+ resource types (see the sketch after this list)
- Implement detection - Compare with deadlock detection
- Measure overhead - Profile safety check performance
- Dynamic processes - Handle processes joining/leaving
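For the resource-type exercise: nothing in BankersAlgorithm assumes three resource types, so a fourth type (say, network bandwidth) only changes the shape of the inputs. The values below are made up for illustration:
available := []int{10, 5, 7, 6} // A, B, C, plus a fourth resource type
maximum := [][]int{
	{7, 5, 3, 2},
	{3, 2, 2, 4},
	{9, 0, 2, 1},
}
ba := NewBankersAlgorithm(available, maximum)
ba.RequestResources(0, []int{1, 0, 1, 1})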
Further Reading
- Dijkstra's original write-up of the Banker's Algorithm (EWD-108, "Een algorithme ter voorkoming van de dodelijke omarming")
- Operating System Concepts (Silberschatz, Galvin, and Gagne)
- Resource allocation in distributed systems
- Cloud resource management strategies
This is part 22 of “Golang Experiments: Classic Concurrency Problems”