What are Goroutines?
Goroutines are lightweight threads managed by the Go runtime. They’re one of Go’s most powerful features, allowing you to write concurrent programs that can handle thousands of simultaneous operations with minimal overhead. Think of goroutines as extremely efficient workers that can run independently while sharing the same memory space.
Unlike traditional threads that typically consume 1-2MB of memory each, goroutines start with just 2KB of stack space and grow as needed. This efficiency allows Go programs to spawn millions of goroutines without overwhelming system resources.
The Problem: Expensive Concurrency
Let’s start with a scenario that demonstrates why goroutines are revolutionary. Imagine you’re building a web scraper that needs to fetch data from 10,000 URLs. Using traditional threading approaches:
// Traditional approach - DON'T DO THIS
func traditionalApproach() {
	urls := generateURLs(10000)
	for _, url := range urls {
		// This blocks until each request completes
		response := fetchURL(url)
		processResponse(response)
	}
	// This would take hours to complete!
}
This sequential approach would be painfully slow. Even with traditional threads, creating 10,000 OS threads would likely crash your system due to memory constraints.
Enter Goroutines: Lightweight Concurrency
Here’s how goroutines solve this problem elegantly:
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	urls := []string{
		"https://httpbin.org/delay/1",
		"https://httpbin.org/delay/2",
		"https://httpbin.org/delay/1",
		"https://httpbin.org/delay/3",
		"https://httpbin.org/delay/1",
	}

	start := time.Now()

	// Without goroutines - sequential execution
	fmt.Println("Sequential execution:")
	for _, url := range urls {
		fetchURL(url)
	}
	fmt.Printf("Sequential time: %v\n\n", time.Since(start))

	// With goroutines - concurrent execution
	start = time.Now()
	fmt.Println("Concurrent execution:")
	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			fetchURL(u)
		}(url)
	}
	wg.Wait()
	fmt.Printf("Concurrent time: %v\n", time.Since(start))
}

func fetchURL(url string) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		fmt.Printf("Error fetching %s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("Fetched %s in %v\n", url, time.Since(start))
}
Output:
Sequential execution:
Fetched https://httpbin.org/delay/1 in 1.2s
Fetched https://httpbin.org/delay/2 in 2.1s
Fetched https://httpbin.org/delay/1 in 1.1s
Fetched https://httpbin.org/delay/3 in 3.2s
Fetched https://httpbin.org/delay/1 in 1.1s
Sequential time: 8.7s
Concurrent execution:
Fetched https://httpbin.org/delay/1 in 1.1s
Fetched https://httpbin.org/delay/1 in 1.2s
Fetched https://httpbin.org/delay/1 in 1.3s
Fetched https://httpbin.org/delay/2 in 2.1s
Fetched https://httpbin.org/delay/3 in 3.2s
Concurrent time: 3.2s
The concurrent version completes in roughly the time of the slowest request, rather than the sum of all requests!
Goroutine Creation Patterns
1. Basic Goroutine Launch
func basicGoroutine() {
	// Launch a goroutine with the 'go' keyword
	go func() {
		fmt.Println("Hello from goroutine!")
	}()

	// Launch named function as goroutine
	go printMessage("Hello World")

	// Give goroutines time to execute
	// (fine for a demo; real code should coordinate with a WaitGroup)
	time.Sleep(100 * time.Millisecond)
}

func printMessage(msg string) {
	fmt.Println(msg)
}
2. Goroutines with Parameters
func goroutineWithParams() {
	names := []string{"Alice", "Bob", "Charlie"}
	var wg sync.WaitGroup
	for _, name := range names {
		wg.Add(1)
		// Pass the loop variable as an argument
		go func(n string) {
			defer wg.Done()
			fmt.Printf("Processing %s\n", n)
			time.Sleep(time.Duration(len(n)) * 100 * time.Millisecond)
			fmt.Printf("Finished processing %s\n", n)
		}(name)
	}
	wg.Wait()
	fmt.Println("All processing complete")
}
3. Common Pitfall: Variable Capture
func variableCapturePitfall() {
	fmt.Println("❌ WRONG WAY - Variable capture issue:")
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Before Go 1.22, this closure captured the single shared
			// loop variable, so all goroutines could print the same
			// final value. Since Go 1.22, each iteration gets a fresh
			// i, but relying on that makes the code version-dependent.
			fmt.Printf("Wrong: %d\n", i)
		}()
	}
	wg.Wait()

	fmt.Println("\n✅ CORRECT WAY - Pass as parameter:")
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(val int) {
			defer wg.Done()
			fmt.Printf("Correct: %d\n", val)
		}(i)
	}
	wg.Wait()
}
Real-World Example: Concurrent File Processor
Let’s build a practical example that processes multiple files concurrently:
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"sync"
	"time"
)

type FileResult struct {
	Filename  string
	LineCount int
	WordCount int
	Duration  time.Duration
	Error     error
}

type FileProcessor struct {
	maxWorkers int
	results    chan FileResult
	wg         sync.WaitGroup
}

func NewFileProcessor(maxWorkers int) *FileProcessor {
	return &FileProcessor{
		maxWorkers: maxWorkers,
		results:    make(chan FileResult, maxWorkers),
	}
}

func (fp *FileProcessor) ProcessFiles(filenames []string) []FileResult {
	// Start result collector
	var results []FileResult
	done := make(chan bool)
	go func() {
		for result := range fp.results {
			results = append(results, result)
		}
		done <- true
	}()

	// Process files concurrently
	for _, filename := range filenames {
		fp.wg.Add(1)
		go fp.processFile(filename)
	}

	// Wait for all processing to complete
	fp.wg.Wait()
	close(fp.results)

	// Wait for result collection to finish
	<-done
	return results
}

func (fp *FileProcessor) processFile(filename string) {
	defer fp.wg.Done()
	start := time.Now()
	result := FileResult{Filename: filename}

	file, err := os.Open(filename)
	if err != nil {
		result.Error = err
		result.Duration = time.Since(start)
		fp.results <- result
		return
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	lineCount := 0
	wordCount := 0
	for scanner.Scan() {
		lineCount++
		words := strings.Fields(scanner.Text())
		wordCount += len(words)
	}

	if err := scanner.Err(); err != nil {
		result.Error = err
	} else {
		result.LineCount = lineCount
		result.WordCount = wordCount
	}
	result.Duration = time.Since(start)
	fp.results <- result
}

func main() {
	// Create some sample files for demonstration
	createSampleFiles()
	files := []string{
		"sample1.txt",
		"sample2.txt",
		"sample3.txt",
		"sample4.txt",
	}

	fmt.Println("Processing files concurrently...")
	start := time.Now()

	processor := NewFileProcessor(4)
	results := processor.ProcessFiles(files)
	totalDuration := time.Since(start)

	// Display results
	fmt.Printf("\nResults (completed in %v):\n", totalDuration)
	fmt.Println(strings.Repeat("-", 60))
	totalLines := 0
	totalWords := 0
	for _, result := range results {
		if result.Error != nil {
			fmt.Printf("❌ %s: Error - %v\n", result.Filename, result.Error)
		} else {
			fmt.Printf("✅ %s: %d lines, %d words (processed in %v)\n",
				result.Filename, result.LineCount, result.WordCount, result.Duration)
			totalLines += result.LineCount
			totalWords += result.WordCount
		}
	}
	fmt.Println(strings.Repeat("-", 60))
	fmt.Printf("Total: %d lines, %d words across %d files\n",
		totalLines, totalWords, len(results))

	// Cleanup
	cleanupSampleFiles(files)
}

func createSampleFiles() {
	samples := map[string]string{
		"sample1.txt": "Hello World\nThis is a sample file\nWith multiple lines\n",
		"sample2.txt": "Another file\nWith different content\nFor testing purposes\nConcurrent processing\n",
		"sample3.txt": "Short file\nJust two lines\n",
		"sample4.txt": "The longest file\nWith many lines\nTo demonstrate\nConcurrent file processing\nUsing goroutines\nIn Go programming language\n",
	}
	for filename, content := range samples {
		os.WriteFile(filename, []byte(content), 0644)
	}
}

func cleanupSampleFiles(files []string) {
	for _, file := range files {
		os.Remove(file)
	}
}
Goroutine Lifecycle Management
1. Preventing Goroutine Leaks
func preventGoroutineLeaks() {
	// ❌ BAD: Goroutine leak
	badExample := func() {
		go func() {
			// This goroutine runs forever!
			for {
				time.Sleep(1 * time.Second)
				fmt.Println("Still running...")
			}
		}()
	}
	_ = badExample // deliberately never called: invoking it would leak

	// ✅ GOOD: Controlled goroutine with cancellation
	goodExample := func() {
		done := make(chan bool)
		go func() {
			ticker := time.NewTicker(1 * time.Second)
			defer ticker.Stop()
			for {
				select {
				case <-ticker.C:
					fmt.Println("Working...")
				case <-done:
					fmt.Println("Shutting down gracefully")
					return
				}
			}
		}()

		// Simulate some work
		time.Sleep(3 * time.Second)

		// Signal shutdown
		close(done)
		time.Sleep(100 * time.Millisecond) // Give time to shut down
	}

	fmt.Println("Running good example:")
	goodExample()
}
2. Graceful Shutdown Pattern
type Server struct {
	shutdown chan bool
	wg       sync.WaitGroup
}

func NewServer() *Server {
	return &Server{
		shutdown: make(chan bool),
	}
}

func (s *Server) Start() {
	// Start multiple worker goroutines
	for i := 0; i < 3; i++ {
		s.wg.Add(1)
		go s.worker(i)
	}
	fmt.Println("Server started with 3 workers")
}

func (s *Server) worker(id int) {
	defer s.wg.Done()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			fmt.Printf("Worker %d processing...\n", id)
		case <-s.shutdown:
			fmt.Printf("Worker %d shutting down\n", id)
			return
		}
	}
}

func (s *Server) Stop() {
	fmt.Println("Initiating graceful shutdown...")
	close(s.shutdown)
	s.wg.Wait()
	fmt.Println("All workers stopped")
}

func demonstrateGracefulShutdown() {
	server := NewServer()
	server.Start()

	// Let it run for a while
	time.Sleep(2 * time.Second)

	// Graceful shutdown
	server.Stop()
}
Performance Considerations
Goroutine Overhead
func benchmarkGoroutineOverhead() {
	// Measure goroutine creation overhead
	start := time.Now()
	var wg sync.WaitGroup
	numGoroutines := 100000
	for i := 0; i < numGoroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Minimal work
		}()
	}
	wg.Wait()
	duration := time.Since(start)
	fmt.Printf("Created and executed %d goroutines in %v\n", numGoroutines, duration)
	fmt.Printf("Average time per goroutine: %v\n", duration/time.Duration(numGoroutines))
}
Memory Usage Comparison
func compareMemoryUsage() {
	var m1, m2 runtime.MemStats

	// Measure baseline memory
	runtime.GC()
	runtime.ReadMemStats(&m1)

	// Create many goroutines
	var wg sync.WaitGroup
	numGoroutines := 10000
	for i := 0; i < numGoroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(10 * time.Millisecond)
		}()
	}

	// Measure memory while the goroutines are still alive
	runtime.ReadMemStats(&m2)
	wg.Wait()

	// Note: MemStats tracks heap allocations only; goroutine stacks
	// (the dominant per-goroutine cost) are not included, so treat
	// this number as a rough lower bound.
	var memoryPerGoroutine uint64
	if m2.Alloc > m1.Alloc { // guard against underflow if GC ran in between
		memoryPerGoroutine = (m2.Alloc - m1.Alloc) / uint64(numGoroutines)
	}
	fmt.Printf("Approximate memory per goroutine: %d bytes\n", memoryPerGoroutine)
}
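Since heap statistics miss goroutine stacks, a complementary (and simpler) measurement is runtime.NumGoroutine, which also doubles as a cheap leak detector in tests. A sketch, with spawnBlocked as an illustrative helper name:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// spawnBlocked starts n goroutines that park until release is closed,
// returning the goroutine counts observed before and during.
func spawnBlocked(n int) (before, during int, release chan struct{}, wg *sync.WaitGroup) {
	before = runtime.NumGoroutine()
	release = make(chan struct{})
	wg = &sync.WaitGroup{}
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-release // parked here until released
		}()
	}
	during = runtime.NumGoroutine()
	return
}

func main() {
	before, during, release, wg := spawnBlocked(100)
	fmt.Printf("goroutines before: %d, during: %d\n", before, during)

	close(release)
	wg.Wait()
	time.Sleep(10 * time.Millisecond) // let exited goroutines unwind
	fmt.Printf("goroutines after: %d\n", runtime.NumGoroutine())
}
```

If "after" stays well above "before" at the end of a test, something is leaking.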
Best Practices
1. Always Handle Goroutine Completion
// ✅ Use WaitGroup for coordination
func goodCoordination() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Printf("Task %d completed\n", id)
		}(i)
	}
	wg.Wait() // Wait for all goroutines to complete
}
2. Limit Goroutine Creation
// ✅ Use worker pools for bounded concurrency
func boundedConcurrency() {
	maxWorkers := 10
	jobs := make(chan int, 100)

	// Start fixed number of workers
	var wg sync.WaitGroup
	for i := 0; i < maxWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobs {
				processJob(job)
			}
		}()
	}

	// Send jobs
	for i := 0; i < 50; i++ {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
}

func processJob(id int) {
	fmt.Printf("Processing job %d\n", id)
	time.Sleep(100 * time.Millisecond)
}
3. Proper Error Handling
type Result struct {
	Value int
	Error error
}

func goroutineWithErrorHandling() {
	results := make(chan Result, 5)
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Simulate work that might fail
			if id%2 == 0 {
				results <- Result{Value: id * 2, Error: nil}
			} else {
				results <- Result{Value: 0, Error: fmt.Errorf("failed to process %d", id)}
			}
		}(i)
	}

	// Close results channel when all goroutines complete
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for result := range results {
		if result.Error != nil {
			fmt.Printf("Error: %v\n", result.Error)
		} else {
			fmt.Printf("Success: %d\n", result.Value)
		}
	}
}
Common Pitfalls and Solutions
1. Race Conditions
// ❌ Race condition
func raceCondition() {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // Race condition!
		}()
	}
	wg.Wait()
	fmt.Printf("Counter (with race): %d\n", counter) // Unpredictable result
}

// ✅ Thread-safe version
func threadSafe() {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1) // Thread-safe
		}()
	}
	wg.Wait()
	fmt.Printf("Counter (thread-safe): %d\n", counter) // Always 1000
}
2. Goroutine Leaks
// ❌ Potential goroutine leak
func potentialLeak() {
	ch := make(chan int)
	go func() {
		// This goroutine will block forever if no one reads from ch
		ch <- 42
	}()
	// If we don't read from ch, the goroutine leaks
}

// ✅ Leak prevention with timeout
func leakPrevention() {
	ch := make(chan int, 1) // Buffered channel prevents blocking
	go func() {
		ch <- 42
	}()
	select {
	case value := <-ch:
		fmt.Printf("Received: %d\n", value)
	case <-time.After(1 * time.Second):
		fmt.Println("Timeout - no value received")
	}
}
Testing Goroutines
func TestConcurrentFunction(t *testing.T) {
	// Test concurrent execution
	start := time.Now()
	var wg sync.WaitGroup
	results := make(chan int, 5)

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(val int) {
			defer wg.Done()
			time.Sleep(100 * time.Millisecond) // Simulate work
			results <- val * 2
		}(i)
	}

	go func() {
		wg.Wait()
		close(results)
	}()

	var sum int
	for result := range results {
		sum += result
	}
	duration := time.Since(start)

	// Verify results
	expected := 0 + 2 + 4 + 6 + 8 // 0*2 + 1*2 + 2*2 + 3*2 + 4*2
	if sum != expected {
		t.Errorf("Expected sum %d, got %d", expected, sum)
	}

	// Verify concurrent execution (should be much faster than sequential)
	if duration > 200*time.Millisecond {
		t.Errorf("Execution took too long: %v", duration)
	}
}
Conclusion
Goroutines are the foundation of Go’s concurrency model, providing:
- Lightweight concurrency: Minimal memory overhead and fast creation
- Simple syntax: Just add the go keyword before a function call
- Scalability: Handle thousands of concurrent operations efficiently
- Integration: Work seamlessly with channels and other Go concurrency primitives
Key Takeaways
- Always coordinate goroutines using WaitGroups, channels, or context
- Prevent goroutine leaks with proper cleanup and cancellation
- Handle errors appropriately in concurrent contexts
- Limit concurrency when dealing with external resources
- Test concurrent code thoroughly, including race condition detection
When to Use Goroutines
- I/O-bound operations: Network requests, file operations, database queries
- Independent tasks: Operations that can run in parallel
- Event handling: Background processing, monitoring, periodic tasks
- Pipeline processing: Multi-stage data transformation
What’s Next?
Now that you understand goroutines, the next step is learning about channels - Go’s primary mechanism for goroutine communication. Channels allow goroutines to safely share data and coordinate their execution.
In the next post, we’ll explore Channel Fundamentals and learn how to build robust communication patterns between goroutines.
This post is part of the Go Concurrency Patterns series. Each pattern builds upon previous concepts, so I recommend following the suggested learning path for the best experience.