Go to Rust Series: ← Ownership and Borrowing | Series Overview | Lifetimes Explained →


The Fundamental Trade-Off

Go: Automatic memory management via garbage collection
Rust: Compile-time memory management via ownership

Both approaches have profound implications for performance, predictability, and developer experience.

How Go’s GC Works

Go:

func allocateData() {
    for i := 0; i < 1000000; i++ {
        data := make([]byte, 1024)
        _ = data // Use data... (blank assignment keeps the compiler happy)
        // No explicit free needed
    }
}

Go’s GC:

  1. Tracks all allocations
  2. Runs mostly concurrently, with brief stop-the-world (STW) pauses
  3. Scans for reachable objects
  4. Frees unreachable memory

GC pause times: Since Go 1.8, stop-the-world pauses are typically well under a millisecond, though heavy allocation pressure can push them higher.

How Rust Manages Memory

Rust:

fn allocate_data() {
    for _ in 0..1_000_000 {
        let data = vec![0u8; 1024];
        // Use data...
    }  // data is dropped here automatically
}

Rust uses RAII (Resource Acquisition Is Initialization):

  1. When a value goes out of scope, its Drop implementation is called
  2. Memory is freed deterministically
  3. No runtime overhead
  4. No GC pauses

Pause times: Zero. Memory is freed precisely when scope ends.
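The deterministic cleanup can be observed directly with a custom Drop implementation; a minimal sketch (the Resource type and its messages are invented for illustration):

```rust
struct Resource {
    name: &'static str,
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Runs deterministically when the value goes out of scope,
        // in reverse declaration order within a scope
        println!("dropping {}", self.name);
    }
}

fn main() {
    let _outer = Resource { name: "outer" };
    {
        let _inner = Resource { name: "inner" };
    } // "dropping inner" prints here, exactly at the brace
    println!("inner scope closed");
} // "dropping outer" prints last
```

There is no collector deciding when this runs; the drop point is fixed at compile time.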

Deterministic Destruction

Go: Non-Deterministic Cleanup

Go:

type File struct {
    handle *os.File
}

func (f *File) Close() error {
    return f.handle.Close()
}

func processFile() error {
    handle, err := os.Open("data.txt")
    if err != nil {
        return err
    }
    f := &File{handle: handle}
    defer f.Close()  // Must remember defer!

    // Use file...
    return nil
}

If you forget defer, the file stays open until GC runs finalizers (unreliable timing).

Rust: Automatic Cleanup

Rust:

use std::fs::File;

fn process_file() -> std::io::Result<()> {
    let file = File::open("data.txt")?;
    // Use file...

    // File is AUTOMATICALLY closed when it goes out of scope
    Ok(())
}

No need to remember cleanup. The standard library implements Drop for File, which closes the handle when the value goes out of scope. Conceptually:

// Illustrative: this impl lives inside the standard library
impl Drop for File {
    fn drop(&mut self) {
        // Close the underlying OS handle
    }
}

Performance Comparison

Memory Allocation Speed

Go:

func benchmark() {
    for i := 0; i < 1_000_000; i++ {
        s := make([]int, 100)
        _ = s
    }
}

Allocation is fast, but GC cost accumulates.

Rust:

fn benchmark() {
    for _ in 0..1_000_000 {
        let s = vec![0; 100];
        // Dropped immediately
    }
}

Similar allocation speed, but deallocation happens immediately at the end of each iteration, with no deferred GC work.

GC Pause Impact

Go web server under load:

Request latency:
p50:  10ms
p99:  45ms
p99.9: 150ms  <- GC pause spike

Latency has unpredictable spikes from GC.

Rust web server under load:

Request latency:
p50:  8ms
p99:  12ms
p99.9: 15ms

Consistent latency. No GC pauses.

Memory Overhead

Go: GC Metadata

Go:

type Node struct {
    value int
    next  *Node
}

// Each node:
// - 8 bytes for value (int is 64-bit)
// - 8 bytes for pointer
// The struct itself is 16 bytes, but the GC also keeps
// per-span metadata (mark bits, pointer bitmaps), and
// allocations round up to the nearest size class.

GC requires metadata for tracking.

Rust: No Overhead

Rust:

struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

// Each node:
// - 4 bytes for value
// - 8 bytes for Option<Box<Node>> (None is encoded as the
//   null pointer, so there is no separate discriminant)
// - NO GC metadata
// Total: 16 bytes per node (including alignment padding)

No runtime metadata needed.
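These sizes can be checked rather than taken on faith; a sketch for a 64-bit target using std::mem::size_of (Option<Box<T>> encodes None as the null pointer, so it is exactly pointer-sized):

```rust
use std::mem::size_of;

struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

fn main() {
    // Niche optimization: no separate discriminant byte for the Option
    assert_eq!(size_of::<Option<Box<Node>>>(), size_of::<*const Node>());

    // 4-byte value + 4 bytes alignment padding + 8-byte pointer
    assert_eq!(size_of::<Node>(), 16);
    println!("Node is {} bytes", size_of::<Node>());
}
```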

Real-World Example: Web Server

Go Web Server

Go:

package main

import (
    "fmt"
    "net/http"
    "runtime"
)

func handler(w http.ResponseWriter, r *http.Request) {
    data := make([]byte, 1024*1024)  // 1MB allocation
    _ = data                         // Process data...
    fmt.Fprintf(w, "OK")
}

func main() {
    runtime.GOMAXPROCS(4)
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

Under load:

  • Each request allocates 1MB
  • GC kicks in periodically
  • Causes latency spikes
  • Memory usage fluctuates

Rust Web Server

Rust (Actix):

use actix_web::{web, App, HttpServer, Responder};

async fn handler() -> impl Responder {
    let data = vec![0u8; 1024 * 1024];  // 1MB allocation
    // Process data...
    "OK"  // data is dropped here
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().route("/", web::get().to(handler))
    })
    .workers(4)
    .bind("127.0.0.1:8080")?
    .run()
    .await
}

Under load:

  • Each request allocates 1MB
  • Freed immediately when function returns
  • No GC pauses
  • Consistent latency

Reference Counting: When You Need Shared Ownership

Go: GC Handles Everything

Go:

type Data struct {
    value string
}

func main() {
    data := &Data{value: "shared"}

    // Multiple goroutines can reference data
    var wg sync.WaitGroup
    wg.Add(2)
    go func() { defer wg.Done(); fmt.Println(data.value) }()
    go func() { defer wg.Done(); fmt.Println(data.value) }()
    wg.Wait() // without this, main may exit before the goroutines run

    // GC will clean up when no references remain
}

GC tracks all references automatically.

Rust: Explicit Reference Counting

Rust:

use std::sync::Arc;
use std::thread;

struct Data {
    value: String,
}

fn main() {
    let data = Arc::new(Data {
        value: "shared".to_string(),
    });

    let data1 = Arc::clone(&data);
    let data2 = Arc::clone(&data);

    let t1 = thread::spawn(move || println!("{}", data1.value));
    let t2 = thread::spawn(move || println!("{}", data2.value));
    t1.join().unwrap();
    t2.join().unwrap();

    // data is dropped when the last Arc is dropped
}

Arc (Atomically Reference Counted) adds a small runtime cost:

  • Increment on clone
  • Decrement on drop
  • Free when count hits zero

Trade-off: Explicit but predictable.
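The bookkeeping above is observable through Arc::strong_count; a minimal sketch:

```rust
use std::sync::Arc;

fn main() {
    let data = Arc::new(String::from("shared"));
    assert_eq!(Arc::strong_count(&data), 1);

    let alias = Arc::clone(&data); // increment on clone
    assert_eq!(Arc::strong_count(&data), 2);

    drop(alias); // decrement on drop
    assert_eq!(Arc::strong_count(&data), 1);
} // count hits zero here and the String is freed
```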

Stack vs Heap Allocation

Go: Escape Analysis

Go:

func local() int {
    x := 42  // Allocated on stack
    return x
}

func escape() *int {
    x := 42  // Escapes to heap (GC-managed)
    return &x
}

Go’s compiler does escape analysis, but the stack-vs-heap decision is implicit; you can inspect it with go build -gcflags="-m".

Rust: Explicit Control

Rust:

fn local() -> i32 {
    let x = 42;  // Stack allocation
    x
}

fn heap() -> Box<i32> {
    let x = Box::new(42);  // Explicit heap allocation
    x
}

Box::new() is explicit heap allocation. No surprises.
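A tiny sketch of the contrast: the boxed value lives on the heap but dereferences transparently, and its allocation is released deterministically at end of scope:

```rust
fn main() {
    let on_stack: i32 = 42;               // lives in this stack frame
    let on_heap: Box<i32> = Box::new(42); // explicit heap allocation

    // Box derefs transparently to its contents
    assert_eq!(on_stack, *on_heap);
} // the heap allocation behind on_heap is freed here
```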

Custom Allocators

Go: Limited Control

Go’s allocator is hardcoded. You can’t easily swap it.

Rust: Full Control

Rust:

use std::alloc::{GlobalAlloc, Layout, System};

struct MyAllocator;

unsafe impl GlobalAlloc for MyAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Careful: printing can itself allocate; fine for a demo,
        // risky in a production allocator
        println!("Allocating {} bytes", layout.size());
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        println!("Deallocating {} bytes", layout.size());
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: MyAllocator = MyAllocator;

fn main() {
    let _v = vec![1, 2, 3];  // Prints allocation message
}

You can implement custom allocators for specific use cases (embedded, game engines, etc.).
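One caveat with the logging allocator above: println! can itself allocate, which risks re-entering alloc. For real instrumentation a common pattern is a counting allocator that touches only atomics; a sketch (the CountingAllocator name is invented):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Atomic bookkeeping only: no allocation, no recursion risk
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

fn main() {
    let before = ALLOCATED.load(Ordering::Relaxed);
    let v = vec![0u8; 1024];
    assert!(ALLOCATED.load(Ordering::Relaxed) >= before + 1024);
    drop(v);
    println!("live heap bytes: {}", ALLOCATED.load(Ordering::Relaxed));
}
```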

Memory Leaks

Go: Leaks Still Possible

Go (leak):

var cache = make(map[string][]byte)

func addToCache(key string) {
    cache[key] = make([]byte, 1024*1024)
    // Never removed - memory leak!
}

GC can’t free what’s still reachable.

Rust: Leaks Prevented (Mostly)

Rust:

use std::collections::HashMap;

fn add_to_cache(cache: &mut HashMap<String, Vec<u8>>, key: String) {
    cache.insert(key, vec![0; 1024*1024]);
    // Still a leak if never removed, but ownership makes it clearer
}

Rust can still leak memory (safely) via Rc cycles, Box::leak, or an ever-growing collection, but:

  • Ownership makes long-lived references explicit
  • A leak always has an identifiable source: a cycle, a leak call, or a collection that only grows
  • Reference cycles can only arise through Rc/Arc, which you opt into

When Go’s GC is Better

Scenarios where GC shines:

  1. Rapid prototyping: Don’t think about memory
  2. Complex object graphs: GC handles cycles automatically
  3. Simplicity: Less mental overhead

Go:

type Node struct {
    children []*Node
}

func buildTree() *Node {
    root := &Node{}
    child1 := &Node{}
    child2 := &Node{}

    root.children = []*Node{child1, child2}
    child1.children = []*Node{root}  // Cycle!

    // GC handles it
    return root
}

GC detects and cleans cycles.

Rust:

use std::rc::Rc;
use std::cell::RefCell;

struct Node {
    children: Vec<Rc<RefCell<Node>>>,
}

fn build_tree() -> Rc<RefCell<Node>> {
    let root = Rc::new(RefCell::new(Node { children: vec![] }));
    let child1 = Rc::new(RefCell::new(Node { children: vec![] }));

    root.borrow_mut().children.push(Rc::clone(&child1));
    child1.borrow_mut().children.push(Rc::clone(&root));  // Cycle!

    // Potential memory leak! Rc doesn't handle cycles
    root
}

Rust requires Weak references to break cycles (more complex).
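A sketch of that pattern: give the child a Weak back-edge to its parent (the parent field here is added for illustration), so the strong counts never form a cycle:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    children: Vec<Rc<RefCell<Node>>>,
    parent: Weak<RefCell<Node>>, // weak: does not keep the parent alive
}

fn main() {
    let root = Rc::new(RefCell::new(Node {
        children: vec![],
        parent: Weak::new(),
    }));
    let child = Rc::new(RefCell::new(Node {
        children: vec![],
        parent: Rc::downgrade(&root), // back-edge without a strong count
    }));
    root.borrow_mut().children.push(Rc::clone(&child));

    assert_eq!(Rc::strong_count(&root), 1);  // the weak edge doesn't count
    assert_eq!(Rc::strong_count(&child), 2); // local binding + root's Vec

    // The child can still reach its parent while it is alive
    assert!(child.borrow().parent.upgrade().is_some());
} // everything is freed here: no cycle of strong references
```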

When Rust’s No-GC is Better

Scenarios where no-GC shines:

  1. Predictable latency: No pause spikes
  2. Embedded systems: No runtime overhead
  3. Game engines: Frame-time consistency
  4. High-frequency trading: Microsecond-level consistency

Performance Benchmark: Simple Allocation Test

Go:

func benchmark() {
    start := time.Now()
    for i := 0; i < 10_000_000; i++ {
        s := make([]int, 100)
        _ = s
    }
    fmt.Println(time.Since(start))  // ~800ms (including GC)
}

Rust:

use std::hint::black_box;
use std::time::Instant;

fn benchmark() {
    let start = Instant::now();
    for _ in 0..10_000_000 {
        let s = vec![0; 100];
        black_box(s); // keep the optimizer from eliding the allocation
    }
    println!("{:?}", start.elapsed());  // ~400ms (no GC)
}

Rust comes out roughly 2x faster on this allocation-heavy microbenchmark, though exact numbers vary by machine and compiler flags.

Conclusion

Go’s GC:

  • Pros: Simple, handles cycles, rapid development
  • Cons: Pause spikes under load, per-object memory overhead, can only be tuned (GOGC, GOMEMLIMIT), not removed

Rust’s No-GC:

  • Pros: Predictable, zero overhead, full control
  • Cons: Steeper learning curve, manual cycle handling

The verdict:

  • Use Go when developer productivity and simplicity matter most
  • Use Rust when predictable latency and zero overhead matter most

Next: Understanding lifetimes—the annotations Go developers never needed.


Go to Rust Series: ← Ownership and Borrowing | Series Overview | Lifetimes Explained →

