In the 1960s and 1970s, the U.S. government mandated new safety features in cars: seatbelts, padded dashboards, energy-absorbing steering columns, and improved braking systems.

The goal was simple: reduce traffic fatalities.

Economist Sam Peltzman studied what actually happened.

His findings were shocking: while driver deaths stayed roughly the same, pedestrian and cyclist deaths increased.

Why? Because drivers felt safer—so they drove more recklessly.

The Paradox

The safer people feel, the more risks they take.

Safety improvements don’t just make activities safer—they change behavior. People “spend” their newfound safety on riskier actions, partially or completely offsetting the safety gain.

This is the Peltzman Effect, also known as risk compensation or risk homeostasis.

What Is the Peltzman Effect?

The Peltzman Effect is the phenomenon where safety measures lead people to take greater risks, reducing or eliminating the intended safety benefit.

When you feel protected, you compensate by behaving more dangerously.

Classic Examples

1. Mandatory Seatbelt Laws

  • Expected: Fewer driver deaths
  • Reality: Drivers drove faster and more aggressively
  • Result: Driver deaths stayed similar, but pedestrian/cyclist deaths increased

2. Anti-lock Brakes (ABS)

  • Expected: Shorter stopping distances, fewer crashes
  • Reality: Drivers followed more closely and drove faster
  • Result: Crash rates didn’t improve as much as predicted

3. Bicycle Helmets

  • Expected: Fewer head injuries
  • Reality: Cyclists with helmets ride more aggressively
  • Result: Head injury rates didn’t decrease as much as expected

4. Safety Padding in Sports

  • Expected: Fewer injuries in football, hockey, etc.
  • Reality: Players hit harder because they feel invincible
  • Result: Concussions and serious injuries remained high

5. Playground Safety Improvements

  • Expected: Fewer childhood injuries
  • Reality: Kids take bigger risks on “safe” equipment
  • Result: Injury rates didn’t decline proportionally

Why It Happens

The Peltzman Effect occurs because humans maintain a target level of risk.

The Risk Thermostat

Think of risk tolerance like a thermostat:

  1. You have an internal “comfortable” level of perceived risk
  2. Safety features push perceived risk below that comfort level
  3. You unconsciously take more risks to climb back to your preferred level
  4. Your actual risk ends up roughly where it started (the toy model below walks through the arithmetic)
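
A toy model makes the loop concrete. The numbers and the full-offset compensation rule are illustrative assumptions, not Peltzman’s data; they show the extreme case where the entire safety gain is “spent” on riskier behavior:

```typescript
// Toy model of risk homeostasis: behavior ramps up until perceived risk
// returns to an internal target. All values are illustrative.

const targetPerceivedRisk = 1.0; // the "comfortable" level, arbitrary units

// How risky the chosen behavior is (1.0 = pre-safety-feature baseline).
function chooseBehavior(safetyFactor: number): number {
  // Perceived risk = behavior * (1 - safetyFactor); behavior rises until
  // perceived risk hits the target again.
  return targetPerceivedRisk / (1 - safetyFactor);
}

function actualRisk(behavior: number, safetyFactor: number): number {
  // Assume actual risk scales the same way as perceived risk.
  return behavior * (1 - safetyFactor);
}

for (const safetyFactor of [0, 0.2, 0.4]) {
  const behavior = chooseBehavior(safetyFactor);
  console.log(
    `safety ${safetyFactor}: behavior ${behavior.toFixed(2)}, ` +
      `actual risk ${actualRisk(behavior, safetyFactor).toFixed(2)}`
  );
}
// Actual risk stays at 1.0 no matter how large the safety factor gets:
// the entire safety gain has been converted into riskier behavior.
```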

Psychological Mechanisms

1. Overconfidence: Safety features make you feel invincible, leading you to overestimate your abilities.

2. Moral Licensing: “I’m wearing a helmet, so I can ride more aggressively.”

3. Sensation Seeking: Humans have a baseline need for excitement—remove danger, and we create it elsewhere.

4. Invisible Risk Calculation: We constantly, unconsciously adjust our behavior based on perceived danger.

In Technology and Software

The Peltzman Effect shows up everywhere in tech:

Development Tools

Robust Error Handling

Developer writes sloppier code because "the framework will catch it"
Result: More bugs reach production despite better error handling
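
To make this concrete, here’s a small sketch (the Order shape and handler are invented for illustration): a blanket catch turns every failure, including genuine bugs, into a quiet default.

```typescript
// A hypothetical order handler: the blanket try/catch "handles" everything,
// so the sloppy parsing below never gets fixed -- it just fails silently.
type Order = { items: { price: number; qty: number }[] };

function orderTotal(rawBody: string): number {
  try {
    const order = JSON.parse(rawBody) as Order; // no validation at all
    return order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
  } catch {
    // "The framework will catch it": malformed input, missing fields,
    // and real bugs all collapse into the same silent fallback.
    return 0;
  }
}

console.log(orderTotal('{"items":[{"price":10,"qty":2}]}')); // 20
console.log(orderTotal('{"itms":[]}')); // 0 -- the typo throws inside reduce, and the catch hides the bug
```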

Type Systems

"TypeScript will catch my mistakes"
Developer skips thinking through edge cases
Result: Logic errors slip through type-safe code
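
A tiny illustration of the gap: the function below compiles cleanly under strict TypeScript, yet the business logic is still wrong. The type system has no opinion about which arithmetic you meant.

```typescript
// Type-checks perfectly; the discount is added instead of subtracted.
function applyDiscount(price: number, discountRate: number): number {
  return price * (1 + discountRate); // should be (1 - discountRate)
}

console.log(applyDiscount(100, 0.2)); // 120 -- silently wrong, types satisfied
```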

Automated Testing

"We have 90% test coverage, we're safe to deploy"
Developers stop doing manual testing and thinking critically
Result: Tests pass but users find critical bugs
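
A minimal sketch of how this plays out: the test below gives average() full line coverage and still misses the one input that blows up in production.

```typescript
// 100% line coverage from the single test below -- and still broken
// on the edge case the test never exercises: an empty list.
function average(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// The only test. It passes; the coverage report looks great.
console.assert(average([2, 4, 6]) === 4, "happy path");

console.log(average([])); // NaN in production: 0 / 0
```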

Security Tools

Antivirus Software

Users feel protected and click suspicious links
Result: Malware infections remain common

Password Managers

Users create one ultra-secure master password
Then reuse it on other sites because "the manager will protect me"
Result: A single point of failure

Cloud Backups

"Everything is backed up automatically"
Users stop checking if backups actually work
Result: Discover backups failed only when they need them
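
One way to push back is to exercise the restore path on a schedule. The sketch below is illustrative only; downloadLatestBackup and restoreToScratchDatabase are hypothetical stand-ins for whatever your backup tooling actually exposes.

```typescript
// A scheduled restore check: trust the backup only after it restores.
async function verifyBackup(
  downloadLatestBackup: () => Promise<Uint8Array>,
  restoreToScratchDatabase: (dump: Uint8Array) => Promise<{ rowCount: number }>
): Promise<void> {
  const dump = await downloadLatestBackup();
  if (dump.length === 0) {
    throw new Error("Latest backup is empty -- alert a human");
  }

  const restored = await restoreToScratchDatabase(dump);
  if (restored.rowCount === 0) {
    throw new Error("Backup restored to zero rows -- alert a human");
  }

  console.log(`Backup verified: ${restored.rowCount} rows restored`);
}
```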

Infrastructure

Auto-scaling

"The system will handle any load"
Developers stop optimizing code
Result: Massive cloud bills and occasional failures

Circuit Breakers

"We have resilience built in"
Teams stop thinking about failure modes
Result: Cascading failures nobody anticipated
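
For reference, a minimal circuit-breaker sketch (thresholds and timings are arbitrary). Notice what it cannot do: if the fallback path is untested, or every dependency trips its breaker at once, the built-in “resilience” does nothing for the failure modes nobody thought about.

```typescript
// A minimal circuit breaker: after enough failures it short-circuits to the
// fallback for a cooldown window, then lets one call probe the dependency.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 3,
    private readonly resetAfterMs = 10_000
  ) {}

  async call<T>(action: () => Promise<T>, fallback: () => T): Promise<T> {
    const open =
      this.failures >= this.maxFailures &&
      Date.now() - this.openedAt < this.resetAfterMs;
    if (open) return fallback(); // short-circuit while the breaker is open

    try {
      const result = await action();
      this.failures = 0; // success closes the breaker
      return result;
    } catch {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      return fallback(); // the fallback is now on the critical path -- is it tested?
    }
  }
}
```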

Blue-Green Deployments

"We can roll back instantly"
Teams deploy with less testing
Result: More frequent rollbacks

Real-World Software Examples

The Boeing 737 MAX Disaster

Boeing added MCAS (Maneuvering Characteristics Augmentation System) so the 737 MAX would handle like earlier 737s despite its larger engines, automatically pushing the nose down at high angles of attack.

Result:

  • Pilots relied on automation
  • Boeing minimized training requirements
  • When MCAS malfunctioned, pilots didn’t know how to respond
  • 346 people died in two crashes

The safety system became a single point of failure.

The Therac-25 Radiation Overdoses

The Therac-25 radiation therapy machine relied on software safety interlocks instead of the hardware interlocks its predecessors had used.

Result:

  • Operators trusted the software completely
  • They ignored patient complaints (“the computer says it’s fine”)
  • Software bugs caused massive radiation overdoses
  • Six known accidents; at least three patients died

Reliance on software safety made humans less vigilant.

GitHub Copilot and Code Quality

Developers use AI to generate code faster.

Result:

  • Less time spent understanding what the code does
  • Blindly accepting suggestions without review
  • Security vulnerabilities and bugs in AI-generated code
  • Technical debt accumulates faster

The productivity tool reduces carefulness.

The Counterintuitive Solution

Sometimes, removing safety features makes things safer.

Shared Space Traffic Design

In Europe, some cities removed traffic lights, signs, and lane markings.

Result:

  • Drivers became more cautious
  • Accidents decreased
  • Traffic flow improved

When drivers felt uncertain, they drove more carefully.

Formula 1 and Safety

Early Formula 1 was incredibly dangerous. As safety improved:

  • Drivers pushed harder
  • Speeds increased
  • Risks remained high

Ayrton Senna famously said: “If you give me a car that is safer, I will drive it faster.”

How to Counter the Peltzman Effect

1. Don’t Just Add Safety—Add Awareness

Safety features work best when paired with education about why they matter.

  • ✅ “Seatbelts reduce injuries—but don’t drive recklessly”
  • ❌ “Seatbelts make you invincible”

2. Make Risk Visible

Combat overconfidence by making real risks apparent.

  • Code reviews: Even with tests, show what could go wrong
  • Incident reports: Remind teams that safety nets fail
  • Monitoring: Make failures visible, not just silently handled

3. Build Layers of Defense

Don’t rely on a single safety mechanism—assume each one will make people less careful.

TypeScript + Tests + Code Review + Monitoring + Circuit Breakers

Each layer catches the riskier behavior that trusting the previous layer invites.
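
As a sketch of what layering can look like in code (the payment shape and reportToMonitoring are hypothetical), the handler below doesn’t trust its type annotations alone: it validates at runtime and makes failures visible instead of silently swallowing them.

```typescript
// Layered defense in one function: static types, runtime validation,
// and explicit reporting when a layer is breached.
type PaymentRequest = { amountCents: number; currency: string };

function reportToMonitoring(event: string, detail: unknown): void {
  // Hypothetical hook: wire this to your real alerting/metrics pipeline.
  console.error(`[monitor] ${event}`, detail);
}

function handlePayment(input: unknown): PaymentRequest {
  // Runtime validation: the compiler can't see what arrives over the wire.
  const candidate = (typeof input === "object" && input !== null
    ? input
    : {}) as Partial<PaymentRequest>;
  const { amountCents, currency } = candidate;

  if (
    typeof amountCents !== "number" ||
    !Number.isInteger(amountCents) ||
    amountCents <= 0 ||
    typeof currency !== "string"
  ) {
    // Make the failure visible rather than coercing it into something "valid".
    reportToMonitoring("invalid_payment_request", input);
    throw new Error("Invalid payment request");
  }

  return { amountCents, currency };
}

console.log(handlePayment({ amountCents: 499, currency: "USD" })); // ok
```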

4. Create Accountability

Make people responsible for outcomes, not just following safety procedures.

  • ❌ “I wrote tests, so I’m done”
  • ✅ “I’m responsible for this feature working in production”

5. Reward Vigilance, Not Just Safety Features

Incentivize careful behavior, not just using safety tools.

  • Celebrate finding edge cases
  • Reward questioning assumptions
  • Value manual testing alongside automation

The Programmer’s Mindset

As engineers, we love building safety nets:

  • Error handling
  • Automated tests
  • Monitoring systems
  • Rollback mechanisms

But here’s the trap: every safety feature makes you slightly less careful.

The best developers are paranoid despite the safety nets, not complacent because of them.

Questions to Ask

Before shipping code with great test coverage:

  • “What could go wrong that tests won’t catch?”
  • “Am I being less careful because I trust the tools?”
  • “What happens if this safety mechanism fails?”

The Deeper Truth

The Peltzman Effect reveals an uncomfortable reality:

You cannot make humans safe—you can only change how they express risk.

People have a risk appetite. Give them safety, and they’ll find new ways to be unsafe.

This isn’t a reason to avoid safety features—it’s a reason to be honest about their limitations.

Key Takeaways

  • ✅ Safety features can make people take more risks
  • ✅ Feeling safe often leads to reckless behavior
  • ✅ Don’t rely solely on tools to prevent mistakes
  • ✅ Build awareness alongside safety mechanisms
  • ✅ Stay vigilant even when systems are “safe”

Seatbelts save lives—but only if you don’t drive like you’re invincible.

The same applies to your code, your infrastructure, your security systems.

Tools and safety nets are valuable. But they work best when you act like they might fail—because sometimes, they will.

Don’t let your safety features make you unsafe.