Most of education teaches you what to think. Almost none of it teaches you how to think.
Yet the quality of your thinking determines the quality of your decisions, and your decisions determine the trajectory of your life.
Here are 10 mental models that fundamentally change how you approach problems.
1. First Principles Thinking
Strip away assumptions. Reason from fundamental truths.
Most people think by analogy: “This is like that, so I’ll do what others did.” First principles thinking asks: “What do I know to be absolutely true? What can I build from there?”
Example: Elon Musk and Battery Costs
In 2008, everyone “knew” battery packs cost $600/kWh and always would.
Reasoning by analogy: “Batteries are expensive. They’ve always been expensive. Electric cars will always be expensive.”
Reasoning by first principles:
- What are batteries made of? Cobalt, nickel, lithium, carbon, separators, steel casing.
- What do those raw materials cost on the London Metal Exchange? About $80/kWh.
- So the real question isn’t “why are batteries expensive?” It’s “why is there a 7x markup, and which parts of the manufacturing process can be redesigned?”
Tesla built the Gigafactory. Battery costs dropped below $100/kWh.
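The first-principles arithmetic here is a one-liner. The figures below are the rounded ones the article already cites; the point is that the gap is process and margin, not physics:

```python
# Compare the quoted 2008 pack price to the raw-material floor.
pack_price = 600      # $/kWh, the "everyone knows" number
materials_cost = 80   # $/kWh, rough spot price of the raw inputs

markup = pack_price / materials_cost
print(markup)  # 7.5
```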
How to apply it:
- Identify your assumptions. Write them down.
- Ask “Why?” until you hit bedrock truths.
- Rebuild your solution from those truths, ignoring convention.
The test: If everyone in your industry disappeared and you had to solve the problem from scratch with only physics and economics, what would you do differently?
2. Thinking in Systems
Nothing exists in isolation. Everything is a feedback loop.
Most people see events. Systems thinkers see the structure that produces those events.
Example: The Cobra Effect
British colonial India had a cobra problem. The government offered a bounty for dead cobras. Logical, right?
People started breeding cobras for the bounty. When the government caught on and scrapped the program, breeders released their now-worthless cobras into the wild.
Result: More cobras than before.
The mistake was treating a system as a simple input-output. They ignored the feedback loop: incentive → behavior change → unintended consequence.
Second-order effects matter more than first-order effects:
| Action | First-Order Effect | Second-Order Effect |
|---|---|---|
| Add more developers to a late project | More hands on deck | Communication overhead, project gets later (Brooks’s Law) |
| Offer unlimited PTO | Employees feel trusted | Employees take less vacation due to peer pressure |
| Optimize for a single metric | Metric improves | Everything not measured deteriorates (Goodhart’s Law) |
How to apply it: Before any decision, ask: “And then what happens?” at least three times.
3. The Art of Inverting Problems
Don’t ask how to succeed. Ask how to fail, then avoid that.

Charlie Munger’s favorite mental model, borrowed from the mathematician Carl Jacobi: “Invert, always invert.”
Example: How to Build a Great Software Team
Forward thinking: “What makes a great team? Hire smart people, use good tools, have a clear vision…”
Inverted thinking: “What guarantees a terrible team?”
- Hire for credentials, not capability
- Punish people who report bugs or raise concerns
- Change priorities every two weeks
- Let meetings consume 60% of the workday
- Promote based on politics, not performance
Now avoid every item on that list. You’ll get further than most teams that only think forward.
Example: How to Have a Good Marriage
Munger’s advice: “Figure out how to have a terrible marriage, then don’t do those things.”
- Never listen to your partner
- Keep score on everything
- Assume the worst about their intentions
- Never admit you’re wrong
Inversion works because it’s easier to identify stupidity than genius. You don’t need to be brilliant. You just need to consistently avoid being foolish.
4. Probabilistic Thinking
Stop thinking in certainties. Start thinking in likelihoods.
The world is uncertain. Your thinking should reflect that.
Example: The Startup Decision
Binary thinking: “Will this startup succeed? Yes or no?”
Probabilistic thinking: “There’s a 15% chance this startup returns 50x, a 25% chance it returns 2x, a 40% chance I get my money back, and a 20% chance I lose everything.”
Expected value: (0.15 × 50) + (0.25 × 2) + (0.40 × 1) + (0.20 × 0) = 8.4x
That’s a good bet, even though the most likely single outcome is just getting your money back.
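The expected-value arithmetic is easy to check. A sketch using the article’s own probabilities:

```python
# Expected value of the startup bet: sum of (probability x payoff multiple).
outcomes = [
    (0.15, 50),  # 15% chance of a 50x return
    (0.25, 2),   # 25% chance of a 2x return
    (0.40, 1),   # 40% chance of getting the money back
    (0.20, 0),   # 20% chance of losing everything
]

expected_value = sum(p * payoff for p, payoff in outcomes)
print(round(expected_value, 1))  # 8.4
```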
Bayesian updating means changing your mind with evidence:
You believe there’s a 70% chance your new feature will increase conversion. You ship it. After one week, conversion is flat.
Don’t think: “It failed” or “It needs more time.”
Think: “My prior was 70% positive. This evidence is weakly negative. My updated estimate is maybe 45%. I’ll run it another week before deciding.”
How to apply it: Assign actual percentages to your beliefs. Write them down. When new evidence arrives, update them. You’ll be shocked how often you held beliefs at 95% confidence that deserved 60%.
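The update in the feature example can be made concrete with Bayes’ rule. The 70% prior comes from the example; the two likelihoods below are illustrative assumptions, chosen so the numbers land near the 45% the text mentions:

```python
# Bayes' rule applied to the feature-launch example.
prior = 0.70                # P(feature helps conversion), from the article
p_flat_if_helps = 0.28      # P(flat week | feature helps) -- assumption
p_flat_if_not = 0.80        # P(flat week | feature doesn't help) -- assumption

# P(helps | flat week) = P(flat | helps) * P(helps) / P(flat)
evidence = p_flat_if_helps * prior + p_flat_if_not * (1 - prior)
posterior = p_flat_if_helps * prior / evidence
print(round(posterior, 2))  # 0.45: weakly negative evidence, not a verdict
```

Note that the evidence shifts the belief rather than flipping it, which is exactly the "run it another week" conclusion.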
5. Thinking Slow When You Want to Think Fast
Your gut is fast. It’s also frequently wrong.
Daniel Kahneman’s two systems:
- System 1: Fast, automatic, intuitive. Catches a ball. Reads facial expressions. Jumps to conclusions.
- System 2: Slow, deliberate, analytical. Does long division. Evaluates a contract. Considers trade-offs.
The problem: System 1 volunteers answers to questions that require System 2.
Example: The Bat and Ball
A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. How much does the ball cost?
Your System 1 screams: “10 cents!”
Wrong. If the ball is $0.10, the bat is $1.10, and together they cost $1.20.
The ball costs $0.05. The bat costs $1.05.
Over 50% of students at Harvard, MIT, and Princeton get this wrong. Not because they can’t do math, but because System 1 answers before System 2 engages.
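Forcing System 2 to engage can be as simple as writing the equations down. A sketch in integer cents (to sidestep floating-point noise):

```python
# ball + bat = 110 cents, and bat = ball + 100 cents.
# Substituting: ball + (ball + 100) = 110  ->  2 * ball = 10.
total_cents, diff_cents = 110, 100

ball = (total_cents - diff_cents) // 2   # 5 cents
bat = ball + diff_cents                  # 105 cents
print(ball, bat)  # 5 105
```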
When to distrust your intuition:
- When stakes are high and reversibility is low
- When you’re emotional, tired, or rushed
- When the problem involves statistics or compounding
- When you feel very certain very quickly
The fix: When you catch yourself feeling sure about a complex decision, pause. Write out the reasoning. Sleep on it. Your future self will thank you.
6. The Feynman Technique for Understanding Anything
If you can’t explain it to a 12-year-old, you don’t understand it.
Richard Feynman, the Nobel physicist, was legendary for explaining complex ideas in plain language. His technique for learning is brutal in its simplicity:
Step 1: Pick a concept you want to understand.
Step 2: Explain it in writing as if teaching a child. No jargon. No hand-waving.
Step 3: Identify every point where you get stuck or resort to vague language.
Step 4: Go back to the source material and fill those gaps. Then rewrite.
Example: Understanding Recursion
Bad explanation: “Recursion is when a function calls itself with a modified argument until it reaches a base case that terminates the call stack.”
Technically correct. Totally unhelpful to a beginner.
Feynman-style explanation: “Imagine you’re in a line of people, and you want to know how many people are in front of you. You tap the person ahead and ask, ‘How many people are in front of you?’ They do the same thing: tap the person ahead and ask. This keeps going until someone at the front says, ‘Zero. Nobody’s in front of me.’ Then each person adds one and passes the answer back. That’s recursion. Each person delegates the question forward and waits for the answer to come back.”
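That explanation translates almost word for word into code. A minimal sketch:

```python
# The "line of people" explanation as a function. Each call asks the
# rest of the line the same question until the front answers zero,
# then the answers are added up on the way back.
def people_ahead(line):
    if not line:                      # base case: nobody ahead
        return 0
    # tap the person ahead, wait for their answer, add one for them
    return 1 + people_ahead(line[1:])

print(people_ahead(["ana", "ben", "carol"]))  # 3
```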
Where you stumble in the explanation is exactly where you don’t understand the concept.
The Feynman Technique is not about dumbing things down. It’s about exposing the gaps in your understanding that jargon usually hides.
7. Productive Disagreement with Yourself
If you can’t argue the other side, you don’t understand the issue.
This combines two techniques:
Steel-manning: Instead of attacking the weakest version of an opposing argument (straw-manning), build the strongest version of it. Then try to beat that.
Red-teaming: Actively try to break your own ideas before someone else does.
Example: You Want to Rewrite the Codebase
Your position: “We should rewrite the backend from scratch.”
Now steel-man the opposition:
“The current system works. It serves millions of users. Every quirk and edge case in the old code represents a bug that was found and fixed in production. A rewrite throws away years of battle-tested knowledge. Rewrites take 2-3x longer than estimated. During the rewrite, the old system still needs maintenance, doubling the workload. Joel Spolsky called rewrites ‘the single worst strategic mistake a software company can make.’ We should refactor incrementally instead.”
If you can’t dismantle that argument, maybe you shouldn’t rewrite.
The practice: Before any major decision, write one page arguing for it and one page arguing against it. With equal effort. The decision that survives honest opposition is the one worth making.
8. Decision Journals: Thinking About Your Thinking
Separate the quality of your decisions from the quality of your outcomes.
Good decisions can produce bad outcomes (bad luck). Bad decisions can produce good outcomes (good luck). If you judge only by outcomes, you’ll learn the wrong lessons.
How to keep a decision journal:
Before the outcome is known, write down:
- The decision - What are you deciding?
- The context - What do you know right now?
- The alternatives - What else could you do?
- Your reasoning - Why this option over the others?
- Your confidence - How sure are you? (Use a percentage)
- Expected outcome - What do you think will happen?
Then close the journal. Don’t look at it until the outcome is clear.
Example Entry:
Date: Feb 2026
Decision: Promote Alex to tech lead over Jordan
Context: Alex has 3 years, strong technical skills, quieter in meetings.
Jordan has 5 years, average code, great communicator.
Alternatives: Promote Jordan, hire externally, split the role
Reasoning: The team needs technical direction more than people management
right now. Alex's architecture skills will unblock the platform work.
Confidence: 65%
Expected outcome: Team velocity increases within 2 quarters.
Risk: Alex may struggle with conflict resolution.
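If you keep the journal digitally, the template maps naturally onto a small data structure. A sketch only; the field names and the `calibration_gap` helper are illustrative, not a prescribed format:

```python
# A decision-journal entry as a dataclass. Field names mirror the prompts
# in the template above.
from dataclasses import dataclass

@dataclass
class DecisionEntry:
    date: str
    decision: str
    context: str
    alternatives: list[str]
    reasoning: str
    confidence: float        # 0.0 - 1.0, your stated probability
    expected_outcome: str
    actual_outcome: str = "" # fill in only after the result is known

    def calibration_gap(self, turned_out_right: bool) -> float:
        # positive gap means you were overconfident, negative means under
        return self.confidence - (1.0 if turned_out_right else 0.0)
```

Reviewing the gaps across many entries is what surfaces the patterns the next paragraph describes.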
Six months later, compare reality to your reasoning. You’ll discover patterns: maybe you’re overconfident on people decisions, or too conservative on technical bets.
The goal isn’t to be right more often. It’s to understand how you think, so you can calibrate.
9. The Map Is Not the Territory
Your model of reality is not reality. Don’t confuse the two.
Every mental model is a simplification. Simplifications are useful right up until they aren’t.
Example: Résumés
A résumé is a map of a person’s professional life. It tells you where they worked and what titles they held. It doesn’t tell you:
- Whether they did the work or took credit for others’ work
- If they were fired or left voluntarily
- How they handle failure, ambiguity, or conflict
- Whether they’re kind
Hiring based on résumés alone is navigating by a map that leaves out most of the terrain.
Example: GDP as a Measure of Well-Being
GDP measures economic output. It goes up when a country spends billions cleaning up an oil spill. It doesn’t measure health, happiness, inequality, or sustainability. Countries with high GDP can have miserable citizens.
Example: Story Points in Agile
Story points are a map of effort. Teams start treating them as a map of productivity. Then they game the points. The metric becomes the target, the map replaces the territory, and nobody notices that actual output hasn’t changed.
The rule: All models are wrong. Some models are useful. Use them, but regularly ask: “What is my model leaving out? Where does this abstraction break down?”
10. Thinking Under Uncertainty
You will never have enough information. Decide anyway.
Waiting for perfect information is itself a decision, and usually a bad one.
The 70% Rule (Jeff Bezos)
If you wait for 90% of the information, you’re too slow. If you act on 50%, you’re reckless. At 70%, you have enough to make a good decision and enough time to course-correct.
Most bad decisions aren’t made with too little information. They’re made too late.
Reversible vs. Irreversible Decisions
Bezos calls these Type 1 and Type 2 decisions:
- Type 1 (irreversible): Selling the company, choosing a co-founder. Deliberate carefully.
- Type 2 (reversible): Launching a feature, trying a new tool, changing a process. Decide fast, adjust later.
Most decisions are Type 2, but people treat them like Type 1. They deliberate for weeks over choices they could reverse in a day.
“Good enough” beats “perfect”:
A surgeon in the ER doesn’t wait for every test result before operating. A firefighter doesn’t model the building’s full structural integrity before entering. They act on what they know, stay alert for new information, and adjust.
Your move: For every decision you’re postponing, ask: “Is this reversible?” If yes, make the call today. You can always change course. What you can’t get back is the time you spent deliberating.
The Meta-Lesson
These ten models share a common thread: thinking about thinking.
The default mode of the human brain is reactive, pattern-matching, and overconfident. Every model above is a tool to interrupt that default, to create a pause between stimulus and response where better reasoning can happen.
You don’t need all ten at once. Pick one. Apply it to a real decision this week.
The goal isn’t to think more. It’s to think better.