
Are descriptive, stakeholder- or process-oriented frameworks better for wild card cases than causal ones?

Overview:
1. Short description
2. Explicit example

1. My hypothesis is that explaining why beats describing what in case frameworks.
However, when looking through casebooks such as Peter K's, I mostly see stakeholder-based descriptive frameworks. There is also the alternative of structuring based on causality (e.g., why did initiatives not have the expected outcome, why were they not prioritised by stakeholders, ...).
I hope someone can clarify whether my hypothesis holds.

2. Example:
Prompt: You are working on an internal project for McKinsey Global Institute. 
Africa has made limited progress on gender equality since 2015, and at the current pace it could take more than 140 years to reach gender parity.
What factors would you consider to advance women’s equality and reach gender parity in Africa?

a) Descriptive Framework
- Women's equality (Definition, Current level and growth rate, break-down by African countries)
- Impact of gender inequality (Social impact, Economic impact, Political impact)
- Gender inequality at work (Employment rate, Professional & technical jobs, Unpaid care work, Leadership positions)
- Gender inequality in society (Education level, Political representation, Digital inclusion, Legal protection)

b) Causal Framework
- What local effects do current initiatives trigger? (Short-term behavior changes, symbolic compliance, limited scope adjustments)
- Which counteracting behaviors neutralize these effects? (Reallocation of unpaid care work, selective participation, incentive-compatible adaptation by households and institutions)
- Which reinforcement mechanisms stabilize the existing outcome? (Norm persistence, expectation anchoring, institutional absorption of reforms without power shifts)
- Where do system constraints become binding and non-linear? (Care burden thresholds, coordination overload, resistance tipping points)

Top answer
Melike
Coach
on Jan 05, 2026
First session free | Ex-McKinsey | Break into MBB | Empowering you to approach interviews with clarity & confidence

Hi Tobias :)

In general, there is no rule that a framework based on causality beats other frameworks. It all depends on the case prompt and what it wants you to solve for. Generally, a more diagnostic case prompt (e.g., "Why didn't xyz work?") would call for a framework structured around causality.

However, in the example you provided, both frameworks have their limitations and don't fully showcase your ability to remain MECE:

  • the descriptive one risks staying at a reporting level without clear levers,
  • the causal one risks being overly abstract and hard to operationalize in an interview setting (especially when you need to derive solutions based on your framework).

Especially in these more explorative case prompts, you can be more creative with your framework. For example, you could go for the following structure:
 

  • Individual level: focus on women's access to health, education, and their aspirations - basically, determine whether women are able to participate economically and socially
  • Household level: focus on unpaid care work, bargaining power within the household, general household dynamics - determine the binding constraints that usually override individual capability
  • Systems & institutions: employer practices, access to finance, enforcement of equality laws, etc. - determine whether the system enables or blocks gender parity
  • Societal layer: views on women's roles, existence of role models, political voice, etc. - basically, how societal ideas shape what is expected of women

From this, you could brainstorm intervention options much more easily going forward. Think about it this way: your framework is not only there to guide the interviewer but also to guide yourself through the interview. The easier it is to understand, the better it will serve you throughout the interview (especially when you're stuck).

Hope this helps! :) 

Sidi
Coach
on Jan 05, 2026
McKinsey Senior EM & BCG Consultant | Interviewer at McK & BCG for 7 years | Coached 500+ candidates secure MBB offers

Hi Tobias!

I love this question. It exposes the core of why the vast majority of "well-prepared" candidates fail their real MBB interviews.

Causal structures are not just better than descriptive or stakeholder-based frameworks.

They expose a fundamental flaw in how essentially all casebooks train people to think.

 

The issue: Casebooks systematically teach candidates to solve the wrong problem.

 

Why does the casebook instinct fail candidates in real interviews? Because it has trained them to believe this:

“A good framework is one that comprehensively lists all relevant areas.”

So when a case becomes ambiguous or "wild-card", they typically react by:

  • adding more buckets
  • widening the scope
  • becoming more holistic

This feels safe.
It also feels intelligent.

And it is exactly the opposite of what MBB partners are looking for!

 

Let’s make this concrete. What candidates think a framework is vs. what partners see:

Candidate mindset (casebook-trained)

“Here are the stakeholders, social factors, economic forces, political context, and cultural norms involved.”

What the candidate thinks this signals:

  • breadth
  • sophistication
  • structured thinking

Partner reaction (actual interviewer)

“You’re describing the system.
You’re not explaining why the outcome is what it is.”

At that moment, the candidate has already lost control of the case, even though they sound polished.

 

The core mistake casebooks bake into people

Casebooks quietly redefine “structuring” as:

organizing information

But in consulting, structuring means:

explaining causality

Those are not the same skill.

Organizing information answers:

  • What exists?
  • What could matter?

Explaining causality answers:

  • Why did something change?
  • Or why has nothing changed?
  • What is the binding constraint?
  • What would have to move for the outcome to change?

"Wild-card cases" are entirely about the second set of questions.

 

Why casebooks push people in the wrong direction

Casebooks optimize for:

  • reusability
  • memorization
  • “coverage”
  • low cognitive effort

That’s why they love stuff like:

  • stakeholder lists
  • PESTLE
  • generic “factors to consider”

But reusability is the enemy of insight!

If your "framework" works for many problems, it probably explains none of them.

MBB interviewers don’t reward frameworks that could be right.
They reward logic that makes the outcome inevitable.

 

Before any framework choice matters, one thing must happen:

The objective must be operationalized into a concrete focus metric.

In wild-card cases, objectives like:

  • gender equality
  • diversity
  • sustainability
  • quality

are not self-defining.

Without a clear focus metric:

  • drivers are arbitrary
  • causality is vague
  • prioritization is opinion

So candidates compensate the only way they know how:
--> by adding more descriptive structure.

From a partner’s perspective, that’s not rigor. 
It’s evasion.

 

Why causal logic wins (and why it feels uncomfortable to candidates who trained with shallow resources like casebooks)

Once you have a clearly defined (and aligned!) objective, the real question becomes:

What would have to change in the drivers of this metric, and why hasn’t it happened so far?

That question forces:

  • tradeoffs
  • prioritization
  • uncomfortable exclusions

This is why causal frameworks feel harder:

  • you can be wrong
  • you must commit
  • you cannot hide behind completeness

Casebooks train candidates to avoid exactly that.

 

Applying this to “wild-card” cases (the gender equality example)

A descriptive framework might say:

  • education
  • work
  • politics
  • society

An experienced MBB interviewer hears: “You’re mapping the system. I already know the system exists.”

A causal structure instead asks:

  • Which driver actually determines the outcome we care about?
  • Which constraint is binding?
  • Which behaviors neutralize current initiatives?
  • Where are the non-linearities?

That is consulting-level thinking.  Not because it’s fancy, but because it explains reality.

 

Casebooks don’t fail because they’re incomplete.
They fail because they reward the wrong reflex:

“When unsure, broaden.”

MBB interviews reward the opposite reflex:

“When unsure, sharpen.”

That difference alone explains why so many well-prepared candidates “mysteriously” fail.

 

Always remember:

  • Cases (be it "wild-card" or not) don’t test how much you can think about.
  • They test whether you can explain why something is stuck and where leverage actually sits.

Descriptive frameworks should merely be an input to enrich your thinking.
The real structure should be causal logic!

That is the part most candidates never learn. 

And this is one of the core reasons why, when I start working with candidates who believe they are 90% there (because they have "already solved 80 cases"), I first have to break the uncomfortable news to them that they are not even 20% there: they have spent a large part of their effort becoming perfect at doing the wrong things.

 

To everyone reading this: if this resonates uncomfortably, that’s not an accident.
It usually means you've been trained by casebook frameworks - not by people who have actually run these interviews and made hiring decisions at the MBB level. (And yes - this includes essentially all known casebook authors! Practically none of them has ever been a real interviewer at an MBB firm.)

 

Hope this helps!

Sidi

____________________

Dr. Sidi S. Koné

Anonymous A
on Jan 05, 2026
This makes SO much sense! I have been wondering for months now why the approaches in these books seem so nonsensical. And they do not resemble at all what I see when dealing with real consultants from McKinsey. Thank you for this explanation!
Kevin
Coach
on Jan 05, 2026
Ex-Bain (London) | Private Equity & M&A | 12+ Yrs Experience | The Reflex Method | Free Intro Call

This is an excellent question and gets right to the heart of what separates a good case answer from a truly great one, especially in ambiguous wild card scenarios.

The fundamental issue with purely descriptive, stakeholder-based frameworks (like your Example A) is that while they are MECE for describing the situation, they rarely lead to an actionable hypothesis. They map the terrain but don't identify the engine that needs fixing. Casebooks often feature them because they are easy to teach and memorize, but they generally land you a 'Pass'—not a 'Strong Hire.'

Your hypothesis about causality is correct, but the execution needs precision. A purely causal framework (like your Example B) risks becoming too academic or abstract during the time crunch. The best strategy is a Hybrid Framework that is structurally descriptive but diagnostically causal. Every single bucket must lead the interviewer to the root cause (the why) and the potential lever (the what to do).

For the MGI gender parity example, you should pivot the structure away from describing the problem (e.g., "Education level") and towards identifying the systemic levers of change. Structure it around 3–4 core intervention areas (e.g., Economic/Workplace Levers, Legal/Institutional Policy, Societal Norms & Behavior Change). Then, within each lever, immediately drive the analysis: What is the current constraint? What is the root cause preventing progress? What specific intervention breaks that causal chain? This shows the interviewer you are thinking about impact from the first moment, which is what the firms are actually selling.

Hope it helps!

Cristian
Coach
on Jan 05, 2026
Ex-McKinsey | Verifiable 88% offer rate (annual report) | First-principles cases + PEI storylining

Tobias, 

Great question. 

As a default option, I agree with you that process-based structures are better (not only for wild card cases, but in general).

The mistake most candidates make is that they create a structure of 3-4 areas that consist of unrelated 'vessels' of content. They have questions and data requests that enable them to 'fill' these vessels, but it's unclear how all these things work together to lead to an answer for the client.

My recommendation is to use operational, sequenced structures that work like a roadmap you'd walk through with the client in order to reach an answer.

Best,
Cristian

Alessa
Coach
on Jan 05, 2026
Ex-McKinsey Consultant & Interviewer | PEI | MBB Prep | Ex-BCG

Hi Tobias :)

Great question. Your intuition is right: in interviews, especially for wild card or abstract cases, causal thinking is usually stronger because it shows real problem solving and insight into why outcomes persist and how to change them. Descriptive or stakeholder frameworks are fine for setting context and showing breadth, but they should quickly lead into causal drivers and constraints; otherwise they stay superficial. A good approach is often to anchor on causality while using descriptive elements only where they help explain or prioritize the why.

Best,
Alessa :)

Ashwin
Coach
4 hrs ago
First Session: $99 | Bain Senior Manager | 500+ MBB Offers

The best framework is the one that answers the question. Your prompt asks what factors could advance gender parity. That is a "how do we fix this" question.

Your descriptive framework maps the problem but does not answer it. Your causal framework is smart but sounds like an academic paper. "Expectation anchoring" and "institutional absorption" are not things you want to say out loud in an interview. You will spend more time explaining your structure than solving the problem.

Keep it simple. What does success look like? What is blocking progress? What levers could actually move things? For this case, I would go with: barriers (cultural, economic, institutional), then interventions by type (policy, private sector, grassroots), then prioritize by impact and feasibility.

Casebooks are full of stakeholder frameworks because they are safe and easy to teach. But interviewers do not want clever. They want clear. Answer the question. That is what wins.