And how one Jamaican leader used scientific thinking to crack a problem everyone else had given up on
Ask most CEOs about their strategic plan and you’ll hear about vision statements, multi-year roadmaps, and quarterly KPIs. What you won’t hear is the one thing that actually matters: What do we believe has to be true for this to work?
That missing question—the absence of a testable hypothesis—is why most strategic plans gather dust while organizations spin their wheels solving the same problems year after year.
Dianne McIntosh and Jamaica’s Citizens Security Secretariat (CSS) discovered this the hard way. Their story reveals how treating strategy like a scientific experiment—not a sacred document—can unlock progress that seemed impossible.
The Problem with Most Strategic Plans
Here’s the uncomfortable truth: calling something a “strategic plan” doesn’t make it strategic.
Most planning exercises produce elaborate action lists. Deploy new technology. Improve customer service. Expand into new markets. These aren’t strategies; they’re activities. And activities without an underlying theory about why they’ll work are just expensive guesses dressed up in PowerPoint.
Real strategy requires what Dr. Peter Compo describes as a central rule: a single guiding choice that directly addresses the system’s bottleneck—the critical constraint standing between you and your aspiration. A true strategy is not a laundry list of initiatives but a clear, testable bet about how overcoming that bottleneck will unlock progress. The central rule can then be tested, adapted, or abandoned as evidence accumulates.
Why don’t more leaders think this way? Because genuine strategic thinking is genuinely hard. It demands:
- Making peace with incomplete information
- Committing to multi-year time horizons
- Accepting you might be wrong
- Following evidence even when it contradicts your initial beliefs
Most executives understandably retreat to the comfort of detailed plans and busywork. At least that feels like progress.
When Reality Breaks Your Beautiful Plan
When Jamaica formed the CSS to tackle violent crime, they followed the standard playbook: multi-sectoral coordination, justice reform, community programs. Their flagship initiative? Train 99,000 parents of at-risk youth through the Parenting Commission.
It was comprehensive. It was ambitious. It was wrong.
The program failed spectacularly—burning through a million euros in EU funding before collapsing under the weight of insufficient capacity and entrenched behavioral patterns. The target was fantasy, not strategy.
This is where most organizations double down on the original plan or blame “execution failures.” The CSS did something different: they admitted their hypothesis was flawed and went looking for a better one.
The Question That Changed Everything
Tony Anderson, then Commissioner of Police, posed a challenge that redirected their entire approach: “When are you going to do something about the few schools producing most of our criminals?”
That question sent McIntosh’s team into Jamaica’s education data for the first time. What they found confirmed Anderson’s instinct—but revealed something far more important.
The data exposed the bottleneck: children leaving primary school reading at Grade Two level, many traumatized, seeking achievement and belonging in criminal gangs instead of classrooms.
This wasn’t about parenting programs. The constraint was literacy and unaddressed trauma in the education system itself.
Suddenly, the CSS had a testable hypothesis: If we close learning gaps and treat trauma in vulnerable schools, we can predictably reduce the pipeline of youth into criminality.
That’s a hypothesis you can build experiments around. That’s a hypothesis that points to specific interventions. That’s a hypothesis that, if wrong, tells you exactly where your assumptions broke down.
The Scientific Approach to Strategy
The CSS’s turnaround demonstrates a replicable four-step process:
First: Treat Your Plan as a Temporary Hypothesis
Stop calling it “the strategy” and start calling it “our current best guess.” The CSS abandoned their parenting program quickly because they viewed it as a testable assumption, not a political commitment. This requires intellectual humility—and an organizational culture that rewards learning over face-saving.
Second: Let Data Reveal the True Constraint
The CSS didn’t commission a new consulting study. They analyzed existing school inspection reports and educational outcomes data. The constraint revealed itself: students trapped in learning failure, falling further behind each year.
Theory of Constraints teaches that every system has one primary bottleneck limiting performance. Your job isn’t solving every problem—it’s identifying which constraint actually matters, then exploiting it relentlessly.
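To see why the single bottleneck dominates, consider a toy illustration: in any multi-stage pipeline, throughput is capped by the slowest stage, so improving anything else changes nothing. The stage names and capacities in this sketch are made up for illustration, not drawn from the CSS’s actual system.

```python
# Toy Theory-of-Constraints illustration: pipeline throughput is capped
# by its slowest stage. Stage names and capacities are hypothetical.

stages = {
    "identify at-risk students": 500,  # students processed per month
    "literacy intervention":      80,  # <-- the bottleneck
    "trauma counselling":        200,
    "follow-up support":         300,
}

bottleneck = min(stages, key=stages.get)
throughput = stages[bottleneck]

print(f"System throughput: {throughput}/month, set by '{bottleneck}'")

# Doubling any non-bottleneck stage leaves throughput unchanged:
stages["follow-up support"] *= 2
assert min(stages.values()) == throughput  # still 80/month
```

Until the literacy stage improves, every dollar spent elsewhere is wasted on capacity the system can’t use. That is why identifying the true constraint comes before any intervention.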
Third: Build a Cause-Effect Model You Can Measure
The CSS formalized their new hypothesis as the Inter-Ministerial School Support Strategy (IMSSS). Success became concrete: move a child from Grade One to Grade Four reading level in ten weeks.
This shift is critical. Unlike short-term police operations that yield immediate metrics, social development requires years to show results. But intermediate indicators—reading level improvements, trauma treatment completion—let you track whether your hypothesis is working long before final outcomes appear.
You’re not waiting five years to discover you were wrong. You’re testing assumptions continuously.
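To make the intermediate-indicator idea concrete, here is a minimal sketch of what tracking one cohort cycle might look like. The records, column names, and numbers are illustrative assumptions, not the CSS’s actual data; the point is that a leading indicator can be computed every ten weeks rather than waiting years for crime statistics.

```python
# Illustrative sketch: tracking an intermediate indicator (reading-level
# gains) over a ten-week cohort. All names and numbers are hypothetical.

TARGET_LEVEL = 4        # Grade Four reading level (the IMSSS-style goal)
WEEKS_PER_CYCLE = 10    # length of one intervention cycle

# Hypothetical per-student records: reading level at intake and at week ten.
cohort = [
    {"student": "A", "baseline": 1, "week10": 4},
    {"student": "B", "baseline": 2, "week10": 3},
    {"student": "C", "baseline": 1, "week10": 2},
    {"student": "D", "baseline": 2, "week10": 4},
]

reached_target = sum(1 for s in cohort if s["week10"] >= TARGET_LEVEL)
avg_gain = sum(s["week10"] - s["baseline"] for s in cohort) / len(cohort)

print(f"{reached_target}/{len(cohort)} students reached Grade {TARGET_LEVEL}")
print(f"Average gain: {avg_gain:.1f} reading levels in {WEEKS_PER_CYCLE} weeks")

# If these numbers stall across successive cohorts, the hypothesis (that
# closing learning gaps is feasible at this pace) is in trouble long
# before any crime statistics move.
```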
Fourth: Institutionalize the Learning Process
Borrowing the delivery-unit model from Tony Blair’s UK government, the CSS established itself as a delivery coordination center—managing the “science of delivery” across multiple ministries.
Why does this matter? Because one-off experiments don’t create lasting capability. You need institutional mechanisms that turn hypothesis testing into organizational DNA.
Does Your Plan Have a Hypothesis?
Most strategic documents hide their assumptions rather than stating them explicitly. You can fix this today with a simple diagnostic.
Take your current strategic plan and ask an AI to analyze it using these prompts (a scripted version of this diagnostic follows the list):
- What’s our stated long-term objective?
- What are our top five proposed actions?
- What cause-and-effect chain connects these actions to that objective? (State it explicitly: “We believe X leads to Y because of Z”)
- What’s the implicit hypothesis? (“The constraint preventing our success is…”)
- Are these actions surgical interventions testing our hypothesis—or just business-as-usual activities we hope will help?
- Can you state our central strategic hypothesis in one clear, testable sentence?
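If you want to run this diagnostic programmatically, the sketch below shows one possible way, assuming access to OpenAI’s Python SDK. The model name and the file path are placeholders to adapt to your own setup; any LLM provider with a chat interface would work the same way.

```python
# Hypothetical sketch: asking an LLM to surface the hidden hypothesis in a
# strategic plan. Assumes the `openai` package is installed and an API key
# is configured; model name and file path are placeholders.

from openai import OpenAI

DIAGNOSTIC_PROMPT = """Analyze the strategic plan below and answer:
1. What is the stated long-term objective?
2. What are the top five proposed actions?
3. What cause-and-effect chain connects these actions to the objective?
   State it explicitly: "We believe X leads to Y because of Z."
4. What is the implicit hypothesis? ("The constraint preventing success is...")
5. Are the actions surgical interventions testing that hypothesis,
   or business-as-usual activities?
6. State the central strategic hypothesis in one clear, testable sentence.

PLAN:
{plan}
"""

def diagnose(plan_text: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whatever model you have access to
        messages=[{"role": "user",
                   "content": DIAGNOSTIC_PROMPT.format(plan=plan_text)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("strategic_plan.txt") as f:  # placeholder path
        print(diagnose(f.read()))
```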
If the AI can’t find a clear hypothesis, neither can your team. You’re operating on hope, not strategy.
The Ultimate Test
After running this analysis, ask yourself one question McIntosh would recognize:
“If we executed this plan perfectly and still failed to achieve our goals, would we know exactly which assumption was wrong?”
If you can’t answer “yes”—if you can’t pinpoint the specific cause-effect belief that would be disproven—then you don’t have a strategic hypothesis. You have a prayer disguised as a plan.
But if you can articulate what would have to be true, what evidence would prove you wrong, and what you’d learn from failure—then you’re thinking scientifically.
The CSS didn’t help cut Jamaica’s violent crime by 40% through willpower or resources alone. They did it by treating strategy as science: form a hypothesis, design experiments, follow the evidence, adjust ruthlessly.
Your constraints are different. Your bottleneck isn’t literacy or trauma. But your challenge is identical: stop planning and start hypothesizing.
The evidence is waiting to tell you what’s actually true. The only question is whether you’re ready to listen.
This article is based on my recent Jamaica Gleaner column.