
GIVE ME PROBLEMS, NOT SOLUTIONS - Why the most dangerous thing in any room is the person with an answer

  • Writer: Charlie Reynolds
  • Mar 1
  • 9 min read

AUTHOR NOTE


I wrote this after two decades of watching well-funded programmes solve the wrong problem with impressive precision. A recurring pattern in my assurance work, across digital transformation, AI adoption, and public sector reform, is that the presenting challenge is almost never the real one. The organisations I work with don’t lack solutions. They lack rigorous, shared understanding of what they’re actually trying to solve. This piece is about why that happens, and what it costs.


— Charles Reynolds, Fractional CTO/CIO | AI Assurance & Programme Recovery


There is a particular kind of meeting that most professionals have learned to dread. Someone presents a challenge. Within sixty seconds, the room is alive with solutions. Whiteboards fill. Voices overlap. Energy surges. Everyone leaves feeling productive. And two weeks later, the original problem remains, sometimes worse, dressed now in the clothes of an expensive intervention that addressed the wrong thing entirely. The emperor’s new clothes, in other words.


This is not incompetence. It is biology.


THE EVOLUTIONARY CASE AGAINST DEEP THINKING


Human cognition was not designed for complex, ambiguous problem spaces. It was designed for speed. The framing effect, the tendency to make different decisions depending on how a problem is described rather than what the problem actually is, reflects a deeper truth: that our decision-making machinery responds to surface features, not underlying structures. We are pattern-matching animals, and patterns resolve fastest into action. Sitting with a problem, resisting the gravitational pull toward resolution, requires active effort against our own mental architecture.


The psychologists Amos Tversky and Daniel Kahneman spent decades documenting the consequences. Their work demonstrated that people will choose fundamentally different responses to the same objective situation when that situation is framed differently, not because their reasoning is poor, but because their reasoning is fast. The extensionality principle in decision theory holds that making a decision should not be affected by how the problem is described, yet varied descriptions of the same problem routinely give rise to different decisions. We think we are solving the problem in front of us. Often, we are solving the problem as it was handed to us, which is an entirely different thing.


THE SOLUTION FIXATION TRAP


In 2023, researchers Serena Sohrab and Mary Waller published research in the Harvard Business Review that gave this tendency a clinical name. The solution fixation trap emerges when a team rushes into discussing possible solutions before fully understanding the problem at hand. Their study, involving 169 MBA students across 28 teams working through a problem-solving simulation, revealed a striking and uncomfortable pattern. High-performing teams spent significant time in the information processing phase, thoroughly understanding the problem before jumping to solutions. In contrast, low-performing teams quickly moved to solution exploration, often neglecting crucial information analysis.


The implications extend well beyond business school simulations. Consider how strategy meetings are structured, how project briefs are written, how procurement processes are designed. Nearly every professional ritual encodes a bias toward solution generation. Requirements documents don’t ask “what is the real problem here?” They ask “what do you want the system to do?” Consultancy proposals are evaluated on the quality of the proposed intervention, not on the depth of diagnosis. Executives are celebrated for decisiveness, not for the restless, uncomfortable act of refusing to conclude too early.


We have built organisations in the image of our cognitive shortcuts, and we are paying for it at scale.


WICKED PROBLEMS DON’T YIELD TO CLEVER SOLUTIONS


In 1973, design theorists Horst Rittel and Melvin Webber introduced a concept that should have reshaped professional practice far more dramatically than it has. They called such problems wicked, and the adjective was intentional. The term “wicked” was meant to describe the mischievous and even evil quality of these problems, where proposed solutions often turn out to be worse than the symptoms.


Wicked problems, as Rittel and Webber defined them, are not simply “hard problems.” They have a specific and unsettling set of characteristics. There is no definitive formulation of a wicked problem. The solution depends on how the problem is framed and, conversely, the problem definition depends on the solution. Stakeholders have radically different world views and different frames for understanding the problem. The constraints change over time, and the problem is never solved definitively.


Most of the genuinely important problems any organisation faces (workforce transformation, digital reform, service design, systemic inequality, climate adaptation) are wicked in precisely this sense. They have no singular cause or immediate solution. They are complex, ongoing, and require a continual and curated effort to address. And yet the dominant professional response to wicked problems is to treat them as complicated ones: decompose, assign, resolve, close the ticket.


This is how digital transformation programmes spend eighteen months building the wrong thing with great technical precision. It is how AI adoption initiatives deliver tools that nobody uses because the human problem was never properly understood. It is how well-meaning public sector interventions create new harms while addressing old ones.


The problem was never the problem that was solved.


AI HAS MADE THIS DRAMATICALLY WORSE


Here is where the stakes become existential rather than merely inconvenient.


The emergence of large language models and agentic coding tools has introduced a new class of solution-generation technology that is extraordinary in its capability and almost entirely indifferent to problem quality. Vibe coding, the conversational, AI-driven approach to software development coined by Andrej Karpathy in 2025, moves the developer to a higher level of abstraction, focusing on the “what” rather than the “how.” That elevation sounds appealing. But the “what” it focuses on is whatever was in the prompt, not whatever was actually needed.


The downstream risks are well-documented. A December 2025 analysis of 470 open-source GitHub pull requests found that AI co-authored code contained approximately 1.7 times more major issues than human-written code, including security vulnerabilities at 2.74 times the human-written rate. But the vulnerability statistics miss the deeper point. Security flaws are at least in principle detectable. The problem of solving the wrong problem is far harder to find. A system can be technically impeccable and entirely misconceived.


Coding agents optimise for making code run, not making code correct in any broader sense. They don’t know why a requirement exists, or that removing a constraint creates downstream risk. They just know it matches the pattern that resolves the immediate error. Replace “security check” with “user requirement” and the point generalises dangerously. AI systems are extraordinarily good at answering questions. They have no inherent mechanism for questioning whether the question was worth asking.
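To make that failure mode concrete, here is a deliberately invented Python sketch (the function names, account structure, and numbers are illustrative, not drawn from any real agent transcript). The “patched” version makes the error disappear by deleting the very constraint that made the code correct:

```python
def transfer_original(account: dict, amount: int) -> int:
    # The actual requirement: never allow an overdraft.
    if account["balance"] < amount:
        raise ValueError("insufficient funds")
    account["balance"] -= amount
    return account["balance"]


def transfer_agent_patched(account: dict, amount: int) -> int:
    # A pattern-matching "repair" of the failing call: the guard is gone,
    # the error stops appearing, and the downstream risk is now invisible.
    account["balance"] -= amount
    return account["balance"]


acct = {"balance": 50}
transfer_agent_patched(acct, 80)   # no exception raised: it "works"
print(acct["balance"])             # -30: technically running, misconceived
```

The patched code runs. Every test that only checks “does the transfer complete?” passes. The requirement it silently discarded is exactly the kind of thing no one notices until it matters.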


The result is a world in which the speed of solution generation has accelerated beyond all recognition, while the capacity for problem exploration has, if anything, diminished. We are building faster than ever. We are understanding less than ever. The great irony of AI-assisted development is that the less code you write yourself, the more you need to understand about the code that is written for you. The same logic applies upstream: the faster you can generate interventions, the more rigorously you must interrogate whether those interventions address anything real.


THE PROFESSIONAL DISCIPLINE OF PROBLEM DWELLING


What does it actually look like to resist the pull of premature solution? It is harder than it sounds, and it requires deliberate structure.


Design thinking, in its more serious forms, has always insisted on this. The double diamond model, diverge in problem space before converging on solution space, is well understood in theory and routinely violated in practice. The divergent phase feels unproductive. It generates ambiguity rather than reducing it. In environments where progress is measured by outputs rather than understanding, staying in the problem space requires genuine courage.


Design thinking, taken seriously, is uniquely equipped to address wicked problems through its use of “problem setting”: the disciplined use of needfinding to understand what relevant stakeholders actually need, combined with double-loop learning that applies knowledge iteratively while testing assumptions. Note the sequencing: problem setting first, needfinding as a form of research, and only then the movement toward solutions, with loops that continuously return to test whether your understanding of the problem remains sound.


There are practical techniques. Pre-mortems, where a team imagines that an intervention has failed and works backward to understand why, force a return to the problem. Five Whys, applied seriously and not as theatre, reveals that the presenting problem is almost never the causal one. Job-to-be-done analysis suspends feature thinking entirely and asks what outcome a person or system is actually trying to achieve. Problem mapping, making your mental models visible to the outside world, surfaces the assumptions embedded in how a problem has been framed, which are often more important than the facts that have been gathered.
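As a minimal illustration of what Five Whys applied seriously looks like, here is a hypothetical Python sketch (the chain’s content is invented for the example) that records each answer, so the distance between the presenting problem and the causal one stays visible rather than being collapsed in conversation:

```python
from dataclasses import dataclass, field


@dataclass
class WhyChain:
    """Records a Five Whys chain from presenting problem to causal one."""
    presenting_problem: str
    whys: list[str] = field(default_factory=list)

    def ask_why(self, answer: str) -> None:
        self.whys.append(answer)

    def root_cause(self) -> str:
        # The last answer in a seriously conducted chain is the causal
        # problem; the presenting problem is merely where it surfaced.
        return self.whys[-1] if self.whys else self.presenting_problem


chain = WhyChain("Nobody uses the new platform")
chain.ask_why("The reports it produces are ignored")
chain.ask_why("Managers don't trust the underlying data")
chain.ask_why("Three systems hold conflicting customer records")
chain.ask_why("No one owns data reconciliation")
chain.ask_why("Data ownership was never assigned in the operating model")
print(chain.root_cause())
```

The point of writing it down, in any medium, is the same: the team can now see that “build a better platform” would intervene at the wrong end of the chain.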


None of these are complicated techniques. What makes them difficult is the organisational and psychological resistance they encounter. Every person in the room has been trained, rewarded, and promoted for producing solutions. Asking them to sit with problems, to resist the dopamine hit of convergence, to treat understanding as an end in itself: this requires not just technique but culture change.


THE CONSULTANTS’ CONFESSION


There is an uncomfortable dimension to this that professional advisors (consultants, fractional executives, architects, and assurance practitioners) need to confront directly.


The advisory model has historically been built around the provision of answers. Clients pay for recommendations. Proposals are evaluated on the quality of the proposed intervention. The business model, in other words, encodes exactly the bias we are arguing against. An advisor who arrives and says “I’d like to spend the first four weeks doing nothing but understanding your problem” is asking a client to pay for something that produces no visible output. This is a hard sell.


And yet the advisors who create lasting value, the ones whose interventions actually hold, are almost invariably the ones who refused to hurry past the problem. They are the ones who noticed that the presenting challenge (“we need a new technology platform”) was masking a different and deeper one (“nobody trusts the data that the platform would run on”). They are the ones who asked questions that made stakeholders uncomfortable, not because they were being difficult, but because the discomfort was diagnostic.


Gregor Hohpe’s observation about modern architects is instructive here: the best ones don’t try to be the smartest person in the room, they make everyone else smarter. The same discipline applies to problem exploration. The goal is not to produce a brilliant diagnosis. The goal is to build shared, rigorous understanding of the problem space that enables better collective decision-making. That is a fundamentally different posture than arriving with an answer.


WHAT NEEDS TO CHANGE


The shift required is not purely individual. It is structural.


Procurement frameworks need to reward problem framing as a deliverable. Project governance needs to require demonstrated problem understanding before solution funding is approved. AI adoption programmes, which are currently rushing toward implementation at pace, need assurance gates that ask not just “does this system work?” but “does this system address a real and well-understood problem?” In regulated environments particularly, the question of problem validity should precede the question of technical compliance.


At the individual level, the discipline is simpler to state and harder to maintain. Before any intervention, before any recommendation, before any line of code or any slide deck: write down the problem. Not the solution disguised as a problem statement (“we need to implement a new CRM”). The actual problem, what is happening that shouldn’t be, or not happening that should, and for whom, and why it matters. Then resist the urge to proceed until you have interrogated that statement rigorously.
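One way to make that discipline mechanical is to write the problem statement as structured fields and check it for solution language before proceeding. The sketch below is an illustrative assumption, not a validated method; the field names and the keyword list in particular are invented starting points, not a complete test:

```python
from dataclasses import dataclass

# Words that usually signal a solution disguised as a problem statement.
# Illustrative only; a real list would be tuned to the organisation.
SOLUTION_WORDS = ("implement", "build", "deploy", "buy", "migrate")


@dataclass
class ProblemStatement:
    what_is_happening: str    # what is happening that shouldn't be
    who_is_affected: str      # and for whom
    why_it_matters_now: str   # and why it matters

    def looks_like_a_disguised_solution(self) -> bool:
        text = self.what_is_happening.lower()
        return any(word in text for word in SOLUTION_WORDS)


bad = ProblemStatement(
    what_is_happening="We need to implement a new CRM",
    who_is_affected="Sales",
    why_it_matters_now="Contract renewal",
)
good = ProblemStatement(
    what_is_happening="Customer contact history is lost between teams",
    who_is_affected="Account managers and the customers they serve",
    why_it_matters_now="Renewal conversations restart from zero each year",
)
print(bad.looks_like_a_disguised_solution())   # True
print(good.looks_like_a_disguised_solution())  # False
```

The check is crude by design. Its value is not the keyword match; it is forcing the author to fill in the three fields at all.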


Rather than seeking the answer that totally eliminates a problem, recognise that actions occur in an ongoing process, and further actions will always be needed. Shift the goal from solution to intervention. This is not pessimism. It is the appropriate epistemic posture toward genuinely complex challenges.


THE REAL SKILL


There is a version of professional maturity that looks, from the outside, like indecision. It is the person who asks another question when everyone else is ready to move. The person who says “I’m not sure we’ve understood the problem well enough yet” in a room full of people who want to get on with it. The person who treats a confident solution brief as a reason to investigate rather than a reason to execute.


This is not a comfortable posture. It will not make you popular in every meeting. But in an era when solutions are cheap, fast, and increasingly automated, the scarcest and most valuable thing a professional can bring is not another answer. It is the rigour, the patience, and the intellectual courage to stay with the problem long enough to understand what is actually true.


Give me problems. Properly understood problems. Problems examined from multiple angles, tested against evidence, and held long enough to reveal their structure. The solutions, when they come, will be worth far more.


WHAT THIS MEANS IN PRACTICE


This isn’t an abstract argument. It’s a description of failure modes I encounter regularly in assurance and recovery work.


If your AI programme has a delivery team but no-one formally asking whether the right thing is being built, that’s a problem space problem. If your digital transformation has a roadmap but stakeholders who can’t articulate what success looks like in human terms, that’s a problem space problem. If your board is receiving confident progress updates on a programme that feels increasingly disconnected from operational reality, that’s a problem space problem.


In each case, the intervention that’s needed isn’t more delivery. It’s a structured pause, what I call a problem audit, to reestablish shared understanding of what is actually being solved, for whom, and why it matters now.


This is the work I do with a small number of organisations at a time. It starts not with a solution but with a set of uncomfortable questions. If that’s what you’re missing, I’d welcome a conversation.




Sources: Sohrab & Waller, Harvard Business Review, October 2023 | Rittel & Webber, “Dilemmas in a General Theory of Planning,” 1973 | CodeRabbit analysis of AI-generated code, December 2025 | Tversky & Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science, 1981 | Interaction Design Foundation, “Wicked Problems,” 2025 | Columbia University research on coding agent failure patterns, Towards Data Science, 2025

 
 
 
