Efficacy Without Value: Why Approved Therapies Fail Real-World Adoption
- Mar 16
Executive Summary
Modern drug development successfully produces therapies that meet regulatory standards of efficacy and safety. Yet a substantial proportion of approved interventions fail to achieve meaningful adoption within healthcare systems. The failure is rarely one of pharmacological efficacy; far more often, the evidence generated for approval is simply not convincing for the decisions that adoption requires.
Clinical trials are designed to establish a causal effect under controlled conditions. Healthcare systems must decide whether to allocate resources under uncontrolled conditions. The evidence required for these two decisions is not the same.
The industry therefore operates under an implicit assumption that evidence generated for approval can later be translated into evidence for value. In practice, this translation rarely succeeds, because the original trials did not capture the variables necessary to support real-world decisions.
The consequence is a recurring pattern. Therapies demonstrate statistical efficacy but uncertain clinical utility, prompting payers to restrict coverage, clinicians to hesitate to adopt, and sponsors to initiate post-approval studies that attempt to reconstruct decision-relevant information after the fact.
This paper examines why the structure of randomized trials and real-world evidence leads to this disconnect and argues that adoption-relevant evidence must be engineered during development rather than pursued after approval.
The Evidence Hierarchy Problem
Clinical development treats evidence generation as a progression. Early studies establish safety, later studies demonstrate efficacy, and post-approval analyses demonstrate value. This structure implies that value is an extension of efficacy. In practice, value is a different type of question.
Regulators ask whether an intervention works under conditions designed to isolate the treatment effect. Health systems ask whether introducing the intervention changes outcomes relative to cost within routine care. The former is a causal question, while the latter is a decision question.
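One way to make this distinction concrete (the notation below is ours, drawn from standard causal-inference and health-technology-assessment conventions rather than any specific guideline) is to contrast the average treatment effect a trial estimates with the net monetary benefit a payer evaluates:

$$
\text{ATE} = \mathbb{E}\big[\,Y(1) - Y(0) \;\big|\; \text{trial population, protocolized care}\,\big]
$$

$$
\text{NMB} = \lambda \cdot \Delta E_{\text{routine}} - \Delta C_{\text{routine}}, \qquad \text{adopt if NMB} > 0
$$

Here $\lambda$ is the system's willingness to pay per unit of health gain, and $\Delta E_{\text{routine}}$ and $\Delta C_{\text{routine}}$ are the incremental effectiveness and cost in routine care. A pivotal trial estimates the first quantity; it typically measures neither input to the second.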
A trial may therefore conclusively show that a drug produces a measurable biological improvement while still failing to justify its use in practice. The apparent contradiction arises because the study answered the regulatory question but never addressed the practical one.
The Limits of Randomized Evidence
Randomized controlled trials achieve reliability by eliminating variability. They restrict eligibility, intensify monitoring, enforce adherence, and standardize clinical behavior. These measures are essential to determine whether the drug can work.
However, they also remove the environment in which medical decisions occur. The patient populations are narrower than those encountered in practice, adherence exceeds typical behavior, and clinical management follows protocol rather than judgment. The treatment effect observed is therefore conditional on an artificial stability.
This does not invalidate the trial; rather, it defines its scope. The trial establishes biological efficacy within a controlled environment. What it does not establish is how that effect behaves once reintroduced into biological, behavioral, and organizational variability.
The more precisely a trial isolates causality, the less directly it predicts routine use.
The Limits of Real-World Evidence
Real-world evidence attempts to recover practical relevance by observing treatment in routine care. It captures heterogeneous patients, variable adherence, and diverse clinical settings.
What it gains in relevance, however, it loses in causal control. Observed differences in outcome may reflect patient selection, physician preference, disease severity, or treatment behavior rather than pharmacology. As a result, real-world studies frequently produce ambiguous findings. Reduced effectiveness may reflect diminished adherence rather than drug performance, and increased cost may reflect workflow complexity rather than price.
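The standard potential-outcomes decomposition (illustrative notation, not drawn from any specific study) shows why: a naive real-world comparison bundles the causal effect together with a selection term.

$$
\underbrace{\mathbb{E}[Y \mid T{=}1] - \mathbb{E}[Y \mid T{=}0]}_{\text{observed difference}} = \underbrace{\mathbb{E}[Y(1) - Y(0) \mid T{=}1]}_{\text{effect on the treated}} + \underbrace{\mathbb{E}[Y(0) \mid T{=}1] - \mathbb{E}[Y(0) \mid T{=}0]}_{\text{selection bias}}
$$

Randomization forces the selection term to zero; routine care does not, which is why a real-world effectiveness gap cannot be read directly as a pharmacological one.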
Thus, randomized trials and real-world studies answer complementary but incomplete questions. One establishes what the therapy can do under control, while the other suggests what happens under practice but struggles to attribute cause.
Health-economic decision making requires both certainty and relevance simultaneously, yet the current evidence sequence provides them separately.
Health Economics as a Decision Framework
Health technology assessment bodies do not evaluate therapies in isolation. They evaluate replacement decisions. A new therapy must justify not only its benefit but its displacement of an alternative within constrained resources.
This requires understanding comparative effectiveness across realistic patients, durability of benefit under typical adherence, and the operational burden imposed on the healthcare system. None of these properties can be inferred reliably if the original trials excluded representative patients, used non-clinical endpoints, or assumed behavioral compliance that does not persist outside research settings.
Consequently, sponsors frequently attempt to construct economic models after approval using assumptions not measured in trials. The models become sensitive to parameters that were never empirically established. Debate shifts from clinical results to modeling assumptions, and confidence erodes among payers and clinicians alike.
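A stylized sketch makes that sensitivity concrete. In the example below, every number and parameter name is hypothetical; adherence stands in for any input the pivotal trial never measured, and the incremental cost-effectiveness ratio (ICER) is the summary statistic payers typically scrutinize.

```python
# Stylized post-approval cost-effectiveness model (all figures hypothetical).
# "adherence" stands in for any parameter the pivotal trial never measured.

def icer(delta_cost: float, trial_effect_qalys: float, adherence: float) -> float:
    """Incremental cost-effectiveness ratio (cost per QALY gained),
    with the trial's effect scaled by an assumed real-world adherence."""
    effective_gain = trial_effect_qalys * adherence
    return delta_cost / effective_gain

DELTA_COST = 20_000.0   # incremental cost vs. comparator, per patient
TRIAL_EFFECT = 0.40     # QALY gain observed under protocolized adherence

for adherence in (1.0, 0.8, 0.6, 0.4):
    value = icer(DELTA_COST, TRIAL_EFFECT, adherence)
    print(f"assumed adherence {adherence:.0%}: ICER = ${value:,.0f} per QALY")
```

Under these invented inputs, the same trial result drifts from $50,000 to $125,000 per QALY as the adherence assumption moves from 100% to 40%, crossing typical willingness-to-pay thresholds along the way. When that assumption was never measured, the ensuing debate is about the model, not the medicine.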
The Reconstruction Problem
Once a development program concludes without collecting adoption-relevant information, recovering it becomes difficult. New studies must be conducted, but these no longer benefit from randomization in the same manner as pivotal trials, nor do they possess the credibility of pre-planned evidence.
The development program effectively repeats itself under less favorable conditions. Market access is delayed, guidance becomes restrictive, and the perceived value of the therapy declines. None of these outcomes arise from lack of efficacy. They arise because the original evidence architecture separated approval from decision making.
The system attempts to reconstruct missing knowledge instead of designing it prospectively.
Integrated Evidence Design
A different approach begins with recognizing that regulatory approval and healthcare adoption are parallel decisions supported by a shared evidentiary foundation. The purpose of development is not merely to demonstrate efficacy but to produce interpretable consequences for real treatment choices.
Trials must therefore be structured so that their outcomes remain meaningful when the treatment leaves the research environment. This does not require abandoning rigor or randomization. It requires ensuring that measured outcomes, comparators, and patient populations permit inference about routine use rather than only experimental performance.
When development captures adherence behavior, clinically interpretable outcomes, and comparative context early, health-economic evaluation becomes a continuation of evidence rather than a reconstruction of it.
Conclusion
Approval demonstrates that a therapy functions under controlled conditions. Adoption requires confidence that it improves outcomes within the constraints of real healthcare systems.
The gap between these two is not primarily scientific but architectural. Evidence designed exclusively for regulatory thresholds often lacks the information required for practical decisions. Attempts to add that information after approval introduce uncertainty rather than resolve it.
Clinical development should therefore be understood as the design of decision-grade evidence. When trials are structured so that their results remain interpretable across regulatory, clinical, and economic contexts, approval and adoption converge rather than diverge.
About Maxeome
Maxeome supports sponsors in designing clinical programs whose results inform not only whether a therapy works, but whether it should be used.