Jacqueline Corrigan-Curay, Leonard Sacks, and Janet Woodcock on JAMA Network
For hundreds of years, the development of new medical treatments relied on “real-world” experience. Discoveries such as the use of citrus fruit to cure scurvy, described in the 1700s, or of insulin as a treatment for diabetes in the 1920s, long preceded the advent of the modern randomized clinical trial. What these diseases had in common was a reliable method of diagnosis, a predictable clinical course, and a large and obvious treatment effect.
In the late 1940s, the medical community began to adopt randomized designs for drug trials.1 The recognition that anecdotal reports based on clinical practice observations were often misleading led to the nearly complete replacement of this “real-world evidence” (RWE) approach by evidence generated using the modern clinical trial model. Although it moved medical science toward greater scientific rigor, this transformation simultaneously diminished the use (and minimized the value) of evidence generated from practice-based observations. Randomization and blinding became the gold standard for determining treatment effects. With strict, protocol-specified definitions of eligible patients, the populations studied began to diverge from the patients encountered in clinical practice. Patients with wider ranges of disease severity and age, taking a broader range of concomitant medications, and with more numerous and varied comorbidities were not as well represented in clinical trials.