I have noticed that the term “pragmatic” has become increasingly frequent in the medical literature. A review dedicated to this kind of study was recently published in the New England Journal of Medicine. Still, the literature remains clearly heterogeneous regarding the definition and purpose of this study design. For this reason, we recommend using identical words to describe identical contents, and attaching identical meanings to those words, to avoid confusion.
In a recent review conducted by our research team on the subject of “evidence about evidence”, we realized that publications discuss pragmatic studies but disagree on definitions and fail to point out important details that distinguish pragmatic studies from other designs.
This post discusses, in a didactic way, the meaning of the word “pragmatic” in scientific studies, classifying it into three different contexts: the pragmatic message of efficacy, the pragmatic clinical trial, and the pragmatic study of effectiveness. Throughout the text, you will notice that I emphasize preference in the choice of treatment as a component of effectiveness.
PRAGMATIC MESSAGE OF EFFICACY
This situation refers to a traditional clinical trial, in which the allocation to the type of medical conduct is randomized. In this case, we suggest using the term “pragmatic message” when the study is limited to guiding what the best conduct is, but its scientific meaning does not prove a mechanistic concept. We use this rationale to analyze the SPRINT trial, which brings the pragmatic message that we should pursue more intensive control of blood pressure. However, the standard deviation of the blood pressure values reached in this group was wide, meaning that a large portion of the patients randomized to the intensive treatment (target systolic BP < 120 mmHg) actually had higher BP levels. That said, the study could not prove the idea that BP levels considered normal today can actually cause vascular injury and predispose to adverse events. In this case, we call the message “pragmatic” because this is not a study that can prove the concept.
On the other hand, when we analyze the IMPROVE-IT study, we understand that it was meant to prove that cholesterol is a risk factor for coronary artery disease. Some still doubted whether the benefit of statin treatment came from the reduction of cholesterol itself or from a direct (pleiotropic) effect of this drug class. By demonstrating a reduction in events with another type of drug (ezetimibe), this study reaffirms the concept that cholesterol is a risk factor. By applying a causality criterion called “reversibility”, this study proves the concept that cholesterol increases cardiovascular risk, because reducing it caused the risk to be reversed. Even so, this study does not carry the pragmatic message that we should routinely use ezetimibe in association with statins, since the risk reduction was small, with a high number needed to treat (NNT).
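As a reminder of the arithmetic behind this judgement (the figures below are illustrative, not the trial’s exact numbers): the NNT is the inverse of the absolute risk reduction. If a treatment lowers the event rate from 35% to 33% over the follow-up period, the absolute risk reduction is 2 percentage points, and NNT = 1 / 0.02 = 50, meaning roughly 50 patients must be treated to prevent one event.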
There is a good hypothetical example about the role of exercise in weight loss. In clinical trials, the diet pattern is identical between the exercise and no-exercise groups, so the trial evaluates the direct effect of exercise independently of diet. This direct effect amounts to a mechanistic proof of concept; it is explanatory. However, if patients were randomized to exercise or no exercise while being allowed a free diet, perhaps those exercising would be motivated to eat better and would lose more weight. There would then be a pragmatic message that exercise leads to weight loss, but not a mechanistic message. The pragmatic message does not explain the reasons for the weight reduction, but it confirms the reduction itself (which is indeed important information).
In these cases, the effect is demonstrated in the controlled world of randomized clinical trials, where there is no preference in the choice of conduct (it is randomly defined). For this reason, the message refers to efficacy (ideal study conditions), not effectiveness (real-world conditions).
PRAGMATIC EFFICACY CLINICAL TRIAL
This one refers to pragmatism in the implementation of the conduct. Imagine we want to demonstrate the efficacy of physical therapy for a musculoskeletal condition. Patients are randomized to receive physical therapy or not. However, exactly how the therapy will be delivered (type of exercise, number of sessions) is up to the therapist. The question is whether or not we should indicate physical therapy, and in this case the therapist’s freedom brings the intervention close to the way it is done in the real world. Even so, this is still an efficacy study (not effectiveness), because neither the professional’s nor the patient’s choice is considered in the decision to have the therapy. That decision is randomized.
Other studies randomize patients to undergo or not undergo screening for a specific disease, and then leave it up to the doctor to decide what to do with the result. The study question concerns the real effect of the screening, which is why the doctor is free to choose a conduct; this is where the pragmatism lies. Still, this is not effectiveness either. It is efficacy, because the decision to undergo screening is made artificially (by randomization).
Notice that, in these studies, the conduct is randomized but its implementation is not necessarily defined, granting the professional a certain freedom to decide how it will be carried out.
A few other criteria bring some pragmatism to randomized clinical trials, such as a broad selection of patients, the non-use of placebo when analyzing the total effect of a treatment, and other aspects. Nevertheless, I believe the most important defining feature of pragmatism is the implementation of the treatment.
PRAGMATIC EFFECTIVENESS STUDY
As we know, effectiveness describes effects in the real world. Efficacy answers the question “can this treatment work?” This is tested in the ideal world, in the “laboratory” of clinical trials, and it shows that the conduct has an effect. Effectiveness refers to a question that should be asked after the valid and controlled demonstration of efficacy: “does this treatment work in the real world?”
Effectiveness has two components that make it different from efficacy. The first refers to a wider variability in types of patients and in the quality of medical conduct, such as lower adherence, less experienced surgeons, and other real-world characteristics. This first component is frequently remembered, but there is a second component, just as important, that is frequently forgotten in the literature: the doctor–patient preference in the choice of treatment.
In this context, preference means a mental choice: treatment individualization. It is a choice guided by concepts of efficacy, but one that still requires clinical judgement to decide whether a particular patient should indeed receive the treatment. What is the desired outcome? What are the risks of adverse events?
Class I recommendations should be followed almost universally, like prescribing antibiotics for a patient with pneumonia. This kind of treatment is not based on an individual doctor’s or patient’s preference, but rather on high plausibility and general acceptance; it works more like a rule. In this context, it is possible that efficacy (ideal world) is superior to effectiveness (real world).
However, there are class II recommendations, for which we should weigh the risk/benefit ratio. This is the situation with the use of anticoagulants in atrial fibrillation, for instance: we should consider the benefit of stroke (CVA) prevention versus the harm of the bleeding risk. It is also the situation with the indication of efficacious surgeries, in which the risk of complications in a certain group of patients might outweigh the benefit. In these cases, an effectiveness study brings additional information because it evaluates whether the doctor’s case-by-case choice improves the result of the treatment in the real world. Therefore, preference should be an important aspect of effectiveness studies.
In these cases that require such weighing, a good doctor deciding individually might produce a better outcome than the random choice of a treatment.
The patient’s choice might also influence effectiveness. For example, imagine that praying improves the quality of life of cancer patients. If praying is the patient’s own choice, this “treatment” will work better than in a randomized clinical trial in which praying was assigned by a draw instead of by preference.
Imagine we are evaluating the benefit of physical exercise in improving the functional capacity of cancer patients. If exercising is the desire of a patient who likes to do it, it might work better than when exercising is randomly assigned. A patient who prefers exercise might perform it better and be more committed than a patient who does not prefer it but was randomized to do so. Again, effectiveness (real world) tends to be superior to efficacy (ideal world).
It is clear, then, that a true effectiveness study cannot be randomized, because randomization sets aside the choice of treatment by a mental process. I say this because some authors, mistakenly (in my opinion), use the term “randomized effectiveness clinical trial”, which is a contradiction.
Once we have defined what effectiveness really is, let us describe how a pragmatic study of effectiveness is done. As I said, this should not be a randomized study, but an observational one. It should compare patients who receive the treatment with patients who do not.
There will obviously be confounders, because there was no randomization and the two groups differ, so adjustment is needed. However, in a true effectiveness study, the adjustment should focus on the patient’s risk of having the outcome, not on the tendency to receive the treatment.
This is different from what is normally done in cohort studies whose evaluation of treatments serves only as a hypothesis generator. After an experimental clinical trial has demonstrated efficacy, a pragmatic study should be performed to demonstrate effectiveness. And since effectiveness depends on preference, the adjustment should be made for the variables associated with the risk of the outcome, not through a propensity score for the treatment.
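To make the contrast concrete, here is a minimal sketch in Python (with hypothetical variable names and simulated data, not taken from any real study) showing the two different adjustment targets: a propensity model, which predicts who receives the treatment, and an outcome-risk model, which adjusts the estimated treatment effect for the patient’s baseline risk of the outcome.

# Minimal illustrative sketch: propensity adjustment vs. outcome-risk adjustment.
# All variable names are hypothetical and the data are simulated noise.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),          # hypothetical baseline risk factors
    "diabetes": rng.integers(0, 2, n),
    "prior_event": rng.integers(0, 2, n),
    "treated": rng.integers(0, 2, n),      # 1 = received treatment (chosen by preference, not randomized)
    "event": rng.integers(0, 2, n),        # 1 = clinical outcome occurred
})

# (a) Hypothesis-generating adjustment: model the propensity to RECEIVE the treatment.
propensity = smf.logit("treated ~ age + diabetes + prior_event", data=df).fit(disp=0)
df["propensity_score"] = propensity.predict(df)

# (b) Effectiveness-style adjustment: model the OUTCOME, adjusting the treatment
#     effect for the patient's baseline risk of that outcome.
risk_adjusted = smf.logit("event ~ treated + age + diabetes + prior_event", data=df).fit(disp=0)
print(risk_adjusted.params["treated"])     # adjusted treatment effect (log-odds scale)

The point is not the specific statistical tool, but what is being predicted: in (a) the treatment, in (b) the outcome.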
Finally, the chronological sequence of hypothesis testing should be:
1 – Observational study that generates the hypothesis of efficacy, adjusted for the propensity to receive the treatment; calculating a propensity score is needed here to reduce bias.
2 – Randomized clinical trial that proves efficacy. This is an experimental study.
3 – Pragmatic study that proves effectiveness. The baseline risk of every included patient has to be adjusted for the endpoint being assessed. When different endpoints are assessed (e.g., survival and allergic reactions), a particular patient may be considered high risk for the endpoint “survival” but low risk for the endpoint “allergic reaction”, as sketched below.
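A minimal sketch of that last point, again in Python with hypothetical variables and simulated data: fitting a separate baseline-risk model per endpoint makes explicit that the same patient can rank as high risk for one endpoint and low risk for another.

# Endpoint-specific baseline risk: one risk model per endpoint (hypothetical example).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "heart_failure": rng.integers(0, 2, n),    # hypothetical predictor of death
    "atopy": rng.integers(0, 2, n),            # hypothetical predictor of allergic reaction
    "death": rng.integers(0, 2, n),
    "allergic_reaction": rng.integers(0, 2, n),
})

# The adjustment variables and the predicted risks are specific to each endpoint.
risk_death = smf.logit("death ~ age + heart_failure", data=df).fit(disp=0)
risk_allergy = smf.logit("allergic_reaction ~ age + atopy", data=df).fit(disp=0)

df["risk_death"] = risk_death.predict(df)
df["risk_allergy"] = risk_allergy.predict(df)
# A given patient may fall in the top quartile of risk_death and the bottom
# quartile of risk_allergy, so the effectiveness analysis must adjust per endpoint.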
There are two types of observational studies that evaluate treatment: a first one that generates the hypothesis that the treatment is efficacious (suggests efficacy), and a second one that confirms it is effective (demonstrates effectiveness). We may use this or another terminology, but we should avoid mixing up the concepts.
In fact, any kind of study may have pragmatic value, but pragmatic is not a synonym for effectiveness. There are forms of pragmatism related to efficacy and others related to effectiveness, and we should know how to differentiate the use of the word “pragmatic” in each of these situations.
* This post was written by Franz Porzsolt and Luis Correia.