
“I Heard This on TV….”

Tuesday, February 23, 2016 // Uncategorized

We are constantly bombarded with medical news; it is part of the fodder of the news media. The problem is that this information is not all of the same caliber. It doesn’t all carry the same weight. Case in point: a recent study showed an association between proton pump inhibitors like Nexium and chronic kidney disease. This is a summary from Journal Watch.

  • Proton-Pump Inhibitors Are Associated with Chronic Kidney Disease. Thomas L. Schwenk, MD, reviewing Lazarus B et al. JAMA Intern Med 2016 Jan 11, and Schoenfeld AJ and Grady D. JAMA Intern Med 2016 Jan 11. A further reason to use PPIs only when their clinical benefits are clear.

    Polypharmacy is one possible cause of the increasing prevalence of chronic kidney disease (CKD) in the U.S. population. Proton-pump inhibitor (PPI) use is associated with acute renal injury, but PPIs also have other biological effects, including hypomagnesemia, that can lead to excess risk for CKD. In a population-based, prospective cohort study, researchers followed 10,482 adults (mean age, 63; 80% white) with normal renal function (estimated glomerular filtration rate, >60 mL/minute/1.73 m2); at baseline, 322 participants used PPIs and 956 used histamine-2 (H2)–receptor antagonists. During the study (median follow-up, 14 years), PPI use increased markedly, to ≈27% of participants.

    At study end, the unadjusted incidence of CKD was significantly higher among baseline PPI users than among baseline nonusers (14.2 vs. 10.7 cases per 1000 person-years); after statistical adjustment, the difference remained significant (hazard ratio, 1.5). CKD risk for baseline H2-antagonist users remained at baseline levels. A replication study in 249,000 participants followed for a median of 6 years yielded similar results.

    Comment: These results add to increasing concerns about PPI use, including excess risks for Clostridium difficile infections, pneumonia, and fractures. Editorialists recommend monitoring renal function and magnesium levels in patients who are taking PPIs and who are at high risk for CKD; such patients should switch to H2-antagonists when possible and should not use PPIs for vague complaints of heartburn or dyspepsia.
See more at: http://www.jwatch.org/na40149/2016/01/12/proton-pump-inhibitors-are-associated-with-chronic-kidney
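The quoted rates are cases per 1000 person-years, and the unadjusted comparison is essentially their ratio. A minimal sketch, using hypothetical case counts and person-year totals chosen only to reproduce the quoted rates (the study's adjusted hazard ratio of 1.5 also accounts for confounders, so it differs from this crude ratio):

```python
# Sketch of how incidence rates per 1000 person-years combine into a
# crude (unadjusted) rate ratio. Case counts and person-years here are
# hypothetical, chosen only to reproduce the rates quoted above.
def incidence_per_1000_py(cases, person_years):
    return cases / person_years * 1000

ppi_rate = incidence_per_1000_py(142, 10_000)         # 14.2 per 1000 PY
nonuser_rate = incidence_per_1000_py(1_070, 100_000)  # 10.7 per 1000 PY

crude_ratio = ppi_rate / nonuser_rate  # about 1.33, before adjustment
print(f"{ppi_rate:.1f} vs {nonuser_rate:.1f} per 1000 PY; ratio {crude_ratio:.2f}")
```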

     

    The problem is that observational studies have inherent weaknesses. The following is from Wikipedia.

    Observational study


    In fields such as epidemiology, social sciences, psychology and statistics, an observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. One common observational study is about the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator.[1][2] This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group.

     

    Weaknesses

    The independent variable may be beyond the control of the investigator for a variety of reasons:

    • A randomized experiment would violate ethical standards. Suppose one wanted to investigate the abortion–breast cancer hypothesis, which postulates a causal link between induced abortion and the incidence of breast cancer. In a hypothetical controlled experiment, one would start with a large subject pool of pregnant women and divide them randomly into a treatment group (receiving induced abortions) and a control group (not receiving abortions), and then conduct regular cancer screenings for women from both groups. Needless to say, such an experiment would run counter to common ethical principles. (It would also suffer from various confounds and sources of bias, e.g., it would be impossible to conduct it as a blind experiment.) The published studies investigating the abortion–breast cancer hypothesis generally start with a group of women who already have received abortions. Membership in this “treated” group is not controlled by the investigator: the group is formed after the “treatment” has been assigned.
    • The investigator may simply lack the requisite influence. Suppose a scientist wants to study the public health effects of a community-wide ban on smoking in public indoor areas. In a controlled experiment, the investigator would randomly pick a set of communities to be in the treatment group. However, it is typically up to each community and/or its legislature to enact a smoking ban. The investigator can be expected to lack the political power to cause precisely those communities in the randomly selected treatment group to pass a smoking ban. In an observational study, the investigator would typically start with a treatment group consisting of those communities where a smoking ban is already in effect.
    • A randomized experiment may be impractical. Suppose a researcher wants to study the suspected link between a certain medication and a very rare group of symptoms arising as a side effect. Setting aside any ethical considerations, a randomized experiment would be impractical because of the rarity of the effect. There may not be a subject pool large enough for the symptoms to be observed in at least one treated subject. An observational study would typically start with a group of symptomatic subjects and work backwards to find those who were given the medication and later developed the symptoms. Thus a subset of the treated group was determined based on the presence of symptoms, instead of by random assignment.

    Types of observational studies

    • Case-control study: a study, originally developed in epidemiology, in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute.
    • Cross-sectional study: involves data collection from a population, or a representative subset, at one specific point in time.
    • Longitudinal study: correlational research study that involves repeated observations of the same variables over long periods of time.
    • Cohort study or Panel study: a particular form of longitudinal study where a group of patients is closely monitored over a span of time.
    • Ecological study: an observational study in which at least one variable is measured at the group level.

    Degree of usefulness and reliability

    Although observational studies cannot be used as reliable sources to make statements of fact about the “safety, efficacy, or effectiveness” of a practice,[3] they can still be of use for some other things:

    “[T]hey can: 1) provide information on “real world” use and practice; 2) detect signals about the benefits and risks of…[the] use [of practices] in the general population; 3) help formulate hypotheses to be tested in subsequent experiments; 4) provide part of the community-level data needed to design more informative pragmatic clinical trials; and 5) inform clinical practice.”[3]

    Bias and compensating methods

    In all of those cases, if a randomized experiment cannot be carried out, the alternative line of investigation suffers from the problem that the decision of which subjects receive the treatment is not entirely random and thus is a potential source of bias. A major challenge in conducting observational studies is to draw inferences that are acceptably free from influences by overt biases, as well as to assess the influence of potential hidden biases.

    An observer of an uncontrolled experiment (or process) records potential factors and the data output: the goal is to determine the effects of the factors. Sometimes the recorded factors may not be directly causing the differences in the output. There may be more important factors which were not recorded but are, in fact, causal. Also, recorded or unrecorded factors may be correlated which may yield incorrect conclusions. Finally, as the number of recorded factors increases, the likelihood increases that at least one of the recorded factors will be highly correlated with the data output simply by chance.
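The last point, that recording more factors makes a chance correlation with the output increasingly likely, can be illustrated with a small simulation (a sketch using pure-noise factors and a plain-Python Pearson correlation; the sample sizes are arbitrary):

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient (plain Python, no libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 50
output = [random.gauss(0, 1) for _ in range(n)]
# 100 recorded "factors", all pure noise with no relation to the output.
factors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(100)]

# The strongest observed correlation can only grow as more factors are
# recorded, even though none of them is causal.
for k in (1, 10, 100):
    best = max(abs(corr(f, output)) for f in factors[:k])
    print(f"{k:3d} factors recorded -> strongest |correlation| = {best:.2f}")
```

Because each larger set of recorded factors contains the smaller one, the strongest coincidental correlation never decreases as more factors are added.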

    In lieu of experimental control, multivariate statistical techniques allow the approximation of experimental control with statistical control, which accounts for the influences of observed factors that might influence a cause-and-effect relationship. In healthcare and the social sciences, investigators may use matching to compare units that nonrandomly received the treatment and control. One common approach is to use propensity score matching in order to reduce confounding.[4]
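Propensity score matching, mentioned above, can be sketched in miniature. This is a toy illustration rather than a standard implementation: the scores below are hypothetical, and in practice they are estimated, typically by a logistic regression of treatment status on the observed covariates.

```python
# Toy 1:1 nearest-neighbor propensity-score matching (hypothetical
# scores; real scores come from a model such as logistic regression).
def match(treated, controls, caliper=0.1):
    """Greedily pair each treated score with the nearest unused control
    score, discarding pairs farther apart than the caliper."""
    pairs = []
    available = list(controls)
    for t in treated:
        if not available:
            break
        nearest = min(available, key=lambda c: abs(c - t))
        if abs(nearest - t) <= caliper:
            pairs.append((t, nearest))
            available.remove(nearest)
    return pairs

treated_scores = [0.80, 0.55, 0.30]
control_scores = [0.78, 0.52, 0.51, 0.10]
print(match(treated_scores, control_scores))
# The third treated unit has no control within the caliper and is left
# unmatched, shrinking the sample but improving comparability.
```

Comparing outcomes within the matched pairs then approximates a randomized comparison, to the extent that the score captures the relevant confounders.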

    A report from the Cochrane Collaboration in 2014 concluded that observational studies produce results very similar to those of similarly conducted randomized controlled trials. In other words, it found little evidence for significant differences in effect estimates between observational studies and randomized controlled trials, regardless of specific observational study design, heterogeneity, or inclusion of studies of pharmacological interventions. It therefore recommended that factors other than study design per se be considered when exploring reasons for a lack of agreement between results of randomized controlled trials and observational studies.[5]

    In 2007, several prominent medical researchers issued the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, in which they called for observational studies to conform to 22 criteria that would make their conclusions easier to understand and generalise.[6]

    “Correlation does not imply causation” is a phrase used in statistics to emphasize that a correlation between two variables does not imply that one causes the other.[1][2] Many statistical tests calculate correlation between variables. A few go further, using correlation as a basis for testing a hypothesis of a true causal relationship; examples are the Granger causality test and convergent cross mapping.

    The counter-assumption, that “correlation proves causation”, is considered a questionable cause logical fallacy in that two events occurring together are taken to have a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for “with this, therefore because of this”, and “false cause”. A similar fallacy, that an event that follows another was necessarily a consequence of the first event, is sometimes described as post hoc ergo propter hoc (Latin for “after this, therefore because of this”).

    For example, in a widely studied case, numerous epidemiological studies showed that women taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than a direct cause and effect, as had been supposed.[3]
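The HRT example is classic confounding, and it is easy to reproduce in a toy simulation (all probabilities below are invented for illustration): socioeconomic status raises the chance of HRT use and independently lowers the chance of CHD, so the two correlate even though HRT has no effect on CHD in the model.

```python
import random

random.seed(0)

# Toy simulation of confounding (all probabilities invented): higher
# socioeconomic status (SES) makes HRT use more likely and CHD less
# likely. HRT has NO causal effect on CHD in this model, yet HRT users
# show a lower CHD rate, mirroring the epidemiological studies.
n = 100_000
p_hrt_given_ses = {1: 0.6, 0: 0.2}    # assumed: high SES -> more HRT use
p_chd_given_ses = {1: 0.05, 0: 0.15}  # assumed: high SES -> less CHD

counts = {True: [0, 0], False: [0, 0]}  # hrt -> [CHD cases, total]
for _ in range(n):
    ses = int(random.random() < 0.5)
    hrt = random.random() < p_hrt_given_ses[ses]
    chd = random.random() < p_chd_given_ses[ses]  # depends on SES only
    counts[hrt][0] += chd
    counts[hrt][1] += 1

rate = {hrt: cases / total for hrt, (cases, total) in counts.items()}
print(f"CHD rate, HRT users: {rate[True]:.3f}; nonusers: {rate[False]:.3f}")
```

Stratifying or adjusting by SES, as the re-analysis mentioned above effectively did, removes the apparent protective effect.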

    As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy does in fact reduce the likelihood of coronary heart disease, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed.

    BOTTOM LINE: I don’t know whether proton pump inhibitors cause kidney disease, just as I don’t know whether calcium supplements cause heart disease or statins cause ALS. All of these have been implicated in previous observational studies. I do know that all medication should be used cautiously and not indiscriminately. It is worthwhile to periodically review the drugs that you are taking and discuss with your physician whether they should be continued.

     

 
