Years ago I remember attending a yoga session where one student posed a question to the instructor...

“I read a study…” began the student,

“It said that, contrary to what you told me last week, if I do more Kapalbhati Pranayama (a yogic breathing exercise), my kidney function doesn’t actually improve…” Without hesitation the yoga instructor shot back,

“That’s the thing with research, you show me one good study and I’m sure I can find you two more studies that suggest otherwise.”

Years later, after delving into clinical practice and research myself, I reflected on what that conversation represented. The yoga instructor was right in one way: if you search the academic literature, most topics will yield studies supporting different, often contradictory answers. Which has led me to this blog post - how do we truly know whether something is fact or just something someone once said?

On a side note, my curiosity led me to learn more about poses for kidney health, and yes, there is a belief out there that kidney failure can be ‘cured’ with two yoga poses alone. Sorry if I seem overly critical, but you’ll soon understand why.

Every day I see numerous clients who bring with them their own individual belief systems, shaped by their past experiences, cultural background, education, and what they have read and been told. As a health practitioner I find it hugely important to respect all of these facets of their individuality, while directing them with my best interpretation of evidence-based health care.

What is evidence-based medicine? This term is thrown around widely by health practitioners, but I’d argue that the majority do not actually practice it to the best of their ability. I mean no disrespect to any specific profession, as this includes my own, but if we don’t read the literature often and critique its quality, then our ability to advise clients and implement best-practice solutions is completely stunted – it creates ignorance. Unfortunately, vast numbers of health professionals are practicing what I call “ignorance-based medicine”.

The question is, how do you tell if what you are being told is fact or just utter nonsense? Let’s talk about how to tell them apart.

1) Cochrane reviews – Cochrane reviews do the hard work for you. Experienced researchers systematically review all the literature on one particular subject, critiquing the methods and overall quality of the analyses employed in those studies. Cochrane reviews are a great place to start if you want to know what the evidence really tells us about health-related research. Visit the Cochrane Library website.

2) Don’t search on Google. Depending on how you phrase your search, Google will tell you whatever you want it to. It’s fantastic like that. Try typing in ‘yoga pose for kidney function’ – case in point. Google Scholar is Google’s alternative search engine for scholarly articles, and a far better place to start. Try adding “systematic review” or “meta-analysis” after your search term; these types of articles evaluate numerous studies and present the pooled data, often with more precision. Type the same search into Google Scholar and it will find articles supporting yoga as a form of exercise, but none showing it cures kidney failure.

3) Just because something is common and many people do it doesn’t mean it’s evidence based or good for you. Did you know that sound research shows that knee arthroscopy (surgical ‘cleaning out’ of the knee) and meniscal debridement (removing parts of the meniscus between the tibia and femur), although very common procedures, are no more effective than a placebo (a small incision of the skin with no actual operation)? In fact, the non-placebo procedures carry a higher risk of future knee issues[1]. Have you heard of spinal fusion? This is a highly common procedure for lumbar spine degeneration and other conditions, in which two or more adjacent vertebrae are fused together. There is evidence of higher rates of complications than with conservative treatment[2], but no evidence that outcomes are any better than with exercise and cognitive behavioural therapy interventions[3]. So WHY do we still do it? Because patients do improve following surgery – it just happens to be no quicker than the natural healing time. Also, how often are surgical procedures actually studied with controlled experiments? Rarely! There is an argument that it is unethical to provide one group of people with a surgery we think will work while withholding it from another group that is also in pain. So instead, much surgery goes without research and the ‘evidence’ is anecdotal – observational and not measured against a control.

4) Correlation versus causality – Have a look at Figure 1 below. From 1999 to 2009 there was a high correlation between the number of drivers killed in collisions with trains and the amount of crude oil imported from Norway into the US. If we conclude there is causality, we are saying that stopping crude imports would decrease train-related deaths, or alternatively, that preventing drivers from being killed by trains would affect how much crude oil is imported from Norway. Yes, it is ridiculous. So why do we do it in research? If 100 people with sore shoulders all have shoulder surgery and all show improvement 10 weeks later, you have correlation. You may not, however, have causality. Is it the natural healing time and rest that caused the improvement? Maybe the NSAID medication taken post-surgery? It could be any number of things. Good research needs to be designed to derive true cause from an intervention – watch for this when reading research, and see the short simulation below the figure for how easily chance alone produces a convincing correlation.

Figure 1 - Correlation of US crude oil imports and drivers killed in train collisions[4]

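To make this concrete, here is a minimal sketch in Python using randomly generated stand-in series (not the real oil or rail data) showing how often two completely unrelated trending series correlate strongly over an 11-year window purely by chance:

```python
import numpy as np

# Two completely unrelated random walks, hypothetical stand-ins for
# "crude oil imports" and "train-collision deaths" over 1999-2009.
rng = np.random.default_rng(0)
years = 11
series_a = np.cumsum(rng.normal(size=years))  # stand-in: oil imports
series_b = np.cumsum(rng.normal(size=years))  # stand-in: rail deaths

# Pearson correlation between the two series.
r = np.corrcoef(series_a, series_b)[0, 1]
print(f"correlation r = {r:.2f}")

# Repeat many times: short trending series frequently show |r| > 0.7
# purely by chance, with no causal link whatsoever.
high = 0
for _ in range(10_000):
    a = np.cumsum(rng.normal(size=years))
    b = np.cumsum(rng.normal(size=years))
    if abs(np.corrcoef(a, b)[0, 1]) > 0.7:
        high += 1
print(f"|r| > 0.7 in {high / 100:.1f}% of unrelated pairs")
```

Correlation between two short, trending series is cheap; causality has to be earned by study design.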

5) Be critical. If you are reading an individual study (not a review of numerous studies), be critical. Bad science is far more common than you think, and it’s what slows us down, creates fads, and at times causes more problems than it’s worth. The following section will help you determine the good, the bad and the ugly.

How to be critical of individual papers

Every paper will describe its methodology – how the research took place – before presenting its findings and the AUTHORS’ conclusions. Reading beyond the abstract and into these sections will help you develop your own opinion of the findings and whether they support a hypothesis. A complete book could be written on research methods, but this guide should give you the gist of what makes a study strong or weak.

1. Is it randomised? Is it double blinded? For a paper to be effective, it needs to be unbiased. Getting rid of bias in research can be tricky, as each individual scientist arrives with their own predefined belief system. To counter this, numerous techniques can be used – for example randomisation and blinding. Participants need to be randomly assigned to groups (control group, placebo group, intervention group); assignment cannot be determined by which group a researcher thinks would be a good fit for the individual, as that would obviously create bias. The blinding of participants adds further strength: when blinded, participants are unaware whether they are receiving the treatment or a placebo. Double-blinding goes a step further, keeping the researcher unaware of which groups participants are assigned to, in an attempt to avoid the researcher’s own bias. A minimal sketch of what random allocation looks like follows.
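As a hedged illustration (the participant IDs and group names here are hypothetical, not any particular trial protocol), random allocation can be as simple as shuffling the participant list and dealing it out – the point being that no human judgement touches the assignment:

```python
import random

def randomise(participants, groups=("control", "placebo", "intervention"), seed=None):
    """Randomly allocate participants to groups in roughly equal numbers.

    Allocation is decided by the shuffle alone, never by a researcher's
    judgement of who would 'fit' a particular group.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Deal participants out round-robin so group sizes stay balanced.
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

ids = [f"P{n:02d}" for n in range(1, 13)]  # 12 hypothetical participants
print(randomise(ids, seed=42))
```

In a blinded trial, the allocation table would be held by someone other than the treating researcher, so neither participant nor researcher knows who received what until the analysis is done.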

2. Is there a control group? Is there a placebo group? To understand whether an intervention has an effect, we need to test it against a control group. A placebo group is also very important, as it accounts for the placebo effect (an effect on an outcome independent of the intervention) that may occur in the actual treatment group. It’s interesting to note that a more expensive pill is more effective than a cheaper pill, independent of any change in active ingredients[5], and that a blue pill is more effective for anxiety whereas a red pill is a more effective stimulant – again independent of their active ingredients[6]. It’s remarkable that there is such a strong psychological component to physical treatments. Distinguishing the true therapeutic effect from the placebo effect is highly important in research.

3. Has the research excluded data? Data exclusion is unfortunately common. When the results of a study don’t show the desired outcome, the data can be manipulated. For example, in an experiment of 40 people, 20 ingest a placebo and 20 ingest an active treatment. Ten of the placebo group have their symptoms resolve, and 10 of the treatment group do the same. The researchers can then slice it further if they need to – “what if we look at just men?” They might then find the active treatment more effective than the placebo: 6 improved with placebo versus 8 with the active ingredient. They can take it further. “What if we exclude everyone over 40 years of age?” Even better: 2 improved with placebo and 7 with the active ingredient. They then report, with a bold statement, that the treatment is significantly effective. Watch for this in the results section – research can be made to tell us whatever the researcher wants. A famous demonstration of this ‘proved’ that listening to the Beatles’ ‘When I’m Sixty-Four’ makes you a year and a half younger by the end of the song[7]. The simulation below shows how quickly this kind of slicing manufactures false positives.
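Here is a rough sketch in Python (a hypothetical trial with no true effect; the subgroup choices are mine, purely for illustration) of how hunting through post-hoc subgroups inflates the false-positive rate well past the nominal 5%:

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

def fake_trial(n_per_group=20, p_improve=0.5):
    """One trial where placebo and 'treatment' are identical (no true effect).
    Each participant has a sex, an age band, and a 50% chance of improving."""
    data = []
    for group in ("placebo", "treatment"):
        for _ in range(n_per_group):
            data.append({
                "group": group,
                "male": rng.random() < 0.5,
                "under_40": rng.random() < 0.5,
                "improved": rng.random() < p_improve,
            })
    return data

def p_value(rows):
    """Fisher's exact test on improved vs not, placebo vs treatment."""
    table = [[sum(r["group"] == g and r["improved"] == i for r in rows)
              for i in (True, False)] for g in ("placebo", "treatment")]
    return fisher_exact(table)[1]

# Try the full sample plus several post-hoc subgroups; keep the best p.
hits = 0
trials = 2000
for _ in range(trials):
    rows = fake_trial()
    subgroups = [
        rows,
        [r for r in rows if r["male"]],
        [r for r in rows if not r["male"]],
        [r for r in rows if r["under_40"]],
        [r for r in rows if not r["under_40"]],
    ]
    if min(p_value(s) for s in subgroups if len(s) > 1) < 0.05:
        hits += 1

# Well above the nominal 5%: slicing the data until something clears
# p < 0.05 manufactures false positives out of pure noise.
print(f"'significant' in {100 * hits / trials:.0f}% of null trials")
```

This is exactly the flexibility the Beatles study[7] exploited: enough undisclosed choices, and noise reliably looks like a finding.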

4. Size really does matter. The sample size of a study is important. If a study includes 12 people, 6 in each group, then its findings are not as strong as those of a 1200-person cohort with groups of 600, which far better reflects the population being studied. The statistical significance of a finding is also important and is reported as a ‘p value’. A p value is the probability of obtaining a result at least as extreme as the one observed if the intervention truly had no effect (the null hypothesis); by convention we look for a p value below 0.05, meaning such a result would arise by chance less than 5% of the time. It will be written as (p < 0.05). Only then can you truly use the word ‘significant’ when stating a finding; if the p value is above 0.05, the finding is not statistically significant. The sketch below shows how the same apparent effect can be non-significant in a small study yet highly significant in a large one.
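As a small illustration (the 50% versus 70% improvement rates are made up, chosen only to echo the 12- versus 1200-person comparison above), the identical observed effect can fail or clear the 0.05 threshold purely on sample size:

```python
from scipy.stats import fisher_exact

# Same observed effect (50% vs 70% improved), two different sample sizes.
for n in (6, 600):  # participants per group
    improved_control = round(0.5 * n)
    improved_treat = round(0.7 * n)
    table = [[improved_control, n - improved_control],
             [improved_treat, n - improved_treat]]
    p = fisher_exact(table)[1]
    print(f"n = {n:4d} per group: p = {p:.4f}")

# At n = 6 per group the difference is nowhere near significant;
# at n = 600 per group the very same proportions are overwhelmingly so.
```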

5. Do the outcomes tell us something of importance? You may have heard that tea is high in antioxidants. Many studies have measured antioxidant levels in various foods, which has led to heavy use of ‘antioxidants’ in marketing to sell products. Why are antioxidants supposedly so good? They get rid of free radicals. Radicals! They can’t be good for us then! It’s not quite that simple: our bodies have these free radicals for a reason. They clean up bacteria and debris after white cells have engulfed them, and they are self-regulated by the body quite effectively. Some research does show health benefits from consuming antioxidants such as vitamins C and E and carotenoids. The issue with this research is that the studied populations who consumed fruits and vegetables containing these antioxidants also happened to come from higher socio-economic backgrounds with other healthy habits that cannot be controlled for[8]. Many epidemiological studies show that all-cause mortality is naturally lower in these populations and that they are generally healthier. So the finding that tea is a great source of antioxidants does not tell us that drinking more of it makes us healthier. Think before you go and consume an abundance of tea: eating a healthy diet and exercising have solid support for decreasing the risk of death, but spending money on products marketed with unsupported nonsense is ridiculous.

6. Who funded the research? Every paper should have a section covering any conflicts of interest. One of the big criticisms of ‘big pharma’ is that the research behind their own pharmaceuticals is funded by themselves – of course there is a bias before the research even starts. Independent, unbiased third-party research is important for gaining a clear picture of a drug’s true therapeutic effect. Funnily enough, there is no requirement for pharmaceuticals to be trialled by a third party before approval by the FDA in the United States.

In conclusion – you can see that research has the ability to tell a story in whatever way the author would like. Going back to the yoga instructor who said, “That’s the thing with research, you show me one good study and I’m sure I can find you two more studies that suggest otherwise”: a good study, if repeated, should show similar findings. If it cannot be replicated with the exact same methodology, chances are it was not a good study. If you use the guide above, this notion of one study showing one thing and another showing the opposite falls apart. Instead you will be able to determine the true current evidence and build on your knowledge in a far more meaningful way.

 

1.         Thorlund, J.B., et al., Arthroscopic surgery for degenerative knee: systematic review and meta-analysis of benefits and harms. Br J Sports Med, 2015. 49(19): p. 1229-35.

2.         Schoenfeld, A.J., et al., Risk factors for immediate postoperative complications and mortality following spine surgery: a study of 3475 patients from the National Surgical Quality Improvement Program. J Bone Joint Surg Am, 2011. 93(17): p. 1577-82.

3.         Brox, J.I., et al., Randomized clinical trial of lumbar instrumented fusion and cognitive intervention and exercises in patients with chronic low back pain and disc degeneration. Spine (Phila Pa 1976), 2003. 28(17): p. 1913-21.

4.         Wilson, M. Hilarious Graphs Prove That Correlation Isn’t Causation. 2014 [cited 15/5/2016]; Available from: http://www.fastcodesign.com/3030529/infographic-of-the-day/hilarious-graphs-prove-that-correlation-isnt-causation/6.

5.         Trojian, T.H. and C.J. Beedie, Placebo effect and athletes. Curr Sports Med Rep, 2008. 7(4): p. 214-7.

6.         de Craen, A.J., et al., Effect of colour of drugs: systematic review of perceived effect of drugs and of their effectiveness. BMJ, 1996. 313(7072): p. 1624-6.

7.         Simmons, J.P., L.D. Nelson, and U. Simonsohn, False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci, 2011. 22(11): p. 1359-66.

8.         Turrell, G. and C. Mathers, Socioeconomic inequalities in all-cause and specific-cause mortality in Australia: 1985-1987 and 1995-1997. Int J Epidemiol, 2001. 30(2): p. 231-9.