Monday 18 August 2008

Dana Ullman says the thing that is not...

...no surprise there, then.

Here's Ullman, a US-based homeopath, in an interview published on the website of Sue Young, a London-based homeopath. There's all kinds of drivel here, but there is an exchange on the Shang et al. meta-analysis of homeopathy (published in the Lancet in 2005) that particularly caught my eye, because it's full of absolute nonsense. Not only that, but Ullman has had his misconceptions about this paper explained to him in numerous places on the internet, including on this very blog. Here's what he and his interviewer, one Louise Mclean of the Zeus Information Service, had to say:

DANA: In fact there is a new study that is coming out shortly which is a re-analysis of the 2005 Lancet review of Shang. The researchers got it accepted in a major international journal of research. What they have finally done is what Shang didn’t do. He didn’t review ALL of the high calibre research but only a small part of it. He ignored comprehensive analysis entirely. I think he knew exactly what it was but he didn’t want to report on it, as it was too positive. Instead he only reported on trials with very large numbers of subjects because when you do that, most of those studies use one remedy for everybody without any degree of individuality.

LOUISE: We individualise.

DANA: We do individualise but sometimes the single remedy or the formulas will work for a broad number of people.

LOUISE: Like Mixed Pollen for hayfever.

DANA: That’s right or Oscillococcinum. But for some reason they did not include any of David Reilly’s research. http://www.bmj.com/cgi/content/abstract/321/7259/471

I don’t know why they ignored it.

LOUISE: It was too positive.

DANA: In fact they had a remark in the Shang article published in the Lancet, where they specifically made reference to trials on respiratory ailments and that the results were robust, but they said they couldn’t trust them because there were only 8 studies. But then again they based their entire analysis on 8 homeopathic studies and 6 conventional ones. So they can’t have it both ways and this new journal article in the Journal of Clinical Epidemiology which is ranked as one of the top international journals of reviews of research, has accepted the new studies.


Sigh. Why is this nonsense? Let me count the ways.

1. Shang et al. did in fact analyse ALL of the trials of homeopathy that met their inclusion criteria. This allowed them to establish, using statistical methods, that smaller trials and those of less robust methodology showed better results for homeopathy, because of bias. The good quality, large studies showed that homeopathy had no effect. This is the pattern you would expect to see if homeopathy is a placebo.

2. Ah, individualisation. In fact, a number of the trials in the Shang study were of individualised homeopathy (including two of those that were considered large and of high quality). There was no evidence that individualised homeopathy was better than any other type of homeopathy (p=0.636). In any case, individualisation is only important when it suits Ullman, as seen when he says "We do individualise but sometimes the single remedy or the formulas will work for a broad number of people".

3. The meta-analysis not only included the Reilly paper in the BMJ that is linked to, but two other Reilly papers, as can be seen from the additional material [PDF] to the paper that is available online. This is contrary to Ullman's assertion that "for some reason they did not include any of David Reilly’s research".

4. The point that Shang et al. make about the 8 studies of respiratory ailments is that 8 studies was too few for a meta-analysis restricted to those studies to detect the bias that is revealed by an analysis of the complete dataset. The eight studies of homeopathy that Ullman wrongly claims Shang et al. "based their entire analysis on" were identified as the studies most likely to be free of bias, based on an analysis of the entire dataset. So the authors are not trying to have it both ways at all, and Ullman is comparing apples with oranges.

What I find particularly annoying about this is that Ullman and Mclean are essentially accusing Shang and his co-workers of research misconduct. What do they base this very serious accusation on? On a total misunderstanding of their paper, and a flat-out lie that they omitted research that was 'too positive', when that research was in fact included in the analysis. I am not a statistician, but the paper is not that difficult to understand, if you read it. Followers of Dana Ullman's career will not be surprised by his disingenuousness on this, I'm sure.

It seems that no matter how often I (and others, notably apgaylard) write about the persistent misrepresentation of the Shang paper, the homeopaths carry on regardless.

22 comments:

Anonymous said...

Dana is a chump. He lies continuously and is without shame. Recall the time he popped up on the Quackometer after Andy reviewed his book? The discussion that followed also involved Dana redefining the word 'nano' before being torn to shreds for doing so, as well as all too familiar references to the COPD study, Darwin's alleged support for homeopathy, and even Hitler. Dana has repeated every one of those claims that were debunked in that thread and attendant links since then. He either cares not for the truth or is incapable of recognising it.

And remember, Dana is a leading light of homeopathy, with support from both the medically illiterate and the medically qualified scions of homeopathy; Peter Fisher even wrote the foreword to his latest book.

Paul Wilson said...

"Dana is a chump".

Actually, I should probably just have written that and saved myself some time and effort.

Dr* T said...

Careful chaps - say his name three times and he will appear! (No doubt redefining old words to suit his own meanings - apparently 'nano' means very very small and very very strong.)

He appears to have absolutely no idea of his own immense shortcomings and refuses to engage, preferring instead to do drive-by commenting on blogs, which diminishes his credibility further (if 'twere possible).

Anonymous said...

"What I find particularly annoying about this is that Ullman and Mclean are essentially accusing Shang and his co-workers of research misconduct. What do they base this very serious accusation on? On a total misunderstanding of their paper, and a flat-out lie that they omitted research that was 'too positive'..."

Holford does that on a regular basis: examples are his egregious misrepresentation of the Cochrane Review of antioxidants and his "conspiracy theory" relating to a systematic review of Omega 3s.

Anonymous said...

I've tried twice to leave a comment on the piece about the allegation that Shang et al ignored Reilly’s research. It just goes into the moderation queue and then disappears. I've e-mailed Sue to see if it's a technical problem, but no response.

The only conclusion that I can draw is that they just want to talk nonsense to each other - perpetuate new myths - without nasty facts getting in the way.

The bit about the eight studies of acute infections of the upper respiratory tract seems to go back to Peter Fisher - at least that's what my favourite homeopath of the moment tells me. (here's my response). I've not been able to check whether this derives from his Lancet letter in December 2005, or some other rant.

Thanks for the H/T. Good post.

Paul Wilson said...

Adrian:

Does seem to be Fisher's commentary on the original Shang paper, which can be found here.

Something I'd forgotten about was that in Shang et al.'s reply to criticism of their original paper, there's a figure illustrating the complete dataset versus the trials of individualised homeopathy. The patterns are closely similar.

Paul Wilson said...

Oh, and I think Sue Young is notorious for not allowing any critical comments on her blog, following an invasion of Bad Science types...

Paul Wilson said...

dr* t:

Not to worry, I don't think Ullman is particularly likely to show up here. He doesn't post on JREF anymore, either, more's the pity.

Anonymous said...

"He appears to have absolutely no idea of his own immense shortcomings"
Reminds me of that recent story about members of the public attempting to keep up with the Olympic athletes they were watching on TV.

"Oh, and I think Sue Young is notorious for not allowing any critical comments on her blog, following an invasion of Bad Science types..."
Yes - it's not so much a blog as a mouthpiece for homeopathy apologists. Dorries did something similar with her 'blog' and got a fair bit of grief for it. Deservedly so.

Dana Ullman, MPH said...

Heck, I cannot disappoint my fans. Greetings to you all.

Because you folks claim to be "defenders of science," it would be helpful in our discussions if you avoided name-calling (it is a tad silly, don't cha think?). And it would be helpful if some of you were more familiar with homeopathic principles and methodology in order to provide a more rational critique of this system of medicine.

Understanding the scientific method is just one part of understanding things. Further, appreciating internal validity as well as external validity is important to understanding truth (hopefully, THAT is what you're after).

In reference to Shang, he NEVER analyzed all of the "high quality" trials (the 21 homeopathic and the 9 conventional studies). Instead, he chose to analyze the high quality studies that also happened to have over 98 subjects (he didn't choose 100 subjects because THAT would not have included one more important "negative" trial of homeopathy). And if he defined 75 subjects as a fair number, it would have included TWO more trials that had a positive result for homeopathy.

Shang and so many other cherry-pickers say that the randomized and double-blind trials with fewer than 98 subjects are somehow "biased" and whatever results they had are of NO VALUE. Huh! That's right! Shang and his team threw out these "high quality" trials that had slightly less than 98 subjects (ironically, the definition of "high quality" was defined by Shang and team, and yet, they chose to throw out most of the high quality homeopathic trials...how convenient).

Is there a new rule that all medical journals abide by now that asserts that all research with less than 98 subjects should go in the garbage can? Please cite this reference...curious minds want to know this stuff.

Of the 8 homeopathic trials that were included in the final analysis, one used a previously untested and rarely used medicine, Thyroidinum 30C, to evaluate its efficacy in "weight loss." I don't care if this trial was randomized and double-blind or even triple-blind, it is a garbage-in garbage-out study. It is devoid of "external validity."

Another trial that was devoid of external validity was the trial in the "prevention" of the flu with a version of "Oscillococcinum" that was conducted by a competitor (!) to its original maker (made by Dolisos of France).

Strangely enough, Shang and team included ONE trial of Oscillococcinum in the treatment of the flu, but they chose to IGNORE the 2 other LARGE trials of this same drug, including the largest (!) trial of the three studies and the only one that was published in a conventional journal ("how convenient!"): Ferley, JP, Zmirou, D, D’Admehar, D, et al., “A Controlled Evaluation of a Homoeopathic Preparation in the Treatment of Influenza-like Syndrome,” British Journal of Clinical Pharmacology, March 1989, 27:329-35.

Shang and his team also mysteriously did not define as "high quality" the four studies conducted by David Reilly and his team at the University of Glasgow. Two of these studies were published in the BMJ and one in the Lancet. Even the editors of the Lancet said of Reilly's 1994 study, "carefully done work of this sort should not be denied the attention of Lancet readers" (vol 344, December 10, 1994, p. 1585). Then, when Reilly and team conducted their fourth trial on people with allergic disorders in the BMJ, an editorial in this prestigious medical journal noted, "The authors believe that when these results are taken together with the findings of three similar previous trials, it may be time to confront the conclusion that homeopathy and placebo differ." Then, the editorial went on to say, "This may be more plausible than the conclusion that their trials have produced serial false positive results." (vol. 321, August 19, 2000)

The same goes for the two trials by Jennifer Jacobs in the treatment of children with diarrhea, including her famous study that was published in the respected journal Pediatrics, as well as the meta-analysis of the three diarrhea trials that was published in the respected journal Pediatric Infectious Disease Journal.

The point here is that when you put on blinders, you do not see the whole picture. Finally, I stand by the words and wisdom of Sir Arthur C. Clarke, who said: "A sufficiently advanced technology is indistinguishable from magic. When a distinguished but elderly scientist states that something is possible, he is almost certainly right; that something is impossible, he is very probably wrong."

One other thing that I want to clear up is in reference to the famous meta-analysis published in the Lancet by Linde, et al, in 1997. This study reviewed 105 clinical trials, and when they evaluated the high quality trials, they determined that the placebo explanation was inadequate for explaining the effects of homeopathic medicines. Even when Linde later acknowledged that newer research with negative results for homeopathy reduced the significance of his original analysis, he did not say that the results were no longer "significant."

Further, the greatest misunderstanding of this meta-analysis was one of its conclusions that asserted that there was no single condition for which homeopathic medicine was found to be effective. What skeptics and others do not seem to know is that Linde and team defined "proven effective" as a specific condition and specific medicine for which there were THREE independent trials showing efficacy. Within one year of this publication in the Lancet, the THIRD trial testing Oscillococcinum was published. Homeopaths AND others can now assert that it IS effective in reducing the symptoms of influenza and influenza-like syndromes.

Further, Jacobs and team have conducted THREE trials, each of which used different homeopaths. This series of trials has verified that individually chosen homeopathic medicines are effective in treating childhood diarrhea (it may be helpful to note that in the study published in PEDIATRICS, they showed that the best results were in children whose diarrhea was associated with a known pathogen).

Finally, please (!) get your information straight about Darwin and his homeopathic doctor! Despite his skepticism of homeopathy, his health improved remarkably shortly after receiving homeopathic medicines and hydrotherapy. Although his nausea returned, the other more serious symptoms that he had at that time (heart palpitations, dizziness with fainting spells, severe boils, and spots before his eyes) all disappeared. For details about this, I have provided a short summary of my book, "THE HOMEOPATHIC REVOLUTION," here:
https://www.homeopathic.com/articles/view,128

You can also read my chapter on "Literary Greats" who used and/or advocated for homeopathy at www.HomeopathicRevolution.com. The bottom line is that history proves that people who seem to be attracted to and who use homeopathic medicines tend to be considerably smarter and more educated than those who don't (ouch).

I'm going away Monday for a week, but hopefully the above post will give you something to think about.

--Dana Ullman, MPH

Paul Wilson said...

And greetings to you, Dana.

You still don't get the Shang study, though. The analysis of the large, higher quality trials (8 of homeopathy versus 6 of conventional medicine) is only PART of the analysis. Your failure to understand this is at the root of your misinterpretation of the study, I think.

The authors looked at a large number of factors, and how they influenced the odds ratios they calculated (Table 3). What they found was that while various parameters were significant, the most important was the study size. The relationship between study size and odds ratio is expressed in the funnel plots shown in Figure 2, from which the asymmetry coefficients given in Table 3 are derived. Note the p-value for the study size/odds ratio relationship is <0.0001 for both sets of trials: this is by far the most significant relationship.

Figure 2 is the meta-regression plot, note, FOR ALL THE INCLUDED STUDIES. For trials of homeopathy, it shows that for the largest trials the odds ratio is statistically indistinguishable from 1, i.e. the effect is indistinguishable from zero. This is the pattern you would expect to see for a placebo. For trials of conventional medicine, a statistically significant benefit is found in the largest trials. This is the pattern you would expect to see for something that is more effective than placebo.
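To make the shape of that analysis concrete, here is a rough sketch in Python of an Egger-style meta-regression of the kind described above. The numbers are invented (they are NOT Shang et al.'s data, and this is not their code), and are placed exactly on a trend so the fit is easy to read; the point is only to show what "regressing effect size against study size" means in practice.

import numpy as np

# Invented (log odds ratio, standard error) pairs, NOT the Shang et al. data.
# They are placed exactly on a trend for clarity: the smaller the trial
# (the bigger the SE), the more "positive" the apparent result.
se     = np.array([0.50, 0.40, 0.30, 0.25, 0.20, 0.15, 0.12, 0.10])
log_or = -2.0 * se   # e.g. the smallest trial here has OR = exp(-1.0), about 0.37

# Weighted least squares of log OR against SE, weights = 1/SE^2.
# The slope is the asymmetry coefficient; the intercept is the effect
# predicted for an idealised, infinitely large trial (SE -> 0).
w  = 1.0 / se**2
X  = np.column_stack([np.ones_like(se), se])
Xw = X * np.sqrt(w)[:, None]
yw = log_or * np.sqrt(w)
(intercept, slope), *_ = np.linalg.lstsq(Xw, yw, rcond=None)

print("asymmetry coefficient (slope): %.2f" % slope)
print("predicted OR for a very large trial: %.2f (1.0 = no effect)" % np.exp(intercept))

With these made-up numbers the slope is strongly negative (small trials look better) and the intercept corresponds to an odds ratio of 1.0, i.e. no effect in an idealised very large trial. The intercept and asymmetry coefficient reported by Shang et al., with their confidence intervals, are the real-data versions of those two quantities.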

The analysis of the 8 vs. 6 trials is basically another way of saying the same thing: that the least biased trials show no effect for homeopathy. The 21 vs 9 comparison is not going to be helpful, because that subset includes trials with low numbers of subjects. The meta-regression analysis clearly shows that these are more likely to produce spurious positive results. You persist in claiming that these trials were 'cherry-picked', but that is simply not the case. They were identified by the application of clearly stated criteria.

You always bring up studies that you think should have been included in the set of large, high quality studies. Again, studies are included in that set based on pre-stated criteria. This is NOT cherry-picking: cherry-picking is arguing for the inclusion of studies that do not meet the criteria, on the basis that they are more positive; that is what you are doing. There is nothing "mysterious" about the criteria used: they are clearly stated in the paper. In any case, all of the trials you complain about WERE included in the meta-regression analysis, which still shows that homeopathy does not work.

Shang et al. did not define larger trials as being N=98 or above: they defined them as having "SE in the lowest quartile". The advantage of this is that the authors can state this criterion before looking at the trials, and know that they will have a reasonable number of larger trials to work with.
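For what it's worth, the "SE in the lowest quartile" rule is mechanical enough to state in a couple of lines of Python (the SE values below are made up, purely to show that the rule leaves no room for picking favourites):

import numpy as np

# Hypothetical standard errors for a set of trials (not the real data).
se = np.array([0.52, 0.45, 0.40, 0.33, 0.28, 0.22, 0.15, 0.11])

cutoff = np.quantile(se, 0.25)      # lowest quartile of SEs
larger_trials = se[se <= cutoff]    # the "larger" (most precise) trials

print("SE cutoff for the lowest quartile: %.3f" % cutoff)
print("trials selected as 'larger':", larger_trials)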

As for Linde et al. 1997: yes, they concluded that the trial results for homeopathy were not compatible with the placebo hypothesis. But they didn't account for study quality: they re-analysed their data accounting for this, and found that the conclusion of their previous work was at least weakened. Neither of these studies used funnel plots to look at the effect of study size: Shang et al. did do this, and so their study supersedes the previous analyses.

As for Darwin and homeopathy, your arguments don't stand up to scrutiny, as the Quackometer has shown. But in any case, the main point is that it doesn't matter how many famous people use homeopathy: this is just a collection of anecdotes, and the evidence from clinical trials still shows that homeopathy is a placebo.

Paul Wilson said...

While I'm on: Dana, you say that a re-analysis of the Shang paper has been accepted for publication in the Journal of Clinical Epidemiology. There is no sign of this paper anywhere. Do you have any comment on that?

BillyJoe said...

Dana Ullman"

Arthur C Clarke’s Third Law:
“Any sufficiently advanced technology is indistinguishable from magic”.

Richard Dawkins:
“But you can’t simply reverse that and say that because it calls itself magic now it must be future science.”

Paul Wilson said...

Ah, I've made a mistake here. The Linde et al. 1997 paper did in fact include a funnel plot...but they did not conduct a meta-regression analysis of the type used by Shang et al.

Dana Ullman, MPH said...

Paul,

First, I'm glad that we are both trying to have an intelligent conversation here. However, you keep repeating the same error...

I do NOT doubt that the odds ratio for the "largest trials" in THIS review of research is not positive towards homeopathy. However, I question the external validity of most of these trials. For instance, please explain to me the external validity of the weight loss study...or the "prevention" of influenza study. Garbage in, garbage out.

Please also explain to me why two of the largest trials testing Oscillococcinum were left out.

Once Shang makes his somewhat arbitrary definitions on which trials should be included or not, he creates his own set of biases.

You have again repeated the incorrect statement that somehow defines randomized double-blind trials as "biased" just because they had greater than 97 subjects (and somehow, any trial with 96 subjects is "biased").

One important quote from the Shang review is: "for the eight trials of homeopathic remedies in acute infections of the upper respiratory tract that were included in our sample, the pooled effect indicated a substantial beneficial effect (odds ratio 0.36 [95% CI 0.26-0.50]), and there was neither convincing evidence of funnel-plot asymmetry nor evidence that the effect differed between the trials classified as of higher reported quality and the remaining trials. Such sensitivity analyses might suggest that there is robust evidence that the treatment under investigation works." This is a powerful statement.

They then assert, "However, the biases that are prevalent in these publications, as shown by our study, might promote the conclusion that the results cannot be trusted." Here is where they try to weasel out of admitting a statistically significant effect. "How convenient."

As for the Linde review, thanx for finally acknowledging that Linde did use a funnel plot. However, you said that he didn't analyze only the high quality trials, and you're wrong here. He did. When reviewing only the high quality studies and adjusting for publication bias, the odds-ratio was 1.78 (which is significant).

Finally, it is laughable that anyone can say that the homeopathic COPD study is seriously flawed, but such are the problems when people have intense dogmatism and an unscientific attitude.

Anonymous said...
This comment has been removed by a blog administrator.
Anonymous said...

Dana,

I agree that the passage you quote from Shang is important. It's also important to take it in context:

"Our study has implications beyond the question of whether homoeopathic remedies have specific effects. First, an important point to keep in mind is that most systematic reviews and meta-analyses are based on relatively few trials. Simulation studies have shown that detection of bias is difficult when meta-analyses are based on a small number of trials. For example,...
[the bit you quote]
...However, the biases that are prevalent in these publications, as shown by our study, might promote the conclusion that the results cannot be trusted. We submit that similar studies should be done in other types of both complementary and conventional medicine. Such studies would “borrow strength” from a large number of trials and provide empirical information to assist reviewers and readers in the interpretation of findings from small meta-analyses that focus on a specific intervention and disorder."

They've already dealt with your point. Shang et al demonstrated the presence of bias, of the kind that classic meta-analyses find difficult to detect.

The bit you cite is their example of this problem.

Paul Wilson said...

As ever, thank you to all for your comments.

Dana, you say that I "keep repeating the same error". I can accuse you of the same thing: in your interview, you said that Shang et al. "didn’t review ALL of the high calibre research but only a small part of it". It ought to be clear to you by now that that is not the case.

You say you question the external validity "of most of these trials". What, most of all 110 of them? At most, you seem to be quibbling about 2 trials, one on weight loss and one on prevention of flu. Taking those two trials out of the reckoning would not make a huge difference to the meta-regression analysis.

But that is perhaps a minor point. You want to exclude some number of trials from the analysis based on concerns about external validity. Fair enough: so what are your exclusion criteria going to be? You have to define these clearly and a priori. How are you going to decide, in a transparent, repeatable and clearly stated way, which trials you will exclude?

You say that "Once Shang makes his somewhat arbitrary definitions on which trials should be included or not, he creates his own set of biases". But Shang's criteria are not arbitrary. What you are suggesting - that trials should be excluded because you don't like them - is arbitrary.

You say that I "have again repeated the incorrect statement that somehow defines randomized double-blind trials as 'biased' just because they had greater than 97 subjects (and somehow, any trial with 96 subjects is 'biased')". But the point is that Shang showed that smaller trials are more likely to give spurious positive results (see their Figure 2). So it makes sense to try and look at the part of the dataset that is least likely to have spurious positive results. The point is not that all the smaller trials are garbage: it is that they are MORE LIKELY to show false positive results. And in any case, looking at the 8 vs 6 set of trials just reinforces the conclusions drawn from the meta-regression analysis of the entire dataset.

Finally, what Linde et al. said in 1997 is really not that relevant. Their own re-analysis showed they overestimated the effect of homeopathy. Now Shang has shown that the set of homeopathic trials shows no evidence of any effect for homeopathy. Linde et al. has been superseded.

Paul Wilson said...

Oh, one more thing, Dana. It is not laughable at all to suggest that the COPD study you mention is flawed. It was discussed in some detail at Respectful Insolence.

In short, the trial was flawed because the placebo group and treatment group were not equivalent at the start of the trial. So it is no surprise that they were different at the end. The problem arose because the authors measured COPD stage for each group as a mean and standard deviation: but COPD stage is a discrete, not a continuous variable. If you labelled COPD stages as A, B, C and D rather than 1-4, you wouldn't say the mean was A.2. It makes no sense to give a mean and standard deviation for such discrete variables. In the Respectful Insolence discussion, it was shown that the placebo group contained 4 patients with severe or moderate COPD: the treatment group contained only 1. Add to this the small number of patients in the trial, and I'm afraid it doesn't tell you a thing.

You keep saying that the Respectful Insolence critique is wrong, but you've never explained why.

Anonymous said...

Hi Dana,

I can shed some light on the Oscillococcinum studies that you mention. The trial by Ferley et al reported in the British Journal of Clinical Pharmacology was included in Shang et al. If you read Webappendix 1 you'll find it's reference 35.

In fact Shang et al included four trials of Oscillococcinum; three treatment trials and one prophylaxis (Attena): Attena (5), Casanova (19), Ferley (35), Papp (71).

Looking at the Cochrane Review it does seem that Shang et al missed a 1984 study by Casanova (Casanova P. Homeopathy, flu syndrome and double blinding [Homeopathie, syndrome grippal et double insu]. Tonus 1984:25-6). However, as the Cochrane reviewers noted, it was “not published in a standard medical journal”. Perhaps this explains why it was overlooked. They also went on to note that it “contains little experimental detail, does not report withdrawals and analyses a suspiciously round number of patients…” Certainly, even if it had been included, it could not have passed the test to make it into the ‘higher quality’ sub-group.

Also, it’s perhaps evidence for the generosity and impartiality of Shang et al that Casanova’s unpublished 1992 study (19) was considered at all. (Casanova P, Gerard R. Bilan de 3 annees d’etudes randomisees multicentriques oscillococcinum/placebo. Laboratoires Boiron, 1992: 11-16. - From the reference it looks like an internal publication by the manufacturer)

The Review also notes that “…Two trials (Ferley and Papp) pre-specified ‘recovery after 48 hours’ as the main outcome measure. The RR of being sick at 48 hours on Oscillococcinum was 93% (95% CI 88% to 99%) of that of placebo …”. This main outcome measure did not make it into the oft-quoted ‘headline’ for the review; evidently, a 95% CI reaching 0.99 cannot be safely considered to have reached statistical significance.

When evaluating the results of these two journal papers it is important to remember the number of outcome measures they assessed: 8 for Ferley and 17 for Papp. The odds of getting at least one statistically significant outcome by chance alone are pretty high, given that the significance testing was carried out at the 5% level with no correction for multiple comparisons (i.e. a 1 in 20 chance, for any single outcome, of the difference between treatment and placebo being coincidental).
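To put rough numbers on that (my own back-of-the-envelope calculation, assuming for simplicity that the outcomes are independent): the chance of at least one spuriously 'significant' result at the 5% level is 1 - 0.95^8, roughly 34%, for eight outcomes, and 1 - 0.95^17, roughly 58%, for seventeen. The real outcomes will be correlated, so these figures are only indicative, but they show how easily an isolated 'positive' can appear.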

The conclusion of the Cochrane review that, “Participants taking Oscillococcinum had about a quarter of a day less illness than those on placebo. This effect might be as large as half a day and as small as about an hour” is based on combining the individually negative results for a secondary outcome from these two studies. When this is done (giving two-thirds of the “weight” to the smaller study) the lower bound of the confidence interval (95%) reaches down to a benefit ”as small as about an hour”. It’s a lot of fuss over very little.

As the reviewers conclude, “the difference between groups in the meta-analysis only just reaches statistical significance. It is arguable that a question as scientifically controversial as whether homeopathic medicines are always equivalent to placebo would require more statistically robust data.” And I’d agree.

Anyway, don't you think that you really ought to reconsider your assertion that, "Strangely enough, Shang and team included ONE trial of Oscillococcinum in the treatment of the flu, but they chose to IGNORE the 2 other LARGE trials of this same drug, including the largest (!) trial of the three studies and the only one that was published in a conventional journal ("how convenient!"):"

Paul Wilson said...

Adrian Gaylard took the trouble to write to the Zeus Information Service, who published the interview I quote from in the post, concerning Ullman's statements about Shang et al. Here's what he got back from Ullman.

Paul Wilson said...

Just on the off-chance that Dana makes it back here, here's a list of four points on which I would encourage him to acknowledge his errors:

1. The paper does in fact analyse ALL 110 trials that passed the inclusion criteria, contrary to Dana's repeated assertions.

2. Trials of individualised homeopathy showed no better (and in fact slightly worse) results than the dataset as a whole.

3. The paper includes David Reilly's research, contrary to Dana's assertion in the interview.

4. The inclusion criteria, both for the study as a whole and for the subset of larger, higher quality trials, are clearly stated in the paper. If you want to argue for the inclusion or exclusion of certain papers, you have to provide clear, a priori criteria for doing so, as Shang et al. have done. You can't just do this post-hoc on the basis that you don't like the results.