Korean J Anesthesiol. 2018 April; 71(2): 103–112.
Introduction to systematic review and meta-analysis
EunJin Ahn
1Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea
Hyun Kang
2Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea
Received 2017 Dec 13; Revised 2018 Feb 28; Accepted 2018 Mar 14.
Abstract
Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in various fields including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, the inclusion of studies with any biases or improperly assessed quality of evidence in systematic reviews and meta-analyses could yield misleading results. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. Nonetheless, accepting the conclusions of many studies without understanding the meta-analysis can be dangerous. Therefore, this article provides an easy introduction for clinicians on performing and understanding meta-analyses.
Keywords: Anesthesiology, Meta-analysis, Randomized controlled trial, Systematic review
Introduction
A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [1]. During the systematic review process, the quality of the studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [2] (Fig. 1). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUOROM) statement [3], and the appearance of registries such as the Cochrane Library's Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [4] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [5].
In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses cover diverse topics, such as comparing various treatments of postoperative nausea and vomiting [14,15], comparing general anesthesia and regional anesthesia [16–18], comparing airway maintenance devices [8,19], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [20–23], comparing the precision of various monitoring instruments [7], and meta-analysis of dose-response in various drugs [12].
Thus, literature reviews and meta-analyses are being conducted in diverse medical fields, and the aim of highlighting their importance is to help extract accurate, good quality data from the flood of data being produced. However, a lack of understanding about systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, incorrect data may be obtained. Therefore, in this review, we aim to describe the contents and methods used in systematic reviews and meta-analyses in a way that is easy to understand for future authors and readers of systematic reviews and meta-analyses.
Study Planning
It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [1]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2. We explain each of the stages below.
Formulating research questions
A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the definition of the word "similar" is not made clear, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, selection of the research topic is based on logical evidence, and it is important to select a topic that is familiar to readers without clearly confirmed evidence [24].
Protocols and registration
In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, primary/secondary outcomes and methods are set in advance, and in the event of changes to the method, other researchers and readers are informed when, how, and why. Many studies are registered with an organization like PROSPERO (http://www.crd.york.ac.uk/PROSPERO/), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.
Defining inclusion and exclusion criteria
Information is included on the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.
Literature search and study selection
In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases MEDLINE, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When there is a disagreement in opinions, it is resolved via debate or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [25].
Quality of evidence
However well planned the systematic review or meta-analysis is, if the quality of the evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [26]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute1). However, we are mostly focusing on meta-analyses that use randomized studies.
If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system (http://www.gradeworkinggroup.org/) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [27]. As shown in Table 1, the study limitations are evaluated using the "risk of bias" method proposed by Cochrane2). This method classifies bias in randomized studies as "low," "high," or "unclear" on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding of participants or investigators, incomplete outcome data, selective reporting, and other biases) [28].
Table 1.
Domain | Support for judgement | Review authors' judgement |
---|---|---|
Sequence generation | Describe the method used to generate the allocation sequence in sufficient detail to allow for an assessment of whether it should produce comparable groups. | Selection bias (biased allocation to interventions) due to inadequate generation of a randomized sequence. |
Allocation concealment | Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrollment. | Selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment. |
Blinding | Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. | Performance bias due to knowledge of the allocated interventions by participants and personnel during the study. |
Describe all measures used, if any, to blind study outcome assessors from knowledge of which intervention a participant received. | Detection bias due to knowledge of the allocated interventions by outcome assessors. | |
Incomplete outcome data | Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group, reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors. | Attrition bias due to amount, nature, or handling of incomplete outcome data. |
Selective reporting | State how the possibility of selective outcome reporting was examined by the review authors, and what was found. | Reporting bias due to selective outcome reporting. |
Other bias | State any important concerns about bias not addressed in the other domains in the tool. | Bias due to problems not covered elsewhere in the table. |
If particular questions/entries were prespecified in the review's protocol, responses should be provided for each question/entry. |
Data extraction
Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable are different, the size and format of the outcomes are also different, and slight changes may be required when combining the data [29]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.
Data Analysis
The aim of a meta-analysis is to derive a conclusion with greater power and accuracy than could be achieved in individual studies. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [30]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in a descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.
The pooled estimate is the outcome of the meta-analysis, and is typically explained using a forest plot (Figs. 3 and 4). The black squares in the forest plot represent the odds ratios (ORs) and 95% confidence intervals in each study. The area of the squares represents the weight reflected in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.
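The pooling described above can be sketched numerically. The following is a minimal illustration, assuming hypothetical 2 × 2 counts for three studies, of inverse variance-weighted pooling of log odds ratios under a fixed-effect model; it is not the method of any specific meta-analysis cited in this review.

```python
import math

# Hypothetical per-study counts: (events_trt, n_trt, events_ctl, n_ctl)
studies = [(12, 100, 20, 100), (8, 80, 15, 80), (30, 200, 45, 200)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_ors.append(math.log((a * d) / (b * c)))  # log odds ratio
    weights.append(1 / (1/a + 1/b + 1/c + 1/d))  # inverse-variance weight

# Weighted average of log ORs, back-transformed to the OR scale
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
or_pooled = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(f"pooled OR = {or_pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Because the confidence interval here lies entirely below OR = 1, the forest-plot diamond for these hypothetical data would not cross the vertical no-effect line.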
Dichotomous variables and continuous variables
In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used (Table 2).
Table 2.
Type of data | Effect measure | Fixed-effect methods | Random-effect methods |
---|---|---|---|
Dichotomous | Odds ratio (OR) | Mantel-Haenszel (M-H) | Mantel-Haenszel (M-H) |
 | | Inverse variance (IV) | Inverse variance (IV) |
 | | Peto | |
 | Risk ratio (RR), | Mantel-Haenszel (M-H) | Mantel-Haenszel (M-H) |
 | Risk difference (RD) | Inverse variance (IV) | Inverse variance (IV) |
Continuous | Mean difference (MD), Standardized mean difference (SMD) | Inverse variance (IV) | Inverse variance (IV) |
The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of "0" for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than "0" means the new treatment method is less effective than the existing method, and a value greater than "0" means the new treatment is more effective than the existing method.
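As a simple illustration of the difference between the two measures, the sketch below computes the MD and a pooled-standard-deviation SMD (Cohen's d) from hypothetical group summary statistics; the numbers are invented for demonstration only.

```python
import math

def mean_diff(m1, m2):
    """Absolute difference in group means (MD), in the outcome's own units."""
    return m1 - m2

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference: the MD divided by the pooled
    standard deviation (Cohen's d), usable across different scales."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Hypothetical pain scores: treatment group vs. control group
print(mean_diff(4.2, 5.0))              # MD, same-unit comparison
print(smd(4.2, 1.0, 30, 5.0, 1.2, 30))  # SMD, unit-free comparison
```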
When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given event to occur in at least one patient. Based on Table 3, in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. The NNT can be obtained as the reciprocal, 1/ARR.
Table 3.
 | Event occurred | Event not occurred | Sum |
---|---|---|---|
Intervention | a | b | a + b |
Control | c | d | c + d |
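The ARR and NNT formulas above can be worked through with a small sketch; the 2 × 2 counts below are hypothetical and chosen only to make the arithmetic visible.

```python
import math

# Hypothetical counts laid out as in Table 3
a, b = 10, 90   # intervention group: event occurred / did not occur
c, d = 25, 75   # control group: event occurred / did not occur

x = c / (c + d)            # event probability in the control group
y = a / (a + b)            # event probability in the intervention group
arr = x - y                # absolute risk reduction
nnt = math.ceil(1 / arr)   # rounded up to a whole number of patients
print(f"ARR = {arr:.2f}, NNT = {nnt}")
```

With these numbers the event risk falls from 25% to 10%, so roughly seven patients must be treated for one additional patient to benefit.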
Fixed-effect models and random-effect models
In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation3), 2) Mantel-Haenszel estimation4), and 3) Peto estimation5).
A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due to not only random error but also between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method6) is mostly used for dichotomous variables, as the simplest method, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [31] (Table 2). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method7) can better reduce the risk of type 1 error than does the DerSimonian and Laird method [32].
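The DerSimonian and Laird approach can be sketched as below, assuming hypothetical study effect sizes and variances. This is the basic method-of-moments estimator of the between-study variance τ², not the exact implementation of any particular software package.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effect pooled estimate with the DerSimonian-Laird
    between-study variance (tau^2), truncated at zero."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]   # random-effect weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study effect sizes and within-study variances
pooled, se, tau2 = dersimonian_laird([0.1, 0.3, 0.5], [0.01, 0.02, 0.04])
print(f"pooled = {pooled:.3f}, tau^2 = {tau2:.4f}")
```

Adding τ² to each study's variance flattens the weights, which is why small studies count relatively more in a random-effect model than in a fixed-effect one.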
Fig. 3 shows the results of analyzing outcome data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3, while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3, the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [33]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [34].
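One common form of sensitivity analysis is leave-one-out re-pooling: the estimate is recomputed with each study removed in turn to see how much any single study drives the result. The sketch below uses simple inverse-variance fixed-effect pooling and hypothetical effects/variances, with the last study deliberately made an outlier.

```python
# Hypothetical study-level effects and variances; study 4 is an outlier
effects = [0.20, 0.25, 0.22, 0.80]
variances = [0.02, 0.03, 0.025, 0.05]

def pool(effs, vars_):
    """Inverse variance-weighted fixed-effect pooled estimate."""
    w = [1 / v for v in vars_]
    return sum(wi * y for wi, y in zip(w, effs)) / sum(w)

overall = pool(effects, variances)
print(f"all studies: pooled = {overall:.3f}")
for i in range(len(effects)):
    rest_e = effects[:i] + effects[i + 1:]
    rest_v = variances[:i] + variances[i + 1:]
    print(f"without study {i + 1}: pooled = {pool(rest_e, rest_v):.3f}")
```

If dropping one study shifts the pooled estimate markedly (as dropping study 4 does here), the robustness of the conclusion deserves scrutiny.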
Heterogeneity
A homogeneity test is a method of testing whether the degree of heterogeneity is greater than would be expected to occur naturally when the effect size calculated from several studies is higher than the sampling error. This makes it possible to test whether the effect size calculated from several studies is the same. Three types of homogeneity tests can be used: 1) forest plot, 2) Cochrane's Q test (chi-squared test), and 3) Higgins' I² statistics. In the forest plot, as shown in Fig. 4, greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4, is less than 0.1, it is considered to show statistical heterogeneity and a random-effect model can be used. Finally, I² can be used [35].
I² = 100% × (Q − df)/Q, where Q is the chi-squared statistic and df is the degrees of freedom of the Q statistic.
I², calculated as shown above, returns a value between 0 and 100%. A value less than 25% is considered to show strong homogeneity, a value of 50% is average, and a value greater than 75% indicates strong heterogeneity.
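Computing Q and I² takes only a few lines; the sketch below uses hypothetical effect sizes and within-study variances with inverse-variance weights.

```python
# Hypothetical study-level effect sizes and variances
effects = [0.10, 0.45, 0.30, 0.60]
variances = [0.02, 0.03, 0.02, 0.04]

w = [1 / v for v in variances]   # inverse-variance weights
pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Cochrane's Q: weighted squared deviations from the pooled estimate
q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))
df = len(effects) - 1

# I^2 = 100% x (Q - df)/Q, floored at 0 when Q < df
i2 = max(0.0, 100 * (q - df) / q)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

For these invented data I² falls between 25% and 50%, i.e., mild-to-moderate heterogeneity by the thresholds above.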
Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, and all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This process involves performing a regression analysis of the pooled estimate for covariance at the study level, and so it is usually not considered when the number of studies is less than 10. Here, univariate and multivariate regression analyses can both be considered.
Publication bias
Publication bias is the most common type of reporting bias in meta-analyses. This refers to the distortion of meta-analysis outcomes due to the higher likelihood of publication of statistically significant studies rather than non-significant studies. In order to test the presence or absence of publication bias, first, a funnel plot can be used (Fig. 5). Studies are plotted on a scatter plot with effect size on the x-axis and precision or total sample size on the y-axis. If the points form an upside-down funnel shape, with a wide base that narrows towards the top of the plot, this indicates the absence of a publication bias (Fig. 5A) [29,36]. On the other hand, if the plot shows an asymmetric shape, with no points on one side of the graph, then publication bias can be suspected (Fig. 5B). Second, to test publication bias statistically, Begg and Mazumdar's rank correlation test8) [37] or Egger's test9) [29] can be used. If publication bias is detected, the trim-and-fill method10) can be used to correct the bias [38]. Fig. 6 displays results that show publication bias in Egger's test, which has then been corrected using the trim-and-fill method using Comprehensive Meta-Analysis software (Biostat, USA).
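The core of Egger's test is a regression of standard normal deviates (effect/SE) on precision (1/SE); an intercept far from zero suggests funnel-plot asymmetry. The sketch below shows only the intercept estimate via an ordinary least-squares fit on hypothetical effects and standard errors (a full test would also compute a t-based P value for the intercept).

```python
# Hypothetical effects and standard errors with a small-study effect:
# less precise studies report larger effects
effects = [0.50, 0.42, 0.35, 0.30, 0.28, 0.25]
ses     = [0.30, 0.25, 0.20, 0.15, 0.10, 0.05]

x = [1 / se for se in ses]                    # precision
y = [e / se for e, se in zip(effects, ses)]   # standard normal deviates

# Ordinary least-squares slope and intercept
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx                   # Egger's bias estimate
print(f"Egger intercept = {intercept:.2f}")
```

A clearly positive intercept, as produced by these invented data, is the pattern associated with the asymmetric funnel plot of Fig. 5B.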
Result Presentation
When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is displayed with the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is shown with the characteristics of the included studies. A table should also be included with information related to the quality of evidence, such as GRADE (Table 4). Third, the results of data analysis are shown in a forest plot and funnel plot. Fourth, if the results use dichotomous data, the NNT values can be reported, as described above.
Table 4.
Quality assessment | Number of patients | Effect | Quality | Importance |
---|---|---|---|---|---|---|---|---|---|---|---|
 | N | ROB | Inconsistency | Indirectness | Imprecision | Others | Palonosetron (%) | Ramosetron (%) | RR (CI) | | |
PON | 6 | Serious | Serious | Not serious | Not serious | None | 81/304 (26.6) | 80/305 (26.2) | 0.92 (0.54 to 1.58) | Very low | Important
POV | 5 | Serious | Serious | Not serious | Not serious | None | 55/274 (20.1) | 60/275 (21.8) | 0.87 (0.48 to 1.57) | Very low | Important
PONV | 3 | Not serious | Serious | Not serious | Not serious | None | 108/184 (58.7) | 107/186 (57.5) | 0.92 (0.54 to 1.58) | Low | Important
When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second P value is from the chi-squared test, which tests the null hypothesis for a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in meta-analyses, is the z-test P value.
A common mistake when reporting results is, given a z-test P value greater than 0.05, to say there was "no statistical significance" or "no difference." When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be explained as "a significant difference in the effects of the two treatment methods." However, the P value may appear non-significant whether or not there is a difference between the two treatment methods. In such a situation, it is better to announce "there was no strong evidence for an effect," and to present the P value and confidence intervals. Another common error is to think that a smaller P value is indicative of a more significant effect. In meta-analyses of large-scale studies, the P value is more greatly affected by the number of studies and patients included, rather than by the significance of the results; therefore, care should be taken when interpreting the results of a meta-analysis.
Conclusion
When performing a systematic literature review or meta-analysis, if the quality of the studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that could usually only be achieved using large-scale RCTs, which are difficult to perform in individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous, and hence, we recommend that their results be received critically on the basis of a more accurate understanding.
Footnotes
1)http://www.ohri.ca.
2)http://methods.cochrane.org/bias/assessing-risk-bias-included-studies.
3)The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.
4)The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.
5)The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.
6)The most popular and simplest statistical method used in Review Manager and Comprehensive Meta-Analysis software.
7)An alternative random-effect model meta-analysis that has more adequate error rates than does the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, when there are fewer than five studies with very unequal sizes, extra caution is needed.
8)The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [37].
9)The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [29].
10)If there are more small studies on one side, we expect the suppression of studies on the other side. Trimming yields the adjusted effect size and reduces the variance of the effects by adding the original studies back into the analysis as a mirror image of each study.
References
1. Kang H. Statistical considerations in meta-analysis. Hanyang Med Rev. 2015;35:23–32. [Google Scholar]
2. Uetani K, Nakayama T, Ikai H, Yonemoto N, Moher D. Quality of reports on randomized controlled trials conducted in Japan: evaluation of adherence to the CONSORT statement. Intern Med. 2009;48:307–13. [PubMed] [Google Scholar]
3. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896–900. [PubMed] [Google Scholar]
4. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62:e1–34. [PubMed] [Google Scholar]
5. Willis BH, Quigley M. The assessment of the quality of reporting of meta-analyses in diagnostic research: a systematic review. BMC Med Res Methodol. 2011;11:163. [PMC free article] [PubMed] [Google Scholar]
6. Chebbout R, Heywood EG, Drake TM, Wild JR, Lee J, Wilson M, et al. A systematic review of the incidence of and risk factors for postoperative atrial fibrillation following general surgery. Anaesthesia. 2018;73:490–8. [PubMed] [Google Scholar]
7. Chiang MH, Wu SC, Hsu SW, Chin JC. Bispectral Index and non-Bispectral Index anesthetic protocols on postoperative recovery outcomes. Minerva Anestesiol. 2018;84:216–28. [PubMed] [Google Scholar]
8. Damodaran S, Sethi S, Malhotra SK, Samra T, Maitra S, Saini V. Comparison of oropharyngeal leak pressure of air-Q, i-gel, and laryngeal mask airway supreme in adult patients during general anesthesia: a randomized controlled trial. Saudi J Anaesth. 2017;11:390–5. [PMC free article] [PubMed] [Google Scholar]
9. Kim MS, Park JH, Choi YS, Park SH, Shin S. Efficacy of palonosetron vs. ramosetron for the prevention of postoperative nausea and vomiting: a meta-analysis of randomized controlled trials. Yonsei Med J. 2017;58:848–58. [PMC free article] [PubMed] [Google Scholar]
10. Lam T, Nagappa M, Wong J, Singh 1000, Wong D, Chung F. Continuous pulse oximetry and capnography monitoring for postoperative respiratory depression and adverse events: a systematic review and meta-analysis. Anesth Analg. 2017;125:2019–29. [PubMed] [Google Scholar]
11. Landoni 1000, Biondi-Zoccai GG, Zangrillo A, Bignami East, D'Avolio S, Marchetti C, et al. Desflurane and sevoflurane in cardiac surgery: a meta-analysis of randomized clinical trials. J Cardiothorac Vasc Anesth. 2007;21:502–xi. [PubMed] [Google Scholar]
12. Lee A, Ngan Kee WD, Gin T. A dose-response meta-analysis of prophylactic intravenous ephedrine for the prevention of hypotension during spinal anesthesia for elective cesarean delivery. Anesth Analg. 2004;98:483–90. [PubMed] [Google Scholar]
13. Xia ZQ, Chen SQ, Yao X, Xie CB, Wen SH, Liu KX. Clinical benefits of dexmedetomidine versus propofol in adult intensive care unit patients: a meta-analysis of randomized clinical trials. J Surg Res. 2013;185:833–43. [PubMed] [Google Scholar]
14. Ahn E, Choi K, Kang H, Baek C, Jung Y, Woo Y, et al. Palonosetron and ramosetron compared for effectiveness in preventing postoperative nausea and vomiting: a systematic review and meta-analysis. PLoS One. 2016;11:e0168509. [PMC free article] [PubMed] [Google Scholar]
15. Ahn EJ, Kang H, Choi GJ, Baek CW, Jung YH, Woo YC. The effectiveness of midazolam for preventing postoperative nausea and vomiting: a systematic review and meta-analysis. Anesth Analg. 2016;122:664–76. [PubMed] [Google Scholar]
16. Yeung J, Patel V, Champaneria R, Dretzke J. Regional versus general anaesthesia in elderly patients undergoing surgery for hip fracture: protocol for a systematic review. Syst Rev. 2016;5:66. [PMC free article] [PubMed] [Google Scholar]
17. Zorrilla-Vaca A, Healy RJ, Mirski MA. A comparison of regional versus general anesthesia for lumbar spine surgery: a meta-analysis of randomized studies. J Neurosurg Anesthesiol. 2017;29:415–25. [PubMed] [Google Scholar]
18. Zuo D, Jin C, Shan M, Zhou L, Li Y. A comparison of general versus regional anesthesia for hip fracture surgery: a meta-analysis. Int J Clin Exp Med. 2015;8:20295–301. [PMC free article] [PubMed] [Google Scholar]
19. Ahn EJ, Choi GJ, Kang H, Baek CW, Jung YH, Woo YC, et al. Comparative efficacy of the air-q intubating laryngeal airway during general anesthesia in pediatric patients: a systematic review and meta-analysis. Biomed Res Int. 2016;2016:6406391. [PMC free article] [PubMed] [Google Scholar]
20. Kirkham KR, Grape S, Martin R, Albrecht E. Analgesic efficacy of local infiltration analgesia vs. femoral nerve block after anterior cruciate ligament reconstruction: a systematic review and meta-analysis. Anaesthesia. 2017;72:1542–53. [PubMed] [Google Scholar]
21. Tang Y, Tang X, Wei Q, Zhang H. Intrathecal morphine versus femoral nerve block for pain control after total knee arthroplasty: a meta-analysis. J Orthop Surg Res. 2017;12:125. [PMC free article] [PubMed] [Google Scholar]
22. Hussain N, Goldar G, Ragina N, Banfield L, Laffey JG, Abdallah FW. Suprascapular and interscalene nerve block for shoulder surgery: a systematic review and meta-analysis. Anesthesiology. 2017;127:998–1013. [PubMed] [Google Scholar]
23. Wang K, Zhang HX. Liposomal bupivacaine versus interscalene nerve block for pain control after total shoulder arthroplasty: a systematic review and meta-analysis. Int J Surg. 2017;46:61–70. [PubMed] [Google Scholar]
24. Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, et al. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD statement. JAMA. 2015;313:1657–65. [PubMed] [Google Scholar]
25. Kang H. How to understand and conduct evidence-based medicine. Korean J Anesthesiol. 2016;69:435–45. [PMC free article] [PubMed] [Google Scholar]
26. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–6. [PMC free article] [PubMed] [Google Scholar]
27. Dijkers M. Introducing GRADE: a systematic approach to rating evidence in systematic reviews and to guideline development. Knowl Translat Update. 2013;1:1–9. [Google Scholar]
28. Higgins JP, Altman DG, Sterne JA. Chapter 8: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration; 2011. updated 2017 Jun; cited 2017 Dec 13. Available from http://handbook.cochrane.org.
29. Egger M, Schneider M, Davey Smith G. Spurious precision? Meta-analysis of observational studies. BMJ. 1998;316:140–4. [PMC free article] [PubMed] [Google Scholar]
30. Higgins JP, Altman DG, Sterne JA. Chapter 9: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration; 2011. updated 2017 Jun; cited 2017 Dec 13. Available from http://handbook.cochrane.org.
31. Deeks JJ, Altman DG, Bradburn MJ. Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. In: Egger M, Smith GD, Altman DG, editors. Systematic Reviews in Health Care. London: BMJ Publishing Group; 2008. pp. 285–312. [Google Scholar]
32. IntHout J, Ioannidis JP, Borm GF. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method. BMC Med Res Methodol. 2014;14:25. [PMC free article] [PubMed] [Google Scholar]
33. Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database Syst Rev. 2007;(2):CD002755. [PMC free article] [PubMed] [Google Scholar]
34. Thompson SG. Controversies in meta-analysis: the case of the trials of serum cholesterol reduction. Stat Methods Med Res. 1993;2:173–92. [PubMed] [Google Scholar]
35. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60. [PMC free article] [PubMed] [Google Scholar]
36. Sutton AJ, Abrams KR, Jones DR. An illustrated guide to the methods of meta-analysis. J Eval Clin Pract. 2001;7:135–48. [PubMed] [Google Scholar]
37. Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994;50:1088–101. [PubMed] [Google Scholar]
38. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–63. [PubMed] [Google Scholar]
Articles from Korean Journal of Anesthesiology are provided here courtesy of the Korean Society of Anesthesiologists.