Review Article
Open Access

Is Network Meta-analysis a Revolutionary Statistical Tool for Improving the Reliability of Clinical Trial Results? A Brief Overview and Emerging Issues Arising

GEORGE BEIS and IOANNIS PAPASOTIRIOU
In Vivo May 2023, 37 (3) 972-984; DOI: https://doi.org/10.21873/invivo.13171
GEORGE BEIS
1Research Genetic Cancer Centre S.A., Florina, Greece; correspondence: beis.giorgos@rgcc-genlab.com
IOANNIS PAPASOTIRIOU
2Research Genetic Cancer Centre International GmbH Headquarters, Zug, Switzerland; correspondence: office@rgcc-international.com

Abstract

Network meta-analysis (NMA), the extension of pairwise meta-analysis to a network of comparisons, has attracted particular interest from medical researchers in recent years. As a powerful tool with which direct and indirect evidence on multiple interventions can be synthesized simultaneously in the study and design of clinical trials, NMA enables inferences to be drawn about the relative effects of drugs that have never been compared head-to-head. In this way, NMA provides information on the hierarchy of competing interventions for a given disease with respect to clinical effectiveness, giving clinicians a comprehensive picture for decision-making and potentially avoiding additional costs. However, estimates of treatment effects derived from network meta-analyses should be interpreted with due consideration of their uncertainty, because simple scores or treatment probabilities may be misleading. This is particularly true where, given the complexity of the evidence, there is a serious risk of misinterpreting information from aggregated data sets. For these reasons, NMA should be performed and interpreted jointly by expert clinicians and experienced statisticians, while a more comprehensive search of the literature and a more careful evaluation of the body of evidence can maximize the transparency of the NMA and help avoid errors in its interpretation. This review presents the key concepts as well as the challenges faced when studying a network meta-analysis of clinical trials.

Key Words:
  • Network meta-analysis
  • Bayesian network models
  • Markov Chain Monte Carlo algorithms
  • indirect evidence
  • treatment effects
  • treatment network
  • review

Meta-analysis is a highly recognized scientific discipline capable of providing high-quality evidence in medical research, particularly in clinical oncology (1-3). The main goal of a meta-analysis is to provide a definitive answer to the fundamental clinical research question of which treatment is most effective when it has been evaluated by multiple studies with inconsistent results (4). In clinical trials, as in any intervention study, the purpose of the meta-analytic steps is to estimate the true effect size of a specific treatment, that is, of the same type of intervention compared with similar control groups, so that it becomes possible to assess whether a particular type of treatment is effective. For the strategy followed in randomized controlled trials (RCTs), pairwise meta-analysis is a well-known statistical tool for synthesizing evidence from multiple trials, but it refers only to the relative effectiveness of two specific interventions. Because there are usually many competing interventions for a given disease, studies covering some of the pairwise comparisons may be missing, and only a small percentage of interventions have been examined in head-to-head studies; the utility of pairwise meta-analysis in medical practice is therefore limited. These needs have led to the development of network meta-analysis (NMA) (5-7), also called mixed treatment comparison (MTC) (8-11), which may provide more accurate estimates of treatment effects than a pairwise meta-analysis (12). Especially when comparisons between important treatments are missing (13), NMA may be a more useful technique, as it compares multiple treatments simultaneously in a single statistical analysis by combining direct and indirect evidence in a network of RCTs (13-16), thereby providing a more complete picture to clinicians and enabling them to ‘rank’ treatments more clearly using summary results. This is achieved by estimating a composite (mixed) effect size as the weighted average of the direct and indirect components, which then allows competing interventions to be ordered according to their relative effectiveness, even if they have not been compared in a single trial (17, 18).

In recent years this statistical approach has matured as a technique (19, 20), with models available for raw data that produce different aggregated outcome measures, using both frequentist and Bayesian frameworks through statistical software packages (16). Especially in the last decade, many applications have been published (21, 22), alongside methodological developments in NMA. The concept of NMA came to the fore to ‘open wider horizons’ for clinicians by drawing information from a connected network of studies comparing several interventions simultaneously (23). The approach has gained great popularity among clinicians and decision-makers because it may reduce the costs involved in developing new or unnecessary clinical studies.

Studying an NMA model during the approval process of a drug can contribute decisively to the design of a clinical trial by giving accurate information about both the competitive landscape and the corresponding evidence, so that the information collected can help ensure that the trial design is the strongest possible. Consequently, NMA is a very useful tool for evaluating the comparative effectiveness of different treatments commonly used in clinical practice, provided that appropriate care is taken in interpreting the concepts that characterize it, so that the results are not biased or inflated (24). Although the technique is increasingly used by biomedical researchers, it presents several challenges and pitfalls that deserve careful consideration, especially since it inherits all the assumptions of pairwise meta-analysis but with greater complexity due to the multitude of comparisons involved. Moreover, despite the wider acceptance of NMA, there are concerns about the validity of its findings (25). As NMA remains a hot research topic to this day, the purpose of this review is to examine the key concepts underlying it, focusing on its risks and benefits, and outlining relevant emerging issues and concerns.

Network Geometry

In clinical trials it is known that for n treatments in an NMA the maximum number of designs (i.e., each combination of treatments within a study) is 2^n − n − 1, while across multi-arm studies there are C(n,2) = n(n−1)/2 possible unique pairwise comparisons, even if not all of them are observed in clinical trials or a pairwise meta-analysis (Figure 1); observing all of them would lead to a fully connected network. However, some of the comparisons predicted by the combinatorial formula will be ineligible due to protocol compliance or post hoc limitations (26).
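
As a minimal illustration of these counting formulas, the two quantities can be computed in base R (a sketch of our own; the helper name network_counts and the example value n = 10 are ours, chosen to match the size of the diabetes network discussed later):

  # Number of possible study designs and unique pairwise comparisons
  # for a network of n treatments (illustrative helper, not from the paper).
  network_counts <- function(n) {
    designs     <- 2^n - n - 1     # all treatment subsets of size >= 2
    comparisons <- choose(n, 2)    # n(n-1)/2 unique pairwise comparisons
    c(treatments = n, designs = designs, comparisons = comparisons)
  }

  network_counts(10)   # 10 treatments: 1013 possible designs, 45 comparisons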

Figure 1. Venn diagram resulting from overlapping comparisons of network meta-analyses (from the binomial formula), pairwise comparisons (from network diagrams), and trial comparisons (from a study-based registry).

The most important parameter in the utility of a treatment network, before the relevant data analysis, is the assessment of its geometry (27-29), which shows which interventions have been directly compared in RCTs and which can only be compared indirectly. In particular, the geometry of the network allows one to understand how many choices there are for each treatment, whether certain types of comparisons have been avoided, and whether there are particular patterns among the possible choices of comparators. However, a network can ‘mutate’ over time as more trials are carried out, thus modifying its geometry, which must therefore be re-examined at each evolving step.

NMA Assumptions

NMA requires the same steps as a conventional meta-analysis but is represented graphically as a network, thus providing direct information about which treatments can be compared with each other and identifying all interventions linked to a common comparator (the linking treatment). Suppose, for example, that two different treatments have each been compared with placebo in different trials. An NMA allows a hypothesis test to be constructed that compares these active treatments to each other based on their effectiveness against the common comparator (usually placebo), thus providing ‘indirect’ evidence. These indirect comparisons offer the opportunity to fill knowledge gaps in efficacy comparisons of existing treatments, thereby providing a more comprehensive understanding of the multitude of treatment options available to the clinician. In short, the network estimate is an aggregate result of the direct and indirect evidence for a given comparison, or of the indirect evidence alone if no direct evidence is available. Then, once all the treatments in the existing network have been compared, there are different methods for ranking (30-33) the treatments in terms of their net effectiveness.

The main objective of NMA is to examine and statistically validate the effects of each treatment by evaluating and analyzing three or more interventions/treatments using both direct and indirect evidence. Therefore, basic assumptions such as transitivity, consistency, and homogeneity of the direct evidence must be satisfied for an NMA to be valid, and these assumptions should be evaluated with statistical tests (34). Although often poorly understood, these methodological aspects are key concepts for understanding a network meta-analysis (35, 36). For this reason, we explain below the basic principles governing these assumptions.

The Concepts of Transitivity, Consistency, and Heterogeneity in NMA

Transitivity (37) is the property that makes an indirect comparison between two meta-analytic estimates, A vs. C and B vs. C, meaningful: the included studies must be similar in the important clinical characteristics that influence the relevant treatment effects (9) (effect modifiers, i.e., characteristics that influence the relevant outcomes of a clinical intervention). These characteristics need not be identical across studies, so transitivity can be examined by comparing the distribution of potential effect modifiers across the different comparisons (38). Indirect information on the relative effect of two interventions is considered valid if the studies and comparisons in the network do not differ in the distribution of these effect modifiers (the intervention effects are then transitive). A valid indirect comparison (such as AB) requires both the AC and the BC studies to be similar in the distribution of these characteristics; only then does the transitivity assumption hold. The indirect comparison (AB) is then calculated by subtracting BC from AC, as defined by the formula (20, 39):

θ_AB^indirect = θ_AC^direct − θ_BC^direct

where θ denotes the observed estimates of treatment effects expressed in terms such as odds ratios (OR), mean differences (MD), etc. In oncology, time-to-event data (40) are used, where the hazard ratio (HR) (41) is taken as the appropriate measure for interpreting treatment efficacy. The HR is calculated using Cox regression models (42) in the survival analysis and indicates the relative probability of the event (e.g., death). Although transitivity is essentially an untestable assumption, its validity can be assessed by clinical and epidemiological judgment (34), and suitable models have been proposed through which, with appropriate modifications, its validity can be supported (43). However, if the clinical characteristics differ (e.g., different patient populations), the transitivity assumption is violated and the estimate of the indirect AB comparison is invalid (44, 45). Furthermore, detecting the absence of transitivity can often be difficult, because the details published in clinical trials are not always sufficient to allow a detailed assessment (46).
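
A minimal base-R sketch of this subtraction (the Bucher-style adjusted indirect comparison on the log odds-ratio scale); all numbers are invented for illustration and do not come from any trial:

  # Adjusted indirect comparison of A vs. B through the common comparator C.
  d_AC <- -0.35; se_AC <- 0.12   # pooled direct A vs. C log OR (hypothetical)
  d_BC <- -0.10; se_BC <- 0.15   # pooled direct B vs. C log OR (hypothetical)

  d_AB_ind  <- d_AC - d_BC               # theta_AB(indirect) = theta_AC - theta_BC
  se_AB_ind <- sqrt(se_AC^2 + se_BC^2)   # variances add for independent estimates

  ci <- d_AB_ind + c(-1, 1) * qnorm(0.975) * se_AB_ind
  exp(c(OR = d_AB_ind, lower = ci[1], upper = ci[2]))   # back-transform to the OR scale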

Translated into statistical terms (36), transitivity corresponds to consistency (or coherence), which holds when the subtraction equation above is supported by the corresponding data; it can only be evaluated when there is a loop in the evidence network, that is, when both direct and indirect evidence exist for a specific comparison of interventions. The basic assumption underlying the validity of indirect and mixed comparisons is that there are no significant differences between trials making different comparisons, other than the treatments being compared. An area that remains open, and one of the biggest challenges in NMA, is inconsistency (36, 44, 47), which occurs when direct and indirect evidence diverge (37), i.e., θ_AB^direct ≠ θ_AB^indirect.

More specifically, inconsistency may arise from the characteristics of the studies, owing to their different designs, or when the estimates of the direct and indirect effect sizes differ (48).

The magnitude of inconsistency in an NMA can be calculated statistically by comparing direct and indirect summary effects in predetermined loops (15, 49), or across the network by fitting models that allow and disallow inconsistency (50, 51). Several methods exist for measuring inconsistency when it is suspected (48), such as the Akaike (52) and deviance (53) information criteria for assessing the goodness of fit of models in frequentist/Bayesian approaches to NMA, or meta-regression models (50). Methods for detecting inconsistency in an RCT network also include the inconsistency-parameter approach (48) and the net heat graphical approach (54, 55). ‘Node-splitting’ methods (56-58) have likewise been reported in the literature for assessing inconsistency in NMA: a direct comparison is excluded from the network and the difference between this direct component and the indirect component estimated from the remaining network is then calculated, while appropriate decision rules have been defined to select only those comparisons belonging to potentially inconsistent loops in the network (57). As mentioned earlier, inconsistency exists when there are discrepancies between direct and indirect estimates; intransitivity is therefore a common cause of inconsistency.
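
The same arithmetic underlies a simple loop-based (node-splitting-style) check: the direct estimate for a comparison is contrasted with the indirect estimate obtained from the rest of the loop. A minimal base-R sketch with invented numbers, continuing the previous example:

  # Inconsistency factor for the A-B comparison in an A-B-C loop.
  d_AB_dir <- -0.40; se_AB_dir <- 0.10   # direct A vs. B estimate (hypothetical)
  d_AB_ind <- -0.25; se_AB_ind <- 0.19   # indirect A vs. B via C (hypothetical)

  omega    <- d_AB_dir - d_AB_ind        # inconsistency factor (difference of estimates)
  se_omega <- sqrt(se_AB_dir^2 + se_AB_ind^2)
  z        <- omega / se_omega
  p_value  <- 2 * pnorm(-abs(z))         # H0: direct and indirect evidence agree
  round(c(omega = omega, z = z, p = p_value), 3)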

Another very important advantage of NMA is its ability to investigate whether there is homogeneity or heterogeneity between the results of the different clinical trials contributing to each pairwise comparison; assessment of heterogeneity is therefore important and should be considered. There are many valuable reviews on assessing and dealing with heterogeneity (59, 60) in a network. Heterogeneity in a meta-analysis is usually assessed with Cochran’s Q statistic (61-64), and in particular with Cochran’s generalized Q statistic for multivariate meta-analyses, which can be used in the context of NMA to quantify heterogeneity across the network, both within designs and between designs (the latter being known as inconsistency). Although the heterogeneity variance is often the most difficult parameter to estimate, several alternative approaches to estimating it have been explored in NMA studies in recent years (65), such as the I2 statistic (62, 66-68) (which describes the proportion of total variation attributable to between-study variation), while meta-regression models (69, 70) are mainly used to reduce heterogeneity (and inconsistency) between the RCTs in the network. Measures have also been developed to assess confidence in the results of an NMA, analyzing the impact of heterogeneity on the corresponding clinical decisions (71). In the special case where the between-study heterogeneity variance is estimated with considerable imprecision (because the data are sparse), including external evidence usually improves the conclusions (72). However, as the power and precision of the indirect comparisons included in an NMA depend on sample size and extensive statistical information, further methodological improvements are needed.
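
For a single pairwise comparison, Cochran's Q and the I2 statistic can be obtained directly from the study estimates and their standard errors; a minimal base-R sketch with invented data:

  # Cochran's Q and I^2 for one pairwise comparison (inverse-variance weights).
  yi  <- c(-0.30, -0.55, -0.10, -0.42)   # study effect estimates (hypothetical)
  sei <- c( 0.15,  0.20,  0.18,  0.12)   # their standard errors (hypothetical)

  w      <- 1 / sei^2                    # fixed-effect (inverse-variance) weights
  y_pool <- sum(w * yi) / sum(w)         # pooled estimate
  Q      <- sum(w * (yi - y_pool)^2)     # Cochran's Q
  df     <- length(yi) - 1
  I2     <- max(0, (Q - df) / Q) * 100   # percentage of variation beyond chance

  round(c(Q = Q, p = pchisq(Q, df, lower.tail = FALSE), I2 = I2), 2)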

Ranking Treatments in NMA

The results of the studies are inevitably associated with uncertainty, and consequently we cannot be certain which treatment is the most effective; we can, however, estimate the probability that each treatment is the best. In a Bayesian framework, the probability that each treatment occupies a particular rank is derived from the posterior distributions of the treatment effects. The treatments are then ranked by the surface under the cumulative ranking curve (SUCRA) (30), which quantifies the overall ranking and provides a single number for each treatment. The higher the SUCRA value and the closer to 100%, the higher the probability that a treatment is among the best, while the opposite holds when the value is close to 0. To compare treatments in an NMA, a frequentist analogue of SUCRA, called the P-score, is also used. Both measures rank treatments on a continuous scale of 0-1 (73), and rankograms represent these values graphically (74).
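
A minimal base-R sketch of how SUCRA values can be computed from posterior draws of relative treatment effects (simulated toy data; lower values are taken as better, as for HbA1c change):

  # SUCRA from posterior draws of treatment effects (toy simulation).
  set.seed(1)
  draws <- cbind(A = rnorm(4000, -0.6, 0.15),   # hypothetical posterior samples
                 B = rnorm(4000, -0.4, 0.15),
                 C = rnorm(4000,  0.0, 0.15))   # e.g., placebo

  ranks     <- t(apply(draws, 1, rank))         # rank 1 = best (lowest effect)
  k         <- ncol(draws)
  rank_prob <- sapply(1:k, function(r) colMeans(ranks == r))   # rankogram matrix

  # SUCRA = average of the cumulative rank probabilities over ranks 1..(k-1)
  sucra <- apply(rank_prob, 1, function(p) sum(cumsum(p)[1:(k - 1)]) / (k - 1))
  round(sucra, 3)   # values near 1 indicate a treatment likely to be among the best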

Bayesian Statistical Inference

In addition to frequentist inference, which is arguably more commonly used in most research fields, Bayesian statistics (75, 76) is another very important inferential framework. Its main advantage is that it properly accounts for uncertainty in the heterogeneity variance (77), and at the same time it is more flexible, as it can handle problems that frequentist techniques cannot, such as missing data. In addition, it is often considered more robust because it can give more precise effect estimates with narrower credible intervals. It should therefore not be seen as a competitor to frequentist statistical analysis, but as an additional tool that can contribute to a more meaningful result.

Bayesian statistics treats the unknown quantities as random variables and assigns a prior probability distribution to each of them; by specifying a joint probability distribution for the data (i.e., a likelihood), we obtain a full probability model for the set of observable and unobservable quantities. In short, in Bayesian inference, prior beliefs (represented by prior distributions) are combined with the observed data to arrive at a posterior distribution (Figure 2). Let us assume that the observed data are represented by y and the unknown parameters by θ. To draw inferences about θ, we use Bayes’ theorem (78, 79) to obtain a posterior distribution for making predictions about future events, i.e., the distribution of the model parameters conditional on the observed data: p(θ|y)∝p(θ)p(y|θ). Here p(y|θ) is the conditional probability of the data given the model parameters, known as the likelihood function, while p(θ) is the probability of particular parameter values in the population, i.e., the prior distribution. Therefore, the posterior distribution p(θ|y) is proportional to the likelihood function times the prior distribution (80, 81).
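
A minimal numerical illustration of this relationship, using a grid approximation in base R for a binomial response rate θ with a uniform (Beta(1,1)) prior; the data (14 responders out of 40) are invented:

  # Grid approximation of p(theta | y), proportional to p(theta) * p(y | theta).
  theta      <- seq(0.001, 0.999, by = 0.001)
  prior      <- dbeta(theta, 1, 1)                  # uniform prior
  likelihood <- dbinom(14, size = 40, prob = theta) # binomial likelihood
  posterior  <- prior * likelihood
  posterior  <- posterior / sum(posterior * 0.001)  # normalize to a proper density

  # Posterior mean and 95% credible interval (matches the Beta(15, 27) conjugate result).
  post_mean <- sum(theta * posterior * 0.001)
  cdf       <- cumsum(posterior * 0.001)
  c(mean  = post_mean,
    lower = theta[which.min(abs(cdf - 0.025))],
    upper = theta[which.min(abs(cdf - 0.975))])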

Figure 2. Illustration of Bayes’ theorem applied to medical research, where data, in the form of symptoms, are used to determine the likelihood of those symptoms if the patient has a particular disease. Bayes’ rule combines this probability with prior knowledge to determine the posterior probability that the patient has the disease, given the observed symptoms.

Methodology

Bayesian meta-analysis is mainly based on the hierarchical Bayes model, whose basic principles are very similar to those of the ordinary random-effects model. When fitting Bayesian meta-analysis models, it is critical to check whether the model has run for enough iterations to converge, and to perform sensitivity analyses with different priors to assess their effect on the overall results. The Markov chain Monte Carlo (MCMC) algorithm (82) used in Bayesian probabilistic models must have converged to the target distribution; otherwise, more iterations are required. MCMC simulation plays a very important role here because it allows the posterior distributions of the parameters underlying the NMA results to be estimated (83).
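
To make the convergence point concrete, the following is a minimal random-walk Metropolis sketch for the pooled effect of a Bayesian common-effect meta-analysis (base R, invented data, vague normal prior). Real NMA models are fitted with the dedicated packages listed in Table I; the sketch only illustrates the sampling-and-convergence logic referred to above:

  # Random-walk Metropolis for the pooled effect mu (common-effect model).
  yi  <- c(-0.30, -0.55, -0.10, -0.42)   # study estimates (hypothetical)
  sei <- c( 0.15,  0.20,  0.18,  0.12)   # their standard errors (hypothetical)

  log_post <- function(mu) {
    sum(dnorm(yi, mean = mu, sd = sei, log = TRUE)) +   # likelihood
      dnorm(mu, mean = 0, sd = 10, log = TRUE)          # vague prior
  }

  set.seed(42)
  n_iter <- 20000; mu <- 0; chain <- numeric(n_iter)
  for (i in 1:n_iter) {
    prop <- mu + rnorm(1, 0, 0.1)                       # proposal step
    if (log(runif(1)) < log_post(prop) - log_post(mu)) mu <- prop
    chain[i] <- mu
  }

  keep <- chain[-(1:5000)]                              # discard burn-in
  c(mean = mean(keep), quantile(keep, c(0.025, 0.975)))
  # Trace plots and R-hat over several chains should confirm convergence
  # before these posterior summaries are trusted.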

Software Options for Fitting NMA Models and Assessing Inconsistency

The most popular and currently available R (84) software packages for Bayesian and frequentist inference in NMA are listed in Table I. Details on how data are analyzed, the input options, and the corresponding statistical models can be found in each package’s manual, referenced accordingly. However, because most of these packages require familiarity with statistical programming (i.e., the use of routines for performing NMA), there are also toolkits based on simple, standard instructions that are intended to present the results using only the graphs of the analyses (85).

Table I. Summary of the most important R software packages for reporting and interpretation of results regarding Bayesian and frequentist inference in network meta-analyses.

An Example of a Network Meta-analysis in Diabetes

As an example, consider an NMA in type 2 diabetes estimating the relative effects of treatments added to baseline sulfonylurea therapy on HbA1c (glycated hemoglobin), where the outcome was the mean HbA1c change from baseline measured after a follow-up ranging from 3 to 12 months (113). The studies contained in this data set compared different treatments for blood glucose control in patients with diabetes. The researchers selected 26 studies comprising a total of 6,646 patients and 10 drug groups: acarbose (acar), benfluorex (benf), metformin (metf), miglitol (migl), pioglitazone (piog), placebo (plac), rosiglitazone (rosi), sitagliptin (sita), sulfonylurea alone (sulf), and vildagliptin (vild). The corresponding network contained 15 different designs (i.e., sets of treatments compared within one study). The data recorded the treatment effect (TE), expressed here as the MD, the standard error of the effect (seTE), the names of the treatments, and the name of each study. The effect measure was the MD in blood plasma glucose concentration, and the fixed-effects model was used. The network was visualized (Figure 3) with the netmeta R statistical package (108, 114). Based on the ranking of treatments (rankogram) in the network using the P-score (73), the top three interventions are rosiglitazone, which appears to be the most effective (P-score=0.978), metformin (P-score=0.851), and pioglitazone (P-score=0.768), while the corresponding SUCRA values deviate very little (rosiglitazone=0.983, metformin=0.852, pioglitazone=0.766) (108). However, clinicians and decision-makers should not consider an intervention to be the best simply because it is ranked first, unless the quality of the evidence used and the confidence in the NMA results are also considered (30).
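
The analysis described above can be reproduced in outline with the netmeta package. The sketch below assumes that netmeta's bundled Senn2013 dataset corresponds to the 26-study, 10-treatment HbA1c network described here and uses the package's netmeta, netgraph and netrank functions; argument names may differ slightly between package versions:

  # Sketch of the diabetes NMA with the netmeta package (assumes the bundled
  # Senn2013 data: TE, seTE, treat1, treat2, studlab; MD in HbA1c change).
  library(netmeta)
  data(Senn2013)

  nma <- netmeta(TE, seTE, treat1, treat2, studlab,
                 data = Senn2013, sm = "MD",
                 reference.group = "plac")

  netgraph(nma)                        # network plot, as in Figure 3
  netrank(nma, small.values = "good")  # P-scores; a lower HbA1c change is better
  # The fixed-effect (common-effect) results reported by netmeta correspond to
  # the fixed-effects model used in the study described above.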

Figure 3. The treatment network for the meta-analysis of multiple interventions investigating efficacy in diabetes. Nodes represent the treatments in the network and lines represent direct comparisons between treatments. Nodes are weighted by the number of patients receiving each treatment relative to the number of participants in all studies, while the thickness of each line grows in proportion to the number of studies contributing to that comparison.

Discussion

In general, NMA can provide increased statistical power when the network is adequately connected and sample sizes are sufficient. Mathematical approaches exist to ‘measure’ network connectivity, but raw data are required to calculate these indicators (39, 115). However, inappropriate use of NMA can lead to erroneous results, for example when network connectivity is low and statistical power is therefore low (1, 44, 116), or when results derived from indirect evidence, which is essentially observational, are not interpreted with due care (7, 14). Regarding indirect treatment comparisons, there is disagreement among researchers about the validity of their use in decision-making, especially when direct treatment comparisons are also available (117-119). More specifically, it is argued that decisions should not be made based on rank probabilities alone (especially when treatments have not been directly compared), as these may be incompletely informed (120), and because estimates of rank probabilities are extremely sensitive to factors such as an unequal number of studies per comparison in the network, the sample sizes of individual studies, the overall network configuration, and the effect sizes between treatments. For example, an unequal number of studies per comparison may lead to biased (121) estimates of treatment rank probabilities for the network considered, and thus to an incorrect NMA, as a result of increased variability in the precision of treatment effect estimates (122). For these reasons, researchers should report in detail the strategy they intend to follow to assess transitivity and consistency and should clarify their calculation methods. Clinicians should also always be cautious about effect sizes and treatment rankings, because a good ranking does not necessarily mean a clinically important effect size, and treatment rankings derived from NMAs can often show some degree of inaccuracy (123). This is because their uncertainty can be ignored, so the rankings give the illusion that some interventions are better than others even when the relative effects do not differ from zero beyond chance (28). Even so, NMA has important advantages over pairwise meta-analyses: especially when there are loops of evidence, the Bayesian NMA approach has been shown to improve effect estimates considerably compared with separate pairwise meta-analyses (124).
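
As a simple illustration of such connectivity checks, node degrees and reachability can be computed from the edge list of direct comparisons; a minimal base-R sketch with an invented four-treatment network:

  # Node degrees and a connectivity check for a treatment network
  # (edge list of direct comparisons; the network itself is invented).
  edges <- rbind(c("plac", "metf"), c("plac", "rosi"),
                 c("metf", "rosi"), c("plac", "sita"))
  trts  <- sort(unique(c(edges)))
  adj   <- matrix(0, length(trts), length(trts), dimnames = list(trts, trts))
  adj[edges] <- 1; adj <- adj + t(adj)   # undirected adjacency matrix

  rowSums(adj > 0)                       # node degree = number of direct comparators

  # Breadth-first expansion from the first node: the network is connected
  # (no isolated sub-networks) if every treatment is reachable.
  reached <- trts[1]
  repeat {
    nxt <- trts[colSums(adj[reached, , drop = FALSE]) > 0]
    if (all(nxt %in% reached)) break
    reached <- union(reached, nxt)
  }
  all(trts %in% reached)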

Another equally important aspect that should be considered before constructing an NMA, and that could help researchers further improve the results, is, as mentioned above, exploring the geometry of the network and, by extension, the number of nodes (treatments) to be included, because a decision maker may not be interested in all pairwise comparisons in the network. Because the therapeutic effect between two active treatments can often be more influential in decision-making than the relative effects of all active treatments versus placebo, researchers may restrict the network to a subset of available treatments, namely those considered clinically more relevant (36) (the most effective treatments). Conversely, including data that compare treatments of no direct clinical interest provides additional indirect evidence for the comparisons of clinical interest, which may increase the precision of the estimates (9, 125) but may also introduce additional inconsistency (126). In network studies it is common to exclude trials and specific comparators based on a variety of criteria, because including all possible interventions ever evaluated in an RCT yields unclear and discouraging conclusions. Moreover, some trials deviate markedly from the others and it is not advisable to combine them in the NMA (trial-level outliers); studies have proposed corresponding Bayesian outlier-detection measures (127). However, removing treatments from an NMA can significantly change the effect estimates and thus the probability ranking of the most appropriate treatment, with well-connected treatments appearing to have the most influence (128). Consequently, the greatest impact on the results occurs when well-connected nodes are removed, so the most thoroughly evaluated treatments available for a condition must be considered necessary for a network to be valid. Special care is required when potential nodes are excluded, and decisions on eligibility criteria must be carefully justified, because small “mutations” in the geometry of the NMA have a direct impact on the analysis and, in turn, affect the decision-making process. This is why the ‘node-making’ process has been identified as one of the most important problems in NMA, where different ways of generating treatment nodes can significantly affect the results (129-131). In addition to network size, it has been proposed to incorporate specific graph-theory statistical measures to complement the information in the graph (132, 133). Particularly for distinguishing similar NMAs, sensitivity analysis is critical when ‘confounding’ is identified in the initial review, in order to support the absence of heterogeneity, especially when the studies are few (134).

A further strong reason why the definition of nodes is critical is that interventions may combine more than one treatment. Researchers commonly tend to merge treatment arms, so that treatments with different characteristics, or patients with different subtypes that cannot belong to the same group, are merged as a single treatment at a node. This is done to increase the statistical power of the network or to connect the network (1), but it can introduce bias (135, 136). The simplest approach would be to analyze each combination as a different node in the NMA. Furthermore, evidence has shown that meta-analysis across multiple smaller RCTs is more valuable than a single large RCT (137): as there are always confounding factors in studies that can affect the results, the variation in treatment effect between trials gives a better estimate of the mean effect than a single RCT. A simulation study showed that when treatment effects are truly additive, the ‘conventional’ NMA model does not outperform these alternative approaches (138).

A notable and very recent development in NMA, which is steadily gaining ground, is the incorporation of non-randomized data to estimate relative treatment effects, especially when randomized data are limited, in order to avoid disconnected networks. Incorporating real-world evidence from non-randomized studies can confirm findings from RCTs, thereby increasing the accuracy of the results and strengthening the decision-making process (139). Because the quality of a meta-analysis depends heavily on the availability of individual study data, the use of individual patient data (IPD) in NMA is increasingly recognized in the scientific community. More specifically, the benefits of integrating various proportions of IPD studies into one NMA, and of combining aggregate data (AD) and IPD in the same NMA, have been explored. Standard NMA methods combine aggregate data from multiple studies under the assumption that effect modifiers are balanced across populations (95); newer methods, such as population-adjustment methods, relax this assumption. One such approach is to analyze the IPD from each study in a meta-regression model. IPD-based NMA can increase the precision of estimated treatment effects; it can also improve network coherence and account for heterogeneity across studies by adjusting for participant-level effect modifiers and by adopting more advanced models for missing response data. Although such data are not always available, an increased proportion of IPD has been shown to lead to more accurate estimates for most models (140, 141), and these methods need further evaluation. A typical and very recent example is the multilevel network meta-regression (ML-NMR) method, a generalization of NMA for synthesizing a mixture of IPD and AD studies that provides estimates for a target population for decision-making (95, 96, 103, 142). This use of meta-analysis, which also represents the future of population adjustment including individual studies, can be extended to areas such as prognostic models and prognostic factors, which are particularly important in medical disciplines such as oncology.

Conclusion

As NMA becomes more popular, and therefore more influential in the scientific community, familiarity with statistical network concepts will become indispensable as the demands for transparency and for a more reliable synthesis of the original data increase. Enriching the data held in meta-analysis databases, combined with the judgment of experienced researchers, can improve the construction of more reliable predictive models for the desired outcome. This should be done, however, on the premise that the construction and study of an NMA must always be based on detailed protocols, as this is the only way to protect against decisions such as the selective use of evidence. In any case, NMA as a statistical tool is undoubtedly very useful for evaluating the comparative results of multiple competing interventions in clinical practice and is the ‘next step’ in meta-analysis for further health technology development. However, more specialized training is needed to ensure that the basic methodologies underlying NMAs are understood by health researchers, to maximize their ability to interpret and validate the results.

Footnotes

  • Authors’ Contributions

    Conceptualization: GB; IP. Literature search: GB. Network analysis: GB. Writing original draft: GB. Critically revised the work: GB; IP. Supervised the study: IP.

  • Conflicts of Interest

    The Authors declared no potential conflicts of interest in relation to this study.

  • Received March 13, 2023.
  • Revision received March 27, 2023.
  • Accepted March 31, 2023.
  • Copyright © 2023, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY-NC-ND) 4.0 international license (https://creativecommons.org/licenses/by-nc-nd/4.0).

References

  1. Ter Veer E, van Oijen MGH and van Laarhoven HWM: The use of (network) meta-analysis in clinical oncology. Front Oncol 9: 822, 2019. PMID: 31508373. DOI: 10.3389/fonc.2019.00822
  2. Gyawali B: Meta-analyses and RCTs in oncology - what is the right balance? Lancet Oncol 19(12): 1565-1566, 2018. DOI: 10.1016/S1470-2045(18)30655-7
  3. Ge L, Tian JH, Li XX, Song F, Li L, Zhang J, Li G, Pei GQ, Qiu X and Yang KH: Epidemiology characteristics, methodological assessment and reporting of statistical analysis of network meta-analyses in the field of cancer. Sci Rep 6: 37208, 2016. PMID: 27848997. DOI: 10.1038/srep37208
  4. Murad MH, Montori VM, Ioannidis JP, Jaeschke R, Devereaux PJ, Prasad K, Neumann I, Carrasco-Labra A, Agoritsas T, Hatala R, Meade MO, Wyer P, Cook DJ and Guyatt G: How to read a systematic review and meta-analysis and apply the results to patient care: users’ guides to the medical literature. JAMA 312(2): 171-179, 2014. PMID: 25005654. DOI: 10.1001/jama.2014.5559
  5. Dias S and Caldwell DM: Network meta-analysis explained. Arch Dis Child Fetal Neonatal Ed 104(1): F8-F12, 2019. PMID: 30425115. DOI: 10.1136/archdischild-2018-315224
  6. Riley RD, Jackson D, Salanti G, Burke DL, Price M, Kirkham J and White IR: Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples. BMJ 358: j3932, 2017. PMID: 28903924. DOI: 10.1136/bmj.j3932
  7. Lumley T: Network meta-analysis for indirect treatment comparisons. Stat Med 21(16): 2313-2324, 2002. PMID: 12210616. DOI: 10.1002/sim.1201
  8. Phillippo DM: multinma: Bayesian network meta-analysis of individual and aggregate data. R package version 0.5.0, 2022. DOI: 10.5281/zenodo.3904454
  9. Salanti G: Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods 3(2): 80-97, 2012. PMID: 26062083. DOI: 10.1002/jrsm.1037
  10. Jansen JP, Crawford B, Bergman G and Stam W: Bayesian meta-analysis of multiple treatment comparisons: an introduction to mixed treatment comparisons. Value Health 11(5): 956-964, 2008. PMID: 18489499. DOI: 10.1111/j.1524-4733.2008.00347.x
  11. Jansen JP, Fleurence R, Devine B, Itzler R, Barrett A, Hawkins N, Lee K, Boersma C, Annemans L and Cappelleri JC: Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health 14(4): 417-428, 2011. PMID: 21669366. DOI: 10.1016/j.jval.2011.04.002
  12. Papakonstantinou T, Nikolakopoulou A, Egger M and Salanti G: In network meta-analysis, most of the information comes from indirect evidence: empirical study. J Clin Epidemiol 124: 42-49, 2020. PMID: 32302680. DOI: 10.1016/j.jclinepi.2020.04.009
  13. Lu G and Ades AE: Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med 23(20): 3105-3124, 2004. PMID: 15449338. DOI: 10.1002/sim.1875
  14. Song F, Altman DG, Glenny AM and Deeks JJ: Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses. BMJ 326(7387): 472, 2003. PMID: 12609941. DOI: 10.1136/bmj.326.7387.472
  15. Mills EJ, Bansback N, Ghement I, Thorlund K, Kelly S, Puhan MA and Wright J: Multiple treatment comparison meta-analyses: a step forward into complexity. Clin Epidemiol 3: 193-202, 2011. PMID: 21750628. DOI: 10.2147/CLEP.S16526
  16. Tonin FS, Rotta I, Mendes AM and Pontarolo R: Network meta-analysis: a technique to gather evidence from direct and indirect comparisons. Pharm Pract (Granada) 15(1): 943, 2017. PMID: 28503228. DOI: 10.18549/PharmPract.2017.01.943
  17. Sutton A, Ades AE, Cooper N and Abrams K: Use of indirect and mixed treatment comparisons for technology assessment. Pharmacoeconomics 26(9): 753-767, 2008. PMID: 18767896. DOI: 10.2165/00019053-200826090-00006
  18. Mavridis D, Giannatsi M, Cipriani A and Salanti G: A primer on network meta-analysis with emphasis on mental health. Evid Based Ment Health 18(2): 40-46, 2015. PMID: 25908686. DOI: 10.1136/eb-2015-102088
  19. Sutton AJ and Higgins JP: Recent developments in meta-analysis. Stat Med 27(5): 625-650, 2008. PMID: 17590884. DOI: 10.1002/sim.2934
  20. Caldwell DM: An overview of conducting systematic reviews with network meta-analysis. Syst Rev 3: 109, 2014. PMID: 25267336. DOI: 10.1186/2046-4053-3-109
  21. Nikolakopoulou A, Chaimani A, Veroniki AA, Vasiliadis HS, Schmid CH and Salanti G: Characteristics of networks of interventions: a description of a database of 186 published networks. PLoS One 9(1): e86754, 2014. PMID: 24466222. DOI: 10.1371/journal.pone.0086754
  22. Petropoulou M, Nikolakopoulou A, Veroniki AA, Rios P, Vafaei A, Zarin W, Giannatsi M, Sullivan S, Tricco AC, Chaimani A, Egger M and Salanti G: Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol 82: 20-28, 2017. PMID: 27864068. DOI: 10.1016/j.jclinepi.2016.11.002
  23. Dias S, Sutton AJ, Ades AE and Welton NJ: Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Decis Making 33(5): 607-617, 2013. PMID: 23104435. DOI: 10.1177/0272989X12458724
  24. Fanelli D, Costas R and Ioannidis JP: Meta-assessment of bias in science. Proc Natl Acad Sci U S A 114(14): 3714-3719, 2017. PMID: 28320937. DOI: 10.1073/pnas.1618569114
  25. Efthimiou O and White IR: The dark side of the force: Multiplicity issues in network meta-analysis and how to address them. Res Synth Methods 11(1): 105-122, 2020. PMID: 31476256. DOI: 10.1002/jrsm.1377
  26. Shokraneh F and Adams CE: A simple formula for enumerating comparisons in trials and network meta-analysis. F1000Res 8: 38, 2019. PMID: 30863537. DOI: 10.12688/f1000research.17352.2
  27. Salanti G, Kavvoura FK and Ioannidis JP: Exploring the geometry of treatment networks. Ann Intern Med 148(7): 544-553, 2008. PMID: 18378949. DOI: 10.7326/0003-4819-148-7-200804010-00011
  28. Mills EJ, Thorlund K and Ioannidis JP: Demystifying trial networks and network meta-analysis. BMJ 346: f2914, 2013. PMID: 23674332. DOI: 10.1136/bmj.f2914
  29. Rouse B, Chaimani A and Li T: Network meta-analysis: an introduction for clinicians. Intern Emerg Med 12(1): 103-111, 2017. PMID: 27913917. DOI: 10.1007/s11739-016-1583-7
  30. Mbuagbaw L, Rochwerg B, Jaeschke R, Heels-Andsell D, Alhazzani W, Thabane L and Guyatt GH: Approaches to interpreting and choosing the best treatments in network meta-analyses. Syst Rev 6(1): 79, 2017. PMID: 28403893. DOI: 10.1186/s13643-017-0473-z
  31. Chiocchia V, Nikolakopoulou A, Papakonstantinou T, Egger M and Salanti G: Agreement between ranking metrics in network meta-analysis: an empirical study. BMJ Open 10(8): e037744, 2020. PMID: 32819946. DOI: 10.1136/bmjopen-2020-037744
  32. Veroniki AA, Straus SE, Rücker G and Tricco AC: Is providing uncertainty intervals in treatment ranking helpful in a network meta-analysis? J Clin Epidemiol 100: 122-129, 2018. PMID: 29432861. DOI: 10.1016/j.jclinepi.2018.02.009
  33. Salanti G, Ades AE and Ioannidis JP: Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial. J Clin Epidemiol 64(2): 163-171, 2011. PMID: 20688472. DOI: 10.1016/j.jclinepi.2010.03.016
  34. Ahn E and Kang H: Concepts and emerging issues of network meta-analysis. Korean J Anesthesiol 74(5): 371-382, 2021. PMID: 34551467. DOI: 10.4097/kja.21358
  35. Catalá-López F, Tobías A, Cameron C, Moher D and Hutton B: Network meta-analysis for comparing treatment effects of multiple interventions: an introduction. Rheumatol Int 34(11): 1489-1496, 2014. PMID: 24691560. DOI: 10.1007/s00296-014-2994-2
  36. Efthimiou O, Debray TP, van Valkenhoef G, Trelle S, Panayidou K, Moons KG, Reitsma JB, Shang A, Salanti G and GetReal Methods Review Group: GetReal in network meta-analysis: a review of the methodology. Res Synth Methods 7(3): 236-263, 2016. PMID: 26754852. DOI: 10.1002/jrsm.1195
  37. Phillips MR, Steel DH, Wykoff CC, Busse JW, Bannuru RR, Thabane L, Bhandari M, Chaudhary V and Retina Evidence Trials InterNational Alliance (R.E.T.I.N.A.) Study Group: A clinician’s guide to network meta-analysis. Eye (Lond) 36(8): 1523-1526, 2022. PMID: 35145277. DOI: 10.1038/s41433-022-01943-5
  38. Jansen JP and Naci H: Is network meta-analysis as valid as standard pairwise meta-analysis? It all depends on the distribution of effect modifiers. BMC Med 11: 159, 2013. PMID: 23826681. DOI: 10.1186/1741-7015-11-159
  39. Fernández-Castilla B and Van den Noortgate W: Network meta-analysis in psychology and educational sciences: A systematic review of their characteristics. Behav Res Methods, 2022. PMID: 35821493. DOI: 10.3758/s13428-022-01905-5
  40. Le-Rademacher J and Wang X: Time-to-event data: an overview and analysis considerations. J Thorac Oncol 16(7): 1067-1074, 2021. PMID: 33887465. DOI: 10.1016/j.jtho.2021.04.004
  41. Spruance SL, Reid JE, Grace M and Samore M: Hazard ratio in clinical trials. Antimicrob Agents Chemother 48(8): 2787-2792, 2004. PMID: 15273082. DOI: 10.1128/AAC.48.8.2787-2792.2004
  42. Cox D: Regression models and life-tables. J R Stat Soc Series B (Methodological) 34(2): 187-202, 1972. DOI: 10.1111/j.2517-6161.1972.tb00899.x
  43. Spineli LM: Modeling missing binary outcome data while preserving transitivity assumption yielded more credible network meta-analysis results. J Clin Epidemiol 105: 19-26, 2019. PMID: 30223064. DOI: 10.1016/j.jclinepi.2018.09.002
  44. Cipriani A, Higgins JP, Geddes JR and Salanti G: Conceptual and technical challenges in network meta-analysis. Ann Intern Med 159(2): 130-137, 2013. PMID: 23856683. DOI: 10.7326/0003-4819-159-2-201307160-00008
  45. Baker SG and Kramer BS: The transitive fallacy for randomized trials: if A bests B and B bests C in separate trials, is A better than C? BMC Med Res Methodol 2: 13, 2002. PMID: 12429069. DOI: 10.1186/1471-2288-2-13
  46. Xiong T, Parekh-Bhurke S, Loke YK, Abdelhamid A, Sutton AJ, Eastwood AJ, Holland R, Chen YF, Walsh T, Glenny AM and Song F: Overall similarity and consistency assessment scores are not sufficiently accurate for predicting discrepancy between direct and indirect comparison estimates. J Clin Epidemiol 66(2): 184-191, 2013. PMID: 23186991. DOI: 10.1016/j.jclinepi.2012.06.022
  47. Katsanos K: Appraising inconsistency between direct and indirect estimates (5th Section, Chapter 12). In: Network Meta-Analysis: Evidence Synthesis with Mixed Treatment Comparison. Biondi-Zoccai G (ed.). Hauppauge, NY, USA, Nova Science Publishers, pp. 191-210, 2014.
  48. Lu G and Ades A: Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc 101(474): 447-459, 2006. DOI: 10.1198/016214505000001302
  49. Bucher HC, Guyatt GH, Griffith LE and Walter SD: The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol 50(6): 683-691, 1997. PMID: 9250266. DOI: 10.1016/s0895-4356(97)00049-8
  50. White IR, Barrett JK, Jackson D and Higgins JP: Consistency and inconsistency in network meta-analysis: model estimation using multivariate meta-regression. Res Synth Methods 3(2): 111-125, 2012. PMID: 26062085. DOI: 10.1002/jrsm.1045
  51. Dias S, Welton NJ and Ades AE: Study designs to detect sponsorship and other biases in systematic reviews. J Clin Epidemiol 63(6): 587-588, 2010. PMID: 20434021. DOI: 10.1016/j.jclinepi.2010.01.005
  52. Cavanaugh JE and Neath AA: The Akaike information criterion: Background, derivation, properties, application, interpretation, and refinements. Wiley Interdiscip Rev Comput Stat 11(3): e1460, 2019. DOI: 10.1002/wics.1460
  53. Spiegelhalter DJ, Best NG, Carlin BP and van der Linde A: Bayesian measures of model complexity and fit. J R Stat Soc B Stat Methodol 64(4): 583-639, 2002. DOI: 10.1111/1467-9868.00353
  54. Freeman SC, Fisher D, White IR, Auperin A and Carpenter JR: Identifying inconsistency in network meta-analysis: Is the net heat plot a reliable method? Stat Med 38(29): 5547-5564, 2019. PMID: 31647136. DOI: 10.1002/sim.8383
  55. Krahn U, Binder H and König J: A graphical tool for locating inconsistency in network meta-analyses. BMC Med Res Methodol 13: 35, 2013. PMID: 23496991. DOI: 10.1186/1471-2288-13-35
  56. Dias S, Welton NJ, Caldwell DM and Ades AE: Checking consistency in mixed treatment comparison meta-analysis. Stat Med 29(7-8): 932-944, 2010. PMID: 20213715. DOI: 10.1002/sim.3767
  57. van Valkenhoef G, Dias S, Ades AE and Welton NJ: Automated generation of node-splitting models for assessment of inconsistency in network meta-analysis. Res Synth Methods 7(1): 80-93, 2016. PMID: 26461181. DOI: 10.1002/jrsm.1167
  58. Yu-Kang T: Node-splitting generalized linear mixed models for evaluation of inconsistency in network meta-analysis. Value Health 19(8): 957-963, 2016. PMID: 27987646. DOI: 10.1016/j.jval.2016.07.005
  59. Beyene J, Bonner AJ and Neupane B: Choosing the statistical model and between fixed and random effects (Chapter 8). In: Network Meta-Analysis: Evidence Synthesis with Mixed Treatment Comparison. Biondi-Zoccai G (ed.). Hauppauge, NY, USA, Nova Science Publishers, pp. 117-140, 2014.
  60. Gagnier JJ: Appraising between-study heterogeneity (5th Section, Chapter 11). In: Network Meta-Analysis: Evidence Synthesis with Mixed Treatment Comparison. Biondi-Zoccai G (ed.). Hauppauge, NY, USA, Nova Science Publishers, pp. 171-190, 2014.
  61. Pereira TV, Patsopoulos NA, Salanti G and Ioannidis JP: Critical interpretation of Cochran’s Q test depends on power and prior assumptions about heterogeneity. Res Synth Methods 1(2): 149-161, 2010. PMID: 26061380. DOI: 10.1002/jrsm.13
  62. Huedo-Medina TB, Sánchez-Meca J, Marín-Martínez F and Botella J: Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychol Methods 11(2): 193-206, 2006. PMID: 16784338. DOI: 10.1037/1082-989X.11.2.193
  63. Patil KD: Cochran’s Q test: Exact distribution. J Am Stat Assoc 70(349): 186-189, 1975. DOI: 10.1080/01621459.1975.10480285
  64. Cochran WG: The combination of estimates from different experiments. Biometrics 10(1): 101-129, 1954. DOI: 10.2307/3001666
  65. Piepho HP, Madden LV, Roger J, Payne R and Williams ER: Estimating the variance for heterogeneity in arm-based network meta-analysis. Pharm Stat 17(3): 264-277, 2018. PMID: 29676023. DOI: 10.1002/pst.1857
  66. Higgins JP, Thompson SG, Deeks JJ and Altman DG: Measuring inconsistency in meta-analyses. BMJ 327(7414): 557-560, 2003. PMID: 12958120. DOI: 10.1136/bmj.327.7414.557
  67. von Hippel PT: The heterogeneity statistic I(2) can be biased in small meta-analyses. BMC Med Res Methodol 15: 35, 2015. PMID: 25880989. DOI: 10.1186/s12874-015-0024-z
  68. Kao YS, Ma KS, Wu MY, Wu YC, Tu YK and Hung CH: Topical prevention of radiation dermatitis in head and neck cancer patients: a network meta-analysis. In Vivo 36(3): 1453-1460, 2022. PMID: 35478163. DOI: 10.21873/invivo.12851
  69. Jansen JP and Cope S: Meta-regression models to address heterogeneity and inconsistency in network meta-analysis of survival outcomes. BMC Med Res Methodol 12: 152, 2012. PMID: 23043545. DOI: 10.1186/1471-2288-12-152
  70. Baker WL, White CM, Cappelleri JC, Kluger J, Coleman CI and Health Outcomes, Policy, and Economics (HOPE) Collaborative Group: Understanding heterogeneity in meta-analysis: the role of meta-regression. Int J Clin Pract 63(10): 1426-1434, 2009. PMID: 19769699. DOI: 10.1111/j.1742-1241.2009.02168.x
  71. Nikolakopoulou A, Higgins JPT, Papakonstantinou T, Chaimani A, Del Giovane C, Egger M and Salanti G: CINeMA: An approach for assessing confidence in the results of a network meta-analysis. PLoS Med 17(4): e1003082, 2020. PMID: 32243458. DOI: 10.1371/journal.pmed.1003082
  72. Turner RM, Domínguez-Islas CP, Jackson D, Rhodes KM and White IR: Incorporating external evidence on between-trial heterogeneity in network meta-analysis. Stat Med 38(8): 1321-1335, 2019. PMID: 30488475. DOI: 10.1002/sim.8044
  73. Rücker G and Schwarzer G: Ranking treatments in frequentist network meta-analysis works without resampling methods. BMC Med Res Methodol 15: 58, 2015. PMID: 26227148. DOI: 10.1186/s12874-015-0060-8
  74. Antoniou SA, Koelemay M, Antoniou GA and Mavridis D: A practical guide for application of network meta-analysis in evidence synthesis. Eur J Vasc Endovasc Surg 58(1): 141-144, 2019. PMID: 30528457. DOI: 10.1016/j.ejvs.2018.10.023
  75. Van de Schoot R, Depaoli S, King R, Kramer B, Märtens K, Tadesse MG, Vannucci M, Gelman A, Veen D, Willemsen J and Yau C: Bayesian statistics and modelling. Nat Rev Methods Primers 1(1): 1, 2021. DOI: 10.1038/s43586-020-00001-2
  76. López Puga J, Krzywinski M and Altman N: Points of significance: Bayes’ theorem. Nat Methods 12(4): 277-278, 2015. PMID: 26005726. DOI: 10.1038/nmeth.3335
  77. Hackenberger BK: Bayesian meta-analysis now - let’s do it. Croat Med J 61(6): 564-568, 2020. PMID: 33410305. DOI: 10.3325/cmj.2020.61.564
  78. Stone JV: Bayes’ rule: A tutorial introduction to Bayesian analysis. Sebtel Press, 2013. DOI: 10.13140/2.1.1371.6801
  79. Van Valkenhoef G, Tervonen T, de Brock B and Hillege H: Algorithmic parameterization of mixed treatment comparisons. Stat Comput 22(5): 1099-1111, 2012. DOI: 10.1007/s11222-011-9281-9
  80. Lunn DJ, Thomas A, Best N and Spiegelhalter D: WinBUGS - A Bayesian modeling framework: Concepts, structure, and extensibility. Stat Comput 10(4): 325-337, 2000. DOI: 10.1023/a:1008929526011
  81. Hoff PD: A first course in Bayesian statistical methods. Springer Texts in Statistics, 2009. DOI: 10.1007/978-0-387-92407-6
  82. Lunn DJ, Best N, Thomas A, Wakefield J and Spiegelhalter D: Bayesian analysis of population PK/PD models: general concepts and software. J Pharmacokinet Pharmacodyn 29(3): 271-307, 2002. PMID: 12449499. DOI: 10.1023/a:1020206907668
  83. Harrer M, Cuijpers P, Furukawa T and Ebert D: Doing Meta-Analysis with R. 2021. DOI: 10.1201/9781003107347
  84. R Core Team: R: A Language and Environment for Statistical Computing. Vienna, Austria, R Foundation for Statistical Computing, 2014. Available at: http://www.R-project.org [Last accessed on March 31, 2023]
  85. Owen RK, Bradbury N, Xin Y, Cooper N and Sutton A: MetaInsight: An interactive web-based tool for analyzing, interrogating, and visualizing network meta-analyses using R-shiny and netmeta. Res Synth Methods 10(4): 569-581, 2019. PMID: 31349391. DOI: 10.1002/jrsm.1373
  86. Plummer M: JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), Vienna, Austria, pp. 1-10, 2003. Available at: https://www.r-project.org/conferences/DSC-2003/Proceedings/Plummer.pdf [Last accessed on March 31, 2023]
  87. van Valkenhoef G, Lu G, de Brock B, Hillege H, Ades AE and Welton NJ: Automating network meta-analysis. Res Synth Methods 3(4): 285-299, 2012. PMID: 26053422. DOI: 10.1002/jrsm.1054
  88. Plummer M: rjags: Bayesian Graphical Models Using MCMC. R package version 4-6, 2016. Available at: https://CRAN.R-project.org/package=rjags [Last accessed on March 31, 2023]
    1. Lunn D,
    2. Spiegelhalter D,
    3. Thomas A and
    4. Best N
    : The BUGS project: Evolution, critique and future directions. Stat Med 28(25): 3049-3067, 2009. PMID: 19630097. DOI: 10.1002/sim.3680
    OpenUrlCrossRefPubMed
    1. Béliveau A,
    2. Boyne DJ,
    3. Slater J,
    4. Brenner D and
    5. Arora P
    : BUGSnet: an R package to facilitate the conduct and reporting of Bayesian network Meta-analyses. BMC Med Res Methodol 19(1): 196, 2019. PMID: 31640567. DOI: 10.1186/s12874-019-0829-2
    OpenUrlCrossRefPubMed
    1. Van Valkenhoef G and
    2. Kuiper J
    : gemtc: Network Meta-Analysis Using Bayesian Methods. R package version 0.8-2, 2016. Available at: https://CRAN.R-project.org/package=gemtc [Last accessed on March 31, 2023]
    1. Spiegelhalter D,
    2. Thomas A,
    3. Best N and
    4. Lunn D
    : OpenBUGS user manual, version 3.2.3. MRC Biostatistics Unit, Cambridge, 2014. Available at: https://www.mrc-bsu.cam.ac.uk/wp-content/uploads/2021/06/OpenBUGS_Manual.pdf [Last accessed on March 31, 2023]
    1. Lawson A
    : Using R for Bayesian spatial and spatio-temporal health modeling. 2021. DOI: 10.1201/9781003043997
    1. Sturtz S,
    2. Ligges U and
    3. Gelman A
    : R2WinBUGS: A package for running WinBUGS from R. Journal of Statistical Software 12(3): 1-16, 2015. DOI: 10.18637/jss.v012.i03
    OpenUrlCrossRef
  79. ↵
    1. Phillippo DM,
    2. Dias S,
    3. Ades AE,
    4. Belger M,
    5. Brnabic A,
    6. Schacht A,
    7. Saure D,
    8. Kadziola Z and
    9. Welton NJ
    : Multilevel network meta-regression for population-adjusted treatment comparisons. J R Stat Soc Ser A Stat Soc 183(3): 1189-1210, 2020. PMID: 32684669. DOI: 10.1111/rssa.12579
    OpenUrlCrossRefPubMed
  80. ↵
    1. Singh J,
    2. Gsteiger S,
    3. Wheaton L,
    4. Riley RD,
    5. Abrams KR,
    6. Gillies CL and
    7. Bujkiewicz S
    : Bayesian network meta-analysis methods for combining individual participant data and aggregate data from single arm trials and randomised controlled trials. BMC Med Res Methodol 22(1): 186, 2022. PMID: 35818035. DOI: 10.1186/s12874-022-01657-y
    OpenUrlCrossRefPubMed
    1. Pandey S,
    2. Sharma A and
    3. Siddiqui M
    : NM3 extending the network meta-analysis (NMA) framework to multilevel network meta-regression (ML-NMR): a worked example of ML-NMR vs. standard NMA. Value in Health 23: S407, 2021. DOI: 10.1016/j.jval.2020.08.058
    OpenUrlCrossRef
    1. Freeman SC
    : Individual patient data meta-analysis and network meta-analysis. Methods Mol Biol 2345: 279-298, 2022. PMID: 34550597. DOI: 10.1007/978-1-0716-1566-9_17
    OpenUrlCrossRefPubMed
    1. Phillippo DM
    : Calibration of treatment effects in network meta-analysis using individual patient data. Ph.D. thesis, University of Bristol, UK, 2019. Available at: https://research-information.bris.ac.uk/en/studentTheses/calibration-of-treatment-effects-in-network-meta-analysis-using-i [Last accessed on March 31, 2023]
    1. Jansen JP
    : Network meta-analysis of individual and aggregate level data. Res Synth Methods 3(2): 177-190, 2012. PMID: 26062089. DOI: 10.1002/jrsm.1048
    OpenUrlCrossRefPubMed
    1. Kanters S,
    2. Karim ME,
    3. Thorlund K,
    4. Anis AH,
    5. Zoratti M and
    6. Bansback N
    : Comparing the use of aggregate data and various methods of integrating individual patient data to network meta-analysis and its application to first-line ART. BMC Med Res Methodol 21(1): 60, 2021. PMID: 33784981. DOI: 10.1186/s12874-021-01254-5
    OpenUrlCrossRefPubMed
    1. Annis J,
    2. Miller BJ and
    3. Palmeri TJ
    : Bayesian inference with Stan: A tutorial on adding custom distributions. Behav Res Methods 49(3): 863-886, 2017. PMID: 27287444. DOI: 10.3758/s13428-016-0746-9
    OpenUrlCrossRefPubMed
  81. ↵
    1. Carpenter B,
    2. Gelman A,
    3. Hoffman M,
    4. Lee D,
    5. Goodrich B,
    6. Betancourt M,
    7. Brubaker M,
    8. Guo J,
    9. Li P and
    10. Riddell A
    : Stan: a probabilistic programming language. Journal of Statistical Software 76(1): 1-32, 2017. DOI: 10.18637/jss.v076.i01
    OpenUrlCrossRefPubMed
    1. Hoffman MD and
    2. Gelman A
    : The no-U-turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. J Mach Learn Res 15(47): 1593-1623, 2014. DOI: 10.48550/arXiv.1111.4246
    OpenUrlCrossRef
    1. Egidio L,
    2. Hansson A and
    3. Wahlberg B
    : Learning the step-size policy for the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. 2021 International Joint Conference on Neural Networks (IJCNN), 2022. DOI: 10.1109/IJCNN52387.2021.9534194
    OpenUrlCrossRef
    1. Martin A,
    2. Quinn K and
    3. Park J
    : MCMCpack: Markov Chain Monte Carlo in R. Journal of Statistical Software 42(9): 1-21, 2015. DOI: 10.18637/jss.v042.i09
    OpenUrlCrossRefPubMed
    1. Neupane B,
    2. Richer D,
    3. Bonner AJ,
    4. Kibret T and
    5. Beyene J
    : Network meta-analysis using R: a review of currently available automated packages. PLoS One 9(12): e115065, 2014. PMID: 25541687. DOI: 10.1371/journal.pone.0115065
    OpenUrlCrossRefPubMed
  82. ↵
    1. Rücker G,
    2. Petropoulou M and
    3. Schwarzer G
    : Network meta-analysis of multicomponent interventions. Biom J 62(3): 808-821, 2020. PMID: 31021449. DOI: 10.1002/bimj.201800167
    OpenUrlCrossRefPubMed
    1. Shim SR,
    2. Kim SJ,
    3. Lee J and
    4. Rücker G
    : Network meta-analysis: application and practice using R software. Epidemiol Health 41: e2019013, 2019. PMID: 30999733. DOI: 10.4178/epih.e2019013
    OpenUrlCrossRefPubMed
    1. Zhang J,
    2. Carlin BP,
    3. Neaton JD,
    4. Soon GG,
    5. Nie L,
    6. Kane R,
    7. Virnig BA and
    8. Chu H
    : Network meta-analysis of randomized clinical trials: reporting the proper summaries. Clin Trials 11(2): 246-262, 2014. PMID: 24096635. DOI: 10.1177/1740774513498322
    OpenUrlCrossRefPubMed
    1. Lin L,
    2. Zhang J,
    3. Hodges JS and
    4. Chu H
    : Performing arm-based network meta-analysis in R with the pcnetmeta package. J Stat Softw 80: 5, 2017. PMID: 28883783. DOI: 10.18637/jss.v080.i05
    OpenUrlCrossRefPubMed
    1. Denwood M
    : runjags: an R package providing interface utilities, model templates, parallel computing methods and additional distributions for MCMC models in JAGS. Journal of Statistical Software 71(9): 1-25, 2016. DOI: 10.18637/jss.v071.i09
    OpenUrlCrossRef
  83. ↵
    1. Senn S,
    2. Gavini F,
    3. Magrez D and
    4. Scheen A
    : Issues in performing a network meta-analysis. Stat Methods Med Res 22(2): 169-189, 2013. PMID: 22218368. DOI: 10.1177/0962280211432220
    OpenUrlCrossRefPubMed
  84. ↵
    1. Schwarzer G,
    2. Carpenter JR and
    3. Rücker G
    : Meta-analysis with R. Use R! Cham, Springer, 2015. Available at: https://zums.ac.ir/files/socialfactors/files/Meta-Analysis_with_R-2015.pdf [Last accessed on March 30, 2023]
  85. ↵
    1. Thom H,
    2. White IR,
    3. Welton NJ and
    4. Lu G
    : Automated methods to test connectedness and quantify indirectness of evidence in network meta-analysis. Res Synth Methods 10(1): 113-124, 2019. PMID: 30403829. DOI: 10.1002/jrsm.1329
    OpenUrlCrossRefPubMed
  86. ↵
    1. Salanti G,
    2. Higgins JP,
    3. Ades AE and
    4. Ioannidis JP
    : Evaluation of networks of randomized trials. Stat Methods Med Res 17(3): 279-301, 2008. PMID: 17925316. DOI: 10.1177/0962280207080643
    OpenUrlCrossRefPubMed
  87. ↵
    1. Edwards SJ,
    2. Clarke MJ,
    3. Wordsworth S and
    4. Borrill J
    : Indirect comparisons of treatments based on systematic reviews of randomised controlled trials. Int J Clin Pract 63(6): 841-854, 2009. PMID: 19490195. DOI: 10.1111/j.1742-1241.2009.02072.x
    OpenUrlCrossRefPubMed
    1. Gartlehner G and
    2. Moore CG
    : Direct versus indirect comparisons: a summary of the evidence. Int J Technol Assess Health Care 24(2): 170-177, 2008. PMID: 18400120. DOI: 10.1017/S0266462308080240
    OpenUrlCrossRefPubMed
  88. ↵
    1. Ioannidis JP
    : Indirect comparisons: the mesh and mess of clinical trials. Lancet 368(9546): 1470-1472, 2006. PMID: 17071265. DOI: 10.1016/S0140-6736(06)69615-3
    OpenUrlCrossRefPubMed
  89. ↵
    1. Kibret T,
    2. Richer D and
    3. Beyene J
    : Bias in identification of the best treatment in a Bayesian network meta-analysis for binary outcome: a simulation study. Clin Epidemiol 6: 451-460, 2014. PMID: 25506247. DOI: 10.2147/CLEP.S69660
    OpenUrlCrossRefPubMed
  90. ↵
    1. Phillippo DM,
    2. Dias S,
    3. Ades AE,
    4. Didelez V and
    5. Welton NJ
    : Sensitivity of treatment recommendations to bias in network meta-analysis. J R Stat Soc Ser A Stat Soc 181(3): 843-867, 2018. PMID: 30449954. DOI: 10.1111/rssa.12341
    OpenUrlCrossRefPubMed
  91. ↵
    1. Davies AL and
    2. Galla T
    : Degree irregularity and rank probability bias in network meta-analysis. Res Synth Methods 12(3): 316-332, 2021. PMID: 32935913. DOI: 10.1002/jrsm.1454
    OpenUrlCrossRefPubMed
  92. ↵
    1. Trinquart L,
    2. Attiche N,
    3. Bafeta A,
    4. Porcher R and
    5. Ravaud P
    : Uncertainty in treatment rankings: Reanalysis of network meta-analyses of randomized trials. Ann Intern Med 164(10): 666-673, 2016. PMID: 27089537. DOI: 10.7326/M15-2521
    OpenUrlCrossRefPubMed
  93. ↵
    1. Lin L,
    2. Chu H and
    3. Hodges JS
    : On evidence cycles in network meta-analysis. Stat Interface 13(4): 425-436, 2020. PMID: 32742550. DOI: 10.4310/sii.2020.v13.n4.a1
    OpenUrlCrossRefPubMed
  94. ↵
    1. Li T,
    2. Puhan MA,
    3. Vedula SS,
    4. Singh S,
    5. Dickersin K and Ad Hoc Network Meta-analysis Methods Meeting Working Group
    : Network meta-analysis-highly attractive but more methodological research is needed. BMC Med 9: 79, 2011. PMID: 21707969. DOI: 10.1186/1741-7015-9-79
    OpenUrlCrossRefPubMed
  95. ↵
    1. Sturtz S and
    2. Bender R
    : Unsolved issues of mixed treatment comparison meta-analysis: network size and inconsistency. Res Synth Methods 3(4): 300-311, 2012. PMID: 26053423. DOI: 10.1002/jrsm.1057
    OpenUrlCrossRefPubMed
  96. ↵
    1. Zhang J,
    2. Fu H and
    3. Carlin BP
    : Detecting outlying trials in network meta-analysis. Stat Med 34(19): 2695-2707, 2015. PMID: 25851533. DOI: 10.1002/sim.6509
    OpenUrlCrossRefPubMed
  97. ↵
    1. Mills EJ,
    2. Kanters S,
    3. Thorlund K,
    4. Chaimani A,
    5. Veroniki AA and
    6. Ioannidis JP
    : The effects of excluding treatments from network meta-analyses: survey. BMJ 347: f5195, 2013. PMID: 24009242. DOI: 10.1136/bmj.f5195
    OpenUrlAbstract/FREE Full Text
  98. ↵
    1. Naudet F,
    2. Schuit E and
    3. Ioannidis JPA
    : Overlapping network meta-analyses on the same topic: survey of published studies. Int J Epidemiol 46(6): 1999-2008, 2017. PMID: 29040566. DOI: 10.1093/ije/dyx138
    OpenUrlCrossRefPubMed
    1. James A,
    2. Yavchitz A,
    3. Ravaud P and
    4. Boutron I
    : Node-making process in network meta-analysis of nonpharmacological treatment are poorly reported. J Clin Epidemiol 97: 95-102, 2018. PMID: 29196202. DOI: 10.1016/j.jclinepi.2017.11.018
    OpenUrlCrossRefPubMed
  99. ↵
    1. Xing A and
    2. Lin L
    : Effects of treatment classifications in network meta-analysis. Research Methods in Medicine & Health Sciences 1(1): 12-24, 2020. DOI: 10.1177/2632084320932756
    OpenUrlCrossRef
  100. ↵
    1. Tonin FS,
    2. Borba HH,
    3. Mendes AM,
    4. Wiens A,
    5. Fernandez-Llimos F and
    6. Pontarolo R
    : Description of network meta-analysis geometry: A metrics design study. PLoS One 14(2): e0212650, 2019. PMID: 30785955. DOI: 10.1371/journal.pone.0212650
    OpenUrlCrossRefPubMed
  101. ↵
    1. Rücker G
    : Network meta-analysis, electrical networks and graph theory. Res Synth Methods 3(4): 312-324, 2012. PMID: 26053424. DOI: 10.1002/jrsm.1058
    OpenUrlCrossRefPubMed
  102. ↵
    1. Jackson D,
    2. Barrett JK,
    3. Rice S,
    4. White IR and
    5. Higgins JP
    : A design-by-treatment interaction model for network meta-analysis with random inconsistency effects. Stat Med 33(21): 3639-3654, 2014. PMID: 24777711. DOI: 10.1002/sim.6188
    OpenUrlCrossRefPubMed
  103. ↵
    1. Lunny C,
    2. Tricco AC,
    3. Veroniki AA,
    4. Dias S,
    5. Hutton B,
    6. Salanti G,
    7. Wright JM,
    8. White I and
    9. Whiting P
    : Methodological review to develop a list of bias items used to assess reviews incorporating network meta-analysis: protocol and rationale. BMJ Open 11(6): e045987, 2021. PMID: 34168027. DOI: 10.1136/bmjopen-2020-045987
    OpenUrlAbstract/FREE Full Text
  104. ↵
    1. Cameron C,
    2. Fireman B,
    3. Hutton B,
    4. Clifford T,
    5. Coyle D,
    6. Wells G,
    7. Dormuth CR,
    8. Platt R and
    9. Toh S
    : Network meta-analysis incorporating randomized controlled trials and non-randomized comparative cohort studies for assessing the safety and effectiveness of medical treatments: challenges and opportunities. Syst Rev 4: 147, 2015. PMID: 26537988. DOI: 10.1186/s13643-015-0133-0
    OpenUrlCrossRefPubMed
  105. ↵
    1. IntHout J,
    2. Ioannidis JP and
    3. Borm GF
    : Obtaining evidence by a single well-powered trial or several modestly powered trials. Stat Methods Med Res 25(2): 538-552, 2016. PMID: 23070590. DOI: 10.1177/0962280212461098
    OpenUrlCrossRefPubMed
  106. ↵
    1. Thorlund K and
    2. Mills E
    : Stability of additive treatment effects in multiple treatment comparison meta-analysis: a simulation study. Clin Epidemiol 4: 75-85, 2012. PMID: 22570567. DOI: 10.2147/CLEP.S29470
    OpenUrlCrossRefPubMed
  107. ↵
    1. Efthimiou O,
    2. Mavridis D,
    3. Debray TP,
    4. Samara M,
    5. Belger M,
    6. Siontis GC,
    7. Leucht S,
    8. Salanti G and GetReal Work Package 4
    : Combining randomized and non-randomized evidence in network meta-analysis. Stat Med 36(8): 1210-1226, 2017. PMID: 28083901. DOI: 10.1002/sim.7223
    OpenUrlCrossRefPubMed
  108. ↵
    1. Leahy J,
    2. O’Leary A,
    3. Afdhal N,
    4. Gray E,
    5. Milligan S,
    6. Wehmeyer MH and
    7. Walsh C
    : The impact of individual patient data in a network meta-analysis: An investigation into parameter estimation and model selection. Res Synth Methods 9(3): 441-469, 2018. PMID: 29923679. DOI: 10.1002/jrsm.1305
    OpenUrlCrossRefPubMed
  109. ↵
    1. Debray TP,
    2. Schuit E,
    3. Efthimiou O,
    4. Reitsma JB,
    5. Ioannidis JP,
    6. Salanti G,
    7. Moons KG and GetReal Workpackage
    : An overview of methods for network meta-analysis using individual participant data: when do benefits arise? Stat Methods Med Res 27(5): 1351-1364, 2018. PMID: 27487843. DOI: 10.1177/0962280216660741
    OpenUrlCrossRefPubMed
  110. ↵
    1. Kanters S,
    2. Karim ME,
    3. Thorlund K,
    4. Anis A and
    5. Bansback N
    : When does the use of individual patient data in network meta-analysis make a difference? A simulation study. BMC Med Res Methodol 21(1): 21, 2021. PMID: 33435879. DOI: 10.1186/s12874-020-01198-2
    OpenUrlCrossRefPubMed