The Network Relative Model Accuracy (NeRMA) Score Can Quantify the Relative Accuracy of Prediction Models in Concurrent External Validations

Carl van Walraven, Meltem Tuna
2022 (preprint)
Background: Network meta-analysis (NMA) quantifies the relative efficacy of three or more interventions from trials that evaluate some, but usually not all, of the treatments. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct applicability that are evaluated in the same population ("concurrent external validation").

Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that account for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which increases as models become more accurate and ranges from below 0 (model less accurate than random guessing), through 0 (accuracy of random guessing), to 1 (most accurate model in the concurrent external validation).

Results: The unbiased prediction model had the highest SBS. The NeRMA Score correctly ranked all simulated prediction models by their extent of bias from the known risk function. A SAS macro and an R function were created and are available to implement the NeRMA Score.

Conclusions: The NeRMA Score makes it possible to quantify the relative accuracy of binomial prediction models with distinct applicability in a concurrent external validation.
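To illustrate the simulation-and-scoring step described in the Methods, the sketch below simulates binary events from a known risk function, creates biased variants of that function, and scores each with the Scaled Brier Score (1 minus the ratio of the model's Brier score to that of a reference model predicting the overall event rate). This is a minimal Python sketch, not the authors' SAS macro or R function: the logistic risk function, its coefficients, and the bias values are illustrative assumptions, and it omits the applicability patterns and fixed-effects NeRMA aggregation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate binary events in 5000 patients from a known risk function.
# The single-covariate logistic form and coefficients are illustrative
# assumptions, not the paper's simulation design.
n = 5000
x = rng.normal(size=n)
true_logit = -1.0 + 0.8 * x
p_true = 1 / (1 + np.exp(-true_logit))
y = rng.binomial(1, p_true)

def scaled_brier_score(p_hat, y):
    """Scaled Brier Score: 1 - Brier / Brier_ref, where the reference
    'model' predicts the overall event rate for every patient."""
    brier = np.mean((p_hat - y) ** 2)
    brier_ref = np.mean((np.mean(y) - y) ** 2)
    return 1 - brier / brier_ref

# Create biased prediction models by shifting the true logit by
# pre-specified amounts (a simple stand-in for the paper's 15 models
# with varying accuracy).
for bias in [0.0, 0.5, 1.0, 2.0]:
    p_model = 1 / (1 + np.exp(-(true_logit + bias)))
    print(f"bias={bias:.1f}  SBS={scaled_brier_score(p_model, y):.3f}")
```

As in the paper's results, the unbiased model (bias of 0) attains the highest SBS, and the score falls as the bias from the known risk function grows.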
doi:10.21203/rs.3.rs-1521400/v1