Corrections to Aggregate Cyber-Risk Management in the IoT Age: Cautionary Statistics for (Re)Insurers and Likes

Ranjan Pal, Ziyuan Huang, Xinlong Yin, Sergey Lototsky, Swades De, Sasu Tarkoma, Mingyan Liu, Jon Crowcroft, Nishanth Sastry
2021 IEEE Internet of Things Journal  
As the authors of our recently accepted article, "Aggregate Cyber-Risk Management in the IoT Age: Cautionary Statistics for (Re)Insurers and Likes," published in the IEEE Internet of Things Journal, we regret that we have found a few errors in the numerical evaluation setup of the works in [1] and [2] that we had borrowed for our accepted paper. In this correction statement, we describe the errors in detail, correct them, and present our revised results with a renewed experimental setup, hoping they will replace the existing incorrect numerical results in the accepted paper. We apologize for the inconvenience caused to the reader. We emphasize that the numerical evaluation section does not in any way hamper the theoretical contributions of this article; it was initially meant only to provide some empirical evidence on whether the theory proposed in this article generalizes to the behavioral settings introduced in [2].

Data Set Forming the Basis of Our "Faulty" Numerical Evaluation

Eling and Schnell [1] considered 1553 cyber losses between 1995 and 2014 extracted from the SAS OpRisk database. In [3], we did not have access to this paid data set (the data set is no longer sold by SAS) and apologetically borrowed (assuming correctness) the statistical parameters obtained by Eling and Schnell [1], along with their prospect-theoretic setup, to run our numerical experiments. We are unable to generalize the results for the prospect-theoretic behavioral setup proposed in [1] to a broader set of feasible model parameters; this is our main motivation for filing the correction. On closer and repeated inspection, we are not sure whether the parameters (e.g., a Pareto index of 0.62) proposed for the numerical evaluation setup in [1] are derived accurately there, and, lacking access to their data set, it is hard for us to verify those parameters. To detail further: in order to determine which distribution describes the data best, Eling and Schnell [1] compared several goodness-of-fit statistics across several widely used distributions. They arrive at
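The distribution-selection procedure described above can be sketched as follows. This is a minimal illustration only: the loss data are synthetic (we do not have the SAS OpRisk data set), the candidate distributions are a plausible but assumed subset, and the Kolmogorov-Smirnov statistic stands in for the fuller battery of goodness-of-fit tests used in [1].

```python
# Sketch of a goodness-of-fit comparison across candidate loss distributions.
# Synthetic heavy-tailed "losses" substitute for the unavailable OpRisk data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
losses = stats.lognorm.rvs(s=2.0, scale=1e5, size=1553, random_state=rng)

# Candidate families commonly fit to operational/cyber loss data (assumed set).
candidates = {
    "lognorm": stats.lognorm,
    "pareto": stats.pareto,
    "gamma": stats.gamma,
    "weibull_min": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(losses)                      # maximum-likelihood fit
    ks = stats.kstest(losses, name, args=params)   # Kolmogorov-Smirnov GoF
    results[name] = ks.statistic

# The best-fitting family minimizes the KS distance to the empirical data.
best = min(results, key=results.get)
print(best, results[best])
```

On such synthetic lognormal data this procedure would typically select the lognormal family; on real loss data the outcome, and the tail parameters it implies (such as the Pareto index), depend entirely on the data set, which is precisely why we cannot verify the value reported in [1].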
doi:10.1109/jiot.2021.3077963