
Theory Development Via Replicated Simulations and the Added Value of Standards

Jonas Hauke, Sebastian Achter, Matthias Meyer
2020 Journal of Artificial Societies and Social Simulation  
Using the agent-based model of Miller et al. (2012), which depicts how different types of individuals' memory affect the formation and performance of organizational routines, we show how a replicated simulation model can be used to develop theory. We also assess how standards, such as the ODD (Overview, Design concepts, and Details) protocol and DOE (design of experiments) principles, support the replication, evaluation, and further analysis of this model. Using the verified model, we conduct
several simulation experiments as examples of different types of theory development. First, we show how previous theoretical insights can be generalized by investigating additional scenarios, such as mergers. Second, we show the potential of replicated simulation models for theory refinement, such as analyzing in depth the relationship between memory functions and routine performance or routine adaptation.

Introduction

Reproducibility of results is crucial to all scientific disciplines (Giles), a fundamental scientific principle, and a hallmark of cumulative science (Axelrod). The reproducibility of simulation experiments has gained attention with the increasing application of computational methods over the past two decades (Stodden et al.). Simulation models can be verified by reproducing identical or at least similar results. Moreover, replicated models allow further research to be conducted on a reliable basis. Still, as in other scientific endeavors (Nosek et al.), independent replications of simulation studies are lacking (Heath et al.; Janssen).

Potential reasons for the shortage of independent model replications are manifold: lacking incentives for researchers, deficient communication of model information, uncertainty about how to validate replicated results, and the inherent difficulty of re-implementing (prototype) models (Fachada et al.). Agent-based models, moreover, are built on more assumptions than traditional models due to their high degree of disaggregation and bottom-up logic, rendering the verification and validation of these models more difficult (Zhong & Kim). Replication efforts of agent-based models may also lack supporting methods.

This paper shows how replicated simulation models can be used to develop theory, which could increase the incentives to publish replicated work.
Both replication and the subsequent theory development are fostered here through the use of simulation standards, such as the ODD (Overview, Design concepts, and Details) protocol and DOE (design of experiments) principles; these standards were not used when the model we replicate was initially developed, presented, and analyzed. For this exercise, we use the agent-based simulation model of organizational routines by Miller et al. (2012), examining the relationship between different types of individual memory and organizational routines. Although publications to date have cited this study, none so far have replicated the model.

We selected this model for our replication study for several further reasons. First, the model is highly original in its approach to addressing the micro-foundations of organizational routines by modeling agents' procedural, declarative, and transactive memory, enabling an investigation of the dynamic relationship between individual cognitive properties and both the formation and the performance of organizational routines. Second, it is one of the most frequently cited agent-based models of organizational routines. Third, it was published in the reputed Journal of Management Studies, not a typical outlet for agent-based simulation studies. Finally, it has the potential to support further development of theory, and the fact that it did not use simulation standards enables us to demonstrate their potential benefits.

This paper proceeds in three main steps in order to show how a replicated simulation model can be used both to generalize previous results and to refine theory: (1) replicate and verify the model, comparing results with those of Miller et al. (2012); (2) test the usefulness of agent-based modeling standards for replication, such as the ODD protocol and DOE principles; and (3) develop theoretical understanding of the modeled organizational system by extending the simulation experiments on verified grounds.
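The three memory types named above (procedural, declarative, and transactive) can be illustrated with a minimal sketch. Note that all class, attribute, and method names here are our own illustrative assumptions and do not reproduce the actual data structures of Miller et al.:

```python
import random

class Agent:
    """Illustrative agent with the three individual memory types
    discussed in the text (structure is an assumption, not the
    original model's implementation)."""

    def __init__(self, agent_id, n_actions):
        self.agent_id = agent_id
        # Procedural memory: habitual action tendencies (action -> strength).
        self.procedural = {a: 1.0 for a in range(n_actions)}
        # Declarative memory: explicit facts the agent has stored.
        self.declarative = {}
        # Transactive memory: beliefs about "who knows what".
        self.transactive = {}

    def choose_action(self):
        # Pick an action with probability proportional to habit strength.
        actions = list(self.procedural)
        weights = [self.procedural[a] for a in actions]
        return random.choices(actions, weights=weights, k=1)[0]

    def reinforce(self, action, reward=0.1):
        # Repetition strengthens procedural memory; in the aggregate,
        # such reinforcement is one way routines can form.
        self.procedural[action] += reward
```

In this reading, routine formation corresponds to the gradual concentration of procedural-memory weight on recurring action sequences, while transactive memory lets agents delegate knowledge retrieval to others.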
We successfully reproduce the results of Miller et al. (2012) in the replicated model. The ODD structure helps to systematically extract information from the original model, while DOE principles guide the experimental analysis of the model and enhance the interpretability of the results. For example, we clarify one ambiguous model assumption. For theory development, we generalize the scope of the replicated model by investigating how additional scenarios, such as a merger or a volatile environment, affect routine formation and performance, as well as relating previous and new findings to prominent constructs in the literature.

The remainder of this paper is structured as follows. The next section reviews relevant literature concerning replication, simulation standards, and theory development. We then introduce our replication methodology, in which we apply the ODD protocol and DOE principles in the context of the simulation model replication. The replicated model is then used to generalize and refine previous theoretical insights. The final section concludes and provides an outlook for further research.

Related Literature

Replication, in general, is considered a cornerstone of good science. The successful replication of results powerfully fosters the credibility of a study. Moreover, replications can be used to advance knowledge in a field, in the sense that the original study design can be extended, generalized, and applied in new domains. Replications allow linking existing and new knowledge (Schmidt) and reflect an ideal of science as an incremental process of cumulative knowledge production that avoids "reinventing the wheel" (Richardson).

Computational models successfully replicated by independent researchers are considered to be more reliable (Sansores & Pavón) and credible (Zhong & Kim).
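Because agent-based models are stochastic, "reproducing identical or at least similar results" is usually operationalized by comparing distributions of an output measure over many runs of the original and re-implemented models. A minimal sketch of such a check (the tolerance and the choice of summary statistics are our own illustrative assumptions; an actual study would use confidence intervals or formal tests):

```python
import statistics

def distributions_agree(runs_original, runs_replication, tol=0.05):
    """Crude distributional comparison for replication verification:
    means and standard deviations of an output measure must agree
    within a relative tolerance. Illustrative only."""
    m1 = statistics.mean(runs_original)
    m2 = statistics.mean(runs_replication)
    s1 = statistics.stdev(runs_original)
    s2 = statistics.stdev(runs_replication)
    scale = max(abs(m1), abs(m2), 1e-9)  # avoid division by zero
    return abs(m1 - m2) / scale <= tol and abs(s1 - s2) / scale <= tol
```

The point of the sketch is that replication verification compares distributions, not single runs: two correct implementations with different random-number streams will never match run for run.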
Replications can reveal three types of errors: (1) programming errors; (2) misrepresentations of what was actually simulated; and (3) errors in the analysis of simulation results (Axelrod; Sansores & Pavón). A replication might also reveal hidden, undocumented, or ambiguous assumptions (Miodownik et al.), which can affect the fit of the implemented model with the world to be represented.

Current practice stands in stark contrast to the oft-stated importance of replication. Nosek et al. sparked intense discussion of a potential "replication crisis" in fields as diverse as psychology, economics, and medicine. While much of this discussion concerned empirical areas, replicability and replication are also highly relevant for computational modeling (Miłkowski et al.; Monks et al.). Nevertheless, most agent-based models have not been replicated (Heath et al.; Legendi & Gulyas; Rand & Wilensky). Most researchers build new models instead of using existing ones (Donkin et al.; Thiele & Grimm), a practice that hampers cumulative and collective learning and raises the costs of modeling (Dawid et al.; Monks et al.). Replicated models can also provide a good starting point for theory development (Lorscheid et al.).

Recently developed standards and guidelines for rigorous simulation modeling and model analysis (Grimm et al.; Lorscheid et al.; Rand & Rust; Richiardi et al.) can also support the replication process. Social simulation researchers increasingly acknowledge such standards as the ODD protocol and DOE principles (Hauke et al.). The ODD protocol allows the standardized communication of models (Grimm et al.), while DOE principles can foster the systematic analysis and communication of model behavior (Lorscheid et al.; Padhi et al.). Using these standards can help researchers compare simulation models, designs, and results.
Given the cumulative nature of science, replication, ideally supported by these standards, can potentially help to build theory through simulation. Among the many ways to develop theory (see Lorscheid et al.), we focus here on the ideas of Davis et al., who position the elaboration of simple theories via simulation experiments in a "sweet spot" between theory-creating research, formal modeling, and empirical, theory-testing research. Basic or simple theory typically stems from individual cases or formal modeling; the authors describe it as follows:

    By simple theory, we mean undeveloped theory that has only a few constructs and related propositions with modest empirical or analytic grounding such that the propositions are in all likelihood correct but are currently limited by weak conceptualization of constructs, few propositions linking these constructs together, and/or rough underlying theoretical logic. Simple theory also includes basic processes that may be known (e.g., competition, imitation) but that have interactions that are only vaguely understood, if at all. Thus, simple theory contrasts with well-developed theory, such as institutional and transaction cost theories that have multiple and clearly defined theoretical constructs (e.g., normative structures, mimetic diffusion, asset specificity, uncertainty), well-established theoretical propositions that have received extensive empirical grounding, and well-elaborated theoretical logic. Simple theory also contrasts with situations where there is no real theoretical understanding of the phenomena. (Davis et al.)

In this spirit, we later contribute to the literature on dynamic capabilities, specifically from the perspective of knowledge integration.
Despite a large body of research, the concept of dynamic capabilities has not reached the level of elaboration of other theories in the field of strategic management or organizational science (Helfat & Peteraf; Pisano). This is perhaps because the concept has a longitudinal and processual focus and because empirical data are difficult to obtain; all these factors make simulation particularly useful for theory development (Davis et al.).

In this regard, we posit that simulations can strengthen the formal understanding of knowledge-integrating processes as one potential micro-foundation for dynamic capabilities. To this end, we begin with a replicated model of Miller et al. (2012), who acknowledge their study's contribution to the literature on dynamic capabilities, and then conduct several additional simulation experiments. We focus on the representation of underlying knowledge structures as a determinant of the effectiveness of dynamic capabilities. We use formal modeling to increase precision, compared to previously used verbal models (Smaldino et al.), in the underlying theoretical logic and the description of the connected constructs. In doing so, we refine the theory of dynamic capabilities by expressing knowledge-integrating processes as a potential mechanism affecting the knowledge structures underlying routines. Hence, we aim to strengthen the conceptualization of constructs. At the same time, we generalize the concept of knowledge structures in routine formation by showing the benefits of this concept in new contexts, such as mergers.

Method

The replication re-implements the conceptual model in a different software and hardware environment to ensure that neither hardware nor software specifics drive the results (Miodownik et al.; Wilensky & Rand). Greater differences in the implementation yield stronger verification if the model nevertheless produces the same results.

Table 1 compares the features of the original study and our replication.
The replication is performed by independent researchers, which enhances objectivity. The conceptual model is re-implemented in a different software environment, which allows the detection of coding issues and of effects induced by different stochastic algorithms. We chose NetLogo for the re-implementation, a widely used agent-based simulation software package (Hauke et al.; Rand & Rust). A significant difference between the original model implementation and our re-implementation is that we apply the relatively recently established modeling standards of ODD and DOE. This enables us to uncover potential ambiguities hampering a fully conclusive replication process, necessitating the exploration of implicitly made assumptions.

Dimension            | Original study         | Replication
Year (published)     | 2012                   | 2020
Authors              | Miller, Pentland, Choi | Hauke, Achter, Meyer
Simulation software  | MATLAB                 | NetLogo
Model documentation  | individual structure   | ODD protocol
Model analysis       | selected experiments   | selected experiments + DOE

Table 1: Features of the original study and the present replication
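The "selected experiments + DOE" entry refers to design-of-experiments principles, which structure model analysis around explicit factors and levels. One common DOE building block is a full-factorial design, sketched minimally below; the factor names are our own illustrative assumptions, not the factors analyzed in the study:

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels: a full-factorial
    design, one basic DOE building block. `factors` maps each factor
    name to its list of levels."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

# Illustrative factors only (not those of the original study).
design = full_factorial({
    "memory_type": ["procedural", "declarative", "transactive"],
    "turnover": [0.0, 0.1],
})
# 3 x 2 = 6 design points, each a parameter set for one simulation run.
```

Enumerating the design up front, rather than varying one factor at a time ad hoc, is what makes the resulting experiments systematic, comparable, and easy to report.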
doi:10.18564/jasss.4219