On the requirements of new software development

Vincenzo De Florio, Chris Blondia
2008 International Journal of Business Intelligence and Data Mining  
Change, as the saying goes, is the only constant in life. Everything around us changes rapidly, and the ability to adapt quickly to change is ever more central to survival. This consideration applies to many aspects of our lives. Strangely enough, this nearly self-evident truth is not always considered by software engineers with the seriousness that it calls for: the assumptions we draw for our systems often do not take into due account that, e.g., the run-time environments, the operational conditions,
or the available resources will vary. Software is especially vulnerable to this threat, and with today's software-dominated systems controlling crucial services in nuclear plants, airborne equipment, health care systems and so forth, it becomes clear how this situation may potentially lead to catastrophes. This paper discusses this problem and defines some of the requirements towards its effective solution, which we call "New Software Development" as a software equivalent of the well-known concept of New Product Development. The paper also introduces and discusses a practical example of a software tool designed taking those requirements into account: an adaptive data integrity provision in which the degree of redundancy is not fixed once and for all at design time, but rather changes dynamically with respect to the disturbances experienced during the run time.

[…] time, but the resulting systems are often entities whose structure is unknown and are likely to be inefficient and even error-prone. An example of this situation is given by the network software layers: we successfully decomposed the complexity of our telecommunication services into well-defined layers, each of which is specialized in a given sub-service, e.g. routing or logical link control. This worked out nicely as long as the Internet was fixed. Now that the Internet is becoming predominantly mobile, those telecommunication services require complex maintenance and prove to be inadequate and inefficient. Events such as network partitioning become the rule, not the exception [2]. This means that the system and fault assumptions on which the original telecommunication services had been designed, which were considered as permanently valid and hence hidden and hardwired throughout the system layers, are no longer valid in this new context. Retrieving and exploiting this "hidden intelligence" [9] is very difficult, which explains the many research efforts being devoted world-wide to cross-layer optimization strategies and architectures.

Societal bodies such as enterprises or even governments have followed an evolutionary path similar to that of software systems. Technology has enabled such organizations to deal with enormous amounts of data; still, as with software systems, they evolved by trading ever-increasing performance for ever more pronounced information hiding. The net result in both cases is the same: inefficiency and error-proneness. This is no surprise, as the system-wide information that would enable efficient use of resources and the exploitation of economies of resources is scattered across a number of separate entities with insufficient or no communication flow among them. This is true throughout the personnel hierarchy, up to the top: even top managers nowadays focus only on fragmented and limited "information slices". Specialization (useful for partitioning complexity) rules, as it allows an enterprise to become more complex and address a wider market. It allows unprecedented market opportunities to be caught, hence it is considered a panacea; but when the enterprise is observed a little closer, we often find deficiencies. In a sense, the enterprise often looks like an aqueduct that, for the time being, serves its purpose successfully, but loses most of its water through leaks in its pipelines.
This leakage is often a leakage of structural information: a hidden intelligence about an entity's intimate structure that, once lost, forbids any "cross-layer" exploitation. Consequently, efficiency goes down and the system (be it an enterprise, an infrastructure, a municipality, or a state) becomes increasingly vulnerable: it risks experiencing failures, or it loses an important property with respect to competitiveness, namely agility, which we define here as an entity's ability to reconfigure itself so as to maximize its ability to survive and to catch new opportunities. For business entities, a component of agility is the ability to reduce time-to-market. For software systems, this agility includes adaptability, maintainability, and reconfigurability, that is, adaptive fault-tolerance support. We are convinced that this property will be recognized in the future as a key requirement for effective software development: the software equivalent of the business and engineering concept of New Product Development [18]. The tool described in this paper, an adaptive data integrity provision, provides a practical example of this vision of a "New Software Development" (a rough illustrative sketch of the idea follows the outline below).

The structure of this paper is as follows: Section 2 introduces the problem of adaptive redundancy and data integrity. Section 3 is a brief discussion of the available data integrity provisions. A description of our tool and of its design issues is given in Section 4. In Section 5 we report on an analysis of the performance of our tool. Our conclusions are finally drawn in Section 6.
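As a rough, purely illustrative sketch of the adaptive-redundancy idea (not the authors' tool, whose design is described in Section 4), the Python class below keeps a configurable number of replicas of a value, performs a majority vote on every read, and widens or narrows its replication degree according to the disturbance rate it actually observes. The class name, the thresholds, and the simple cumulative fault-rate estimate are all assumptions invented for this sketch.

```python
from collections import Counter

class AdaptiveRedundantCell:
    """Keep `degree` replicas of a value; vote on every read; adapt the degree."""

    MIN_DEGREE, MAX_DEGREE = 3, 9   # assumed bounds, chosen for illustration

    def __init__(self, value, degree=3):
        self.degree = degree
        self.replicas = [value] * degree
        self.reads = 0
        self.detected_faults = 0

    def write(self, value):
        self.replicas = [value] * self.degree

    def read(self):
        """Majority-vote the replicas, count disturbances, then re-tune redundancy."""
        self.reads += 1
        winner, votes = Counter(self.replicas).most_common(1)[0]
        if votes < self.degree:                 # at least one replica disagreed
            self.detected_faults += self.degree - votes
        self.replicas = [winner] * self.degree  # scrub: restore all copies
        self._adapt(winner)
        return winner

    def _adapt(self, value):
        """Raise redundancy when disturbances are frequent, relax it when rare."""
        rate = self.detected_faults / self.reads
        if rate > 0.10 and self.degree < self.MAX_DEGREE:
            self.degree += 2                    # keep an odd quorum for voting
        elif rate < 0.01 and self.degree > self.MIN_DEGREE:
            self.degree -= 2
        self.replicas = [value] * self.degree
```

A fault-injection loop that randomly corrupts replicas between reads would show the degree climbing during noisy periods and relaxing again once the environment calms down; this run-time adjustment of redundancy, rather than a level frozen at design time, is the behaviour the paper's provision is meant to exemplify.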
doi:10.1504/ijbidm.2008.022138