Empirical Laws and Foreseeing the Future of Technological Progress

António Lopes, José Tenreiro Machado, Alexandra Galhano
Entropy, 2016
Moore's law (ML) is one of many empirical expressions used to characterize natural and artificial phenomena. The ML addresses technological progress and is expected to predict future trends. Yet, the "art" of predicting is often confused with the accurate fitting of trendlines to past events. Presently, data-series from multiple sources are available for scientific and computational processing. The data can be described by mathematical expressions that, in some cases, reduce to simple empirical laws. However, extrapolation toward the future is regarded with skepticism by the scientific community, particularly in the case of phenomena involving complex behavior. This paper addresses these issues in the light of entropy and the pseudo-state space. The statistical and dynamical techniques lead to a more assertive perspective on the adoption of a given candidate law.

[…] history, suggesting faster and more profound changes in the future, possibly accompanied by underlying economic, cultural and social changes [8-13]. Given the apparent ubiquity of the ML (here interpreted in the broad sense of "exponential growth") [13-15], simple questions can be raised: Does the ML describe technological development accurately? Can it be used for reliable forecasting? The so-called exponential growth should be understood as an approximate empirical model for real data. For more than two decades, several authors have foreseen the end of the ML, arguing that technological limits were close [5,16,17]. Others argued that the ML would survive for many years, as they envisaged the emergence of a new paradigm that could dramatically enlarge the existing technological bounds. Within such a paradigm, novel technologies would become available, such as quantum [18-20], biological [21], molecular [22], or heterotic [23] computing. Those technologies would thereafter keep the ML alive. Whatever one's opinion, forward-looking or conservative, we should note that forecasting technological evolution many years ahead is difficult. Technological innovation means, by definition, something that is new and, therefore, may be inherently unpredictable [24].
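The "exponential growth" reading of the ML can be made concrete with a toy fit. The sketch below is ours, not the authors' code: it fits y ≈ a·exp(bt) by linear least squares on ln(y), a common simplification of the nonlinear fitting discussed later; the function name and the synthetic series are illustrative assumptions.

```python
import math

def fit_exponential(t, y):
    """Least-squares fit of y ~ a * exp(b * t), done linearly on ln(y).

    Returns (a, b); the implied doubling time is ln(2) / b.
    """
    n = len(t)
    ly = [math.log(v) for v in y]
    mt = sum(t) / n
    ml = sum(ly) / n
    b = (sum((ti - mt) * (li - ml) for ti, li in zip(t, ly))
         / sum((ti - mt) ** 2 for ti in t))
    a = math.exp(ml - b * mt)
    return a, b

# Synthetic "Moore-like" series that doubles every 2 time units: y = 2^(t/2)
t = list(range(10))
y = [2 ** (ti / 2) for ti in t]
a, b = fit_exponential(t, y)
doubling_time = math.log(2) / b  # recovers the 2-unit doubling period
```

On real data-series such as GDP or TPM, the fit is of course only approximate, which is precisely why the paper supplements it with entropy and state-space analyses.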
However, even a rough knowledge of technological evolution could be invaluable for helping decision makers delineate adequate policies, seeking sustainability and the improvement of individual and collective living [25,26]. In this paper we seek to contribute to the discussion of some of the questions raised above. We illustrate our scheme with real data representative of technological progress over time. In that perspective, we adopt four performance indices: (i) the world inflation-adjusted gross domestic product (GDP), measured in 2010 billions of U.S. dollars; (ii) the performance of the most powerful supercomputers (PPS), expressed in tera FLOPS (floating-point operations per second); (iii) the number of transistors per microprocessor (TPM); and (iv) the number of U.S. patents granted (USP). Obviously, other data-series may be candidates for assessing technological evolution. Data-series from economics or finance can be thought of as possible candidates, since there is some relationship between them and scientific and technological progress. However, country economies evolve very slowly [27], while financial series are extremely volatile [28]. Since they reflect a plethora of phenomena not directly related to our main objective, we do not consider them here.

We start with the usual algebraic, or "static", perspective. In a first step, we adopt nonlinear least-squares to determine different candidate models for the real data. In a second step, we interpret the data-series as random variables: we adopt a sliding window to slice the data into overlapping time intervals and evaluate the corresponding entropy. We then develop a "dynamical" perspective and analyze the data by means of the pseudo-state space (PSS) technique. We conjecture that the entropy information and the PSS paths are useful as complementary criteria for assessing the forecasting ability of the approximated models.
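The two complementary tools named above, sliding-window entropy and the PSS (time-delay) reconstruction, can be sketched in a few lines. This is a minimal illustration under our own assumptions (histogram-based Shannon entropy in nats, uniform bins, unit-step window sliding), not the paper's exact procedure:

```python
import math
from collections import Counter

def shannon_entropy(window, bins=10):
    """Histogram-based Shannon entropy (in nats) of one data window."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0  # a constant window carries no information
    idx = [min(int((v - lo) / (hi - lo) * bins), bins - 1) for v in window]
    n = len(window)
    counts = Counter(idx)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def sliding_entropy(series, width, step=1):
    """Entropy evaluated over overlapping windows sliding along the series."""
    return [shannon_entropy(series[i:i + width])
            for i in range(0, len(series) - width + 1, step)]

def delay_embed(series, dim=2, tau=1):
    """Pseudo-state-space reconstruction by time-delay embedding:
    each point is (s_i, s_{i+tau}, ..., s_{i+(dim-1)*tau})."""
    m = len(series) - (dim - 1) * tau
    return [tuple(series[i + k * tau] for k in range(dim)) for i in range(m)]
```

Plotting the embedded tuples against each other traces the PSS path of a data-series; the shape of that path, together with the evolution of the window entropy, is what the analysis in Section 2 examines.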
In this line of thought, the paper is organized as follows. In Section 2 we analyze the data, and in Section 3 we discuss the results and draw the main conclusions.
doi:10.3390/e18060217