A new look at the statistical model identification

H. Akaike
1974 IEEE Transactions on Automatic Control  
The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed, and a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = (-2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedures. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.

I. INTRODUCTION

In spite of the recent development of the use of statistical concepts and models in almost every field of engineering and science, it seems as if the difficulty of constructing an adequate model based on the information provided by a finite number of observations is not fully recognized. Undoubtedly the subject of statistical model construction or identification is heavily dependent on the results of theoretical analyses of the object under observation. Yet it must be realized that there is usually a big gap between the theoretical results and the practical procedures of identification. A typical example is the gap between the results of the theory of minimal realizations of a linear system and the identification of a Markovian representation of a stochastic process based on a record of finite duration. A minimal realization of a linear system is usually defined through the analysis of the rank or the dependence relation of the rows or columns of some Hankel matrix [1]. In a practical situation, even if the Hankel matrix is theoretically given, rounding errors will always make the matrix of full rank. If the matrix is obtained from a record of observations of a real object, the sampling variabilities of the elements of the matrix will be by far greater than the rounding errors, and the system will always be infinite dimensional.
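The point about the Hankel matrix losing its theoretical rank deficiency can be illustrated numerically. The sketch below, under an assumed example (a second-order impulse response h[k] = 0.9^k + (-0.5)^k, whose Hankel matrix has true rank 2), shows that a small measurement perturbation makes the numerical rank full, while the singular values still exhibit a gap after the true order:

```python
import numpy as np

rng = np.random.default_rng(1)
# Impulse response of an assumed second-order system: true Hankel rank is 2.
k = np.arange(40)
h = 0.9**k + (-0.5)**k
noisy = h + rng.normal(0, 1e-3, h.size)  # sampling/measurement perturbation

def hankel(seq, rows):
    """Build the Hankel matrix H[i, j] = seq[i + j]."""
    cols = len(seq) - rows + 1
    return np.array([[seq[i + j] for j in range(cols)] for i in range(rows)])

H_clean = hankel(h, 10)
H_noisy = hankel(noisy, 10)

print(np.linalg.matrix_rank(H_clean))  # 2: the true system order
print(np.linalg.matrix_rank(H_noisy))  # full rank once noise is present
# The leading singular values still show a sharp drop after the 2nd one.
print(np.linalg.svd(H_noisy, compute_uv=False)[:4])
```

This is exactly the practical gap the introduction describes: a rank test on the perturbed matrix reports full rank, so the model order must be inferred from the magnitude of the singular values (or, as the paper argues, from a statistical criterion) rather than from exact rank.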
Thus it can be seen that the subject of statistical identification is essentially concerned with the art of approximation, which is a basic element of human intellectual activity. As was noticed by Lehmann [2, p. viii], hypothesis testing procedures are traditionally applied to situations where actually multiple decision procedures are required. If the statistical identification procedure is considered as a decision procedure, the very basic problem is the appropriate choice of the loss function. In the Neyman-Pearson theory of statistical hypothesis testing, only the probabilities of rejecting and accepting the correct and incorrect hypotheses, respectively, are considered to define the loss caused by the decision. In practical situations the assumed null hypotheses are only approximations and are almost always different from the reality. Thus the choice of the loss function in the test theory makes its practical application logically contradictory. The recognition of this point, that the hypothesis testing procedure is not adequately formulated as a procedure of approximation, is very important for the development of practically useful identification procedures.

Manuscript received February 12, 1974; revised March 2, 1974. The author is with the Institute of Statistical Mathematics, Minato-ku, Tokyo, Japan.
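The AIC rule stated in the abstract, AIC = (-2) log(maximum likelihood) + 2 (number of adjusted parameters), can be sketched concretely. The example below (a minimal sketch, not from the paper; the function names and the choice of polynomial regression with Gaussian errors are assumptions for illustration) selects among competing models by minimizing AIC, i.e., it computes the MAICE:

```python
import numpy as np

def gaussian_aic(y, y_hat, n_params):
    """AIC = -2 log(max likelihood) + 2 k for a Gaussian error model,
    with the error variance set to its maximum likelihood estimate."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    sigma2 = float(np.sum((y - y_hat) ** 2)) / n  # ML variance estimate
    # -2 log L = n log(2*pi*sigma2) + n; the variance counts as one more
    # independently adjusted parameter (an assumed accounting convention).
    return n * np.log(2 * np.pi * sigma2) + n + 2 * (n_params + 1)

def maice_poly(x, y, max_degree):
    """Return (degree, AIC) of the polynomial model minimizing AIC."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for d in range(max_degree + 1):
        coefs = np.polyfit(x, y, d)  # maximum likelihood fit under Gaussian errors
        aic = gaussian_aic(y, np.polyval(coefs, x), n_params=d + 1)
        if best is None or aic < best[1]:
            best = (d, aic)
    return best
```

Unlike a sequence of pairwise significance tests, which requires an arbitrary significance level at each stage, this procedure compares all candidate models on a single scale and needs no subjective threshold, which is the freedom from ambiguity claimed in the abstract.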
doi:10.1109/tac.1974.1100705