Short-term hydro-meteorological forecasting with extreme learning machines [article]

Aranildo R. Lima
2016
In machine learning (ML), the extreme learning machine (ELM) is a feedforward neural network that assigns random weights in its single hidden layer and optimizes only the weights in the output layer; it retains the fully nonlinear modelling capability of the traditional artificial neural network (ANN) yet is solved via linear least squares, as in multiple linear regression (MLR). Chapter 2 evaluated ELM against MLR and three nonlinear ML methods (ANN, support vector regression and random forest) on nine environmental regression problems. ELM was then developed for short-term forecasting of hydro-meteorological variables. In situations where new data arrive continually, the need for frequent model updates often renders ANN impractical. An online learning algorithm, the online sequential extreme learning machine (OSELM), updates the model inexpensively as new data arrive. In Chapter 3, OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1–3 days. Predictors were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS) and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates, with the nonlinear OSELM easily outperforming the benchmark, the online sequential MLR (OSMLR). A major limitation of OSELM is that the number of hidden nodes (HN), which controls model complexity, remains fixed at its initial value, even when newly arrived data render that number sub-optimal. The variable-complexity online sequential extreme learning machine (VC-OSELM), proposed in Chapter 4, automatically adds or removes HN as online learning proceeds, so the model complexity self-adapts to the new data. For streamflow predictions at a lead time of one day, VC-OSELM outperformed OSELM when the initial number of HN turned out to be smaller or larger than optimal. In summary, by using linear least squares instead of nonlinear optimization, ELM [...]
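The core ELM idea in the abstract, random fixed hidden weights with only the output layer fitted by linear least squares, can be illustrated with a minimal numpy sketch on a toy regression problem. The data, activation choice (tanh) and number of hidden nodes here are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus a little noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

L = 50  # number of hidden nodes (HN); controls model complexity

# Input-to-hidden weights and biases are random and never trained
W = rng.standard_normal((X.shape[1], L))
b = rng.standard_normal(L)

# Hidden-layer output matrix H (tanh activation)
H = np.tanh(X @ W + b)

# Only the output weights are fitted, via linear least squares
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Prediction is a single linear map of the random features
y_hat = np.tanh(X @ W + b) @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because only `beta` is solved for, training reduces to one least-squares problem, which is the source of ELM's speed advantage over iteratively trained ANNs.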
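The online variant (OSELM) described above avoids refitting from scratch: with the hidden weights fixed, the output weights can be updated chunk by chunk with a recursive least-squares formula. The following is a sketch under assumed toy data; the ridge term and chunk sizes are choices made here for numerical stability, not specifics from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 40  # number of hidden nodes, fixed throughout online learning
W = rng.standard_normal((1, L))
b = rng.standard_normal(L)

def hidden(X):
    """Hidden-layer output matrix for the fixed random weights."""
    return np.tanh(X @ W + b)

# --- Initial batch: ordinary (lightly regularized) least squares ---
X0 = rng.uniform(-3, 3, size=(100, 1))
y0 = np.sin(X0[:, 0])
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(L))
beta = P @ H0.T @ y0

# --- Online phase: cheap recursive update per new data chunk ---
for _ in range(5):
    X1 = rng.uniform(-3, 3, size=(20, 1))
    y1 = np.sin(X1[:, 0])
    H1 = hidden(X1)
    # Woodbury-style recursive least-squares update of P and beta
    K = P @ H1.T @ np.linalg.inv(np.eye(len(y1)) + H1 @ P @ H1.T)
    P = P - K @ H1 @ P
    beta = beta + P @ H1.T @ (y1 - H1 @ beta)

# Check fit on fresh data from the same toy distribution
Xt = rng.uniform(-3, 3, size=(200, 1))
rmse = np.sqrt(np.mean((np.sin(Xt[:, 0]) - hidden(Xt) @ beta) ** 2))
```

Each update inverts only a chunk-sized matrix rather than refitting on all accumulated data, which is what makes daily or even per-observation model updates affordable. Note that `L` never changes here, which is exactly the limitation VC-OSELM addresses.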
doi:10.14288/1.0305711