A COMPACT OPTIMAL LEARNING MACHINE
Abstract
Artificial neural networks (ANNs) have been developed and applied to a wide variety of problems, including pattern recognition, clustering, function approximation, forecasting, and optimization. However, existing ANNs have a high computational cost, since their learning methods are mostly based on iterative parameter tuning. The extreme learning machine (ELM) is a state-of-the-art method that dramatically reduces this computational cost. An analysis of the ELM method, however, reveals several unresolved issues: inefficient hidden node construction, redundant hidden nodes, and unstable results. We therefore describe a new learning machine based on analytical incremental learning (AIL) in conjunction with principal component analysis (PCA). The resulting learning machine, PCA-AIL, inherits the advantages of the original AIL, resolves the unresolved issues of ELM, and extends AIL to serve a multiple-output structure. PCA-AIL is implemented with a single-layer feed-forward neural network architecture; it uses an adaptive weight determination technique to achieve a compact optimal structure and uses objective relations to support multiple-output regression tasks. PCA-AIL proceeds in two steps: objective relation estimation and construction of multiple optimal hidden nodes. In the first step, PCA estimates the objective relations from the multiple-output residual errors. In the second step, the multiple optimal nodes are derived from the objective relations and added to the model. PCA-AIL was tested on 16 multiple-objective regression datasets, where it mostly outperformed competing methods (ELM, EM-ELM, CP-ELM, DP-ELM, PCA-ELM, and EI-ELM) in testing speed (0.0017 s), model compactness (19.9 hidden nodes), accuracy (RMSE 0.11261), and stability (standard deviation of RMSE 0.00911), all values averaged over the datasets.
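For concreteness, the two-step loop described above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' exact formulation: the function name `pca_ail_sketch`, the tanh activation, and the way the residual's leading principal component is projected back onto the inputs to form each new hidden node are all assumptions made here for illustration. Only the overall structure follows the abstract: PCA on the multiple-output residual errors, incremental addition of hidden nodes, and analytically solved output weights.

```python
import numpy as np

def pca_ail_sketch(X, T, max_nodes=20, tol=1e-3, rng=None):
    """Illustrative two-step PCA-AIL loop (a sketch, not the paper's method):
    (1) estimate objective relations via PCA on the multiple-output residual,
    (2) add a hidden node and re-solve the output weights analytically."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    H = np.empty((n, 0))            # hidden-layer output matrix (n samples x k nodes)
    W, b = [], []                   # input weights and biases of the added nodes
    E = T.copy()                    # residual starts as the target matrix (n x m)
    beta = np.zeros((0, T.shape[1]))
    while H.shape[1] < max_nodes:
        # Step 1: leading principal direction of the centered residual
        # (the "objective relation"), computed via SVD.
        Ec = E - E.mean(axis=0)
        _, _, Vt = np.linalg.svd(Ec, full_matrices=False)
        # Step 2: construct a hidden node guided by the residual structure.
        # Here the residual's leading PCA score is regressed onto the inputs
        # to obtain the node's input weights -- an assumption of this sketch.
        score = Ec @ Vt[0]
        w, *_ = np.linalg.lstsq(X, score, rcond=None)
        w /= np.linalg.norm(w) + 1e-12
        bias = rng.standard_normal()
        h = np.tanh(X @ w + bias)[:, None]
        H = np.hstack([H, h])
        W.append(w)
        b.append(bias)
        # Analytical (ELM-style least-squares) output weights, then update
        # the residual that drives the next node construction.
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        E = T - H @ beta
        if np.sqrt(np.mean(E**2)) < tol:
            break
    return np.array(W), np.array(b), beta
```

Solving the output weights by least squares at every step is what keeps the training analytic, in the spirit of the "analytical incremental learning" name, while the PCA of the residual is what lets a single added node serve all outputs at once rather than one objective at a time.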