dc.description.abstract | A neural network is trained to minimize its cost function and thereby provide better performance. The standard optimization procedure, widely known as gradient descent, is a form of iterative learning that starts from a random point on a function and travels down its slope, in steps, until it reaches a minimum; this process is time-consuming and slow to converge.
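For illustration, a minimal gradient-descent loop might look like the sketch below; the objective, learning rate, and stopping rule are illustrative assumptions, not taken from this thesis.

    import numpy as np

    def gradient_descent(grad, x0, lr=0.1, steps=1000, tol=1e-8):
        # Start from a given (often random) point and repeatedly step
        # against the gradient until the update becomes negligible.
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            step = lr * grad(x)
            x = x - step
            if np.linalg.norm(step) < tol:
                break  # the slope is (nearly) flat: a minimum is reached
        return x

    # Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
    x_min = gradient_descent(lambda x: 2 * (x - 3.0), x0=[10.0])  # ~[3.0]

Each call to grad is one step down the slope; the many small steps are exactly what makes this procedure slow to converge on large models.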
Over the last couple of decades, several non-iterative neural network training algorithms have been proposed, such as Random Forest and Quicknet. However, these non-iterative algorithms do not support online training: given very large training data, enormous computing resources are needed to train the network.
In this thesis, a non-iterative learning strategy with online sequential training is exploited. In Chapter 3, a single-layer Online Sequential Sub-Network node (OS-SN) classifier is proposed that provides competitive accuracy by pulling the residual network error and feeding it back into the hidden layers.
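The exact OS-SN update is developed in Chapter 3; the sketch below shows only the generic online sequential backbone that such methods build on, assuming fixed random hidden weights and a recursive least-squares solve for the output weights (all names, shapes, and parameters here are illustrative assumptions, not the thesis's algorithm).

    import numpy as np

    rng = np.random.default_rng(0)

    def init_model(n_in, n_hidden, X0, T0, reg=1e-3):
        # Hidden weights are random and fixed; only the output
        # weights beta are learned, in closed form.
        W = rng.standard_normal((n_in, n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X0 @ W + b)
        P = np.linalg.inv(H.T @ H + reg * np.eye(n_hidden))
        beta = P @ H.T @ T0  # initial least-squares solution
        return W, b, P, beta

    def os_update(W, b, P, beta, X, T):
        # Recursive least-squares update for one new chunk (X, T);
        # earlier chunks never need to be revisited.
        H = np.tanh(X @ W + b)
        K = P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ P @ H.T)
        P = P - K @ H @ P
        beta = beta + P @ H.T @ (T - H @ beta)
        return P, beta

Because each chunk is absorbed once and then discarded, memory and compute stay bounded even for very large training sets.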
In Chapter 4, a multi-layer network is proposed in which the first portion is built by transforming a multi-layer autoencoder into an Online Sequential Auto-Encoder (OS-AE), with OS-SN used for classification.
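One plausible reading of such an autoencoder layer, given only what this abstract states, is a random encoder paired with a closed-form least-squares decoder; the sketch below is an illustrative assumption, not the thesis's OS-AE implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_ae_layer(X, n_hidden, reg=1e-3):
        # Random encoder; the decoder D is solved in closed form so
        # the layer reconstructs its own input (no backpropagation).
        W = rng.standard_normal((X.shape[1], n_hidden))
        H = np.tanh(X @ W)
        D = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
        return W, D

    X = rng.standard_normal((200, 32))  # toy data
    W, D = fit_ae_layer(X, n_hidden=16)
    H = np.tanh(X @ W)                  # subspace features
    X_rec = H @ D                       # reconstruction of X

Decoding perturbed copies of the subspace features H would, under this reading, yield synthetic samples in the input space.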
In Chapter 5, OS-AE is utilized as a generative model that constructs new data from subspace features and performs better than conventional data augmentation techniques on real-world image and tabular datasets. | en_US