2002, 24(1): 45-53.
Abstract:
Training a two-layer feedforward neural network (FNN) under the cumulant-matching criterion is difficult because cumulants are nonlinear, implicit functions of the FNN parameters. In this work, two new cumulant-based training methods for two-layer FNNs are developed. In the first method, the hidden units of the two-layer FNN are approximated by multiple linear systems, so that the whole FNN can be modeled with a mixture-of-experts (ME) architecture; the FNN parameters are then estimated with the expectation-maximization (EM) algorithm. The second method simplifies the statistical model of the two-layer FNN by recasting it as a simplified two-level hierarchical ME, in which hidden variables are introduced so that training the whole FNN reduces to training a set of single neurons. Built on this single-neuron training, the whole FNN is estimated in a simplified form with faster convergence.
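The abstract gives no implementation details, so the following is only a minimal sketch of the first method's core idea: approximate a network's input-output map with a mixture of linear experts and fit it by EM. For illustration, the sketch uses generative Gaussian gates on the inputs (a common variant that keeps every M-step update in closed form) rather than the paper's cumulant-match criterion; all function names and the gating choice are assumptions, not the authors' algorithm.

```python
import numpy as np

def _log_gauss_iso(X, mu, s2):
    # log N(x | mu, s2 * I) for each row of X
    d = X.shape[1]
    diff = X - mu
    return -0.5 * (d * np.log(2.0 * np.pi * s2) + (diff ** 2).sum(axis=1) / s2)

def em_mixture_of_linear_experts(X, y, K=4, n_iter=100, seed=0):
    """Fit y ~ sum_k g_k(x) N(w_k^T x, sigma2_k) by EM.
    Gates g_k are generative (Gaussian on x), so all M-step updates are
    closed-form weighted averages / weighted least squares."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    pi = np.full(K, 1.0 / K)                     # gate priors
    mu = X[rng.choice(N, K, replace=False)]      # gate means
    s2 = np.full(K, X.var() + 1e-6)              # gate variances (isotropic)
    W = rng.normal(scale=0.1, size=(K, d))       # linear expert weights
    sigma2 = np.full(K, y.var() + 1e-6)          # expert noise variances

    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] of expert k for sample n
        log_r = np.empty((N, K))
        for k in range(K):
            resid = y - X @ W[k]
            log_lik_y = -0.5 * (np.log(2.0 * np.pi * sigma2[k])
                                + resid ** 2 / sigma2[k])
            log_r[:, k] = (np.log(pi[k])
                           + _log_gauss_iso(X, mu[k], s2[k]) + log_lik_y)
        log_r -= log_r.max(axis=1, keepdims=True)   # stabilized softmax
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: closed-form updates of gates and experts
        nk = r.sum(axis=0) + 1e-12
        pi = nk / N
        for k in range(K):
            rk = r[:, k]
            mu[k] = rk @ X / nk[k]
            s2[k] = (rk * ((X - mu[k]) ** 2).sum(axis=1)).sum() / (d * nk[k]) + 1e-8
            Xw = X * rk[:, None]                    # responsibility-weighted design
            W[k] = np.linalg.solve(X.T @ Xw + 1e-8 * np.eye(d), Xw.T @ y)
            sigma2[k] = (rk * (y - X @ W[k]) ** 2).sum() / nk[k] + 1e-8
    return pi, mu, s2, W, sigma2

def predict(X, pi, mu, s2, W):
    # gate each input, then mix the experts' linear predictions
    K = len(pi)
    log_g = np.stack([np.log(pi[k]) + _log_gauss_iso(X, mu[k], s2[k])
                      for k in range(K)], axis=1)
    log_g -= log_g.max(axis=1, keepdims=True)
    g = np.exp(log_g)
    g /= g.sum(axis=1, keepdims=True)
    return (g * (X @ W.T)).sum(axis=1)

if __name__ == "__main__":
    # toy check: approximate a smooth nonlinear (tanh) input-output map
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 3))
    y = np.tanh(X @ np.array([1.0, -2.0, 0.5])) + 0.05 * rng.normal(size=2000)
    pi, mu, s2, W, sigma2 = em_mixture_of_linear_experts(X, y, K=6)
    print("residual MSE:", np.mean((predict(X, pi, mu, s2, W) - y) ** 2))
```

The piecewise-linear decomposition is what makes the problem tractable: each expert is a linear model, so its weighted maximum-likelihood update is ordinary least squares, and the EM iterations avoid the nonlinear implicit dependence that frustrates direct cumulant matching.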