Improving the Generalization Ability of RBNN Using a Selective Strategy Based on the Gaussian Kernel Function
keywords: Radial Basis Neural Networks, generalization ability, selective learning, kernel functions
Radial Basis Neural Networks have been successfully used in many applications, mainly due to their fast convergence properties. However, their level of generalization depends heavily on the quality of the training data. It has been shown that careful dynamic selection of training patterns can yield better generalization performance. In this paper, a learning method is presented that automatically selects the training patterns most appropriate for each new test sample. The method follows a selective learning strategy, in the sense that it builds approximations centered around the novel sample. It uses a Gaussian kernel function to decide the relevance of each training pattern according to its similarity to the novel sample. The proposed method has been applied to three different domains: an artificial approximation problem and two time series prediction problems. The results have been compared with those of the standard training method, which uses the complete training data set, and the new method shows better generalization abilities.
reference: Vol. 25, 2006, No. 1, pp. 1–15
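The following is a minimal sketch of the selective idea described in the abstract: weighting each training pattern with a Gaussian kernel of its distance to the novel test sample and keeping only the most relevant patterns. The function names, the kernel width parameter sigma, and the relevance threshold are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def gaussian_relevance(X_train, x_test, sigma=1.0):
    """Weight each training pattern by a Gaussian kernel of its
    distance to the novel test sample x_test."""
    d2 = np.sum((X_train - x_test) ** 2, axis=1)   # squared Euclidean distances
    return np.exp(-d2 / (2.0 * sigma ** 2))        # relevance values in (0, 1]

def select_relevant_patterns(X_train, y_train, x_test, sigma=1.0, threshold=0.1):
    """Keep only the training patterns whose relevance to x_test exceeds
    a threshold; the selected subset would then be used to train the RBNN
    for this particular test sample (hypothetical selection rule)."""
    w = gaussian_relevance(X_train, x_test, sigma)
    mask = w >= threshold
    return X_train[mask], y_train[mask], w[mask]
```

In this sketch, a separate local training set is built for every test sample, which reflects the selective, test-centered nature of the strategy; the actual relevance criterion and how the weights enter RBNN training are as defined in the paper.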