Title: Scale equalization higher-order neural networks
Authors: Jung-Hua Wang; Keng-Hsuan Wu; Fu-Chiang Chang
Issue Date: Nov-2004
Publisher: IEEE
Conference: Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration (IRI 2004), Las Vegas, NV, USA
Abstract: This paper presents a novel approach, called scale equalization (SE), to implement higher-order neural networks. SE is particularly useful in eliminating the scale divergence problem commonly encountered in higher-order networks. Generally, the larger the scale divergence, the more training steps are required to complete the training process. The effectiveness of SE is illustrated with an exemplar higher-order network built on the Sigma-Pi network (SESPN) applied to function approximation. SESPN requires the same computation time per epoch as SPN, but it takes far fewer epochs to complete the training process. Empirical results are provided to verify that SESPN outperforms other higher-order neural networks in terms of computational efficiency.
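The scale divergence the abstract refers to can be seen in any Sigma-Pi-style network: product terms of the inputs span a much wider numeric range than the raw inputs themselves, which slows gradient-based training. The sketch below is only an illustration of that idea, not the paper's method; the function names (`sigma_pi_features`, `scale_equalize`) and the choice of unit-standard-deviation rescaling are assumptions, since the full paper is not reproduced here.

```python
import numpy as np

def sigma_pi_features(x):
    """First- and second-order terms of a Sigma-Pi unit:
    the raw inputs plus all pairwise products x_i * x_j (i <= j)."""
    idx_i, idx_j = np.triu_indices(len(x))
    return np.concatenate([x, x[idx_i] * x[idx_j]])

def scale_equalize(features_batch):
    """Rescale each feature column to unit standard deviation --
    one plausible reading of 'scale equalization' (an assumption)."""
    std = features_batch.std(axis=0)
    std[std == 0] = 1.0  # leave constant columns untouched
    return features_batch / std

# Toy demonstration on random inputs in [0, 10]
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 3))
F = np.array([sigma_pi_features(x) for x in X])

# Second-order product terms spread over a wider range than the
# first-order terms, so the ratio of largest to smallest column
# standard deviation exceeds 1 before equalization...
print(F.std(axis=0).max() / F.std(axis=0).min())
# ...and every column has unit standard deviation afterwards.
G = scale_equalize(F)
print(G.std(axis=0).max() / G.std(axis=0).min())
```

With the scales equalized, every higher-order term contributes gradients of comparable magnitude, which is consistent with the abstract's claim that SESPN needs far fewer epochs at the same per-epoch cost.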
URI: http://scholars.ntou.edu.tw/handle/123456789/17034
DOI: 10.1109/IRI.2004.1431529
Appears in Collections: Department of Electrical Engineering (電機工程學系)