http://scholars.ntou.edu.tw/handle/123456789/16966
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jung-Hua Wang | en_US |
dc.contributor.author | Rau, JD | en_US |
dc.contributor.author | Peng, CY | en_US |
dc.date.accessioned | 2021-06-03T07:11:18Z | - |
dc.date.available | 2021-06-03T07:11:18Z | - |
dc.date.issued | 2000-08 | - |
dc.identifier.issn | 1083-4419 | - |
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/16966 | - |
dc.description.abstract | This paper optimizes the performance of the GCS model [1] in learning topology and vector quantization. Each node in GCS is attached with a resource counter. During the competitive learning process, the counter of the best-matching node is increased by a defined resource measure after each input presentation, and then all resource counters are decayed by a factor α. We show that the sum of all resource counters is conserved. This conservation principle provides useful clues for exploring important characteristics of GCS, which in turn provide insight into how the GCS can be optimized. In the context of information entropy, we show that the performance of GCS in learning topology and vector quantization can be optimized by using α = 0 incorporated with a threshold-free node-removal scheme, regardless of whether the input data are stationary or nonstationary. The meaning of optimization is twofold: 1) for learning topology, the information entropy is maximized in terms of the equiprobable criterion and 2) for learning vector quantization, the MSE is minimized in terms of the equi-error criterion. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | en_US |
dc.relation.ispartof | IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | en_US |
dc.subject | competitive learning | en_US |
dc.subject | growing cell structures | en_US |
dc.subject | neural networks | en_US |
dc.subject | quantization | en_US |
dc.subject | self-creating networks | en_US |
dc.subject | short-term memory topology | en_US |
dc.title | Toward optimizing a self-creating neural network | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.1109/3477.865177 | - |
dc.identifier.pmid | 18252390 | - |
dc.identifier.isi | WOS:000089118000010 | - |
dc.relation.journalvolume | 30 | en_US |
dc.relation.journalissue | 4 | en_US |
dc.relation.pages | 586-593 | en_US |
item.cerifentitytype | Publications | - |
item.openairetype | journal article | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.fulltext | no fulltext | - |
item.grantfulltext | none | - |
item.languageiso639-1 | en | - |
crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
crisitem.author.dept | Department of Electrical Engineering | - |
crisitem.author.dept | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
Appears in Collections: | Department of Electrical Engineering |
All items in the IR are protected by copyright, with all rights reserved, unless otherwise indicated.