Back Propagation Artificial Neural Network Structure Error Reduction by Defined Factor of Capacity and Algorithm Reinforcement Method
V. Rahmati1, M. Husainy Yar2, J. Khalilpour3, A. R. Malekijavan4
1Vahid Rahmati, Department of Electrical Engineering, Shahid Sattari University of Aeronautical Sciences and Technologies, Tehran, Iran.
2Morteza Husainy Yar, Department of Electrical Engineering, Shahid Sattari University of Aeronautical Sciences and Technologies, Tehran, Iran.
3Prof. Jafar Khalilpour, Department of Electrical Engineering, Shahid Sattari University of Aeronautical Sciences and Technologies, Tehran, Iran.
4Prof. Ali Reza Malekijavan, Department of Electrical Engineering, Shahid Sattari University of Aeronautical Sciences and Technologies, Tehran, Iran.

Manuscript received on November 02, 2014. | Revised Manuscript received on November 04, 2014. | Manuscript published on November 05, 2014. | PP: 34-39 | Volume-4 Issue-5, November 2014. | Retrieval Number: D2339094414 /2014©BEIESP
© The Authors. Published By: Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: This paper investigates how to reduce the error and increase the speed of a back-propagation artificial neural network (BPANN) by means of a defined Capacity factor. From roughly 1965 to 1980, the use of ANNs for problem solving slowed significantly because of the limitations of single-layer networks, which could not be improved sufficiently for specific tasks; expectations were low even for simple tasks and mathematical operations. Multi-layer networks hold serious promise of overcoming this shortcoming through more effective error reduction, for example by the least-squares error method, and through a better learning factor such as the one used in the MLP, a modified and enhanced version of the Perceptron that has made these networks far more usable for intelligent signal processing. The purpose of this paper, however, is not to showcase the capabilities of these networks alone, but to examine error reduction in which the weight-update equations both perform the ordinary task of the algorithm and, at the same time, reduce the expected error by a predetermined Capacity factor, an approach not very different from the clustering-based training styles used in other types of ANNs. Unlike a single-layer network, which is severely limited in learning, approximating, and estimating a mapping function, multi-layer networks can approximate any uniformly continuous function with tunable accuracy. In many applications the hidden layer performs this enhancement, but sometimes multi-layer methods reduce the error separately through factor definitions (and new hidden parts that this paper adds to reduce the error further), which the paper measures to quantify the improvements envisaged in the design process. The result is an understanding of how to use the Capacity factor in the BPANN algorithm, and in error reduction in general, so as to obtain convergence, speed improvement, and error smoothing at the same time.
Keywords: BPANN enhancement, Error smoothing, MLP, Intelligent signal processing
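The back-propagation training loop the abstract builds on can be sketched as follows. This is a minimal one-hidden-layer MLP trained on XOR with the least-squares error mentioned above; since the abstract does not define the Capacity factor, the `capacity` variable here is a hypothetical stand-in that simply scales the weight-update step, not the paper's actual method.

```python
import numpy as np

# Minimal one-hidden-layer MLP trained by back-propagation on XOR.
# NOTE: `capacity` is a hypothetical placeholder for the paper's Capacity
# factor, whose exact definition is not given in the abstract; here it
# merely scales the gradient-descent update.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
capacity = 1.0  # assumed scaling of the update step (placeholder)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return h, out

_, out0 = forward(X)
mse_before = float(np.mean((out0 - y) ** 2))  # least-squares error at start

for epoch in range(20000):
    # forward pass
    h, out = forward(X)

    # backward pass: derivative of the least-squares error through
    # the sigmoid activations
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # weight updates, scaled by the (assumed) capacity factor
    W2 -= lr * capacity * (h.T @ d_out)
    b2 -= lr * capacity * d_out.sum(axis=0)
    W1 -= lr * capacity * (X.T @ d_h)
    b1 -= lr * capacity * d_h.sum(axis=0)

_, out_final = forward(X)
mse_after = float(np.mean((out_final - y) ** 2))
print(mse_before, mse_after)
```

With `capacity = 1.0` this reduces to plain batch gradient descent; the training error drops from its initial value as the loop runs, which is the baseline behaviour the paper's factor is meant to accelerate and smooth.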