A Hybrid Method of Backpropagation and Particle Swarm Optimization for Enhancing Accuracy Performance

Widiartha, I. Made and Gunawan, Anak Agung Ngurah and Sanjaya, E. R. Ngurah Agus and Sari, Kartika (2023) A Hybrid Method of Backpropagation and Particle Swarm Optimization for Enhancing Accuracy Performance. Current Journal of Applied Science and Technology, 42 (6). pp. 10-18. ISSN 2457-1024

Text: Gunawan4262023CJAST97654.pdf - Published Version (831kB)

Abstract

Aims: Backpropagation is an algorithm for adjusting the weights of neural networks during the training stage. Backpropagation has proven superior at optimizing neural network weights; however, the method needs improvement at the initialization stage, where random weight initialization can trap training in a local optimum. Applying a global search algorithm is one way to overcome this drawback. One global search method with superior performance is particle swarm optimization. In this research, we apply a hybrid of backpropagation and particle swarm optimization (BP-PSO) to overcome this problem.

Study Design: Research Papers and Short Notes.

Place and Duration of Study: Department of Informatics, Faculty of Mathematics and Natural Sciences, Udayana University, between June 2022 and November 2022.

Methodology: The dataset used in this study is a collection of handwritten mathematical symbol images. There are 240 symbols, consisting of 180 images for training and 60 for testing. The robustness of PSO in finding the global optimum is expected to help backpropagation escape local optima. PSO is applied at the initial weight initialization stage of the artificial neural network. The tuning parameters of the artificial neural network are the number of neurons in the hidden layer and the learning rate. Three values are tested for the number of hidden neurons, namely 10, 20, and 30, while five different learning rate values ranging from 0.1 to 0.9 are tested; the minimum error value is 0.01 and the maximum number of epochs is 1000. We carry out five repetitions of each test scenario.
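The BP-PSO scheme described above can be sketched as follows: a standard global-best PSO searches over the flattened weight vector of a small feed-forward network, and the best particle found becomes the initial weights for ordinary gradient-descent backpropagation. This is a minimal illustrative sketch, not the paper's implementation: the XOR toy data stands in for the handwritten-symbol dataset, the network size and PSO coefficients (inertia 0.7, c1 = c2 = 1.5) are assumed values, and only the paper's stated learning rate (0.1), error threshold (0.01), and epoch limit (1000) are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data standing in for the handwritten-symbol images.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # flattened weight count

def unpack(w):
    """Split a flat vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

def pso_init(n_particles=20, iters=100, inertia=0.7, c1=1.5, c2=1.5):
    """Global-best PSO over the flat weight vector; returns the best position."""
    pos = rng.uniform(-1, 1, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        vel = np.clip(vel, -1.0, 1.0)  # simple velocity clamp
        pos += vel
        f = np.array([mse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

def backprop(w, lr=0.1, epochs=1000, min_err=0.01):
    """Gradient-descent backpropagation starting from the given weights."""
    w = w.copy()
    for _ in range(epochs):
        W1, b1, W2, b2 = unpack(w)
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y
        if np.mean(err ** 2) < min_err:      # paper's minimum-error stop
            break
        d_out = err * out * (1 - out)        # sigmoid derivative at output
        d_hid = (d_out @ W2.T) * h * (1 - h)  # backpropagated hidden delta
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)
        w = np.concatenate([W1.ravel(), b1, W2.ravel(), b2])
    return w

w0 = pso_init()          # PSO supplies the initial weights
w_final = backprop(w0)   # backpropagation refines them
```

The key design point mirrored from the paper is the division of labor: PSO does a coarse global search so that the subsequent gradient descent starts near a good basin instead of a random, possibly poor, local one.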

Results: The performance results showed that PSO succeeded in optimizing backpropagation: the accuracy of BP-PSO is higher than that of BP without optimization. The accuracy of BP-PSO is 97.2%, while that of BP is 94.4%. The optimal learning rate is 0.1 and the optimal number of hidden neurons is 30.

Conclusion: The performance results showed that PSO succeeded in optimizing backpropagation: the accuracy of BP-PSO is higher than that of BP without optimization. Using the PSO-optimized weights as the initial weights for retraining yields a higher average accuracy and a lower average number of epochs than training from unoptimized initial weights.

Item Type: Article
Subjects: OA STM Library > Multidisciplinary
Depositing User: Unnamed user with email support@oastmlibrary.com
Date Deposited: 24 Mar 2023 12:26
Last Modified: 17 Jul 2024 09:49
URI: http://geographical.openscholararchive.com/id/eprint/365
