Sparse Optimization Mechanism and Application of Recurrent Neural Network via Fractional-Order Gradient Descent Learning

  • Northwestern Polytechnical University, Xi'an
  • South China University of Technology

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Fractional calculus, leveraging its hereditary and infinite-memory properties, offers promising applications in information processing and control systems. Recurrent neural networks (RNNs) have likewise attracted widespread attention in areas such as time series prediction and temporal system modeling. This paper establishes theoretical results for RNNs trained with a fractional-order (FO) gradient descent method combined with a sparsity mechanism, which not only enhances the stability and sparsity of the network but also improves its generalization capability. Specifically, the Caputo derivative is first employed to define the fractional-order gradient of the error function, which is then used in the backpropagation training of RNNs. Second, a smoothing group L1/2 regularization (SGL1/2) is introduced to overcome the oscillation of the error function caused by the traditional group L1/2 regularization (GL1/2); it optimizes the network architecture by driving redundant nodes, as well as the redundant weights of the remaining nodes, toward zero, further improving the sparsity of the network. Finally, numerical simulation results verify the correctness and effectiveness of the proposed algorithm.
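For reference, a minimal sketch of the quantities the abstract refers to, assuming the standard Caputo definition, a commonly used first-order truncation of the resulting FO gradient, and a generic smoothed group penalty; the paper's exact smoothing function, base point, and update rule may differ.

The Caputo derivative of the error $E$ with respect to a weight $w$, of order $\alpha \in (0,1)$ and base point $a$ (e.g. the initial weight), is
$$ {}^{C}_{\,a}D^{\alpha}_{w}E(w) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{a}^{w} \frac{E'(t)}{(w-t)^{\alpha}}\,dt . $$
A widely used first-order truncation of this quantity yields the FO gradient descent update (with learning rate $\eta$)
$$ w_{k+1} \;=\; w_{k} \;-\; \eta\, \frac{E'(w_{k})}{\Gamma(2-\alpha)}\, \lvert w_{k}-a \rvert^{\,1-\alpha}, $$
and a smoothed group $L_{1/2}$ penalty has the generic form
$$ \tilde{E}(\mathbf{w}) \;=\; E(\mathbf{w}) \;+\; \lambda \sum_{g} \bigl( f(\lVert \mathbf{w}_{g} \rVert) \bigr)^{1/2}, $$
where $f$ is a smooth surrogate of the group norm near the origin, removing the non-differentiability that causes the oscillations of the plain GL1/2 term.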

Original language: English
Title of host publication: 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798331599614
State: Published - 2025
Event: 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025 - Macau, China
Duration: 7 Nov 2025 – 10 Nov 2025

Publication series

Name: 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025

Conference

Conference: 2025 International Conference on Cyber-Physical Social Intelligence, CPSI 2025
Country/Territory: China
City: Macau
Period: 7/11/25 – 10/11/25

Keywords

  • Caputo derivative
  • Fractional calculus
  • Recurrent neural networks
  • Smoothing group L1/2 regularization
