Deterministic Convergence Analysis for GRU Networks via Smoothing Regularization

Qian Zhu, Qian Kang, Tao Xu, Dengxiu Yu, Zhen Wang

Research output: Contribution to journal › Article › peer-review

Abstract

In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L1 regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L1 regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points into the error function, leading to oscillations during training. To address this, we propose a novel smoothing L1 regularization framework that replaces the non-differentiable absolute value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing L1-regularized GRU (SL1-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence, characterized by vanishing gradients as the number of iterations approaches infinity, and (3) strong convergence of the network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets, spanning function approximation, classification (KDD Cup 1999 Data, MNIST), and regression tasks (Boston Housing, Energy Efficiency), demonstrate SL1-GRU's superiority over baseline models (RNN, LSTM, GRU, L1-GRU, L2-GRU). Empirical results show that SL1-GRU achieves 1.0%–2.4% higher test accuracy in classification and 7.8%–15.4% lower mean squared error in regression than the unregularized GRU, while reducing training time by 8.7%–20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability and strongly corroborate the theoretical results. The proposed framework not only resolves the non-differentiability challenge of L1 regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
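To illustrate the idea of smoothing the L1 penalty, the following is a minimal sketch. The abstract does not specify the exact quadratic approximation used in the paper, so this sketch assumes a standard Huber-style piecewise-quadratic smoothing with a hypothetical smoothing parameter `a`; the function names `smooth_l1` and `smooth_l1_grad` are likewise illustrative, not the authors' implementation.

```python
import numpy as np

def smooth_l1(w, a=0.1):
    """Smoothed |w| (assumed form): quadratic near zero, linear elsewhere,
    so the penalty is differentiable everywhere, including at w = 0."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) < a, w**2 / (2 * a) + a / 2, np.abs(w))

def smooth_l1_grad(w, a=0.1):
    """Gradient of the smoothed penalty: w/a near zero, sign(w) elsewhere.
    Continuous at |w| = a, unlike the subgradient of plain |w| at 0."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) < a, w / a, np.sign(w))

# Usage sketch: a regularized training loss would add lam * smooth_l1(W).sum()
# to the GRU error function and lam * smooth_l1_grad(W) to the weight gradients.
weights = np.array([-0.5, -0.05, 0.0, 0.02, 0.3])
print(smooth_l1(weights))       # penalty values, smooth at w = 0
print(smooth_l1_grad(weights))  # continuous gradient, no jump at w = 0
```

Because the approximation matches |w| and its derivative at |w| = a, the smoothed penalty removes the kink at the origin that causes oscillations under plain L1 while preserving its sparsity-promoting behavior for larger weights.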

Original language: English
Pages (from-to): 1855-1879
Number of pages: 25
Journal: Computers, Materials and Continua
Volume: 83
Issue number: 2
DOIs
State: Published - 2025

Keywords

  • Gated recurrent unit
  • Convergence
  • Regularization
