Matrix Gaussian Mechanisms for Differentially-Private Learning

Jungang Yang, Liyao Xiang, Jiahao Yu, Xinbing Wang, Bin Guo, Zhetao Li, Baochun Li

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

The wide deployment of machine learning algorithms has become a severe threat to user data privacy. As learning data is of high dimensionality and high order, preserving its privacy is intrinsically hard. Conventional differential privacy mechanisms often incur a significant utility decline because they were designed for scalar values from the start. We recognize that this is because conventional approaches do not take the structural information of the data into account, and thus fail to provide sufficient privacy or utility. As the main novelty of this work, we propose the Matrix Gaussian Mechanism (MGM), a new $(\epsilon,\delta)$-differential privacy mechanism for preserving the privacy of learning data. By imposing unimodal distributions on the noise, we introduce two mechanisms based on MGM with improved utility. We further show that, with the available utility space, the proposed mechanisms can be instantiated with optimized utility and admit a closed-form solution scalable to large-scale problems. We experimentally show that our mechanisms, applied to privacy-preserving federated learning, are superior to state-of-the-art differential privacy mechanisms in utility.
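For intuition, the sketch below illustrates the general idea of perturbing a matrix-valued quantity (for example, a gradient matrix in federated learning) with matrix-variate Gaussian noise. It is a minimal NumPy sketch under our own assumptions: the covariances and the sigma multiplier are placeholders, and the calibration to an $(\epsilon,\delta)$ guarantee prescribed by MGM is omitted, so this is not the authors' implementation.

    import numpy as np

    def sample_matrix_gaussian(mean, row_cov, col_cov, rng=None):
        # Draw one sample from a matrix normal distribution MN(mean, row_cov, col_cov)
        # via X = M + A Z B^T, where A A^T = row_cov, B B^T = col_cov, and Z has
        # i.i.d. standard normal entries.
        rng = np.random.default_rng() if rng is None else rng
        a = np.linalg.cholesky(row_cov)       # factor of the row covariance
        b = np.linalg.cholesky(col_cov)       # factor of the column covariance
        z = rng.standard_normal(mean.shape)   # i.i.d. N(0, 1) entries
        return mean + a @ z @ b.T

    # Hypothetical usage: perturb a (clipped) gradient matrix before sharing it.
    rng = np.random.default_rng(0)
    grad = rng.standard_normal((4, 3))        # stand-in for a clipped gradient matrix
    sigma = 1.0                               # placeholder noise multiplier; (eps, delta) calibration omitted
    row_cov = (sigma ** 2) * np.eye(grad.shape[0])
    col_cov = np.eye(grad.shape[1])
    noisy_grad = grad + sample_matrix_gaussian(np.zeros_like(grad), row_cov, col_cov)

With both covariances set to scaled identities, this reduces to the classical i.i.d. Gaussian mechanism; structured row and column covariances are what a matrix-variate formulation can exploit.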

Original language: English
Pages (from-to): 1036-1048
Number of pages: 13
Journal: IEEE Transactions on Mobile Computing
Volume: 22
Issue number: 2
DOIs
State: Published - 1 Feb 2023

Keywords

  • data mining
  • data privacy
  • differential privacy
  • machine learning
