TY - JOUR
T1 - Matrix Gaussian Mechanisms for Differentially-Private Learning
AU - Yang, Jungang
AU - Xiang, Liyao
AU - Yu, Jiahao
AU - Wang, Xinbing
AU - Guo, Bin
AU - Li, Zhetao
AU - Li, Baochun
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2023/2/1
Y1 - 2023/2/1
N2 - The wide deployment of machine learning algorithms has become a severe threat to user data privacy. As learning data is of high dimensionality and high order, preserving its privacy is intrinsically hard. Conventional differential privacy mechanisms often incur significant utility decline, as they are designed for scalar values from the start. We recognize that this is because conventional approaches do not take data structural information into account, and thus fail to provide sufficient privacy or utility. As the main novelty of this work, we propose the Matrix Gaussian Mechanism (MGM), a new (ε, δ)-differential privacy mechanism for preserving learning data privacy. By imposing unimodal distributions on the noise, we introduce two mechanisms based on MGM with improved utility. We further show that, with the utility space available, the proposed mechanisms can be instantiated with optimized utility and have a closed-form solution scalable to large-scale problems. We experimentally show that our mechanisms, applied to privacy-preserving federated learning, are superior in utility to the state-of-the-art differential privacy mechanisms.
AB - The wide deployment of machine learning algorithms has become a severe threat to user data privacy. As learning data is of high dimensionality and high order, preserving its privacy is intrinsically hard. Conventional differential privacy mechanisms often incur significant utility decline, as they are designed for scalar values from the start. We recognize that this is because conventional approaches do not take data structural information into account, and thus fail to provide sufficient privacy or utility. As the main novelty of this work, we propose the Matrix Gaussian Mechanism (MGM), a new (ε, δ)-differential privacy mechanism for preserving learning data privacy. By imposing unimodal distributions on the noise, we introduce two mechanisms based on MGM with improved utility. We further show that, with the utility space available, the proposed mechanisms can be instantiated with optimized utility and have a closed-form solution scalable to large-scale problems. We experimentally show that our mechanisms, applied to privacy-preserving federated learning, are superior in utility to the state-of-the-art differential privacy mechanisms.
KW - data mining
KW - data privacy
KW - Differential privacy
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85147252959&partnerID=8YFLogxK
U2 - 10.1109/TMC.2021.3093316
DO - 10.1109/TMC.2021.3093316
M3 - Article
AN - SCOPUS:85147252959
SN - 1536-1233
VL - 22
SP - 1036
EP - 1048
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 2
ER -