TY - JOUR
T1 - Incentive-Driven Deep Reinforcement Learning for Content Caching and D2D Offloading
AU - Zhou, Huan
AU - Wu, Tong
AU - Zhang, Haijun
AU - Wu, Jie
N1 - Publisher Copyright:
© 1983-2012 IEEE.
PY - 2021/8
Y1 - 2021/8
N2 - Offloading cellular traffic via Device-to-Device communication (or D2D offloading) has proven to be an effective way to ease the traffic burden of cellular networks. However, mobile nodes may be unwilling to take part in D2D offloading without proper financial incentives, since the data offloading process incurs significant resource consumption. It is therefore essential to design effective incentive mechanisms that motivate nodes to participate in D2D offloading. Furthermore, the design of the content caching strategy is also crucial to the performance of D2D offloading. In this paper, considering these issues, a novel Incentive-driven and Deep Q Network (DQN) based Method, named IDQNM, is proposed, in which a reverse auction is employed as the incentive mechanism. The incentive-driven D2D offloading and content caching process is then modeled as an Integer Non-Linear Program (INLP), aiming to maximize the saving cost of the Content Service Provider (CSP). To solve the optimization problem, a content caching method based on a Deep Reinforcement Learning (DRL) algorithm, DQN, is proposed to obtain an approximately optimal solution, and a standard Vickrey-Clarke-Groves (VCG)-based payment rule is proposed to compensate mobile nodes for their costs. Extensive real-trace-driven simulation results demonstrate that the proposed IDQNM greatly outperforms other baseline methods in terms of the CSP's saving cost and the offloading rate in different scenarios.
AB - Offloading cellular traffic via Device-to-Device communication (or D2D offloading) has proven to be an effective way to ease the traffic burden of cellular networks. However, mobile nodes may be unwilling to take part in D2D offloading without proper financial incentives, since the data offloading process incurs significant resource consumption. It is therefore essential to design effective incentive mechanisms that motivate nodes to participate in D2D offloading. Furthermore, the design of the content caching strategy is also crucial to the performance of D2D offloading. In this paper, considering these issues, a novel Incentive-driven and Deep Q Network (DQN) based Method, named IDQNM, is proposed, in which a reverse auction is employed as the incentive mechanism. The incentive-driven D2D offloading and content caching process is then modeled as an Integer Non-Linear Program (INLP), aiming to maximize the saving cost of the Content Service Provider (CSP). To solve the optimization problem, a content caching method based on a Deep Reinforcement Learning (DRL) algorithm, DQN, is proposed to obtain an approximately optimal solution, and a standard Vickrey-Clarke-Groves (VCG)-based payment rule is proposed to compensate mobile nodes for their costs. Extensive real-trace-driven simulation results demonstrate that the proposed IDQNM greatly outperforms other baseline methods in terms of the CSP's saving cost and the offloading rate in different scenarios.
KW - content caching
KW - D2D offloading
KW - deep reinforcement learning
KW - real mobility trace
KW - reverse auction
UR - http://www.scopus.com/inward/record.url?scp=85110636399&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2021.3087232
DO - 10.1109/JSAC.2021.3087232
M3 - Article
AN - SCOPUS:85110636399
SN - 0733-8716
VL - 39
SP - 2445
EP - 2460
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 8
M1 - 9448092
ER -