TY - JOUR
T1 - Deep Reinforcement Learning Approaches for Content Caching in Cache-Enabled D2D Networks
AU - Li, Lixin
AU - Xu, Yang
AU - Yin, Jiaying
AU - Liang, Wei
AU - Li, Xu
AU - Chen, Wei
AU - Han, Zhu
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2020/1
Y1 - 2020/1
N2 - Internet of Things (IoT) technology faces the challenge that scarce wireless network resources can hardly meet the influx of a huge number of terminal devices. Cache-enabled device-to-device (D2D) communication is expected to relieve network pressure, since requested contents can be easily obtained from nearby users. However, designing an effective caching policy is very challenging due to the limited content storage capacity and the uncertainty of user mobility patterns. In this article, we study the joint cache content placement and delivery policy for cache-enabled D2D networks. Specifically, two recurrent neural network approaches [the echo state network (ESN) and the long short-term memory (LSTM) network] are employed to predict users' mobility and content popularity, so as to determine which content to cache and where to cache it. When a user's local cache cannot satisfy its own request, the user may establish a D2D link with a neighboring user to implement content delivery. To decide which user is selected to establish the D2D link, we propose novel schemes based on deep reinforcement learning for dynamic decision making and optimization of the content delivery problem, aiming at improving the quality of experience of the overall caching system. The simulation results suggest that the cache hit ratio of the system is well improved by the proposed content placement strategy, and that the proposed content delivery approaches effectively reduce content delivery delay and energy consumption.
AB - Internet of Things (IoT) technology faces the challenge that scarce wireless network resources can hardly meet the influx of a huge number of terminal devices. Cache-enabled device-to-device (D2D) communication is expected to relieve network pressure, since requested contents can be easily obtained from nearby users. However, designing an effective caching policy is very challenging due to the limited content storage capacity and the uncertainty of user mobility patterns. In this article, we study the joint cache content placement and delivery policy for cache-enabled D2D networks. Specifically, two recurrent neural network approaches [the echo state network (ESN) and the long short-term memory (LSTM) network] are employed to predict users' mobility and content popularity, so as to determine which content to cache and where to cache it. When a user's local cache cannot satisfy its own request, the user may establish a D2D link with a neighboring user to implement content delivery. To decide which user is selected to establish the D2D link, we propose novel schemes based on deep reinforcement learning for dynamic decision making and optimization of the content delivery problem, aiming at improving the quality of experience of the overall caching system. The simulation results suggest that the cache hit ratio of the system is well improved by the proposed content placement strategy, and that the proposed content delivery approaches effectively reduce content delivery delay and energy consumption.
KW - Actor-critic learning
KW - caching
KW - deep Q-learning network
KW - prediction
KW - recurrent neural network (RNN)
UR - http://www.scopus.com/inward/record.url?scp=85078310196&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2019.2951509
DO - 10.1109/JIOT.2019.2951509
M3 - Article
AN - SCOPUS:85078310196
SN - 2327-4662
VL - 7
SP - 544
EP - 557
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 1
M1 - 8891760
ER -