Deep reinforcement learning based downlink power allocation algorithm in dense heterogeneous cellular networks
|
Keywords: dense heterogeneous cellular network; power allocation; deep reinforcement learning; deep neural network; energy efficiency
Funding: Natural Science Foundation of Jiangsu Province (BK20181392); Open Research Project of the Jiangsu Engineering Research Center of Communication and Network Technology, Nanjing University of Posts and Telecommunications
Authors and affiliations:
Zhou Fan (School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)
Wang Hong (School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)
Song Rongfang (School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; Jiangsu Engineering Research Center of Communication and Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)
|
Abstract:
For the downlink of dense heterogeneous cellular networks, a power allocation algorithm based on deep reinforcement learning is proposed to maximize system energy efficiency. First, the system energy efficiency is modeled from the downlink model of the cellular network. Second, a deep Q-network (DQN) with two hidden layers is constructed as the action-state value function to optimize the system energy efficiency. Finally, simulation results show that the proposed deep Q-learning algorithm achieves higher system energy efficiency than the greedy algorithm and the conventional Q-learning algorithm, with markedly better convergence speed and stability. In addition, the best learning rate is identified by observing the model's performance under different learning rates.
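The abstract describes a DQN with two hidden layers acting as the action-state value function, with energy efficiency as the optimization objective. The following is a minimal sketch of that structure, not the authors' code: the state dimension, hidden-layer widths, candidate power levels, and circuit power are all illustrative assumptions, and the network here is shown only at inference time (forward pass plus epsilon-greedy power selection), without the training loop.

```python
import numpy as np

# Hypothetical dimensions and power set; the paper does not publish these.
POWER_LEVELS = np.array([0.1, 0.5, 1.0, 2.0])   # candidate transmit powers (W), assumed
STATE_DIM, H1, H2 = 8, 64, 64                   # state size and hidden widths, assumed

rng = np.random.default_rng(0)

# Two-hidden-layer Q-network parameters (random initialization).
W1 = rng.normal(0, 0.1, (STATE_DIM, H1)); b1 = np.zeros(H1)
W2 = rng.normal(0, 0.1, (H1, H2));        b2 = np.zeros(H2)
W3 = rng.normal(0, 0.1, (H2, len(POWER_LEVELS))); b3 = np.zeros(len(POWER_LEVELS))

def q_values(state):
    """Forward pass: channel-state vector -> Q-value per candidate power level."""
    h1 = np.maximum(0.0, state @ W1 + b1)   # hidden layer 1 (ReLU)
    h2 = np.maximum(0.0, h1 @ W2 + b2)      # hidden layer 2 (ReLU)
    return h2 @ W3 + b3                      # one Q-value per discrete action

def energy_efficiency(rate_bps, tx_power_w, circuit_power_w=1.0):
    """Reward signal: energy efficiency = throughput / total consumed power."""
    return rate_bps / (tx_power_w + circuit_power_w)

def select_power(state, epsilon=0.1):
    """Epsilon-greedy selection over the discrete power set."""
    if rng.random() < epsilon:
        return int(rng.integers(len(POWER_LEVELS)))
    return int(np.argmax(q_values(state)))

state = rng.normal(size=STATE_DIM)            # e.g. normalized channel gains
action = select_power(state, epsilon=0.0)     # greedy pick
print("chosen power (W):", POWER_LEVELS[action])
```

In training, the agent would update the network toward the usual Q-learning target (reward plus discounted maximum Q-value of the next state), with the energy-efficiency function above supplying the reward.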