Article

Cognitive Radio Resource Scheduling using Multi-Agent Q-Learning for LTE

International Journal of Computer Networks & Communications (IJCNC), 14 (02): 77-95 (March 2022)
DOI: 10.5121/ijcnc.2022.14205

Abstract

In this paper, we propose, implement, and test two novel downlink LTE scheduling algorithms. The algorithms were implemented and tested in Matlab and are based on Reinforcement Learning (RL), specifically the Q-learning technique, for scheduling two types of users. The first algorithm is called the Collaborative scheduling algorithm, and the second is called the Competitive scheduling algorithm. The first type of scheduled users is the Primary Users (PUs), who are licensed subscribers that pay for their service. The second type is the Secondary Users (SUs), which may be unlicensed subscribers that do not pay for their service, device-to-device communications, or sensors. Each user, whether primary or secondary, is treated as an agent. In the Collaborative scheduling algorithm, the primary user agents collaborate to make a joint scheduling decision on allocating the resource blocks among themselves, and the secondary user agents then compete among themselves for the remaining resource blocks. In the Competitive scheduling algorithm, the primary user agents compete among themselves over the available resources, and the secondary user agents then compete among themselves over the remaining resources. Experimental results show that both scheduling algorithms converged to almost 90% spectrum utilization and provided fair shares of the spectrum among users.
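The abstract describes, at a high level, how the two schedulers differ in the way the Q-learning agents are rewarded. Below is a minimal Python sketch of that idea (the paper's implementation was in Matlab); it assumes single-state (bandit-style) Q-learners, one resource-block choice per agent per scheduling interval, and a toy reward of 1 for winning an uncontested block. All names (QAgent, NUM_RB, schedule) and the reward shaping are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: reward design and agent structure are assumptions,
# not the paper's actual Matlab implementation.
import random

NUM_RB = 10            # resource blocks available per scheduling interval
ALPHA, EPS = 0.1, 0.1  # learning rate and exploration probability


class QAgent:
    """Single-state Q-learning agent whose action is a resource-block index."""

    def __init__(self, n_actions):
        self.q = [0.0] * n_actions

    def act(self):
        if random.random() < EPS:                  # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])  # exploit

    def update(self, action, reward):
        # Stateless update: Q(a) <- Q(a) + alpha * (r - Q(a))
        self.q[action] += ALPHA * (reward - self.q[action])


def schedule(pu_agents, su_agents, collaborative_pus=True):
    """One scheduling round: PUs are served first, SUs use the leftover blocks."""
    used = set()

    # --- Primary users ---
    pu_actions = [a.act() for a in pu_agents]
    if collaborative_pus:
        # Collaborative variant: all PUs share one reward,
        # the number of distinct blocks the group secured.
        joint_reward = len(set(pu_actions))
        for agent, act in zip(pu_agents, pu_actions):
            agent.update(act, joint_reward / len(pu_agents))
    else:
        # Competitive variant: a PU is rewarded only if its block is uncontested.
        for agent, act in zip(pu_agents, pu_actions):
            agent.update(act, 1.0 if pu_actions.count(act) == 1 else 0.0)
    used.update(pu_actions)

    # --- Secondary users compete over the remaining blocks ---
    su_actions = [a.act() for a in su_agents]
    for agent, act in zip(su_agents, su_actions):
        free = act not in used and su_actions.count(act) == 1
        agent.update(act, 1.0 if free else 0.0)
        if free:
            used.add(act)

    return len(used) / NUM_RB  # crude spectrum-utilization proxy


if __name__ == "__main__":
    pus = [QAgent(NUM_RB) for _ in range(4)]
    sus = [QAgent(NUM_RB) for _ in range(4)]
    for _ in range(2000):
        util = schedule(pus, sus, collaborative_pus=True)
    print(f"utilization after training: {util:.2f}")
```

Switching `collaborative_pus` between `True` and `False` toggles between the Collaborative and Competitive variants for the primary users; in both cases the secondary users compete only for blocks left unused.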
