Traffic Management in Open RAN Networks with Multi-Connectivity Using Machine Learning
Keywords: Open RAN, Reinforcement Learning, Dual Connectivity, ns-3, 5G.
Simultaneously serving applications with distinct requirements, such as enhanced Mobile Broadband (eMBB) and Extended Reality (XR), poses substantial resource-management challenges for mobile networks, particularly in dynamic smart-tourism and smart-city contexts. Conventional Multi-Connectivity approaches, which rely on fixed traffic-splitting policies between the Master Node and the Secondary Node, prove inadequate under demand variations, frequently causing resource underutilization or considerable degradation in the quality of service of priority applications. This study proposes a dynamic traffic management approach grounded in the Open RAN architecture, employing a Reinforcement Learning algorithm (Multi-Armed Bandit, MAB) integrated as an xApp within the Near-Real-Time RAN Intelligent Controller (Near-RT RIC). The proof-of-concept methodology combines the ns-3 network simulator with a real implementation of the O-RAN Software Community's Near-RT RIC, with communication established via the E2 interface and the E2SM-KPM and E2SM-RC service models. Exploratory simulations show that static offloading policies reduce the number of supported XR users from 29 to 19 connections in high-load eMBB scenarios, underscoring the need for real-time network control and management. The xApp presented in this dissertation aims to reach the system's maximum capacity by using the MAB to determine the optimal traffic split among the network nodes.
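To make the control loop concrete, the sketch below shows how an epsilon-greedy Multi-Armed Bandit could choose among discrete traffic-split ratios. This is a minimal illustration, not the dissertation's implementation: the arm set, the epsilon value, and the toy reward function (standing in for KPM-derived metrics such as satisfied XR users reported over the E2 interface) are all assumptions made here for clarity.

```python
import random

# Hypothetical sketch: each "arm" is a candidate split ratio, i.e. the
# fraction of downlink traffic routed through the Secondary Node under
# Dual Connectivity. The real xApp would compute rewards from E2SM-KPM
# measurements; here a toy reward peaking at a 0.5 split is assumed.

class SplitRatioBandit:
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = arms                  # candidate split ratios
        self.epsilon = epsilon            # exploration probability
        self.counts = [0] * len(arms)     # pulls per arm
        self.values = [0.0] * len(arms)   # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        """Explore with probability epsilon, otherwise exploit the best arm."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.values[i])

    def update(self, i, reward):
        """Incremental mean update for arm i."""
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

# Usage: run 2000 decision rounds against the toy reward model.
bandit = SplitRatioBandit([0.0, 0.25, 0.5, 0.75, 1.0])
for _ in range(2000):
    i = bandit.select()
    ratio = bandit.arms[i]
    # Stand-in for a KPM-derived reward (best at a 50/50 split, plus noise).
    reward = 1.0 - abs(ratio - 0.5) + bandit.rng.gauss(0, 0.05)
    bandit.update(i, reward)

best = bandit.arms[max(range(len(bandit.arms)), key=lambda i: bandit.values[i])]
print(best)  # the split ratio the bandit has learned to prefer
```

In the envisioned xApp, each round would correspond to one Near-RT RIC control interval: the selected split ratio is enforced via an E2SM-RC control message, and the next E2SM-KPM indication supplies the reward.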