Structured Reinforcement Learning in NextG Cellular Networks

Abstract

Next-generation (NextG) cellular networks face growing demands for intelligent, real-time control, driven by softwarized Open Radio Access Networks (O-RAN) and increasingly heterogeneous user applications. In this talk, we present EdgeRIC, a real-time RAN Intelligent Controller (RIC) co-located with the Distributed Unit (DU) in the O-RAN architecture, enabling sub-millisecond, AI-optimized decision making at the network edge. We develop a constrained reinforcement learning (CRL) framework for real-time RAN control and show that such policies can be trained with only a logarithmic increase in complexity relative to traditional reinforcement learning. To achieve scalable inference, we introduce structured learning based on threshold and Whittle index policies, which provide both low-complexity learning and fast, interpretable decision making. Focusing on media streaming, we prove the optimality of a threshold policy and propose a soft-threshold natural policy gradient (NPG) algorithm that prioritizes users based on video buffer occupancy. This approach achieves inference times of approximately 10 μs and improves user quality of experience by more than 30%. We further exploit Whittle indexability to simplify resource allocation under heterogeneous service constraints, training neural networks to compute constrained Whittle indices that enforce ultra-low latency or high-throughput guarantees. Implemented on EdgeRIC, these policies make allocation decisions within 20 μs per user and deliver strong performance across standardized 3GPP service classes.

Bio

Srinivas Shakkottai received his PhD in Electrical and Computer Engineering from the University of Illinois at Urbana–Champaign in 2007, followed by postdoctoral experience in Management Science and Engineering at Stanford University. He joined Texas A&M University in 2008, where he is currently the Debbie and Dennis Segers ’75 Professor in the Department of Electrical and Computer Engineering, with a courtesy appointment in the Department of Computer Science and Engineering. His research interests span multi-agent learning and game theory, reinforcement learning, communication and information networks, networked markets, and data collection and analytics. He co-directs the Learning and Emerging Network Systems (LENS) Laboratory and the RELLIS Spectrum Innovation Laboratory (RSIL), as well as the Initiative for Connected Intelligence at Texas A&M University. He has served as an Associate Editor for IEEE/ACM Transactions on Networking and IEEE Transactions on Wireless Communications.

Dr. Shakkottai is a recipient of the Defense Threat Reduction Agency (DTRA) Young Investigator Award and the NSF CAREER Award, along with research awards from Cisco and Google. His work has received honors at venues such as ACM MobiHoc, ACM e-Energy, and the International Conference on Learning Representations. At Texas A&M University, he has also received the Outstanding Professor Award, the Select Young Faculty Fellowship, the Engineering Genesis Award (twice), and the Dean of Engineering Excellence Award.


Event Contact: Iam-Choon Khoo


About

The School of Electrical Engineering and Computer Science was created in the spring of 2015 to give undergraduate and graduate students greater access to courses offered by both departments and to exciting collaborative research fields.

We offer B.S. degrees in electrical engineering, computer science, computer engineering, and data science, as well as graduate degrees (master's and Ph.D.) in electrical engineering and in computer science and engineering. EECS focuses on the convergence of technologies and disciplines to meet today’s industrial demands.

School of Electrical Engineering and Computer Science

The Pennsylvania State University

207 Electrical Engineering West

University Park, PA 16802

814-863-6740

Department of Computer Science and Engineering

814-865-9505

Department of Electrical Engineering

814-865-7039