Scalable Reinforcement Learning for Linear-Quadratic Control of Networks

ACC (2024)

Abstract
Distributed optimal control is known to be challenging and can become intractable even for linear-quadratic regulator problems. In this work, we study a special class of such problems where distributed state feedback controllers can give near-optimal performance. More specifically, we consider networked linear-quadratic controllers with decoupled costs and spatially exponentially decaying dynamics. We aim to exploit the structure in the problem to design a scalable reinforcement learning algorithm for learning a distributed controller. Recent work has shown that the optimal controller can be well approximated using only information from a κ-neighborhood of each agent. Motivated by these results, we show that similar results hold for the agents' individual value and Q-functions. We continue by designing an algorithm, based on the actor-critic framework, to learn distributed controllers using only local information. Specifically, the Q-function is estimated by modifying the Least Squares Temporal Difference for Q-functions method to use only local information. The algorithm then updates the policy using gradient descent. Finally, we evaluate the algorithm through simulations that indeed suggest near-optimal performance.
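
The abstract describes two algorithmic components: a critic that estimates each agent's Q-function with a localized Least Squares Temporal Difference (LSTD-Q) step, and an actor that updates the distributed feedback gain by gradient descent. The Python sketch below illustrates this general idea under assumptions that are not taken from the paper: a chain of N scalar agents, tridiagonal dynamics A, identity B, decoupled costs q·x_i² + r·u_i², a discounted objective, and a deterministic-policy-gradient-style gain update. All function names (neighborhood, features, rollout, lstdq_local, dQ_du) and numerical values are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed problem instance: N agents on a chain, spatially decaying (tridiagonal)
# dynamics, decoupled per-agent quadratic costs, discounted objective.
N, kappa = 20, 2
A = 0.4 * np.eye(N) + 0.2 * np.eye(N, k=1) + 0.2 * np.eye(N, k=-1)
B = np.eye(N)
q, r, gamma = 1.0, 1.0, 0.95

def neighborhood(i):
    """Indices of agent i's kappa-hop neighborhood on the chain."""
    return list(range(max(0, i - kappa), min(N, i + kappa + 1)))

def features(x, u, idx):
    """Quadratic features of the local state/input vector, plus a constant term."""
    z = np.concatenate([x[idx], u[idx]])
    return np.concatenate([np.outer(z, z)[np.triu_indices(len(z))], [1.0]])

def rollout(K, T=2000, sigma=0.1):
    """Simulate the closed loop u = -K x + exploration noise; return the trajectory."""
    xs, us, cs = [], [], []
    x = np.zeros(N)
    for _ in range(T):
        u = -K @ x + sigma * rng.standard_normal(N)
        xs.append(x); us.append(u)
        cs.append(q * x**2 + r * u**2)               # decoupled per-agent stage costs
        x = A @ x + B @ u + 0.01 * rng.standard_normal(N)
    return np.array(xs), np.array(us), np.array(cs)

def lstdq_local(i, K, xs, us, cs):
    """LSTD-Q estimate of agent i's Q_i, using only kappa-neighborhood information."""
    idx = neighborhood(i)
    d = len(features(xs[0], us[0], idx))
    M, b = np.zeros((d, d)), np.zeros(d)
    for t in range(len(xs) - 1):
        phi = features(xs[t], us[t], idx)
        phi_next = features(xs[t + 1], -K @ xs[t + 1], idx)   # on-policy next action
        M += np.outer(phi, phi - gamma * phi_next)
        b += phi * cs[t, i]
    return np.linalg.solve(M + 1e-6 * np.eye(d), b)           # local Q_i weights

def dQ_du(i, theta_i, x, u):
    """Gradient of the fitted quadratic Q_i with respect to the local inputs."""
    idx = neighborhood(i)
    z = np.concatenate([x[idx], u[idx]])
    n = len(z)
    H = np.zeros((n, n))
    H[np.triu_indices(n)] = theta_i[:-1]
    H = 0.5 * (H + H.T)                  # same quadratic form z^T H z, symmetrised
    return (2 * H @ z)[len(idx):]        # derivative w.r.t. the u-part of z

# One actor-critic iteration: critic step (local LSTD-Q), then a gradient step on K.
K = 0.1 * np.eye(N)                      # initial kappa-banded gain (here diagonal)
xs, us, cs = rollout(K)
theta = [lstdq_local(i, K, xs, us, cs) for i in range(N)]

eta, grad, samples = 1e-3, np.zeros_like(K), range(0, len(xs), 50)
for t in samples:                        # subsample states for the gradient estimate
    x = xs[t]
    u = -K @ x
    for i in range(N):
        g = dQ_du(i, theta[i], x, u)     # dQ_i/du_j for each j in i's neighborhood
        for j_loc, j in enumerate(neighborhood(i)):
            for m in neighborhood(j):    # gain row j only touches j's neighborhood
                grad[j, m] += -g[j_loc] * x[m]    # chain rule: u_j = -K[j, :] @ x
K -= eta * grad / len(samples)           # gradient-descent update of the local gains
```

The locality exploited here mirrors the abstract's claim: lstdq_local builds its features only from states and inputs inside agent i's κ-neighborhood, and each row of the gain is updated using only Q-function estimates of nearby agents, so no agent needs the global state.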
Keywords
Scalable, Linear Quadratic Gaussian, Learning Algorithms, Gradient Descent, Local Information, Optimal Control, Distributed Control, Reinforcement Learning Algorithm, Linear Quadratic Regulator, Near-optimal Performance, Value Function, Undirected, Global State, Local State, Symmetric Matrix, Closed-loop System, Current Control, System Matrix, Neighborhood Size, Distributed Algorithm, Spatial Decay, Network Of Agents, Distributed Learning, Lyapunov Equation, Trajectory Length, Graph Metrics, Policy Gradient, Global Cost