Multi-Agent Reinforcement Learning for Carbon Neutrality in Urban Energy Grids

Author(s): Aris Thorne, Sarah J. Miller, Chen Wei

Affiliations: Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Civil and Environmental Engineering, University of Pittsburgh, Pittsburgh, PA, USA

Page No: 23-25

Volume issue & Publishing Year: Volume 3, Issue 2, Feb 2026

Published on: 2026/02/21

Journal: International Journal of Advanced Multidisciplinary Application (IJAMA)

ISSN NO: 3048-9350

DOI: https://doi.org/10.5281/zenodo.18730000

Abstract:
The integration of intermittent renewable energy sources (RES) into aging urban power grids presents a significant barrier to achieving 2030 carbon neutrality goals. Traditional supervisory control and data acquisition (SCADA) systems are increasingly unable to manage the bidirectional energy flows introduced by residential solar arrays and electric vehicle (EV) charging stations. This paper proposes a decentralized control architecture using Multi-Agent Reinforcement Learning (MARL) to optimize grid stability and minimize carbon intensity. We introduce the "Nexus-Alpha" algorithm, which empowers local substations to operate as autonomous agents that negotiate energy distribution based on real-time carbon pricing and demand forecasting. Using a high-fidelity simulation calibrated with PJM Interconnection utility data from 2025, our model achieved a 14.2% reduction in peak-load emissions and an 8.5% improvement in voltage regulation. This research provides a scalable framework for transitioning to self-organizing smart grids.
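The decentralized control idea described above can be illustrated with a toy sketch: each substation runs an independent tabular Q-learning agent whose reward penalizes both unmet demand and carbon-priced dispatch. All names, the reward shape, and the discretized state space here are illustrative assumptions for exposition, not the paper's Nexus-Alpha implementation.

```python
import random

class SubstationAgent:
    """Toy independent Q-learning agent standing in for one substation.
    Hypothetical simplification: the real algorithm's negotiation and
    forecasting components are omitted."""
    def __init__(self, actions=(0, 1, 2), alpha=0.1, gamma=0.9, eps=0.2):
        self.q = {}  # maps (state, action) -> estimated value
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy exploration over discrete dispatch levels.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, s, a, r, s2):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((s2, a2), 0.0) for a2 in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)

def carbon_reward(dispatch, demand, carbon_price):
    """Negative cost: heavily penalize unmet demand, lightly penalize
    carbon-priced generation, so agents learn to match demand cheaply."""
    unmet = max(demand - dispatch, 0)
    return -(unmet * 10.0 + dispatch * carbon_price)

random.seed(0)
agents = [SubstationAgent() for _ in range(3)]
for episode in range(500):
    demand = random.choice((0, 1, 2))        # discretized local demand
    price = random.choice((0.1, 0.5, 1.0))   # real-time carbon price signal
    state = (demand, price)
    for ag in agents:
        a = ag.act(state)
        r = carbon_reward(a, demand, price)
        ag.learn(state, a, r, state)  # toy stationary "next state"
```

After training, each agent's greedy policy matches dispatch to demand while avoiding over-generation when the carbon price is high, which is the qualitative behavior the abstract's carbon-pricing signal is meant to induce.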

Keywords: Decentralized Energy Systems; Multi-Agent Reinforcement Learning; Carbon Neutrality; Smart City Infrastructure; Grid Resilience; Load Balancing.

References:

  1. Albrecht, S. V., Wei, C., & Holt, M. V. (2024). Multi-agent reinforcement learning for energy networks: Computational challenges and progress. Journal of Artificial Intelligence Research, 72(4), 1102-1125.
  2. Miller, S. J., & Thorne, A. (2025). Empirically informed multi-agent simulation of distributed energy resource adoption. IEEE Transactions on Smart Grid, 16(2), 450-462.
  3. Rodriguez, E., & Miller, S. J. (2025). Reinforcement learning-based energy management in community microgrids: A comparative study. Sustainability, 17(23), 10696.
  4. Wei, C., & Rodriguez, E. (2024). Local strategy-driven MADDPG for demand-side energy management. Energies, 17(20), 5211.
  5. Thorne, A., & Holt, M. V. (2026). The impact of agentic AI on urban grid resilience. International Journal of Electrical Power & Energy Systems, 145, 108632.