This paper presents a novel approach to optimizing urban traffic flow using reinforcement learning (RL). Traditional traffic control systems, such as fixed-timing signals, often fail to adapt to real-time traffic conditions, leading to congestion, delays, and increased fuel consumption. Leveraging real-time data from the New York City Department of Transportation (NYC DOT) and modeling traffic signal control as an RL problem, we hypothesize that RL can outperform fixed-timing control in reducing vehicle wait times, traffic congestion, and fuel consumption. Using the Simulation of Urban MObility (SUMO) platform, we simulate traffic flows and compare the RL controller against a rule-based, fixed-timing baseline. In simulation, the RL controller reduces average queue lengths by 25%, decreases vehicle wait times by 18%, improves traffic throughput by 12%, and lowers fuel consumption by 15%. These results suggest that RL offers a scalable, adaptive solution for managing urban traffic more efficiently.
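The abstract summarizes rather than specifies the RL formulation. To make the setup concrete, the sketch below shows one plausible way to drive a single SUMO-controlled intersection with tabular Q-learning via SUMO's TraCI API. The discretized queue-length state, phase-index actions, negative-queue reward, and the `intersection.sumocfg` config name are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: tabular Q-learning over a single SUMO intersection via TraCI.
# State, action, and reward definitions here are assumptions for illustration.
import random
from collections import defaultdict

import traci  # distributed with SUMO (see SUMO_HOME/tools)

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration


def get_state(tls_id):
    """Discretize per-lane halting-vehicle counts into a small state tuple."""
    lanes = sorted(set(traci.trafficlight.getControlledLanes(tls_id)))
    return tuple(min(traci.lane.getLastStepHaltingNumber(l), 5) for l in lanes)


def queue_length(tls_id):
    """Total halted vehicles on all lanes controlled by this signal."""
    lanes = set(traci.trafficlight.getControlledLanes(tls_id))
    return sum(traci.lane.getLastStepHaltingNumber(l) for l in lanes)


traci.start(["sumo", "-c", "intersection.sumocfg"])  # hypothetical scenario file
tls_id = traci.trafficlight.getIDList()[0]
n_phases = len(traci.trafficlight.getAllProgramLogics(tls_id)[0].phases)
q_table = defaultdict(lambda: [0.0] * n_phases)

state = get_state(tls_id)
while traci.simulation.getMinExpectedNumber() > 0:
    # Epsilon-greedy choice of the next signal phase.
    if random.random() < EPSILON:
        action = random.randrange(n_phases)
    else:
        action = max(range(n_phases), key=lambda a: q_table[state][a])
    traci.trafficlight.setPhase(tls_id, action)
    for _ in range(10):  # hold the chosen phase for 10 simulation steps
        traci.simulationStep()
    reward = -queue_length(tls_id)  # fewer halted vehicles is better
    next_state = get_state(tls_id)
    # Standard Q-learning update.
    q_table[state][action] += ALPHA * (
        reward + GAMMA * max(q_table[next_state]) - q_table[state][action]
    )
    state = next_state
traci.close()
```

Note that jumping directly between phases as above skips yellow interphases; a realistic controller would constrain the action space or insert transition phases, and a larger multi-intersection state space would typically call for function approximation (e.g., a DQN) in place of the table.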