Introduction
The proliferation of private vehicles in metropolitan areas such as London, Hong Kong, and Kuala Lumpur has created significant challenges, including severe traffic congestion and high demand for parking spaces. A 2017 survey found that drivers in some urban areas spend about 25 minutes daily searching for parking, increasing fuel consumption, carbon emissions, and traffic jams. This paper addresses these issues by proposing a dynamic pricing strategy integrated with real-time parking data. Dynamic pricing, which adjusts prices in response to market demand, offers a potential way to manage traffic flow by charging higher fees during peak hours and lower fees during off-peak hours. The proposed deep reinforcement learning-based dynamic pricing (DRL-DP) model learns optimal pricing strategies through continuous interaction with a simulated parking environment, with the objective of balancing profit maximization for parking vendors against parking space utilization and traffic congestion mitigation.
Literature Review
The literature review examines existing research on dynamic pricing and smart parking solutions. Dynamic pricing has had a significant impact on e-commerce, where vast data on user behavior is available; approaches explored include time-based pricing, market segmentation, and dynamic marketing, as well as data mining and multi-agent reinforcement learning. The review also covers reinforcement learning for dynamic pricing in scenarios with competing vendors and price-sensitive customers, and game theory models for computing dynamic prices under different conditions, highlighting a gap in the integration of dynamic pricing with AI techniques beyond e-commerce platforms. On smart parking, the review covers technologies such as sensors and e-payment systems for real-time data collection, as well as intelligent parking systems that combine dynamic resource allocation, reservation systems, and crowdsourcing to improve parking efficiency and the driver experience. Prior work focuses on real-time algorithms for reservations or price regulation, whereas this research focuses on smoothing vehicle arrival rates across peak and off-peak hours.
Methodology
The proposed DRL-DP model trains a dynamic pricing mechanism through reinforcement learning. It simulates a parking market in which parking vendors (a player and its opponents) compete for customers, with rewards based on the number of vehicles parked and the revenue generated; the deep learning agent learns to maximize these rewards by adjusting the discount applied to parking fees. The environment is modeled as a Markov Decision Process (MDP): the state captures the time step and parking occupancy, the action is the discount value, and the reward reflects the vehicle arrival rate, parking revenue, and vehicle flow regulation results.

The model is driven by real-world data on vehicle arrival rates and parking occupancy from two locations in Kuala Lumpur, Malaysia. Data preprocessing extracts vehicle arrival rates and stay times and classifies visitors as short-term, medium-term, or long-term. SARIMAX (Seasonal Autoregressive Integrated Moving Average with exogenous factors) is used to forecast future vehicle arrival rates for the simulation.

The DRL-DP model comprises modules for the parking operator, the driver, a grid system representing the geographical area, a pricing engine, the arrival process, the reward computation, vehicle flow regulation (which uses the Simple Moving Average method to smooth out peak-hour demand), and a decision maker. The driver entity's decisions consider factors such as distance to destination, price, and parking occupancy, which in turn shape the rewards returned at each time step. The deep learning agent employs a deep neural network (DNN) to learn from environmental observations and actions, and a Q-learning algorithm updates the Q-table, balancing exploration and exploitation with an epsilon-greedy policy over 3000 episodes of weekly simulations. The reward function combines three objectives: maximizing parking occupancy, increasing parking revenue, and regulating vehicle flow (see the sketches below).
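To make the forecasting step concrete, the following is a minimal sketch of fitting SARIMAX to an hourly arrival-rate series and deriving a Simple Moving Average flow target. The file name, column names, model orders, and window sizes here are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: forecasting hourly vehicle arrival rates with SARIMAX.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical log of hourly vehicle arrivals at one car park.
arrivals = pd.read_csv("arrivals.csv", parse_dates=["timestamp"],
                       index_col="timestamp")["vehicles_per_hour"]

# A 24-hour seasonal period captures the daily peak/off-peak cycle
# (the orders are placeholder choices, not the paper's).
model = SARIMAX(arrivals, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fitted = model.fit(disp=False)

# Forecast the next week of hourly arrival rates for the RL simulation.
forecast = fitted.forecast(steps=24 * 7)

# Simple Moving Average over an assumed 3-hour window; the smoothed
# series stands in for the peak-hour demand target that the vehicle
# flow regulation module aims at.
sma_target = forecast.rolling(window=3, min_periods=1).mean()
```

The paper describes SMA as the method used to smooth peak-hour demand; here the smoothed forecast simply serves as a stand-in flow-regulation target for the simulation.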
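Likewise, here is a compact sketch of the epsilon-greedy Q-learning loop over weekly episodes. The paper's agent uses a deep neural network, but the update rule matches this tabular version; the state discretization, discount grid, reward weights, and the `simulate_step` placeholder dynamics are all hypothetical stand-ins for the paper's parking-market simulator.

```python
# Minimal tabular sketch of Q-learning with an epsilon-greedy policy.
# All environment dynamics below are placeholders, not the paper's model.
import numpy as np

N_STEPS = 24 * 7                       # one episode = one simulated week
OCC_BINS = 10                          # occupancy discretized into 10 bins
ACTIONS = np.arange(0.0, 0.55, 0.05)   # assumed discount grid: 0% to 50%

alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((N_STEPS, OCC_BINS, len(ACTIONS)))

def unified_reward(occupancy, revenue, flow_error, w=(1.0, 1.0, 1.0)):
    # Assumed weighted combination of the three objectives: occupancy,
    # revenue, and closeness to the smoothed flow target.
    return w[0] * occupancy + w[1] * revenue - w[2] * abs(flow_error)

def simulate_step(t, occ, discount):
    # Hypothetical stand-in for the parking-market simulator: deeper
    # discounts attract more arrivals but reduce per-vehicle revenue.
    arrivals = np.random.poisson(5 + 20 * discount)
    occ2 = int(min(OCC_BINS - 1, max(0, occ + (arrivals - 5) // 3)))
    revenue = arrivals * (1.0 - discount)
    flow_error = arrivals - 5          # deviation from a flat flow target
    return t + 1, occ2, occ2 / OCC_BINS, revenue, flow_error

for episode in range(3000):            # 3000 weekly training episodes
    t, occ = 0, np.random.randint(OCC_BINS)
    while t < N_STEPS - 1:
        # Epsilon-greedy: explore a random discount, else exploit.
        if np.random.rand() < epsilon:
            a = np.random.randint(len(ACTIONS))
        else:
            a = int(np.argmax(Q[t, occ]))
        t2, occ2, occupancy, revenue, flow_err = simulate_step(t, occ, ACTIONS[a])
        r = unified_reward(occupancy, revenue, flow_err)
        # Standard Q-learning update toward the bootstrapped target.
        Q[t, occ, a] += alpha * (r + gamma * Q[t2, occ2].max() - Q[t, occ, a])
        t, occ = t2, occ2
```

Varying the bounds of `ACTIONS` mirrors the action-space experiment reported in the findings: a wider grid permits broader exploration but produces larger reward swings.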
Key Findings
The experiments evaluated the DRL-DP model under four reward approaches: occupancy, revenue, vehicle flow regulation, and a unified approach combining all three. Maximizing occupancy alone proved ineffective because it neglects revenue. The revenue approach yielded better results, with higher rewards during training, except where a low-priced pricing policy was combined with an action space that left little room for price increases. The vehicle flow regulation approach struggled to achieve higher rewards when large disparities in competitive power existed between parking vendors. The unified approach, although it did not achieve high accuracy owing to the interconnectedness of the three reward types, still reached higher rewards over the training period.

Overall, the results demonstrate that dynamic pricing can significantly improve parking revenue. The choice of action space strongly influenced the outcome: a wider range of price adjustments led to larger reward variations, while a narrower range provided stability. Optimal rewards varied with the pricing schemes of the player and its opponents, indicating that competitive power plays a critical role. A low-priced pricing policy requires room for price increases to grow revenue; relying solely on discounts, with no possibility of price increases, hinders revenue growth. Finally, the SARIMAX model accurately forecasted vehicle arrival rates for the RL simulation.
Discussion
The findings suggest that a deep reinforcement learning-based dynamic pricing strategy can effectively optimize parking utilization and revenue. Performance varied across reward functions, highlighting the trade-off between occupancy, revenue, and traffic flow regulation. The experiments showed that the system can learn optimal pricing strategies under various scenarios, but its effectiveness depends on the competitive landscape and the flexibility of the pricing policy. A wider action space enabled broader exploration and potentially better outcomes, while the choice of reward function directly shapes the optimization goal and the model's learned behavior. These results showcase the potential of AI-driven dynamic pricing for addressing real-world parking challenges.
Conclusion
This study demonstrates a dynamic pricing mechanism for the parking industry using deep reinforcement learning. The DRL-DP model effectively optimizes parking revenue and utilization. Future work should address the impact of on-street parking and building type on driver preferences, incorporate more sophisticated traffic models, and expand the scope to cover different parking area types and driver demographics.
Limitations
The study focuses on off-street parking and neglects the influence of on-street parking availability. The model's performance also depends on the accuracy of the SARIMAX forecasts, and its simplified driver behavior may not fully capture the complexity of real-world decision-making. Building type and the specific character of the parking area (e.g., tourism, residential, office) were not explicitly incorporated into the model, although they could influence driver behavior and thus the effectiveness of the dynamic pricing strategy. Further research should consider these factors to improve accuracy and generalizability.