Abstract:
Tower cranes are critical vertical-transport equipment on large-scale construction sites, and the rationality of their layout directly affects construction efficiency, cost control, and operational safety. Traditional layout methods rely heavily on manual experience or heuristic algorithms, making it difficult to reach globally optimal solutions under multiple constraints. To address this issue, this paper proposes a deep reinforcement learning-based approach for cooperative multi-tower-crane layout optimization. The layout process is modeled as a Markov Decision Process (MDP), and the Proximal Policy Optimization (PPO) algorithm is employed to establish an interaction mechanism between the agent and a simulated environment. A multi-objective reward function integrating key factors such as crane coverage, equipment cost, and collision risk guides the agent toward rapid, adaptive optimization of crane positions. Experimental results on three typical simulation scenarios show that the proposed method generates high-quality, collision-free layout schemes with operational coverage exceeding 98% in all cases, demonstrating strong optimization performance and robustness. This study offers a new approach to crane scheduling and spatial planning in construction projects, with clear practical value for engineering application.
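To make the multi-objective reward concrete, the sketch below shows one plausible way to scalarize coverage, equipment cost, and collision risk into a single reward signal, as the abstract describes. The function name, field semantics, and weights are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical multi-objective reward for a candidate crane layout.
# Weights w_cov, w_cost, w_col are assumed values for illustration only;
# the paper's actual reward design may differ.

def layout_reward(coverage, cost, collisions,
                  w_cov=1.0, w_cost=0.3, w_col=5.0):
    """Scalar reward for a candidate multi-crane layout.

    coverage   -- fraction of demand points reachable by at least one crane (0..1)
    cost       -- normalized total equipment cost (0..1)
    collisions -- number of pairwise jib-overlap conflicts (heavily penalized)
    """
    return w_cov * coverage - w_cost * cost - w_col * collisions

# A collision-free, high-coverage, moderate-cost layout scores higher
# than a cheaper layout with lower coverage and two conflicts.
good = layout_reward(coverage=0.99, cost=0.4, collisions=0)
bad = layout_reward(coverage=0.80, cost=0.2, collisions=2)
```

Under such a design, the large collision weight steers the PPO agent toward collision-free layouts first, after which the coverage and cost terms trade off placement quality.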