Exact verification of deep neural networks (DNNs) is critical for safety- and reliability-sensitive manufacturing and supply chain applications, including automated defect detection in assembly lines, predictive maintenance scheduling, and digital twins. In such contexts, verification must guarantee that models behave predictably under all admissible inputs, minimizing costly downtime and defective output. This study presents a large-scale, controlled experiment that quantifies how architectural and training factors influence end-to-end verification time using Mixed-Integer Linear Programming (MILP). We trained 500 feed-forward neural networks under a full-factorial design spanning four factors: hidden layer count, neurons per layer, regularization method, and number of training epochs, with each configuration replicated across five random seeds. For each trained model, we measured compute times across the complete verification pipeline: model training, verification model construction, and MILP solving. Factor contributions and interactions were isolated using Linear Mixed-Effects (LME) and Logistic Regression (LR) models, enabling precise attribution of verification cost drivers. Two leading LP-based bound-propagation formulations were benchmarked, with the compact Δ-relaxation approach consistently outperforming its counterpart. We further quantified the advantages of LP-based bounds over interval arithmetic, showing that the tighter bounds improve neuron stability and reduce MILP solving complexity. Finally, a new MILP solving strategy reduced the average verification timeout rate from 59% to 5%, offering a path toward predictable and scalable verification in industrial AI deployments.