Abstract
The demand for energy-efficient computing continues to rise as embedded systems dominate modern applications. Traditional performance metrics such as time and space complexity fail to capture the cost of computation when energy is the primary constraint. This work investigates energy-aware data structures by evaluating insertion, search, and deletion operations in terms of both latency and energy consumption. Benchmarks are performed on common structures (hash maps, balanced trees, and skip lists) on a Linux platform across varied workloads and dataset sizes. Energy consumption is measured using Running Average Power Limit (RAPL) counters and system-level profiling tools, with operational traces capturing CPU frequency, cache misses, memory usage, and I/O. The resulting dataset maps workload and system state to energy and latency outcomes. Lightweight machine learning models are trained to predict the most energy-efficient data structure and configuration for a given operation mix and dataset size, and to identify thresholds at which a structure becomes suboptimal so that an alternative can be chosen at the outset. Discrete mathematics underpins the characterization of these data structures, enabling rigorous analysis of algorithmic properties and providing a theoretical foundation for interpreting empirical trends. Results demonstrate that energy consumption frequently diverges from execution time, with cache locality, branch prediction, and memory access patterns largely determining the observed differences. Integrating machine learning enables predictive guidance in data structure selection, reducing energy consumption while maintaining acceptable performance at low overhead.
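As a minimal illustration of the RAPL-based measurement approach summarized above, the sketch below reads the Linux powercap energy counter around a workload and reports joules and seconds. The `intel-rapl:0` domain path and the dictionary-insertion workload are illustrative assumptions, not the paper's actual benchmark harness; reading `energy_uj` typically requires root privileges.

```python
# Minimal sketch: per-workload energy measurement via the Linux powercap
# (RAPL) sysfs interface. Assumes an Intel CPU exposing the package domain
# at /sys/class/powercap/intel-rapl:0 (an assumption for illustration).
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_uj(path):
    # energy_uj and max_energy_range_uj hold plain integers in microjoules
    with open(path) as f:
        return int(f.read())

def measure(workload):
    max_uj = read_uj(f"{RAPL}/max_energy_range_uj")  # counter wrap point
    e0 = read_uj(f"{RAPL}/energy_uj")
    t0 = time.perf_counter()
    workload()
    dt = time.perf_counter() - t0
    e1 = read_uj(f"{RAPL}/energy_uj")
    duj = (e1 - e0) % (max_uj + 1)  # handle counter wraparound
    return duj / 1e6, dt            # joules, seconds

if __name__ == "__main__":
    d = {}
    # Hypothetical workload: one million hash-map insertions
    joules, secs = measure(lambda: [d.__setitem__(i, i) for i in range(10**6)])
    print(f"energy: {joules:.3f} J, time: {secs:.3f} s")
```

In practice, such a reading covers the whole package rather than a single process, so the study's methodology of repeated runs, controlled system state, and accompanying traces (CPU frequency, cache misses, memory usage, I/O) is what makes per-structure comparisons meaningful.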