Industry Landscape: AI and ML
AI and ML have come to fill a central role in business operations for organizations in a variety of industries. Whether it’s a bank monitoring for fraud, a retailer projecting sales, a hospital striving for more accurate diagnoses, or a mid-size manufacturer implementing predictive maintenance on its assembly lines, organizations of all types and sizes rely on AI to tease out patterns that might be invisible to people. AI models in general, and deep learning (DL) models in particular, typically become more accurate when more data is available to train them. This appetite for data drives the need for larger, more capable storage, faster networking, and more performant servers to find more value in data, and that data must also be kept secure.
Businesses often rely on on-premises servers rather than cloud implementations for a variety of reasons. For AI and ML, these reasons often center on data gravity and latency. It is often faster and easier to bring AI training functions closer to the data than to bear the cost and time of moving large amounts of data to centralized compute. Working with data close to where it resides also reduces latency during AI training, which can shorten training cycles. Regulatory requirements and data-sovereignty laws can be further compelling reasons to keep data on premises, depending on an organization’s industry and location. In all cases, performance is a core requirement for businesses when dealing with their data and tapping the analytical value that data contains through AI and ML.
The performance demands of workloads such as analytics mean that the data infrastructure must be tuned to meet service-level agreements (SLAs). The interplay of processor, memory capacity, network bandwidth, and storage subsystems is critical. One prominent tool for comparing server performance across this interplay is benchmark results. Because benchmarks produce numeric results, comparisons between competing systems can seem straightforward.