Meeting the Storage Needs of AI

How AI is changing the storage consumption landscape

When it comes to storage for AI applications, the key issue isn't that these apps consume more storage than other applications — they don't. The key issue is that they consume storage differently. With AI applications, data moves constantly from storage into AI processing via I/O, and it also moves between different storage systems and media at different points in its lifecycle.
I/O performance is primarily a matter of throughput, regardless of the type of storage system or media the data resides on. AI's three modes — machine learning, deep learning and neural networks — each ingest and process data differently and, therefore, have distinctive I/O requirements. A look at each reveals how AI applications are changing storage consumption.

Speed is key to storage consumption

Machine learning, the most common form of AI, can potentially use millions of data points to make predictions or decisions based on human-derived algorithms. The accuracy of an AI app's outcomes is tied to the number of data points it can ingest within a specified timeframe: more data points lead to more accurate predictions and decisions.
Time is the limiting factor: If a prediction or decision is required in "n" milliseconds, the speed at which the machine learning algorithm can ingest and examine data points determines the quality of the outcome. GPUs and high-performance computing have eliminated most processing bottlenecks, which leaves storage I/O as the component that must keep pace with the machine learning algorithm.
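The relationship between storage throughput and outcome quality can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only — the function name and the throughput, record-size and deadline figures are hypothetical, not from any benchmark:

```python
# Hypothetical sketch: how many data points can storage deliver
# within a fixed decision deadline, given sustained throughput?

def max_points_within_deadline(throughput_mb_s: float,
                               point_size_bytes: int,
                               deadline_ms: float) -> int:
    """Number of fixed-size records readable within the deadline."""
    # Bytes the storage system can serve in the time budget.
    bytes_available = throughput_mb_s * 1_000_000 * (deadline_ms / 1000)
    return int(bytes_available // point_size_bytes)

# Example: 500 MB/s storage, 256-byte records, 50 ms decision budget.
print(max_points_within_deadline(500, 256, 50))  # 97656 records
```

Doubling storage throughput doubles the data points available per decision, which is why faster storage translates directly into better outcomes for time-bounded inference.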
Deep learning applications draw on millions, or even billions, of data points, and they make multiple passes over the same data, which exacerbates the I/O bottleneck problem.
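The effect of those repeated passes on storage demand is easy to quantify. In this hedged sketch, the function and the dataset, pass-count and throughput figures are assumed for illustration — total read volume scales linearly with the number of passes:

```python
# Hypothetical sketch: multiple training passes (epochs) over the
# same dataset multiply the total bytes storage must serve.

def total_read_time_s(dataset_gb: float, passes: int,
                      throughput_mb_s: float) -> float:
    """Wall-clock time spent purely reading data across all passes."""
    total_mb = dataset_gb * 1000 * passes  # total volume to read, in MB
    return total_mb / throughput_mb_s

# Example: a 2 TB dataset, 10 training passes, 2 GB/s storage.
print(total_read_time_s(2000, 10, 2000))  # 10000.0 seconds, ~2.8 hours
```

Ten passes turn a 2 TB dataset into 20 TB of I/O, so even fast storage spends hours doing nothing but reads — the bottleneck the article describes.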
