As computing technology and data algorithms have advanced over the years, the ways in which technology is applied to real-world challenges have grown more automated and autonomous. This has given rise to a new class of computing workloads: machine learning, which drives artificial intelligence applications (AI / ML).
AI / ML can be applied across a broad spectrum of applications and industries. Financial analysis with real-time analytics is used for predicting investments and drives the FinTech industry's need for high performance computing. Real-time image recognition is a key enabler for self-driving vehicles, while facial recognition is used by law enforcement across the globe. Manufacturers use image recognition to spot defects in materials, organizations such as NOAA use satellite imagery to spot changes in weather, and social media platforms use image recognition to tag photos of friends and family.
What is common among these use cases is the need for a high level of parallel computing power, coupled with a high-performance, low-latency architecture that enables parallel processing of data in real time across the compute cluster. The "training" phase of machine learning is critical and can take an excessively long time, especially as training data sets grow to the sizes needed for deep learning.
With storage performance now recognized as a critical component of AI/ML application performance, the next step is to identify the ideal storage platform. Non-Volatile Memory Express (NVMe) based storage systems have gained traction as the storage media of choice to deliver the best throughput and latency. Shared NVMe storage systems unlock the performance of NVMe, and offer a strong alternative to using local NVMe SSDs inside of GPU nodes.
The Rise of GPUs for AI / ML
GPUs were originally created for high performance image rendering, and are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them much more efficient than general purpose CPUs for algorithms that process large blocks of data in parallel. For this reason, GPUs have found strong adoption in the AI / ML use case: they allow for a high degree of parallel computing, and current AI-focused applications have been optimized to run on GPU-based computing clusters.
With the powerful compute performance of GPUs, the bottleneck moves to other areas of the AI / ML architecture. For example, the volume of data required to feed machine learning requires massive parallel read access to shared files from the storage subsystem across all nodes in the GPU cluster. This creates a performance challenge that NVMe shared storage systems are ideally suited to address.
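The access pattern described above, many workers pulling disjoint ranges of the same shared training files at once, can be sketched in a few lines. This is an illustrative simulation only: the file, chunk size, and worker count are made up, and in a real cluster the readers would be data-loader processes on separate GPU nodes hitting shared storage.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20   # 1 MiB per read (assumed size, for illustration)
NUM_WORKERS = 8   # stand-ins for data loaders across GPU nodes

def read_chunk(path, offset):
    """Read one chunk at the given byte offset (pread-style access)."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK)

# Create a dummy 8 MiB "training shard" standing in for a shared file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(CHUNK * NUM_WORKERS))
    path = tmp.name

# All workers read their own disjoint range of the shard concurrently,
# the pattern a GPU cluster imposes on the storage subsystem.
with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    offsets = [i * CHUNK for i in range(NUM_WORKERS)]
    chunks = list(pool.map(lambda off: read_chunk(path, off), offsets))

print(sum(len(c) for c in chunks))  # total bytes read in parallel
os.remove(path)
```

The point of the sketch is the shape of the workload: reads are concurrent, independent, and latency-sensitive, which is why the storage tier rather than the GPUs becomes the limiting factor.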
Shared NVMe Storage for High Performance Machine Learning (ML)
One of the benefits of shared NVMe storage is the ability to train even deeper neural networks, thanks to the inherent high performance of shared storage, opening the door to future models that cannot be achieved today with non-shared NVMe storage solutions.
Today, there are storage solutions that offer patented architectures built from the ground up to leverage NVMe. The key to performance and scalability is the separation of control and data path operations between the storage controller software and the host-side agents. The storage controller software provides centralized control and management, while the agents handle data path operations with direct access to shared storage volumes.
While AI / ML workloads run exclusively on the GPUs within the cluster, that doesn't mean CPUs have been eliminated from GPU clusters entirely. The operating system and drivers still use the CPUs, but while machine learning training is in progress, the CPU is relatively idle. This provides the perfect opportunity for an NVMe based storage architecture to leverage the idle CPU capacity for a high performance distributed storage approach.
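One way an agent can claim that idle capacity without competing with training threads is CPU affinity. The minimal Linux-only sketch below pins the current process (standing in for a storage agent) to a reserved core; the core set is an assumption for illustration, and a real agent would choose cores based on NUMA topology and what the ML framework's data loaders are using.

```python
import os

# Hypothetical core reservation: core 0 is assumed idle during training.
AGENT_CORES = {0}

# Pin the current process (pid 0 means "self") to the reserved core,
# leaving the remaining cores free for the OS and GPU driver threads.
os.sched_setaffinity(0, AGENT_CORES)

print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

The same mechanism is exposed to service managers (e.g. systemd's `CPUAffinity=`), so the pinning can also be done declaratively rather than in code.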
With the NVMe protocol supporting far more parallel queues and connections per SSD than legacy storage protocols, the storage agents use RDMA to give each GPU node a direct connection to the drives. This approach enables the agents to perform up to 90% of data path operations between the GPU nodes and storage, reducing latency to be on par with local SSDs.
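On Linux, this kind of direct RDMA connection from a GPU node to shared NVMe namespaces is typically established with the standard `nvme-cli` tool (NVMe over Fabrics). The commands below are a hedged sketch: the IP address, port, and subsystem NQN are placeholders, and a vendor's host agent may wrap or replace these steps.

```
# Discover NVMe-oF targets exposed by the storage system
# (address/port are placeholders for the controller's RDMA interface).
nvme discover -t rdma -a 192.168.1.100 -s 4420

# Connect this GPU node directly to a shared subsystem over RDMA;
# the NQN below is a placeholder.
nvme connect -t rdma -a 192.168.1.100 -s 4420 \
     -n nqn.2014-08.org.nvmexpress:example-subsystem

# The shared volume now appears as a local block device (e.g. /dev/nvme1n1).
nvme list
```

Once connected, the namespace behaves like a local NVMe drive from the application's point of view, which is what lets shared storage approach local-SSD latency.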
In this scenario, running the NVMe based storage agent on the idle CPU cores of the GPU nodes enables the NVMe based storage to deliver 10x better performance than competing all-flash solutions, while leveraging existing compute resources that are already installed and available to use.
Read Part 2: Local versus Shared Storage for Artificial Intelligence (AI) and Machine Learning (ML)