Everpure has announced Evergreen One for AI, a performance-backed consumption model for artificial intelligence (AI) that extends to use of its FlashBlade//Exa high-performance storage. Meanwhile, the company – known as Pure Storage until recently – has announced the beta release of its Datastream automated AI pipeline appliance. 

Evergreen One for AI differs from existing flexible capacity offers in the Everpure range by providing use of FlashBlade//Exa and service-level agreements (SLAs) based on graphics processing unit (GPU) count. The aim is to ensure the storage environment provides enough throughput to keep GPU resources fully utilised.

FlashBlade//Exa, Everpure’s highest-performance platform, was previously excluded from the Evergreen One consumption model.

Exa is aimed at AI and high-performance computing (HPC) workloads that demand extremely high throughput, likely among customers that sit between large enterprise AI users and the hyperscalers.

At its launch, FlashBlade//Exa introduced an architecture to the company’s product line in which metadata and bulk storage are disaggregated, with different hardware and protocols used for each.

Kaycee Lai, vice-president for AI with Everpure, said Evergreen One for AI shifted the financial and operational risk away from the customer. “Specifically, we have an offering which we call Evergreen One for AI,” he said. “The big difference for AI is that we set the performance level of the offering based on the number of GPUs that you have … it is an SLA-backed performance guarantee.”

Evergreen One and Flex are Everpure’s pay-as-you-go procurement models, while Evergreen Forever involves upfront purchase with built-in upgrades.

Automating the RAG pipeline

Everpure also announced the beta availability of Datastream. First previewed in late 2024, Datastream is a “single SKU” appliance that integrates Nvidia GPUs with Everpure storage. It is designed to tackle the “data readiness” challenge, said Lai. This refers to the oft-cited statistic that data teams spend 80% of their time preparing unstructured data for use.

The appliance automates the retrieval-augmented generation (RAG) pipeline, which includes ingest, curation and vectorisation of data. By providing an integrated hardware and software stack, Everpure aims to provide an “easy button” for enterprises building chatbots or autonomous agents, he said.

The software capability behind Datastream was built in-house, though it can connect to third-party data sources including Dell, HP and NetApp environments, as well as cloud-resident data. This flexibility allows the appliance to act as a central hub for AI readiness regardless of where the data lives.

“Today, people run RAG pipelines … they do the chunking, the embedding, the indexing to make sure that the data is going to be accurate and relevant so that chatbot agents can consume them in a specific format,” said Lai. “That takes up about 80% of most data teams’ time because there’s no standard tool.”
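The stages Lai describes — chunking documents, embedding the chunks and indexing them for retrieval — can be sketched in miniature. The code below is purely illustrative and assumes nothing about Datastream’s internals: the function and class names are invented for this sketch, and the hash-derived vector is a toy stand-in for a real embedding model and vector database.

```python
# Minimal sketch of a RAG ingest pipeline: chunk -> embed -> index.
# All names here are illustrative stand-ins, not Everpure or Datastream APIs.
import hashlib
import math


def chunk(text: str, size: int = 50) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(chunk_text: str, dims: int = 8) -> list[float]:
    """Toy stand-in for an embedding model: a hash-derived unit vector.
    A real pipeline would call an embedding model here."""
    digest = hashlib.sha256(chunk_text.encode()).digest()
    vec = [digest[i] - 128.0 for i in range(dims)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two unit vectors is just their dot product."""
    return sum(x * y for x, y in zip(a, b))


class VectorIndex:
    """Toy in-memory vector index using brute-force nearest-neighbour search."""

    def __init__(self) -> None:
        self.entries: list[tuple[list[float], str]] = []

    def add(self, document: str) -> None:
        """Ingest a document: chunk it, embed each chunk, store both."""
        for c in chunk(document):
            self.entries.append((embed(c), c))

    def query(self, question: str, k: int = 2) -> list[str]:
        """Return the k chunks most similar to the question embedding."""
        qv = embed(question)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

In practice the retrieved chunks are then passed to a language model as context — the retrieval-augmented part of RAG — which is the format Lai refers to when he says agents “consume them in a specific format”.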

Underpinning performance

To support these launches, Everpure revealed new benchmarks intended to validate its hardware under AI stress. In MLPerf Storage 2.0 testing, the company claimed the top spot for checkpointing – a critical function for saving the state of a model during long training runs – reporting results up to twice those of competitors such as Huawei and Vast.

The company also cited SPECstorage AI image benchmarks, where it outperformed NetApp’s AFX platform by approximately 20%, said Lai.
