Disaggregated NVMe Scratch Pad: Breaking the GPU Memory Barrier

Corespan’s disaggregated NVMe scratch pad creates a shared, high-performance storage tier that extends GPU memory, enabling scalable AI workloads with better utilization and predictable performance.

Corespan Team

In our previous post, we explored the conceptual benefits of using a disaggregated NVMe scratchpad to offload GPU vRAM. This article takes the next step and looks at what enterprise architects actually need to see: how the hardware behaves under load, how it scales, and how it keeps data moving without bottlenecks.

The Benefits of Using SSD/NVMe as a GPU Scratchpad

Using SSDs as a scratchpad or cache layer provides several advantages for data-heavy applications like AI training and scientific simulations.

At a glance: 10 PCIe slots per chassis · 8 drives per Gen5 x16 slot · 4 RAID arrays per PCIe slot.
1. Massive capacity: NVMe drives offer terabytes of space for datasets far larger than the GPU's internal memory can hold.
2. Direct data path: Technologies like GPUDirect Storage move data directly between NVMe storage and GPU memory with less CPU overhead.
3. Reduced vRAM bottlenecks: Use NVMe as a secondary cache for model weights or large assets to effectively extend GPU working memory.
4. Extended memory for large models: Run uncompressed models that exceed available vRAM by treating NVMe SSDs as an extension of the memory pool.
5. Improved drive lifespan: Offload frequent temporary writes to a dedicated scratch SSD instead of wearing out the primary system drive.
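To make the secondary-cache idea above concrete, here is a minimal, illustrative Python sketch of a two-tier buffer cache: a fixed "vRAM" byte budget in memory, with least-recently-used buffers spilled to an NVMe-backed scratch directory and faulted back in on access. This is a toy model of the tiering concept, not Corespan's implementation — real systems use technologies like GPUDirect Storage at the hardware level, and every name here is hypothetical.

```python
import os
import tempfile
from collections import OrderedDict

class ScratchpadCache:
    """Toy two-tier cache: a fixed 'vRAM' byte budget in memory,
    with evicted buffers spilled to an NVMe-backed scratch directory."""

    def __init__(self, vram_budget, scratch_dir):
        self.vram_budget = vram_budget
        self.scratch_dir = scratch_dir
        self.hot = OrderedDict()   # key -> bytes held "in vRAM"
        self.used = 0

    def put(self, key, data):
        # Evict least-recently-used buffers to the scratch tier
        # until the new buffer fits inside the budget.
        while self.hot and self.used + len(data) > self.vram_budget:
            old_key, old_data = self.hot.popitem(last=False)
            self.used -= len(old_data)
            with open(os.path.join(self.scratch_dir, old_key), "wb") as f:
                f.write(old_data)
        self.hot[key] = data
        self.used += len(data)

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)      # mark as recently used
            return self.hot[key]
        # Miss: fault the buffer back in from the scratch tier.
        path = os.path.join(self.scratch_dir, key)
        with open(path, "rb") as f:
            data = f.read()
        os.remove(path)
        self.put(key, data)
        return data
```

With an 8-byte budget, inserting three 4-byte buffers spills the oldest to disk, and reading it back transparently evicts another — the same pressure/spill behavior a scratchpad tier provides at scale.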

But concepts only get you so far. Enterprise architects and infrastructure leaders need to know how the hardware actually performs. Corespan focuses on measurable performance rather than marketing claims, so the questions that matter are serviceability, scalability, throughput, and workload control.

Zero-Downtime Serviceability for Maximum Cluster Uptime

When training large-scale AI models, taking down a compute node or an entire cluster for routine storage maintenance is an unacceptable disruption. Your infrastructure has to support continuous operation.

  • Hot-swappable architecture: Corespan supports hot-swapping at both the individual SSD/NVMe drive level and the PCIe RAID card level.
  • Continuous access: Administrators can access all NVMe and SSD drives without incurring system downtime.
  • Intelligent recognition: Dynamic enumeration ensures newly introduced hardware is recognized and provisioned into the shared pool without requiring reboots.

The business outcome is straightforward: dramatically reduced maintenance windows and higher ROI on GPU compute, because your accelerators are not left sitting idle waiting for storage nodes to reboot.
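One observable side of dynamic enumeration on a Linux host is that hot-added NVMe controllers appear under sysfs without a reboot. The sketch below polls that view and diffs snapshots across a hot-swap event. It is a generic monitoring illustration assuming the standard `/sys/class/nvme` layout, not Corespan's provisioning logic.

```python
import os

NVME_SYSFS = "/sys/class/nvme"  # standard Linux sysfs path for NVMe controllers

def list_nvme_controllers(sysfs_root=NVME_SYSFS):
    """Return the NVMe controller names currently visible to the kernel
    (e.g. ['nvme0', 'nvme1']); empty list if none are present."""
    if not os.path.isdir(sysfs_root):
        return []
    return sorted(os.listdir(sysfs_root))

def diff_controllers(before, after):
    """Report which controllers appeared or vanished between two
    snapshots, e.g. across a hot-swap event."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    return added, removed
```

A management agent would run `list_nvme_controllers` on a timer and act on the `added`/`removed` deltas — the reboot-free recognition described above happens in the kernel and device firmware underneath.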

Massive Scalability & Hardware Flexibility

AI workloads scale exponentially, and your storage tier needs to scale with them without forcing rigid vendor lock-in or premature hardware obsolescence.

  • Extreme density: Each chassis features 10 PCIe slots, and Corespan supports up to eight SSD/NVMe drives per Gen5 x16 slot.
  • Independent power: The architecture includes external power capabilities to comfortably support more than 80 drives.
  • Media agnostic: Native support for U.2, U.3, and E3.S NVMe SSD form factors.
  • Simultaneous operations: The chassis runs U.2 Gen5 x4 and U.3 Gen5 x8 drives side by side in the same environment.

The business outcome is future-proofed infrastructure. You can maximize your current hardware investments while still integrating the next generation of NVMe form factors exactly when workloads demand it.
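The density figures above reduce to simple arithmetic, sketched below. The slot and per-slot counts come from this article; the per-drive capacity is a hypothetical value for illustration, not a Corespan spec.

```python
# Back-of-envelope chassis density, using the figures quoted above.
PCIE_SLOTS_PER_CHASSIS = 10
DRIVES_PER_GEN5_X16_SLOT = 8

def max_drives(slots=PCIE_SLOTS_PER_CHASSIS,
               per_slot=DRIVES_PER_GEN5_X16_SLOT):
    """Drive count with every slot fully populated."""
    return slots * per_slot

def raw_capacity_tb(drive_tb, drives=None):
    """Raw pool size for a hypothetical per-drive capacity
    (drive_tb is an assumption, not a Corespan spec)."""
    drives = max_drives() if drives is None else drives
    return drives * drive_tb
```

Ten slots at eight drives each yields the 80-drive figure the external-power design is built around; at a hypothetical 15.36 TB per drive, that is over 1.2 PB of raw scratch capacity in one chassis.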

Uncompromised Data Paths & Throughput

High capacity is meaningless if the data bottlenecks on its way to the GPU. Corespan's architecture is engineered to guarantee the latency and throughput required for persistent, high-write scratchpad use.

  • Direct GPU communication: Full DMA support between NVMe drives and internal GPUs.
  • Dedicated bandwidth: Configurable oversubscribed and non-oversubscribed paths between the CPU and NVMe/SSD drives.

The business outcome is faster time-to-insight: data can bypass the CPU, dedicated lane bandwidth is preserved, and the I/O stalls that usually plague data-heavy operations are removed from the path.
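As a back-of-envelope illustration of why the oversubscription choice matters: a PCIe Gen5 lane carries roughly 3.94 GB/s after 128b/130b encoding, so a x16 slot tops out near 63 GB/s, and eight drives streaming concurrently through one x16 uplink share that ceiling. The sketch below is an approximation that ignores protocol overhead beyond line encoding — it is not a Corespan benchmark result.

```python
# Rough PCIe Gen5 throughput math for a shared x16 slot (sketch only;
# ignores protocol overhead beyond 128b/130b line encoding).
GEN5_GBPS_PER_LANE = 3.938   # ~32 GT/s * 128/130 / 8 bits per byte

def slot_bandwidth_gbps(lanes=16):
    """Aggregate uplink bandwidth of a Gen5 slot, in GB/s."""
    return lanes * GEN5_GBPS_PER_LANE

def per_drive_share_gbps(drives_active, lanes=16):
    """Fair-share bandwidth per drive when drives_active drives stream
    concurrently through one uplink (the oversubscribed case)."""
    return slot_bandwidth_gbps(lanes) / drives_active
```

With all eight drives active, each gets roughly 7.9 GB/s of the shared uplink; a non-oversubscribed path instead reserves dedicated lanes so one drive's burst cannot starve another.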

Enterprise-Grade Data Management at the Edge

Disaggregated scratch data still requires robust management and redundancy, especially when dealing with mission-critical pipelines.

  • Advanced RAID support: Native support for RAID 0, RAID 1, and RAID 10 configurations.
  • Granular control: Administrators can configure up to four separate RAID arrays per single PCIe slot.

The business outcome is better risk mitigation and workload optimization. You can balance raw speed with redundancy on a per-workload or per-tenant basis inside the same chassis.
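For planning purposes, the speed-versus-redundancy trade-off among the supported RAID levels reduces to textbook arithmetic, sketched below. This assumes identical drives and is a generic illustration, not a Corespan sizing tool.

```python
def raid_usable_tb(level, drives, drive_tb):
    """Usable capacity for the RAID levels listed above.
    Simplified textbook model: identical drives assumed."""
    if level == 0:
        return drives * drive_tb          # striping: all capacity, no redundancy
    if level == 1:
        return drive_tb                   # mirroring: one drive's worth
    if level == 10:
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even number of drives, >= 4")
        return (drives // 2) * drive_tb   # striped mirrors: half the raw pool
    raise ValueError(f"unsupported RAID level: {level}")
```

Four hypothetical 4 TB drives give 16 TB at RAID 0 for pure scratch speed, or 8 TB at RAID 10 when a pipeline's intermediate data is too expensive to regenerate — and with four arrays per slot, both profiles can coexist in one chassis.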


Ready to build?

Corespan's disaggregated NVMe scratch fabric is not just a conceptual shift. It is a rigorously engineered hardware solution built for the reality of modern data centers, where keeping GPUs fed with data matters just as much as raw accelerator counts.

Continue the conversation

Explore how Corespan makes GPU infrastructure more flexible.

If this article sparked ideas, the next step is seeing how the platform maps to your workload mix, lifecycle strategy, and operational model.