TORmem Joins OpenCAPI Consortium
TORmem is proud to announce that we are joining the OpenCAPI Consortium, a leading industry group dedicated to enabling the future of low-latency disaggregated memory.
Current and emerging workloads are pushing past the bounds of the local memory available on a typical server. AI and ML workloads alone demand vast amounts of fast memory, amounts that are impossible or impractical to serve with the legacy model of a small pool of fast memory installed in each server, backed by large amounts of slow memory accessed over the network.
To make memory disaggregation practical, we need an interface that connects the processors within the server to this external disaggregated memory at speeds equal, or nearly equal, to those of the server's local memory. This is essential for workload performance.
TORmem and OpenCAPI
TORmem designs and manufactures disaggregated memory appliances across a range of capabilities and price points. To make these a reality, we need interconnects built to deliver the performance that disaggregated memory requires. OpenCAPI's Open Memory Interface (OMI) is a serial differential bus providing 64 gigabytes per second of bandwidth and supporting up to 256 gigabytes per channel. It is designed to give a CPU high-bandwidth, low-latency access to fast memory; these characteristics make it ideal for disaggregated memory, and we are implementing it in our products alongside other new technologies such as differential DIMMs (DDIMMs).
By adopting OMI and working with the members of the OpenCAPI Consortium, we will contribute valuable real-world experience with disaggregated memory technologies, helping the consortium drive further adoption of OMI across a wide range of disaggregated memory use cases in multiple industries.
At TORmem, we believe in One Memory for All: our vision of high-speed disaggregated memory at data center scale for enterprise, cloud, and HPC use cases. Decouple your memory from your servers to speed up today's applications and enable tomorrow's, all while optimizing costs.