Liqid touts composable infrastructure at Dell Technologies World



There’s no doubt that the use of AI is exploding in business – tripling in the last two years alone, according to Gartner.

By 2025, as the market continues to mature, AI will be a primary driver of infrastructure decisions, the research firm reports.

Combined with demands driven by growing technologies such as edge computing and hybrid cloud environments, compute requirements will increase 10-fold, said Ben Bolles, executive director of product management at Liqid.

Organizations running legacy infrastructure will be left behind, he noted.

Leave the legacy behind

This is where composable data center infrastructure can prove invaluable. With this approach, high-performance workloads and applications are decoupled from the underlying hardware, creating pools of data center resources that can be allocated wherever they will run most efficiently at any given time.

The result: increased performance, efficiency, agility and scalability, according to Bolles.
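The decoupling described above can be sketched as a toy model. This is purely illustrative, not Liqid's actual API; the class names, resource kinds and capacities are all hypothetical:

```python
# Illustrative sketch only: a toy model of composable infrastructure, in which
# devices live in a shared pool rather than being fixed to one server.
# Not Liqid's API -- all names and numbers here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """A disaggregated pool of devices, decoupled from any one host."""
    free: dict = field(
        default_factory=lambda: {"gpu": 8, "dram_gb": 1024, "nvme": 16}
    )

    def compose(self, host: dict, **wanted: int) -> None:
        """Attach exactly the requested resources to a host, if available."""
        for kind, n in wanted.items():
            if self.free.get(kind, 0) < n:
                raise RuntimeError(f"not enough {kind} in pool")
        for kind, n in wanted.items():
            self.free[kind] -= n
            host[kind] = host.get(kind, 0) + n

    def release(self, host: dict, **returned: int) -> None:
        """Return underutilized resources so other hosts can use them."""
        for kind, n in returned.items():
            host[kind] -= n
            self.free[kind] += n


pool = ResourcePool()
host_a = {}
pool.compose(host_a, gpu=4, dram_gb=256)  # host gets exactly what it needs
pool.release(host_a, gpu=2)               # idle GPUs return to the pool
print(host_a)                             # {'gpu': 2, 'dram_gb': 256}
print(pool.free["gpu"])                   # 6
```

The point of the sketch is the lifecycle: resources flow from a shared pool into a host and back again under software control, rather than being fixed at purchase time.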

Liqid is demonstrating the potential of composable memory at Dell Technologies World 2022, which is happening this week. The software company has partnered with Samsung and Tanzanite Silicon Solutions to model real-world composable memory scenarios via the Compute Express Link (CXL) 2.0 protocol, using Liqid Matrix composable disaggregated infrastructure (CDI) software.

“With the groundbreaking performance provided by CXL, the industry will be better positioned to support and make sense of the massive wave of AI innovation expected over the next few years,” Bolles said. “By decoupling Dynamic Random Access Memory (DRAM) from the processor, CXL enables us to achieve breakthrough results in performance, infrastructure flexibility and more sustainable resource efficiency, preparing organizations to meet the architectural challenges facing industries as AI evolves at the speed of data.”

According to ReportLinker, the composable infrastructure market will grow at a compound annual growth rate of almost 25% between 2022 and 2027. According to the market intelligence platform, this growth is driven by increasing business analytics workloads, rising customer expectations, the adoption of methodologies such as devops, the rise of automation and standardization tools, and the growing adoption of hybrid cloud.

The Liqid Lab setup at Dell World explores the capabilities of the technology by leveraging Samsung and Tanzanite silicon and memory technologies. This illustrates clustered/tiered memory allocated across two hosts, orchestrated by Liqid Matrix CDI software. All told, Bolles said, it showcases the efficiency and flexibility needed to meet changing and growing infrastructure demands.
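The idea of tiered memory shared across hosts can be illustrated with a toy two-tier allocator: each host prefers its fast local DRAM and spills the remainder into a shared CXL-attached pool. This is a hypothetical sketch, not the actual Liqid/Samsung/Tanzanite demo; all names and capacities are invented:

```python
# Illustrative sketch only: a toy model of tiered memory, where two hosts share
# one CXL-attached expansion pool. Hypothetical names and sizes throughout.
class CxlPool:
    """A disaggregated memory pool reachable by multiple hosts."""
    def __init__(self, gb: int):
        self.free_gb = gb

    def take(self, gb: int) -> int:
        granted = min(gb, self.free_gb)
        self.free_gb -= granted
        return granted


class Host:
    def __init__(self, name: str, local_gb: int, pool: CxlPool):
        self.name, self.local_free, self.pool = name, local_gb, pool

    def allocate(self, gb: int) -> dict:
        """Prefer fast local DRAM; spill the remainder to the CXL tier."""
        local = min(gb, self.local_free)
        self.local_free -= local
        cxl = self.pool.take(gb - local)
        if local + cxl < gb:
            raise MemoryError("pool exhausted")
        return {"local_gb": local, "cxl_gb": cxl}


pool = CxlPool(512)                                # one shared memory pool...
a, b = Host("a", 128, pool), Host("b", 128, pool)  # ...spanning two hosts
alloc_a = a.allocate(192)   # {'local_gb': 128, 'cxl_gb': 64}
alloc_b = b.allocate(64)    # {'local_gb': 64, 'cxl_gb': 0}
print(alloc_a, alloc_b, pool.free_gb)
```

A host whose workload outgrows its local DRAM draws from the shared pool instead of requiring a larger server, which is the efficiency argument the demo is making.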

Bolles said Liqid Matrix pools and composes memory in tandem with GPUs, NVMe storage, FPGAs, persistent memory and other high-performance accelerator devices. The software provides native support for CXL, the open, industry-supported cache-coherent interconnect standard for processors, memory expansion and accelerators.

This allows precise amounts of resources to be composed into host servers quickly and flexibly, and underutilized resources to be moved to other servers as workload needs change.

GPU galore

Liqid also recently announced the general availability of its Liqid ThinkTank AI system. It uses software to assign as many GPUs to a server as needed – whether or not they physically fit in its chassis – enabling accelerated time to results, rapid deployment and GPU scaling, said Bolles. It can support the toughest workloads in AI workflows, from data preparation and analysis to training and inference.

Bolles pointed out that traditional static servers are ubiquitous but inefficient when it comes to deployment and scaling. They limit performance, make poor use of resources, and are difficult to balance with NVMe storage and other next-gen accelerators like FPGAs and storage-class memory.

But composable data center infrastructure allows users to manage so-called bare-metal hardware resources through software, democratizing AI. Adopting CXL technology allows organizations to extract maximum value from hardware investments, Bolles said, enabling exponentially higher performance, reduced software-stack complexity, lower overall system costs, and other efficiency and sustainability gains such as reduced physical and carbon footprints.

This way, users don’t have to focus on hardware maintenance; instead, they can focus on increasing time to results for target workloads.

Composable memory that spans CXL fabrics

Bolles added that Liqid’s differentiation lies in its software and its ability to compose memory across CXL fabrics. What would normally be a complex and time-consuming process can now be done in minutes.

The Colorado-based company has gained traction with its software, having raised a $100 million series C funding round in December 2021, co-led by Lightrock and DH Capital. Liqid Matrix software is also being used to build a $5 million supercomputer for the National Science Foundation, as well as three Department of Defense supercomputers worth $52 million. Bolles expects this growth to continue.

