
Pliops Showcases XDP LightningAI’s Proven Impact at AI Infra Summit 2025

  • Demos of Solutions Optimized for NVIDIA and AMD GPUs Highlight Breakthroughs in Inference Efficiency
  • Joins with Tensormesh to Simplify vLLM Deployment and Accelerate GenAI Inference Across Clusters

SANTA CLARA, Calif., Sept. 09, 2025 (GLOBE NEWSWIRE) -- Today at AI Infra Summit 2025, Pliops is spotlighting the completeness and effectiveness of XDP LightningAI, its GenAI-native memory stack that’s powering inference and retrieval workloads across hyperscale and enterprise environments.

Since its unveiling earlier this year, Pliops LightningAI has rapidly gained traction as the go-to solution for long-term memory in large language models (LLMs), retrieval-augmented generation (RAG), and graph neural networks (GNNs). With rack-level simplicity and seamless integration, LightningAI is helping customers overcome memory bottlenecks and scale GenAI deployments with confidence.

“LightningAI is redefining how GenAI infrastructures scale – delivering turnkey inference solutions that unify compute and memory,” said Ido Bukspan, CEO of Pliops. “By offloading KV cache and extending long-term memory for LLMs and RAG, we’re helping models retain more context, serve more users per GPU, and reduce cost per token. This unlocks consistent performance and real-time responsiveness at scale – hallmarks of GenAI-native architecture.”
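To make the KV cache offload idea in the quote concrete, the short sketch below shows the general pattern: key/value tensors computed during prefill are stored in an external memory tier, keyed by a hash of the token prefix, so later requests that share the prefix skip recomputation and free GPU memory for more concurrent users. All class and function names here are hypothetical stand-ins for illustration only and do not reflect Pliops’ actual interfaces.

```python
import hashlib
from typing import Dict, List, Optional, Tuple

# Hypothetical external KV store standing in for an offload tier outside
# GPU memory; purely illustrative, NOT Pliops' API.
class ExternalKVStore:
    def __init__(self) -> None:
        self._blocks: Dict[str, List[Tuple[float, float]]] = {}

    @staticmethod
    def key_for(prefix_tokens: List[int]) -> str:
        # Content-address the cache entry by a hash of the token prefix.
        return hashlib.sha256(str(prefix_tokens).encode()).hexdigest()

    def put(self, prefix_tokens: List[int], kv: List[Tuple[float, float]]) -> None:
        self._blocks[self.key_for(prefix_tokens)] = kv

    def get(self, prefix_tokens: List[int]) -> Optional[List[Tuple[float, float]]]:
        return self._blocks.get(self.key_for(prefix_tokens))


def fake_prefill(tokens: List[int]) -> List[Tuple[float, float]]:
    # Stand-in for the expensive attention prefill that produces KV pairs.
    return [(float(t), float(t) * 0.5) for t in tokens]


def serve(prompt_tokens: List[int], store: ExternalKVStore) -> int:
    """Return how many tokens actually needed GPU prefill for this request."""
    if store.get(prompt_tokens) is not None:
        return 0                            # cache hit: reuse offloaded KV blocks
    kv = fake_prefill(prompt_tokens)        # compute once on the GPU
    store.put(prompt_tokens, kv)            # offload for future requests
    return len(prompt_tokens)


if __name__ == "__main__":
    store = ExternalKVStore()
    shared_prefix = list(range(1000))       # e.g., a long system prompt
    print(serve(shared_prefix, store))      # first request: 1000 tokens prefilled
    print(serve(shared_prefix, store))      # repeat request: 0, served from cache
```

In a real deployment the cached blocks are tensors and the store is a fast external memory device rather than a Python dictionary, but the cost model is the same: fewer prefill tokens per request translates into more users served per GPU and a lower cost per token.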

Building on this momentum, Pliops is now expanding its ecosystem through strategic collaborations and partnerships.

As a key proof point, CloudRIFT.AI is now hosting an accelerated inferencing instance powered by Pliops LightningAI, showcasing the stack’s ability to deliver high-throughput, low-latency performance for production-scale GenAI workloads.

Pliops is collaborating with Tensormesh, a leading inference optimization software provider, to streamline the deployment of vLLM-based models. The Tensormesh team is best known for LMCache, the open-source project it launched a year ago. By combining LightningAI’s memory acceleration with Tensormesh’s shared KV cache architecture, the partnership delivers the fastest time-to-first-token and significant GPU savings – without requiring dedicated MLOps teams. Together, the companies are showcasing how GenAI workloads can be deployed with rack-level simplicity and real-time performance across multi-GPU clusters.

“Partnering with Pliops allows us to bring real-time GenAI inference to the enterprise with unprecedented simplicity,” said Junchen Jiang, CEO at Tensormesh. “By combining our shared KV cache architecture with LightningAI’s memory acceleration, we’re enabling customers to deploy vLLM-based models faster, with fewer GPUs, and without the operational overhead that typically slows innovation.”
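For readers who want a concrete picture of what deploying a vLLM-based model with a shared KV cache looks like, the sketch below wires the open-source LMCache connector into vLLM through its KV-transfer configuration. This is a minimal sketch based on one recent vLLM/LMCache release: the connector and parameter names are version-dependent assumptions, the model name is a placeholder, and nothing here represents Pliops’ or Tensormesh’s proprietary stack.

```python
# Minimal sketch: serve a model with vLLM while delegating KV cache storage
# and reuse to LMCache. The kv_transfer_config wiring reflects one recent
# release and may differ in yours; check the vLLM and LMCache documentation.
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",   # placeholder; any vLLM-supported model
    kv_transfer_config=KVTransferConfig(
        kv_connector="LMCacheConnectorV1",       # hand KV blocks to LMCache (assumed name)
        kv_role="kv_both",                       # both store and retrieve cached KV
    ),
)

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(
    ["Summarize the benefits of shared KV caching for multi-GPU inference."],
    params,
)
print(outputs[0].outputs[0].text)
```

Because the cache sits outside any single vLLM worker, requests that share long prompt prefixes can reuse each other’s KV blocks across instances, which is where the time-to-first-token and GPU-count savings described above come from.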

Redefining How Data Centers Scale in the GenAI Era: Live at the AI Infra Summit
At booth #728 at the Summit, Pliops will host live demos and “Memory Minutes” – bite-sized sessions that highlight LightningAI’s architecture, deployment success stories, and roadmap for emerging workloads. The demos will feature solutions optimized for both NVIDIA and AMD GPUs, underscoring LightningAI’s flexibility and performance across diverse compute environments.

Summit Attendees Can Explore How LightningAI:

  • Delivers long-term memory optimized for GenAI inference and retrieval
  • Simplifies deployment with rack-level integration
  • Scales across diverse environments – from cloud-native to on-prem AI clusters
  • Powers real-world deployments, including CloudRIFT.AI’s accelerated inferencing instance
  • Supports multi-GPU architectures, including NVIDIA and AMD platforms
  • Demonstrates unified compute and memory in a turnkey GenAI inferencing solution with Viking Enterprise Solutions

For more information about Pliops, please visit www.pliops.com.

Connect with Pliops

  • Read Blog
  • About Pliops
  • Visit Resource Center – XDP LightningAI Solution Brief
  • Connect on LinkedIn
  • Follow on X

About Pliops
A winner of the FMS 2025 Best of Show Award, Pliops is a technology innovator focused on making data centers run faster and more efficiently. The company’s Extreme Data Processor (XDP) radically simplifies the way data is processed and managed. Pliops overcomes I/O inefficiencies to massively accelerate performance and dramatically reduce overall infrastructure costs for data-hungry AI applications. Building on this foundation, XDP LightningAI harnesses Pliops' cutting-edge acceleration technologies to optimize GenAI workloads, delivering unmatched efficiency and scalability. FusIOnX, Pliops' tiered solution architecture, provides tailored performance enhancements to meet the evolving demands of AI-driven infrastructure.

Founded in 2017, Pliops has been recognized multiple times as one of the 10 hottest semiconductor startups. The company has raised over $200 million to date from leading investors including Koch Disruptive Technologies, State of Mind Ventures Momentum, Intel Capital, Viola Ventures, SoftBank Ventures Asia, Expon Capital, NVIDIA, AMD, Western Digital, SK hynix and Alicorn. For more information, visit www.pliops.com.

Media Contact:

Stephanie Olsen
Lages & Associates
(949) 453-8080
stephanie@lages.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/4b903692-d5e1-4f9b-a504-3933fed49c69



Pliops LightningAI demo

LightningAI’s demo at AI Infra Summit exemplifies the future of GenAI infrastructure – where memory and compute converge to unlock real-time inference, lower cost-per-token, and scalable KV cache offload. Purpose-built for LLMs, RAG, and vector databases, LightningAI delivers a turnkey solution that’s already powering deployments across open-source and enterprise environments. From architecture to acceleration, it’s GenAI without compromise.

