Cisco, Red Hat extend networking, AI integrations

  • How to run genAI tasks such as inferencing over converged infrastructure with Nvidia GPUs, the Nvidia AI Enterprise Software Suite, VMware vSphere, and OpenShift. These CVDs support converged infrastructure (CI) architectures for both NetApp FlexPod with NetApp Astra Trident and FlashStack with Pure Storage Portworx for persistent storage (a rough sketch of that storage pattern follows this list). These validated designs also highlight sustainability capabilities featuring the Cisco UCS Power and Energy Metrics Dashboard. 
  • A genAI inferencing CVD features 5th Generation Intel CPUs, the Intel OpenVINO toolkit, VMware vSphere, OpenShift, and OpenShift AI. The design guide shows how OpenShift Data Foundation users can use local storage in Cisco UCS servers for simplified operations. 
  • Another CVD covers streamlining machine-learning operations with OpenShift AI on FlashStack with Nvidia GPUs, so teams can operationalize and accelerate model delivery with consistency and efficiency. 
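In practice, the persistent-storage piece of the first design comes down to pointing Kubernetes persistent volume claims at a storage class backed by Astra Trident or Portworx. The snippet below is a minimal sketch of that pattern using the Kubernetes Python client; the namespace, claim name, and storage-class name are placeholders, not values taken from the validated designs.

```python
# Minimal sketch: request persistent storage for an inferencing workload by
# creating a PVC bound to a CSI storage class (e.g. one provisioned by
# Astra Trident or Portworx). All names and sizes are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "model-cache"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "ontap-nas",  # placeholder Trident/Portworx class name
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="genai-inference", body=pvc_manifest
)
```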

“AI is pushing traditional infrastructure to its limits due to its massive datasets, specialized algorithms, and process orchestration requirements,” Brannon wrote. “Given the critical nature of the datasets and the business processes AI is being deployed to improve, IT teams are faced with significant considerations around security and infrastructure scalability to support the inevitable growth.”

As for improving cloud integration, the vendors said customers can now extend OpenShift applications to run on bare-metal Cisco Unified Computing System (UCS) servers. The idea is that customers can simplify operations by eliminating the hypervisor layer to run cloud-native workloads, Brannon stated. “This helps reduce overall infrastructure costs while increasing resource utilization. With OpenShift Virtualization, legacy applications can run in VMs that can be created, managed, cloned, and live-migrated next to containerized workloads on bare-metal Cisco UCS,” Brannon wrote.
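As a rough illustration of that side-by-side pattern, the sketch below uses the Kubernetes Python client to declare a KubeVirt-style VirtualMachine custom resource, the kind of object OpenShift Virtualization manages, so a legacy VM is scheduled on the same bare-metal cluster as containerized workloads. The VM name, namespace, container disk image, and sizing are placeholders, not details from Cisco's or Red Hat's designs.

```python
# Minimal sketch: declare a VM via the KubeVirt "VirtualMachine" custom
# resource used by OpenShift Virtualization, so it runs alongside containers
# on the same bare-metal cluster. All names and sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "4Gi", "cpu": "2"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # Placeholder image; a real migration would import the
                        # legacy application's disk image instead.
                        "containerDisk": {
                            "image": "quay.io/containerdisks/fedora:latest"
                        },
                    }
                ],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="legacy-apps",
    plural="virtualmachines",
    body=vm_manifest,
)
```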

A bare-metal architecture also offers more direct hardware access, maximizing performance for demanding workloads such as high-performance computing, AI, or other latency-sensitive applications, according to Brannon.
