The CORD platform enables rapid service innovation and deployment, as evidenced by our success as a community in developing open source VNFs as well as integrated PoCs at a very rapid pace.
Check out the presentation for further information.
Intel® RSD is a logical architecture. The key concept is to disaggregate hardware, such as compute, storage and network resources, from preconfigured servers and deploy them in sharable resource pools.
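Intel RSD's Pod Manager exposes those pooled resources through a Redfish-based REST API. As a rough illustration only (the Pod Manager address, port, and credentials below are assumptions, not values from the presentation), a management client might enumerate the systems composed from a pool like this:

```python
# Illustrative sketch: listing compute systems through a Redfish-style
# REST endpoint such as the one exposed by an RSD Pod Manager.
# The host, port, and credentials are hypothetical placeholders.
import requests

BASE_URL = "https://podm.example.com:8443"   # hypothetical Pod Manager address
AUTH = ("admin", "admin")                    # hypothetical credentials

# /redfish/v1/Systems is the standard Redfish collection of compute systems.
resp = requests.get(f"{BASE_URL}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()

for member in resp.json().get("Members", []):
    # Each member entry points at one system resource via its @odata.id path.
    system = requests.get(
        f"{BASE_URL}{member['@odata.id']}", auth=AUTH, verify=False
    ).json()
    print(system.get("Name"), system.get("PowerState"))
```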
Wiwynn® Cluster Manager is system software that makes data centers easier to manage, with features such as resource planning, mass firmware and OS deployment, and real-time rack-level visual monitoring.
For deep learning, NVIDIA GPU Cloud empowers AI researchers with performance-engineered containers featuring deep learning software such as TensorFlow, PyTorch, MXNet, TensorRT, and more. NVIDIA also provides a wide range of GPU-accelerated platforms for deep learning training and inference workloads.
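As a minimal sketch of the kind of workload these containers target (the model, data shapes, and hyperparameters below are illustrative assumptions, not taken from the presentation), a single GPU-accelerated training step in PyTorch looks like this:

```python
# Minimal sketch: one GPU-accelerated training step with PyTorch.
# Falls back to CPU if no CUDA device is available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Small illustrative classifier; real workloads would use far larger models.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real training data.
inputs = torch.randn(64, 784, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"device: {device}, loss: {loss.item():.4f}")
```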
William talks about what Penguin Computing can offer for AI, covering the application of HPC discipline as well as Wiwynn server validation and L10/L11 test items and coverage.
Check out the presentation for further information.
Wiwynn offers a complete GPU server lineup, including the 21-inch 4U Dual Socket GPU Server for OCP users and the 19-inch 4U8G Dual Socket GPU Server for traditional 19-inch rack users.
If you already have sufficient servers and just want to scale up your GPU capability, we have a GPU accelerator for you. The Gen1 and Gen2 of the XC200 series, the 4U16X GPU Accelerator, are great choices. Both are disaggregated systems containing only GPU cards.