


Wiwynn Unveils Best-in-Class AI Systems Based on NVIDIA GPU Computing Platform


Exhibits Complete Training and Inference Acceleration Platforms at COMPUTEX 2018.

Taipei, May 30, 2018— Wiwynn, an innovative cloud IT infrastructure provider of high-quality computing and storage products and rack solutions for data centers, today announced it will offer NVIDIA® Tesla® V100 32GB GPU-based XC200 and SV7400G3 systems and NVIDIA Tesla P4 GPU-based SV300G3 systems, and will adopt the NVIDIA HGX-2 platform for future systems. Wiwynn’s close collaboration with NVIDIA addresses the growing demand for high computing power for deep learning and HPC, as well as for real-time inference in next-generation AI. Wiwynn will showcase its complete AI product line at COMPUTEX TAIPEI 2018 in the Taipei International Convention Center, first floor, booth #TF1I, from June 5 to June 8.

The Wiwynn® XC200 system supports up to 16 PCIe x16 accelerators within a 4U chassis. It serves up to four server nodes, and its disaggregated, modularized design delivers flexible configurations. A variety of PCIe-based accelerators can be used, including NVIDIA’s Tesla V100 32GB GPU, which doubles memory to improve performance for AI training, HPC, database, and graph analytics workloads while reducing cost and complexity. Wiwynn will also showcase the next-generation XC200 system for future technologies at COMPUTEX 2018.

Wiwynn® SV7400G3 and SV300G3 systems enable powerful, low-latency AI training and inference, respectively. The SV7400G3 is a 4U eight-GPU server supporting the latest NVIDIA Tesla V100 32GB PCIe GPU for larger simulations and more efficient training. The SV300G3, a dual-socket 1U multi-purpose server equipped with two NVIDIA Tesla P4 cards, acts as a scale-out server for real-time, large-scale deep learning inference with lower power consumption and better space utilization.

Wiwynn also announced it will begin building systems based on the newly announced NVIDIA HGX-2, the world’s most powerful GPU cloud server platform. Combining 16 Tesla V100 GPUs with NVIDIA NVSwitch™, the HGX-2 serves as a unified 2-petaflop accelerator, delivering unmatched performance for efficiently training comprehensive AI models in hyperscale data centers.

“Wiwynn regards AI as a strategic segment and is developing products for next-generation data centers,” said Chance Lee, Senior Director of Product Management at Wiwynn. “Our customers can now benefit from our collaboration with NVIDIA by adopting the most advanced GPUs for both deep learning training and inference.”

“Wiwynn is a leader in technology and product development for cloud datacenters,” said Paresh Kharya, Group Product Marketing Manager of Accelerated Computing at NVIDIA. “Our work with Wiwynn will support researchers and data scientists with the best platforms to address their most complex AI and HPC challenges.”


About Wiwynn

Wiwynn is an innovative cloud IT infrastructure provider of high-quality computing and storage products and rack solutions for leading data centers. We aggressively invest in next-generation technologies for workload optimization and the best TCO (Total Cost of Ownership). As an OCP (Open Compute Project) solution provider and platinum member, Wiwynn actively participates in advanced computing and storage system design while bringing the benefits of OCP to traditional data centers.

For more information, please visit http://www.wiwynn.com/english/company/news or contact sales@wiwynn.com.
Follow Wiwynn on Facebook and LinkedIn for the latest news and market trends.

PR Contact Wiwynn

Bing Wu

Wiwynn Corporation


