Wiwynn® XC200 (4U16B GPU Accelerator)


A disaggregated compute accelerator that supports 16 PCIe 3.0 x16 accelerator cards for Deep Learning and HPC.
Disaggregated Compute Accelerator for Deep Learning and HPC
Wiwynn XC200 is a disaggregated compute accelerator that supports 16 PCIe 3.0 x16 GPU/FPGA cards widely available on the market. It works with various server platforms and provides a more flexible CPU-to-GPU ratio than integrated GPU server solutions for Deep Learning and HPC.
High Scalability for Optimized Workloads
XC200 can be flexibly configured for different workloads with excellent application scalability. Supported host-to-accelerator ratios include 1:4 (four hosts sharing the 16 accelerators), 1:8 (two hosts), and 1:16 (a single host).
On-Chassis BMC for Easy Management
Wiwynn XC200 is designed with an on-chassis BMC. Its out-of-band management port allows operators to remotely monitor temperature, voltage, and power consumption through IPMI management tools. Additionally, LED indicators provide an instant check of system health and accelerator status.
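As a minimal sketch of how such out-of-band monitoring might be scripted, the snippet below queries a BMC's sensors with the standard `ipmitool` utility over IPMI-over-LAN and parses its pipe-separated output. The host address, credentials, and sensor names here are placeholders, not XC200 specifics; actual sensor names depend on the BMC firmware.

```python
# Sketch: polling an on-chassis BMC's sensors over the out-of-band
# management port via ipmitool (IPMI v2.0 over LAN).
import subprocess

def read_sensors(raw: str) -> dict:
    """Parse `ipmitool sensor` output (pipe-separated columns:
    name | value | unit | status | thresholds...) into {name: (value, unit)}."""
    sensors = {}
    for line in raw.strip().splitlines():
        cols = [c.strip() for c in line.split("|")]
        if len(cols) >= 3 and cols[1] not in ("na", ""):
            try:
                sensors[cols[0]] = (float(cols[1]), cols[2])
            except ValueError:
                pass  # skip non-numeric readings
    return sensors

def poll_bmc(host: str, user: str, password: str) -> dict:
    """Query a remote BMC's sensor readings over the LAN interface."""
    raw = subprocess.check_output(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, "sensor"],
        text=True)
    return read_sensors(raw)

# Demo with hypothetical sensor output (names vary by firmware):
sample = """Inlet Temp       | 28.000   | degrees C | ok | ...
PSU1 Power       | 1450.000 | Watts     | ok | ...
Fan1             | 8600.000 | RPM       | ok | ..."""
print(read_sensors(sample))
```

In production, a monitoring agent would call `poll_bmc` on a schedule and alert on thresholds; the parser is kept separate from the subprocess call so it can be tested without hardware.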
Drawer Design for Easy Maintenance and Non-interrupt Serviceability
The 16 accelerators are installed in four independent drawers, allowing a single operator to maintain the accelerators in each drawer separately. Because each drawer runs independently, servicing one drawer does not interrupt the others. In addition, the PSUs and fans are hot-pluggable. Together, these designs save significant labor hours in the datacenter.
Specifications
Expansion Slots: 16 PCIe 3.0 x16 (dual-width cards)
Connection Speed: 4 PCIe 3.0 x16 (quad MiniSAS HD connectors)
Accelerator TDP: 300 W
Fans: 4
Management LAN: one GbE RJ45 port
Remote Management: IPMI v2.0 compliant
Power Supply: 3 x 3000 W (2+1 redundant)
Dimensions: 4U; 176 (H) x 448 (W) x 900 (D) mm
Downloads
Datasheet: XC200 Datasheet (released 2018/07/19)