AI Computing Power
A leading AI company's computing center deploys thousands of GPU servers in a leaf-spine network architecture to support training and intelligent inference for large models with trillions of parameters, exchanging more than 60 PB of data across nodes every day. Short-reach direct links between GPU clusters, servers, and switches are essential, but the existing 200G solution created a "strong computing power, weak bandwidth" mismatch. 400G OSFP112 VR4 short-reach, high-density interconnect modules were needed to clear the transmission bottleneck while keeping power consumption low and deployment fast.
Customer pain points
Computing power - bandwidth mismatch
GPU computing power keeps rising, but 200G modules could not sustain short-reach, high-density transmission: data synchronization latency exceeded 55 microseconds and large-model training cycles stretched out by 30%;
High-density deployment is limited
Traditional modules are bulky and have poor interface compatibility, making them hard to fit into the compact layout of leaf-spine networks and server-to-switch direct-connect scenarios;
Energy consumption and interference issues
Each legacy module consumed 19 W, pushing the data center's PUE to 1.45 after large-scale deployment; densely packed equipment also produced strong electromagnetic interference, holding the stable operation rate to just 99.6%.
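For context on the PUE figures cited in this case study: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so every watt shaved off module power also shrinks the cooling and distribution overhead layered on top of it. A minimal sketch with illustrative numbers (not measurements from this deployment):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative only: a 1,000 kW IT load drawing 1,450 kW at the
# facility meter corresponds to a PUE of 1.45.
print(round(pue(1450.0, 1000.0), 2))  # 1.45
```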
Adopt TBCOM 400G OSFP112 VR4 Optical Modules
Precisely addressing the pain points of short-range connectivity
01
Short-distance high-density adaptation
Designed specifically for short-reach scenarios of ≤50 m, the compact OSFP112 package fits leaf-spine networks and is fully compatible with GPU clusters and direct server-to-switch links, resolving the core computing power vs. bandwidth mismatch;
02
Plug-and-play deployment
Supports PCIe 5.0 hosts and is compatible with mainstream equipment vendors' platforms, improving deployment efficiency by 90% and shortening the go-live cycle by 25 days;
03
Low power consumption and interference immunity
A VR4 parallel architecture (4 × 100 Gb/s lanes) combined with a self-developed energy-efficiency algorithm cuts single-module power consumption to 9 W (roughly half the 19 W of the legacy modules), while a double-layer shielding structure resists electromagnetic interference, sustaining 99.999% stable operation;
04
High yield rate to control costs
With a 100% outgoing factory yield and a return rate of just 0.02%, equipment replacement and maintenance costs are reduced.
Effect presentation
In the ten months since the modules went live, they have run with zero failures and a stable operating rate of 99.9996%. The data center's PUE fell to 1.27, saving more than 3 million yuan in electricity annually and cutting overall costs by 42%. The 18-microsecond latency supports short-reach, high-density data exchange, raising large-model training efficiency by 80% and resolving the mismatch between GPU computing power and transmission bandwidth.
99.9996%
Stable operating rate
3 million yuan
Annual electricity cost savings
42%↓
Overall cost reduction
80%↑
Improved training efficiency of large models
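As a rough sanity check on the electricity-savings claim, the per-module drop from 19 W to 9 W can be scaled by module count, annual operating hours, facility PUE, and electricity tariff. Apart from the 19 W/9 W figures and the 1.27 PUE taken from this page, every input below (module count, tariff) is an illustrative assumption, not a figure from the deployment:

```python
def annual_savings_yuan(modules: int,
                        watts_saved: float = 19.0 - 9.0,  # per-module drop cited above
                        hours: float = 8760.0,            # one year of 24/7 operation
                        pue: float = 1.27,                # facility overhead multiplier
                        yuan_per_kwh: float = 0.6) -> float:  # assumed tariff
    """Electricity cost avoided per year: module saving x PUE overhead x tariff."""
    kwh_saved = modules * (watts_saved / 1000.0) * hours * pue
    return kwh_saved * yuan_per_kwh

# e.g. a hypothetical fleet of 10,000 modules; actual savings depend on
# the real module count, tariff, and duty cycle.
print(round(annual_savings_yuan(10_000)))
```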
Partners
Together with our global partners

Online message

  • Name*

  • Phone/Email*

  • Company*

We promise that the information you provide will be kept strictly confidential and used only to offer you more accurate service.