Tencent’s tech team has optimized DeepSeek’s open-source DeepEP communication framework, boosting its performance across different network environments, according to the Chinese AI startup. Testing showed a 100% improvement on RoCE networks and a 30% gain on InfiniBand (IB), offering more efficient solutions for AI model training. On GitHub, DeepSeek acknowledged that the Chinese tech giant’s contribution had delivered a “huge speedup.” DeepEP is a communication library tailored for mixture-of-experts (MoE) models and expert parallelism (EP), providing high-throughput, low-latency GPU kernels and support for low-precision computing, including FP8. Tencent’s Starlink Networking team identified two main bottlenecks: underutilized dual-port NIC bandwidth and CPU control latency. After targeted optimizations, performance doubled on RoCE and improved by 30% on IB. The enhanced framework is now fully open-source and has already been deployed in training Tencent’s Hunyuan large model, demonstrating strong versatility in environments built on Tencent’s Starlink network and H20 servers, Chinese tech media outlet iThome reported. [iThome, in Chinese]
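For readers unfamiliar with why this communication layer matters, the sketch below illustrates the dispatch/combine all-to-all traffic pattern that MoE expert parallelism generates, which is the kind of cross-GPU exchange DeepEP's kernels are built to accelerate. This is a minimal, generic illustration using stock `torch.distributed` collectives, not DeepEP's actual API; the function name, top-1 routing, and expert-to-rank layout are illustrative assumptions.

```python
import torch
import torch.distributed as dist

def moe_dispatch_combine(tokens, router_logits, num_experts, group=None):
    """Send each token to the rank hosting its chosen expert (dispatch),
    then return the expert outputs to the originating rank (combine)."""
    world_size = dist.get_world_size(group)
    experts_per_rank = num_experts // world_size

    # Top-1 routing: each token picks one expert, which maps to a destination rank.
    expert_ids = router_logits.topk(1, dim=-1).indices.squeeze(-1)
    dest_rank = expert_ids // experts_per_rank

    # Sort tokens by destination so each rank's slice is contiguous in the send buffer.
    order = torch.argsort(dest_rank)
    send_buf = tokens[order]
    send_counts = torch.bincount(dest_rank, minlength=world_size)

    # Exchange per-rank token counts, then the token payloads themselves (dispatch).
    recv_counts = torch.empty_like(send_counts)
    dist.all_to_all_single(recv_counts, send_counts, group=group)
    recv_buf = tokens.new_empty((int(recv_counts.sum()), tokens.size(-1)))
    dist.all_to_all_single(
        recv_buf, send_buf,
        output_split_sizes=recv_counts.tolist(),
        input_split_sizes=send_counts.tolist(),
        group=group,
    )

    # ... local expert computation on recv_buf would happen here ...

    # Combine: reverse the exchange so results travel back to their source ranks.
    out_buf = torch.empty_like(send_buf)
    dist.all_to_all_single(
        out_buf, recv_buf,
        output_split_sizes=send_counts.tolist(),
        input_split_sizes=recv_counts.tolist(),
        group=group,
    )

    # Undo the destination sort so outputs line up with the original token order.
    result = torch.empty_like(out_buf)
    result[order] = out_buf
    return result
```

Because this exchange runs on every MoE layer in both forward and backward passes, its throughput is bounded by NIC bandwidth and kernel launch/control overhead, which is why the two bottlenecks Tencent targeted (dual-port NIC utilization and CPU control latency) translate directly into end-to-end training speedups.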