Multi-Plane Network Topology (DeepSeek-V3)

Multi-Plane Network Topology for Scalable AI Clusters

Core Architecture (Left – Green Sections)

Topology Structure

  • Adopts a 2-Tier Fat-Tree (FT2) architecture for lower latency and better cost efficiency than a 3-Tier design
  • Connects a very large number of endpoints at much lower cost than 3-tier architectures

Multi-Plane Design

  • 8-Plane Architecture: Each node contains 8 GPUs and 8 IB NICs
  • 1:1 Mapping: Each GPU is paired with one NIC, and each GPU-NIC pair is dedicated to its own network plane

NIC Specifications

  • Hardware: 400G InfiniBand (ConnectX-7)
  • Resilience: Multi-port connectivity ensures robustness against single-port failures

Maximum Scalability

  • Theoretically supports up to 16,384 GPUs within the 2-tier structure
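As a rough sanity check on the 16,384-GPU figure, here is a back-of-the-envelope sketch of the two-tier fat-tree arithmetic; the 64-port switch size and the non-blocking 1:1 leaf split are assumptions for illustration, not stated in the slide.

```python
# Two-tier (leaf/spine) fat-tree sizing, per network plane.
# Assumptions: 64-port switches, half of each leaf's ports face NICs.
SWITCH_PORTS = 64
NUM_PLANES = 8                                 # one GPU-NIC pair per plane per node

leaf_down = SWITCH_PORTS // 2                  # leaf ports facing NICs
num_leaves = SWITCH_PORTS                      # each spine port reaches one leaf
endpoints_per_plane = num_leaves * leaf_down   # 64 * 32 = 2048 NICs per plane

total_gpus = endpoints_per_plane * NUM_PLANES  # 2048 * 8 = 16,384 GPUs
print(endpoints_per_plane, total_gpus)
```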

Advantages (Center – Purple Sections)

Cost Efficiency: Connects clusters at massive scale at much lower cost than 3-tier architectures

Ultra-Low Latency: Fewer network hops mean faster data transfer, which benefits latency-sensitive workloads such as MoE all-to-all communication

Traffic Isolation: Independent communication lanes (planes) prevent congestion or faults in one lane from affecting others

Proven Performance: Validated in large-scale tests with 2048 GPUs, delivering stable and high-speed communication

Challenges (Right – Orange Sections)

Packet Ordering Issues: Current NICs (ConnectX-7) offer only limited support for out-of-order packet placement, which multi-path and adaptive routing can introduce

Cross-Plane Delays: Traffic that must cross between network planes first has to be forwarded inside the node (each NIC belongs to only one plane), adding latency that is especially costly during AI inference

Smarter Routing Needed: Standard hash-based load balancing (ECMP) handles the small number of very large flows in AI training poorly; Adaptive Routing that selects paths based on current network load is needed, as illustrated in the sketch below
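The sketch below contrasts the two ideas in toy form; it is illustrative only, not any vendor's routing implementation, and the path names and load counters are invented for the example.

```python
import hashlib

paths = ["spine-0", "spine-1", "spine-2", "spine-3"]
link_load = {p: 0 for p in paths}           # toy per-uplink load counters (bytes)

def ecmp_path(flow_key: str) -> str:
    """ECMP: a static hash of the flow identifier picks the path, so every
    packet of a large flow keeps using that path even if it is congested."""
    h = int(hashlib.md5(flow_key.encode()).hexdigest(), 16)
    return paths[h % len(paths)]

def adaptive_path() -> str:
    """Adaptive routing: pick the currently least-loaded uplink instead."""
    return min(paths, key=lambda p: link_load[p])

# With only a few paths and a few huge AI flows, ECMP can easily hash two
# elephant flows onto the same uplink while other uplinks sit idle:
for flow in ["10.0.0.1->10.0.1.1", "10.0.0.2->10.0.1.2"]:
    link_load[ecmp_path(flow)] += 1_000_000
print(link_load)
print("adaptive routing would send new traffic to:", adaptive_path())
```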

Hardware Integration: Future hardware should build network components directly into main chips to remove bottlenecks and speed up communication


Summary

This document presents a multi-plane network topology using 2-tier Fat-Tree architecture that scales AI clusters up to 16,384 GPUs cost-effectively with ultra-low latency. The 8-plane design with 1:1 GPU-NIC mapping provides traffic isolation and resilience, though challenges remain in packet ordering and cross-plane communication. Future improvements require smarter routing algorithms and deeper hardware-network integration to optimize AI workload performance.

#AIInfrastructure #DataCenterNetworking #HPC #InfiniBand #GPUCluster #NetworkTopology #FatTree #ScalableComputing #MLOps #AIHardware #DistributedComputing #CloudInfrastructure #NetworkArchitecture #DeepLearning #AIatScale

Mixture-of-Experts (MoE) DeepSeek-v3

Image Interpretation: DeepSeek-v3 Mixture-of-Experts (MoE)

This image outlines the key technologies and performance efficiency of the DeepSeek-v3 model, which utilizes the Mixture-of-Experts (MoE) architecture. It is divided into the architecture diagram/cost table on the left and four key technical features on the right.

1. DeepSeekMoE Architecture (Left Diagram)

The diagram illustrates how the model processes data:

  • Separation of Experts: Unlike traditional MoEs, it distinguishes between Shared Experts (Green) and Routed Experts (Blue).
    • Shared Experts: Always active to handle common knowledge.
    • Routed Experts: Selectively activated by the Router to handle specific, specialized features.
  • Workflow: When an input token representation (u_t) arrives, the Router selects the top-K_r routed experts. The input is processed by the shared experts and the selected routed experts in parallel, and their outputs are combined (a minimal sketch follows this list).
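A minimal PyTorch-style sketch of this shared + routed workflow is shown below; the layer sizes, expert counts, and sigmoid gating are illustrative simplifications, not the actual DeepSeek-V3 configuration.

```python
import torch
import torch.nn as nn

class DeepSeekMoESketch(nn.Module):
    """Shared experts always run; the router picks top-K routed experts per token."""
    def __init__(self, d_model=64, d_ff=128, n_shared=1, n_routed=8, top_k=2):
        super().__init__()
        make_expert = lambda: nn.Sequential(
            nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
        self.shared = nn.ModuleList([make_expert() for _ in range(n_shared)])
        self.routed = nn.ModuleList([make_expert() for _ in range(n_routed)])
        self.router = nn.Linear(d_model, n_routed, bias=False)
        self.top_k = top_k

    def forward(self, u):                                   # u: [tokens, d_model]
        shared_out = sum(e(u) for e in self.shared)         # always active
        scores = torch.sigmoid(self.router(u))              # token-expert affinities
        topv, topi = scores.topk(self.top_k, dim=-1)
        gate = topv / topv.sum(dim=-1, keepdim=True)        # normalized gates
        routed_out = torch.zeros_like(u)
        for slot in range(self.top_k):                      # sparse dispatch per slot
            for e_idx, expert in enumerate(self.routed):
                mask = topi[:, slot] == e_idx
                if mask.any():
                    routed_out[mask] += gate[mask, slot].unsqueeze(-1) * expert(u[mask])
        return u + shared_out + routed_out                  # residual + combined experts

out = DeepSeekMoESketch()(torch.randn(4, 64))               # 4 tokens -> [4, 64]
```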

2. Four Key Technical Features (Right Panel)

This section explains how DeepSeek-v3 overcomes the limitations of existing MoE models:

  • Load Balancing without Auxiliary Loss:
    • Problem: Standard MoEs often use “auxiliary loss” to balance expert usage, which can degrade performance.
    • Solution: A bias term is added to each expert's router score and adjusted according to that expert's load to keep usage balanced. The bias only affects “dispatching” (which experts are selected) and not the gating “weights” used to combine their outputs, preserving model quality (see the sketch after this list).
  • Shared Expert Design:
    • Concept: Keeping one or a few experts always active for general tasks allows the routed experts to focus purely on complex, specialized tasks.
    • Benefit: Reduces redundancy and improves the capacity utilization of experts.
  • Hardware-Aware Dual-Pipe Parallelism:
    • Efficiency: It fully overlaps All-to-All communication with computation, minimizing idle time.
    • Optimization: “Node-local expert routing” is used to minimize slow data transfers between different nodes.
  • FP8 Mixed-Precision Training:
    • Speed & Cost: Utilizes the tensor cores of modern GPUs (Hopper/Blackwell) for FP8 (8-bit floating point) mixed-precision training, which drastically lowers both training and inference costs.
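Below is a minimal sketch of the aux-loss-free balancing idea from the first bullet above; the update rule, rate, and tensor shapes are simplified assumptions for illustration, not DeepSeek's exact procedure.

```python
import torch

def route_with_bias(scores, bias, top_k):
    """scores: [tokens, n_experts] router affinities; bias: [n_experts].
    The bias influences WHICH experts are selected (dispatch), but the gating
    weights used to combine expert outputs come from the raw, unbiased scores."""
    _, topi = (scores + bias).topk(top_k, dim=-1)      # biased top-K selection
    gates = scores.gather(-1, topi)                    # unbiased combination weights
    return topi, gates / gates.sum(dim=-1, keepdim=True)

def update_bias(bias, topi, n_experts, gamma=1e-3):
    """After each step, nudge the bias up for under-loaded experts and down
    for over-loaded ones, steering future dispatch toward balance."""
    load = torch.bincount(topi.flatten(), minlength=n_experts).float()
    return bias + gamma * torch.sign(load.mean() - load)

n_experts, top_k = 8, 2
bias = torch.zeros(n_experts)
scores = torch.rand(16, n_experts)                     # toy router outputs, 16 tokens
topi, gates = route_with_bias(scores, bias, top_k)
bias = update_bias(bias, topi, n_experts)              # no auxiliary loss involved
```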

3. Cost Efficiency Comparison (Table 2)

The comparison highlights the massive efficiency gain over dense models:

  • DeepSeek-V3 MoE (671B parameters): Despite having the largest parameter count, its training compute is extremely low at about 250 GFLOPS/Token, since only a fraction of the experts is active for each token.
  • LLaMA-405B Dense (405B parameters): Although smaller in total size, it requires ~10x more compute (2448 GFLOPS/Token) than DeepSeek-v3.
  • Conclusion: DeepSeek-v3 achieves “high performance at low cost” by massively scaling the model size (671B) while keeping the actual computation equivalent to a much smaller model.
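The ~10x figure follows directly from the two numbers in the table:

```python
deepseek_v3 = 250     # GFLOPS per token, 671B-parameter MoE
llama_405b = 2448     # GFLOPS per token, 405B-parameter dense model
print(llama_405b / deepseek_v3)   # ~9.8x more compute per token for the dense model
```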

Summary

  1. Hybrid Structure: DeepSeek-v3 separates “Shared Experts” for general knowledge and “Routed Experts” for specialized tasks to maximize efficiency.
  2. Optimized Training: It achieves high speed and balance using “Load Balancing without Auxiliary Loss” and “FP8 Mixed-Precision Training.”
  3. Extreme Efficiency: Despite a massive 671B parameter size, it offers roughly 10x lower training costs per token compared to similar dense models (like LLaMa-405B).

#DeepSeek #AI #MachineLearning #MoE #MixtureOfExperts #LLM #DeepLearning #TechTrends #ArtificialIntelligence #ModelArchitecture

With Gemini

Multi-Head Latent Attention – Changes

Multi-Head Latent Attention (MLA) Interpretation

This image is a technical diagram explaining the structure of Multi-Head Latent Attention (MLA).

🎯 Core Concept

MLA is a mechanism that improves the memory efficiency of traditional Multi-Head Attention.

Traditional Approach (Before) vs MLA

Traditional Approach:

  • Stores K, V vectors of all past tokens
  • Memory usage increases linearly with sequence length

MLA:

  • Compresses each past token's key/value information into a small latent vector (c^KV)
  • The cache still grows with sequence length, but each token's footprint is far smaller than storing full K, V vectors (see the comparison below)
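To make the difference concrete, here is a rough per-token, per-layer comparison; the dimensions are assumptions for illustration (roughly DeepSeek-V3-like), not values taken from the image.

```python
# Per-token, per-layer KV-cache footprint in bytes (BF16 = 2 bytes per element).
BYTES = 2
n_heads, head_dim = 128, 128        # assumed standard multi-head attention config
d_latent, d_rope = 512, 64          # assumed MLA latent and rotary-key dimensions

standard_kv = 2 * n_heads * head_dim * BYTES     # full K and V for every head
mla_cache = (d_latent + d_rope) * BYTES          # one shared latent + one RoPE key

print(standard_kv, mla_cache, round(standard_kv / mla_cache, 1))
# 65536 vs 1152 bytes per token per layer -> roughly 57x smaller cache
```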

📊 Architecture Explanation

1. Input Processing

  • Starts from Input Hidden State (h_t)

2. Latent Vector Generation

  • Latent c_t^Q: For Query of current token (compressed representation)
  • Latent c_t^KV: For Key-Value (cached and reused)

3. Query, Key, Value Generation

  • Query (q): Generated from current token (h_t)
  • Key-Value: Generated from Latent c_t^KV
    • Produces a compressed (C) component decoded from c_t^KV and a rotary-position (R, RoPE) component that carries position information
    • The two components are concatenated to form the key used in attention

4. Multi-Head Attention Execution

  • Performs attention computation with generated Q, K, V
  • Uses BF16 (Mixed Precision)
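Below is a minimal PyTorch sketch of the cached-latent mechanism described in steps 2-4; the projection names and sizes are illustrative, and RoPE plus some per-head details from the diagram are omitted. The key point is that only the small c^KV latent is cached per token, and full K/V are reconstructed from it at attention time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLASketch(nn.Module):
    def __init__(self, d_model=256, d_latent=32, n_heads=4, head_dim=32):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, head_dim
        self.w_dkv = nn.Linear(d_model, d_latent, bias=False)            # h_t -> c_t^KV
        self.w_uk = nn.Linear(d_latent, n_heads * head_dim, bias=False)  # latent -> K
        self.w_uv = nn.Linear(d_latent, n_heads * head_dim, bias=False)  # latent -> V
        self.w_q = nn.Linear(d_model, n_heads * head_dim, bias=False)    # h_t -> Q
        self.w_o = nn.Linear(n_heads * head_dim, d_model, bias=False)

    def forward(self, h_t, latent_cache):
        """h_t: [batch, d_model] current token; latent_cache: [batch, T, d_latent]."""
        c_kv = self.w_dkv(h_t)                                    # small per-token latent
        cache = torch.cat([latent_cache, c_kv.unsqueeze(1)], 1)   # only latents are cached
        B, T, _ = cache.shape
        k = self.w_uk(cache).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.w_uv(cache).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)
        q = self.w_q(h_t).view(B, 1, self.n_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)            # [B, heads, 1, head_dim]
        return self.w_o(attn.transpose(1, 2).reshape(B, -1)), cache

mla = MLASketch()
y, cache = mla(torch.randn(2, 256), torch.zeros(2, 0, 32))  # decode one token, empty cache
```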

✅ Key Advantages

  1. Memory Efficiency: Compresses past information into fixed-size vectors
  2. Faster Inference: Reuses cached Latent vectors
  3. Information Preservation: Maintains performance by combining compressed content with explicit positional (RoPE) information
  4. Mixed Precision Support: Utilizes FP8, FP32, BF16

🔑 Key Differences

  • No v_t^R is produced: the rotary (R) component applies only to queries and keys, not to values (purple box on the right side of the diagram)
  • The value of the current token is decoded from the latent c_t^KV, while the rotary key component comes directly from h_t
  • This enables an efficient combination of compressed past information with the current token's information

This architecture is an innovative approach to solving the KV-cache memory problem during LLM inference.


Summary

MLA compresses each token's keys and values into small latent vectors, dramatically reducing KV-cache memory consumption during inference. It combines compressed past information with current token data through an efficient attention mechanism. This innovation enables faster and more memory-efficient LLM inference while maintaining model performance.

#MultiHeadLatentAttention #MLA #TransformerOptimization #LLMInference #KVCache #MemoryEfficiency #AttentionMechanism #DeepLearning #NeuralNetworks #AIArchitecture #ModelCompression #EfficientAI #MachineLearning #NLP #LargeLanguageModels

With Claude

DeepSeek

The evolution pipeline of the DeepSeek model consists of three major stages:

Stage 1: V3-Base → R1-Zero

  • Direct application of Reinforcement Learning (RL)
  • Proceeds without Supervised Fine-tuning (SFT)
  • Trains against exact, rule-based rewards (e.g., answer-correctness checks) rather than a learned reward model
  • Reward computation reduces to simple classification of model outputs (correct vs. incorrect, well-formatted vs. not)

Stage 2: R1-Zero → R1

  • Uses curated cold-start data for initial supervised learning
  • Builds on that foundation with a systematic multi-stage training pipeline

Stage 3: R1 → R1-Distill-(XXX)

  • Model optimization through knowledge distillation from R1
  • Smaller models achieve strong performance through SFT alone, without their own RL stage
  • Models are iteratively tuned and validated through evaluations
  • Performance is improved by training smaller base models on outputs of the larger model (a minimal sketch follows this list)
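A minimal sketch of the distillation-via-SFT step is shown below; the function, data, and stand-in student model are all placeholders, not DeepSeek's training code. The student is trained with ordinary next-token cross-entropy on reasoning traces generated by the larger teacher, with the prompt tokens masked out of the loss.

```python
import torch
import torch.nn.functional as F

def sft_distill_loss(student, prompt_ids, teacher_response_ids):
    """Next-token loss on (prompt + teacher response); prompt positions are
    masked so the student only learns to reproduce the teacher's reasoning."""
    input_ids = torch.cat([prompt_ids, teacher_response_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.size(1)] = -100             # ignore prompt tokens in the loss
    logits = student(input_ids)                        # [batch, seq, vocab]
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),   # predict token t+1 from token t
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

# Toy usage: any callable mapping token ids to [batch, seq, vocab] logits works here.
vocab = 1000
student = lambda ids: torch.randn(ids.size(0), ids.size(1), vocab, requires_grad=True)
prompt = torch.randint(0, vocab, (2, 8))               # placeholder prompts
teacher_trace = torch.randint(0, vocab, (2, 16))       # placeholder teacher-generated traces
loss = sft_distill_loss(student, prompt, teacher_trace)
loss.backward()
```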

This pipeline demonstrates a staged approach to model development: pure RL to elicit reasoning, cold-start data and multi-stage training to refine it, and distillation to transfer the result into smaller models, optimizing performance at each stage.