High-Speed Interconnect

This image compares five major high-speed interconnect technologies:

NVLink (NVIDIA Link)

  • Speed: 900GB/s (NVLink 4.0)
  • Use Case: GPU-to-GPU and GPU-to-CPU interconnect in AI/HPC systems with NVIDIA GPUs
  • Features: NVIDIA proprietary, dominates AI/HPC market
  • Maturity: Mature

CXL (Compute Express Link)

  • Speed: 128GB/s
  • Use Case: Memory pooling and expansion in general data centers
  • Features: Supported by Intel, AMD, NVIDIA, Samsung; PCIe-based with chip-to-chip focus
  • Maturity: Maturing

UALink (Ultra Accelerator Link)

  • Speed: 800GB/s (estimated, UALink 1.0)
  • Use Case: AI clusters, GPU/accelerator interconnect
  • Features: Led by AMD, Intel, Broadcom, Google; NVLink alternative
  • Maturity: Early (2025 launch)

UCIe (Universal Chiplet Interconnect Express)

  • Speed: 896GB/s (electrical), 7Tbps (optical, not yet available)
  • Use Case: Chiplet-based SoC, MCM (Multi-Chip Module)
  • Features: Supported by Intel, AMD, TSMC, NVIDIA; chiplet design focus
  • Maturity: Early stage; the optical version promises much higher performance

CCIX (Cache Coherent Interconnect for Accelerators)

  • Speed: 128GB/s (PCIe 5.0-based)
  • Use Case: ARM servers, accelerators
  • Features: Supported by ARM, AMD, Xilinx; ARM-based server focus
  • Maturity: Low adoption; limited by power efficiency

Summary: All technologies are converging toward higher bandwidth, lower latency, and chip-to-chip connectivity to address the growing demands of AI/HPC workloads. The effectiveness varies by ecosystem, with specialized solutions like NVLink leading in performance while universal standards like CXL focus on broader compatibility and adoption.
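To put these headline figures in context, the sketch below uses the per-link bandwidths quoted above to estimate how long each interconnect would take to move a large payload. The 140 GB payload (roughly a 70B-parameter model at FP16) is an illustrative assumption, and real-world throughput falls below these vendor headline numbers:

```python
# Rough transfer-time comparison using the bandwidth figures quoted above
# (vendor headline numbers; sustained real-world throughput is lower).
bandwidth_gb_s = {
    "NVLink 4.0": 900,
    "CXL (PCIe 5.0 x16)": 128,
    "UALink 1.0 (est.)": 800,
    "UCIe (electrical)": 896,
    "CCIX (PCIe 5.0)": 128,
}

payload_gb = 140  # e.g. ~70B parameters at FP16 (illustrative assumption)

for name, bw in sorted(bandwidth_gb_s.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20}: {payload_gb / bw * 1000:7.1f} ms to move {payload_gb} GB")
```

At these rates the same payload that crosses NVLink in about 156 ms takes over a second on a PCIe 5.0-class link, which is why accelerator-native fabrics dominate AI clusters.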

With Claude

The Evolution of “Difference”

This image is a conceptual diagram showing how the domain of “Difference” is continuously expanded.

Two Drivers of Difference Expansion

Top Flow: Natural Emergence of Difference

  • Existence → Multiplicity → Influence → Change
  • The process by which new differences are continuously generated naturally in the universe and natural world.

Bottom Flow: Human Tools for Recognizing Difference

  • Letters & Digits → Computation & Memory → Computing Machine → Artificial Intelligence (LLM)
  • The evolution of tools that humans have developed to interpret, analyze, and process differences.

Center: Continuous Expansion Process of Difference Domain

The interaction between these two drivers creates a process that continuously expands the domain of difference, shown in the center:

Emergence of Difference

  • The stage where naturally occurring new differences become concretely manifest
  • Previously non-existent differences are continuously generated

↓ (Continuous Expansion)

Recognition of Difference

  • The stage where emerged differences are accepted as meaningful through human interpretation and analytical tools
  • Newly recognized differences are incorporated into the realm of distinguishable domains

Final Result: Expansion of Differentiation & Distinction

Differentiation & Distinction

  • Microscopically: More sophisticated digital and numerical distinctions
  • Macroscopically: Creation of new conceptual and social domains of distinction

Core Message

The natural emergence of difference and the development of human recognition tools create mutual feedback that continuously expands the domain of difference.

As the handwritten note on the left indicates (“AI expands the boundary of perceivable difference”), particularly in the AI era, the speed and scope of this expansion has dramatically increased. This represents a cyclical expansion process where new differences emerging from nature are recognized through increasingly sophisticated tools, and these recognized differences in turn enable new natural changes.


PIM (Processing-in-Memory)

This image illustrates the evolution of computing architectures, comparing three major computing paradigms:

1. General Computing (Von Neumann Architecture)

  • Traditional CPU-memory structure
  • CPU and memory are separated, processing complex instructions
  • Data and instructions move between memory and CPU

2. GPU Computing

  • Collaborative structure between CPU and GPU
  • GPU performs simple mathematical operations with massive parallelism
  • Provides high throughput
  • Uses new types of memory specialized for AI computing

3. PIM (Processing-in-Memory)

The core focus of the image, PIM features the following characteristics:

Core Concept:

  • “Simple Computing” approach that performs operations directly within new types of memory
  • Integrated structure of memory and processor

Key Advantages:

  • Data Movement Minimization: Reduces data transfers between memory and processor, as well as in-memory copy/reordering operations
  • Parallel Data Processing: Parallel processing of matrix/vector operations
  • Repetitive Simple Operations: Optimized for add/multiply/compare operations
  • “Simple Computing”: Efficient operations without complex control logic

PIM is gaining attention as a next-generation computing paradigm that can significantly improve energy efficiency and performance compared to existing architectures, particularly for tasks involving massive repetitive simple operations such as AI/machine learning and big data analytics.
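The data-movement argument can be made concrete with a toy cost model. The sketch below counts the bus traffic for an elementwise add C = A + B under the two paradigms; the function names and byte counts are illustrative assumptions, not measurements of any real device:

```python
# Toy cost model contrasting a von Neumann round trip with in-memory
# execution for an elementwise add C = A + B over n elements (FP16).
# Illustrative assumption: PIM command/result metadata is negligible.

def bytes_moved_von_neumann(n, elem_size=2):
    # A and B travel memory -> CPU, the result C travels CPU -> memory.
    return 3 * n * elem_size

def bytes_moved_pim(n, elem_size=2):
    # Operands never leave the memory device; only commands cross the bus
    # (approximated here as zero payload).
    return 0

n = 1_000_000
print("von Neumann:", bytes_moved_von_neumann(n), "bytes on the bus")
print("PIM        :", bytes_moved_pim(n), "bytes on the bus")
```

Even this crude model shows why PIM pays off for repetitive simple operations: the cost of a von Neumann add is dominated by moving 3n elements, not by the add itself.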


Temperature Prediction in DC (II) – The Start and the Target

This image illustrates the purpose and outcomes of temperature prediction approaches in data centers, showing how each method serves different operational needs.

Purpose and Results Framework

CFD Approach – Validation and Design Purpose

Input:

  • Setup Data: Physical infrastructure definitions (100% RULES-based)
  • Pre-defined spatial, material, and boundary conditions

Process: Physics-based simulation through computational fluid dynamics

Results:

  • What-if (One Case) Simulation: Theoretical scenario testing
  • Checking a Limitation: Validates whether proposed configurations are “OK or not”
  • Used for design validation and capacity planning

ML Approach – Operational Monitoring Purpose

Input:

  • Relation (Extended) Data: Real-time operational data starting from workload metrics
  • Continuous data streams: Power, CPU, Temperature, LPM/RPM

Process: Data-driven pattern learning and prediction

Results:

  • Operating Data: Real-time operational insights
  • Anomaly Detection: Identifies unusual patterns or potential issues
  • Used for real-time monitoring and predictive maintenance

Key Distinction in Purpose

CFD: “Can we do this?” – Validates design feasibility and limits before implementation

  • Answers hypothetical scenarios
  • Provides go/no-go decisions for infrastructure changes
  • Design-time tool

ML: “What’s happening now?” – Monitors current operations and predicts immediate future

  • Provides real-time operational intelligence
  • Enables proactive issue detection
  • Runtime operational tool

The diagram shows these are complementary approaches: CFD for design validation and ML for operational excellence, each serving distinct phases of data center lifecycle management.
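The ML side's anomaly detection can be sketched with a simple rolling z-score over a temperature stream. The window size, threshold, and readings below are illustrative assumptions, not a production method:

```python
# Minimal sketch of runtime anomaly detection: flag temperature readings
# that deviate sharply from recent rolling statistics.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, z_threshold=3.0):
    history = deque(maxlen=window)
    anomalies = []
    for i, temp in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(temp - mu) / sigma > z_threshold:
                anomalies.append((i, temp))
        history.append(temp)
    return anomalies

# Steady inlet temperature with one sudden excursion at index 11.
stream = [24.0, 24.1, 23.9, 24.0, 24.2, 23.8, 24.1, 24.0, 23.9, 24.1,
          24.0, 31.5, 24.1, 24.0]
print(detect_anomalies(stream))   # -> [(11, 31.5)]
```

A CFD model could never react to this excursion in real time; the data-driven monitor catches it as soon as the reading arrives, which is the complementarity the diagram describes.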


Temperature Prediction in DC

Overall Structure

Top: CFD (Computational Fluid Dynamics) based approach
Bottom: ML (Machine Learning) based approach

CFD Approach (Top)

  • Basic Setup:
    • Spatial Definition & Material Properties: Physical space definition of the data center and material characteristics (servers, walls, air, etc.)
    • Boundary Conditions: Setting boundary conditions (inlet/outlet temperatures, airflow rates, heat sources, etc.)
  • Processing:
    • Configuration + Physical Rules: Application of physical laws (heat transfer equations, fluid dynamics equations, etc.)
    • Heat Flow: Heat flow calculations based on defined conditions
  • Output: Heat + Air Flow Simulation (physics-based heat and airflow simulation)
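As a minimal analogue of this physics-based pipeline, the sketch below solves a 1D heat equation with an explicit finite-difference scheme, with fixed endpoint temperatures standing in for the boundary conditions above. The grid, diffusivity, and temperatures are illustrative assumptions, not a real CFD model:

```python
# 1D analogue of the physics-based approach: explicit finite-difference
# solution of the heat equation dT/dt = alpha * d2T/dx2.
# Endpoints are held fixed, playing the role of boundary conditions.

def diffuse(temps, alpha=0.1, steps=500):
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            # Discrete Laplacian: heat flows toward the local average.
            new[i] = t[i] + alpha * (t[i-1] - 2*t[i] + t[i+1])
        t = new
    return t

# Cold-aisle inlet at 20C, a 45C hot spot mid-rack, exhaust wall at 30C.
profile = diffuse([20, 25, 45, 25, 30])
print([round(x, 1) for x in profile])   # -> [20, 22.5, 25.0, 27.5, 30]
```

Note the result follows entirely from the boundary values and the physical rule; no measured data is involved, which is exactly the contrast with the ML approach below.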

ML Approach (Bottom)

  • Data Collection:
    • Real-time monitoring through Metrics/Data Sensing
    • Operational data: Power (kW), CPU (%), Workload, etc.
    • Actual temperature measurements through Temperature Sensing
  • Processing: Pattern learning through Machine Learning algorithms
  • Output: Heat (with Location) Prediction (location-specific heat prediction)

Key Differences

CFD Method: Theoretical calculation through physical laws, using physical space definitions, material properties, and boundary conditions as inputs
ML Method: Data-driven approach that learns from actual operational data and sensor information to make predictions

The key distinction is that CFD performs simulation from predefined physical conditions, while ML learns from actual operational data collected during runtime to make predictions.
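A minimal sketch of the ML side, assuming synthetic telemetry: fit temperature as a linear function of power draw with closed-form least squares, then predict at an unseen operating point. The samples are invented, illustrative values, not real sensor data:

```python
# Data-driven sketch: learn temperature from an operational metric
# (power draw) via ordinary least squares, closed form, stdlib only.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx   # (slope, intercept)

power_kw = [3.0, 4.5, 6.0, 7.5]        # operational metric (synthetic)
temp_c   = [24.5, 27.0, 29.5, 32.0]    # measured rack temperature (synthetic)

slope, intercept = fit_line(power_kw, temp_c)
print(round(slope * 5.0 + intercept, 2))   # predicted temp at 5.0 kW -> 27.83
```

Where the CFD sketch needed boundary conditions and a physical rule, this model needs only observed (metric, temperature) pairs; a real deployment would use richer features (CPU %, workload, location) and a more expressive model.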
