AI Core Internals (1+4)

This image is a diagram titled “AI Core Internals (1+4)” that illustrates the core components of an AI system and their interconnected relationships.

The diagram contains 5 main components:

  1. Data – Located in the top left, represented by database and document icons.
  2. Hardware Infra – Positioned in the top center, depicted with a CPU/chipset icon with radiating connections.
  3. Foundation (AI) Model – Located in the top right, shown as an AI network node with multiple connection points.
  4. Energy Infra – Positioned at the bottom, represented by wind turbine and solar panel icons.
  5. User Group – On the far right, depicted as a collection of diverse people icons in various colors.

The arrows show the flow and connections between components:

  • From Data to Hardware Infra
  • From Hardware Infra to the Foundation (AI) Model
  • From the Foundation (AI) Model to the User Group
  • From Energy Infra to Hardware Infra (power supply)
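The flows above can be sketched as a small directed graph. The component names come from the diagram; the adjacency structure and the path-finding helper are illustrative assumptions:

```python
# The "1+4" core diagram as a directed graph (edges follow the arrows).
flows = {
    "Data": ["Hardware Infra"],
    "Energy Infra": ["Hardware Infra"],            # power supply edge
    "Hardware Infra": ["Foundation (AI) Model"],
    "Foundation (AI) Model": ["User Group"],
}

def paths_to_users(start, graph, target="User Group"):
    """Return every path from `start` to the user group."""
    if start == target:
        return [[target]]
    results = []
    for nxt in graph.get(start, []):
        for tail in paths_to_users(nxt, graph, target):
            results.append([start] + tail)
    return results

print(paths_to_users("Data", flows))
```

Tracing the paths makes the ecosystem point concrete: every component ultimately feeds the user group through the hardware and model layers.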

This diagram visually explains how modern AI systems integrate data, computing hardware, AI models, and energy infrastructure to deliver services to end users. It effectively demonstrates the interdependent ecosystem required for AI operations, highlighting both the technical components (data, hardware, models) and the supporting infrastructure (energy) needed to serve diverse user communities.

With Claude

800V HVDC

AI Data Center: Server-Side Power Management Transition from AC to DC

Traditional AC Server Power Management (Upper Section)

AC Power Distribution Chain

  1. 6.6kV to 380V AC: Primary voltage step-down transformation
  2. UPS (Outage Fast Recovery): Backup power for short-term outages
  3. Distribution Cutoff, Regulation: Power distribution control and voltage regulation
  4. AC to DC for Server: Final AC-DC conversion at server level
  5. Output: AC 380V (kW level)

New DC Server Power Management Technology (Lower Section)

DC Power Distribution Chain

  1. AC to DC Conv 800V HVDC: Direct high-voltage DC conversion
  2. ESS (Energy Storage System): Integrated energy storage solution
  3. Digital Control: Advanced digital power management
  4. DC to DC Down for Server: DC-DC step-down conversion for servers
  5. Output: HVDC 800V (MW level)

Key Technology Advantages of DC Transition

Power Quality Enhancement

  • PF Up, Harmonics Dn: Improved power factor and reduced harmonic distortion

Advanced Backup Capability

  • Long time Backup Peak Shaving: Extended backup duration with intelligent peak load management

Operational Efficiency

  • Lower Loss, High Density, Easy Control: Reduced conversion losses, compact footprint, simplified control architecture

Scalable Power Delivery

  • High Power Usage Available: Enhanced power capacity to meet AI server demands

Server-Side Power Management Transformation

This diagram illustrates the technological shift in server-side power management from traditional AC distribution (kW-level) to advanced DC distribution (MW-level), specifically designed to address the high-power requirements and efficiency demands of AI data centers. The DC approach eliminates multiple AC-DC conversion stages, resulting in improved efficiency and better power management capabilities.
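The efficiency argument can be seen by multiplying per-stage efficiencies along each chain. The stage efficiency values below are illustrative assumptions, not figures from the diagram:

```python
# Hypothetical per-stage efficiencies to show why fewer conversion
# stages raise end-to-end efficiency (values are assumed).
AC_CHAIN = {                                   # traditional AC path
    "6.6kV -> 380V AC transformer":  0.98,
    "UPS (double conversion)":       0.94,
    "distribution / regulation":     0.99,
    "AC -> DC at the server PSU":    0.94,
}
DC_CHAIN = {                                   # 800V HVDC path
    "AC -> DC 800V HVDC rectifier":  0.97,
    "DC distribution":               0.995,
    "DC -> DC step-down for server": 0.97,
}

def chain_efficiency(stages):
    """End-to-end efficiency is the product of the stage efficiencies."""
    eff = 1.0
    for e in stages.values():
        eff *= e
    return eff

print(f"AC chain: {chain_efficiency(AC_CHAIN):.1%}, "
      f"DC chain: {chain_efficiency(DC_CHAIN):.1%}")
```

With these assumed numbers the shorter DC chain comes out several percentage points ahead, which at MW scale is a substantial saving.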


Platform

This image is a diagram titled “Platform” that explains three types of platform business models.

3 Platforms:

  1. Information Sharing Platform (Sharing Information / Who? Information)
    • Users share information, but the “Who?” question raises concerns about who actually provides the information and who captures its value
    • Highlights the structural issue where users provide content for free while platforms monopolize advertising revenue
  2. Platform-Controlled Work (Work for Platform / Controlled by Platform)
    • Workers ostensibly work for the platform but are in fact controlled by it
    • Reflects the reality of platform labor where workers are classified as “independent contractors” but are actually dependent on the platform’s algorithms, fee policies, and rating systems
    • Represents the unequal power relationships found in the gig economy
  3. Platform Usage (Using Platform)
    • Users actively utilize the platform to create new value
    • Shows high user satisfaction with the “I LOVE THIS” indicator
    • Represents a positive relationship where users proactively leverage platform tools

Bottom Integrated Concept:

  • “Together” → “on the platform” → “make values”

Key Message: This diagram demonstrates that platforms are not neutral technologies but embody different power relationships and value distribution structures. The second type particularly critiques the structural problems of platform labor, revealing that despite the surface narrative of “creating value together,” unequal power relationships actually exist. This is a critical visualization that analyzes the various interests and power structures hidden behind the platform economy.


Prediction with data

This image illustrates a comparison between two approaches for Prediction with Data.

Left Side: Traditional Approach (Setup First Configuration)

The traditional method consists of:

  • Condition: 3D environment and object locations
  • Rules: Complex physics laws
  • Input: 1+ cases
  • Output: 1+ prediction results

This approach relies on pre-established rules and physical laws to make predictions.
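As a minimal sketch of this rule-based style, a single physics formula (free fall under constant gravity, a hypothetical example not taken from the diagram) turns a condition directly into a prediction, with no data involved:

```python
# "Setup first": a hand-written rule predicts the outcome from the
# condition alone. Here the rule is t = sqrt(2h / g).
G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m):
    """Predict the time (s) for an object to fall from `height_m`."""
    return (2 * height_m / G) ** 0.5

print(round(fall_time(20.0), 2))  # seconds
```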

Right Side: Modern AI/Machine Learning Approach

The modern method follows these steps:

  1. Huge Data: Massive datasets represented in binary code
  2. Machine Learning: Pattern learning from data
  3. AI Model: Trained artificial intelligence model
  4. Real-Time High Resolution Data: High-quality data streaming in real-time
  5. Prediction Anomaly: Final predictions and anomaly detection
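The five steps above can be sketched end to end in miniature: fit a pattern to observed data with ordinary least squares (pure Python), then use the fitted model for prediction and anomaly detection. The sample data, the sqrt-of-height feature, and the anomaly threshold are all illustrative assumptions:

```python
# Data-driven counterpart: learn the height -> fall-time pattern from
# observations instead of assuming a physics rule up front.
data = [(5, 1.01), (10, 1.43), (20, 2.02), (45, 3.03)]  # (height m, time s)
xs = [h ** 0.5 for h, _ in data]   # feature: sqrt(height)
ys = [t for _, t in data]

# Ordinary least squares for slope and intercept.
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def predict(height):
    """Predict fall time from the learned pattern."""
    return slope * height ** 0.5 + intercept

def is_anomaly(height, observed, tol=0.3):
    """Flag readings that deviate from the learned pattern."""
    return abs(observed - predict(height)) > tol

print(round(predict(30.0), 2))
print(is_anomaly(30.0, 9.9))  # far off the learned pattern
```

The same residual check stands in for the "Prediction / Anomaly" stage: new real-time readings are compared against the model, and large deviations are flagged.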

Key Differences

The most significant difference is highlighted by the question “Believe first ??” at the bottom. This represents a fundamental philosophical difference: the traditional approach starts by “believing” in predefined rules, while the AI approach learns patterns from data to make predictions.

Additionally, the AI approach features “Longtime Learning Verification,” indicating continuous model improvement through ongoing learning and validation processes.

The diagram effectively contrasts rule-based prediction systems with data-driven machine learning approaches, showing the evolution from deterministic, physics-based models to adaptive, learning-based AI systems.


Server Room Workload

This diagram illustrates a server room thermal management system workflow.

System Architecture

Server Internal Components:

  • AI Workload, GPU Workload, and Power Workload are connected to the CPU, generating heat

Temperature Monitoring Points:

  • Supply Temp: Cold air supplied from the cooling system
  • CoolZone Temp: Temperature in the cooling zone
  • Inlet Temp: Server inlet temperature
  • Outlet Temp: Server outlet temperature
  • Hot Zone Temp: Temperature in the heat exhaust zone
  • Return Temp: Hot air return to the cooling system

Cooling System:

  • The Cooling Workload on the left manages overall cooling
  • Closed-loop cooling system that circulates back via Return Temp

Temperature Delta Monitoring

The bottom flowchart shows how each workload affects temperature changes (ΔT):

  • Delta temperature sensors (Δ1, Δ2, Δ3) measure temperature differences across each section
  • This data enables analysis of each workload’s thermal impact and optimization of cooling efficiency
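A minimal sketch of the Δ1/Δ2/Δ3 computation across the monitoring points, using assumed sample readings (the temperatures and section groupings are illustrative, not from the diagram):

```python
# Sample sensor readings (°C) at the diagram's monitoring points.
temps = {
    "supply":   18.0,
    "coolzone": 19.0,
    "inlet":    20.5,
    "outlet":   34.0,
    "hotzone":  33.0,
    "return":   31.5,
}

def deltas(t):
    """Temperature change across each section of the cooling loop."""
    return {
        "d1_cool_path": t["inlet"] - t["supply"],   # supply air -> server inlet
        "d2_server":    t["outlet"] - t["inlet"],   # heat added by workloads
        "d3_hot_path":  t["return"] - t["outlet"],  # mixing/losses on return
    }

print(deltas(temps))
```

A large `d2_server` relative to the path deltas indicates that server heat dominates the loop, which is the signal the workload-vs-ΔT analysis relies on.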

This system appears to be a data center thermal management solution designed to effectively handle high heat loads from AI and GPU-intensive workloads. The comprehensive temperature monitoring allows for precise control and optimization of the cooling infrastructure based on real-time workload demands.
