Dynamic Voltage and Frequency Scaling (in GPU)

This image illustrates the DVFS (Dynamic Voltage and Frequency Scaling) system workflow, which is a power management technique that dynamically adjusts CPU/GPU voltage and frequency to optimize power consumption.

Key Components and Operation Flow

1. Main Process Flow (Top Row)

  • Workload Init → Workload Analysis → DVFS Policy Decision → Clock Frequency Adjustment → Voltage Adjustment → Workload Execution → Workload Finish

2. Core System Components

Power State Management:

  • Basic power states: P0~P12 (P0 = highest performance, P12 = lowest power)
  • Real-time monitoring through PMU (Power Management Unit)

Analysis & Decision Phase:

  • Applies dynamic power consumption formula using algorithms
  • Considers thermal limits in analysis
  • Selects new power state (High: P0-P2, Low: P8-P10)
  • P-State changes occur within 10μs~1ms
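The analysis step rests on the standard dynamic-power relation P_dyn ≈ C·V²·f. The sketch below shows how such a policy decision might look in code; the P-state table, effective capacitance, and utilization thresholds are illustrative assumptions for demonstration, not vendor values:

```python
# Illustrative DVFS policy: pick a P-state from GPU utilization and
# temperature, then estimate dynamic power as P = C_eff * V^2 * f.
# All table entries and thresholds are made-up demonstration values.

# Hypothetical P-state table: name -> (core frequency in MHz, voltage in V)
P_STATES = {
    "P0": (1000, 1.10),   # highest performance
    "P2": (900, 1.05),
    "P8": (600, 0.80),    # low power
    "P10": (450, 0.75),
}

def dynamic_power_w(freq_mhz: float, voltage_v: float,
                    c_eff: float = 2e-7) -> float:
    """Dynamic power P = C_eff * V^2 * f, with f converted from MHz to Hz."""
    return c_eff * voltage_v ** 2 * freq_mhz * 1e6

def select_pstate(utilization: float, temp_c: float,
                  temp_limit_c: float = 83.0) -> str:
    """Thermal limit overrides performance; otherwise map load to a band."""
    if temp_c >= temp_limit_c:
        return "P8"               # throttle under thermal pressure
    if utilization > 0.80:
        return "P0"               # high-load band (P0-P2)
    if utilization > 0.40:
        return "P2"
    return "P10"                  # low-power band (P8-P10)

state = select_pstate(utilization=0.90, temp_c=65.0)
freq, volt = P_STATES[state]
print(state, dynamic_power_w(freq, volt), "W")
```

Note how a single quadratic voltage term makes low-voltage states disproportionately cheap, which is exactly why DVFS drops voltage together with frequency rather than frequency alone.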

Frequency Adjustment (PLL – Phase-Locked Loop):

  • Adjusts GPU core and memory clock frequencies
  • Typical ranges: memory 1,200MHz~1,410MHz, core 600MHz~1,000MHz
  • Adjustment time: 10-100 microseconds

Voltage Adjustment (VRM – Voltage Regulator Module):

  • Adjusts voltage supplied to GPU core and memory
  • Typical range: 1.1V (P0) to 0.8V (P8)
  • VRM stabilizes voltage within tens of microseconds
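The PLL and VRM steps must be sequenced carefully (a standard DVFS rule, though the diagram does not spell it out): when raising performance, voltage rises before frequency so the faster clock runs at a stable supply; when lowering, frequency drops first. A minimal sketch with hypothetical hardware-setter hooks:

```python
# Safe DVFS transition ordering. The setter callbacks stand in for real
# VRM/PLL hardware interfaces; this is a conceptual sketch, not a driver.

def transition(cur, target, set_voltage, set_frequency):
    """cur/target: (freq_mhz, voltage_v) tuples; setters are hardware hooks."""
    cur_f, _cur_v = cur
    tgt_f, tgt_v = target
    if tgt_f > cur_f:
        set_voltage(tgt_v)     # VRM settles first (tens of microseconds)
        set_frequency(tgt_f)   # then the PLL relocks (~10-100 microseconds)
    else:
        set_frequency(tgt_f)   # lower the clock first
        set_voltage(tgt_v)     # then it is safe to lower the supply
    return target

# Ramping up from a low-power to a high-performance state:
ops = []
transition((600, 0.80), (1000, 1.10),
           set_voltage=lambda v: ops.append(("V", v)),
           set_frequency=lambda f: ops.append(("F", f)))
```

Running the example records the voltage step before the frequency step, mirroring the ordering rule above.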

3. Real-time Feedback Loop

The system operates a continuous feedback loop that readjusts P-states in real-time based on workload changes, maintaining optimal balance between performance and power efficiency.
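The feedback cycle can be pictured as a simple control step run on every PMU sample: nudge the P-state index up or down depending on utilization. The thresholds and step size here are illustrative assumptions (0 = P0, highest performance; 12 = P12, lowest power):

```python
# Toy closed-loop sketch of the feedback cycle: each tick, the policy
# nudges the P-state index toward the current load. Demonstration values.

def step(pstate: int, utilization: float) -> int:
    if utilization > 0.80:
        return max(0, pstate - 2)    # ramp up performance
    if utilization < 0.30:
        return min(12, pstate + 2)   # ramp down to save power
    return pstate                    # hold the current state

p = 12                               # start in the lowest-power state
for util in [0.10, 0.90, 0.95, 0.20]:
    p = step(p, util)
```

Stepping gradually rather than jumping straight to P0 or P12 keeps each transition within the 10μs~1ms budget and avoids oscillation on bursty workloads.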

4. Execution Phase

The GPU executes workloads at the new frequency and voltage settings, applying further frequency and voltage adjustments asynchronously as the workload evolves. After completion, the system transitions to low-power states (e.g., P10, P12) to conserve energy.


Summary: Key Benefits of DVFS

DVFS is critical for AI data centers because it optimizes GPU power management to maximize overall power efficiency. By intelligently scaling thousands of GPUs to match AI workload demands, DVFS can reduce total data center power consumption by an estimated 30-50% while maintaining peak performance during training and inference, making it essential for sustainable, cost-effective AI infrastructure at scale.

With Claude

Evolutions and THE NEXT?

This illustration depicts the evolution of human-machine interaction in four stages:

  1. Manual Tools – A human uses basic tools, representing traditional manual labor.
  2. Machine Operation – A worker operates a mechanical machine, indicating the industrial age.
  3. Programmed Automation – A robotic system with a CPU chip functions automatically based on human-developed programs.
  4. AI Collaboration – An AI-powered robot with a GPU chip works interactively with a human, showcasing the era of intelligent collaboration.

This is from “https://eeumee.net/2025/05/28/machine-changes/”

New Expert with AI

Diagram Overview

This diagram illustrates the structural transformation of the professional services market in the AI era.

Current Situation (Left Side)

Users pay for three levels of professional services:

  • A+ Expert: Top-tier expertise and specialized knowledge
  • Expert: Mid-level professional services
  • Agent: Basic professional task handling

AI Era Transformation (Right Side)

Market Polarization:

  • A+ Expert Retained: “keep” – Highest-level human expertise remains essential
  • Mid-tier Replacement: “Replace” – Expert and Agent roles substituted by AI systems
  • Cost Concentration: Payment structure shifts from 3 categories → 2 categories

Key Implications

  1. Economic Efficiency: Reduced costs for mid-tier professional services
  2. Market Polarization: Premium human experts vs. AI systems structure
  3. Enhanced Accessibility: Democratization of professional services through AI
  4. Structural Transformation: Fundamental reshaping of professional service industries

Economic Impact

  • Winners: A+ Experts (strengthened monopolistic position), AI service providers, general consumers
  • Disrupted: Mid-tier professionals (Expert and Agent levels)
  • Market Change: Structural reorganization and pricing transformation in professional services

Conclusion

This diagram effectively demonstrates not just job displacement, but the economic restructuring of professional service markets, showing how AI-driven substitution leads to cost structure changes and market bipolarization.

With Claude

Machine Changes

This image titled “Machine Changes” visually illustrates the evolution of technology and machinery across different eras.

The diagram progresses from left to right with arrows showing the developmental stages:

Stage 1 (Left): Manual Labor Era

  • Tool icons (wrench, spanner)
  • Hand icon
  • Worker icon

Representing basic manual work using simple tools.

Stage 2: Mechanization Era

  • Manufacturing equipment and machinery
  • Power-driven machines

Depicting the industrial revolution period with mechanized production.

Stage 3 (Blue section): Automation and Computer Era

  • Power supply systems
  • CPU/processor chips
  • Computer systems
  • Programming code

Representing automation through electronics and computer technology.

Stage 4 (Purple section): AI and Smart Technology Era

  • Robots
  • GPU processors
  • Artificial brain/AI
  • Interactive interfaces

Representing modern smart technology integrated with artificial intelligence and robotics.

Additional Insight: The transition from the CPU era to the GPU era marks a fundamental shift in what drives technological capability. In the CPU era, program logic was the critical factor – the sophistication of algorithms and code determined system performance. However, in the GPU era, training data has become paramount – the quality, quantity, and diversity of data used to train AI models now determines the intelligence and effectiveness of these systems. This represents a shift from logic-driven computation to data-driven learning.

Overall, this infographic captures humanity’s technological evolution: Manual Labor → Mechanization → Automation → AI/Robotics, highlighting how the foundation of technological advancement has evolved from human skill to mechanical power to programmed logic to data-driven intelligence.

With Claude

Personal (User/Expert) Data Service

System Overview

The Personal Data Service is an open expert RAG service platform based on MCP (Model Context Protocol). This system creates a bidirectional ecosystem where both users and experts can benefit mutually, enhancing accessibility to specialized knowledge and improving AI service quality.

Core Components

1. User Interface (Left Side)

  • LLM Model Selection: Users can choose their preferred language model or MoE (Mixture of Experts)
  • Expert Selection: Select domain-specific experts for customized responses
  • Prompt Input: Enter specific questions or requests

2. Open MCP Platform (Center)

  • Integrated Management Hub: Connects and coordinates all system components
  • Request Processing: Matches user requests with appropriate expert RAG systems
  • Service Orchestration: Manages and optimizes the entire workflow

3. LLM Service Layer (Right Side)

  • Multi-LLM Support: Integration with various AI model services
  • OAuth Authentication: Direct user selection of paid/free services
  • Vendor Neutrality: Open architecture independent of specific AI services

4. Expert RAG Ecosystem (Bottom)

  • Specialized Data Registration: Building expert-specific knowledge databases through RAG
  • Quality Management System: Ensuring reliability through evaluation and reputation management
  • Historical Logs: Continuous quality improvement through service usage records

Key Features

  1. Bidirectional Ecosystem: Users obtain expert answers while experts monetize their knowledge
  2. Open Architecture: Scalable platform based on MCP standards
  3. Quality Assurance: Expert and answer quality management through evaluation systems
  4. Flexible Integration: Compatibility with various LLM services
  5. Autonomous Operation: Direct data management and updates by experts
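The request flow described above (user picks an LLM and an expert, the hub retrieves context from that expert's RAG store, the chosen LLM answers) can be sketched in a few lines. All names here are hypothetical and this is not an actual MCP SDK API; the retrieval is a toy word-overlap ranking standing in for real vector search:

```python
# Conceptual sketch of the platform's hub role: retrieve expert context,
# then delegate to the user-selected LLM. Names and logic are illustrative.
from dataclasses import dataclass, field

@dataclass
class ExpertRAG:
    name: str
    documents: list[str] = field(default_factory=list)

    def retrieve(self, prompt: str, k: int = 2) -> list[str]:
        # Toy retrieval: rank documents by words shared with the prompt.
        words = set(prompt.lower().split())
        scored = sorted(self.documents,
                        key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

def handle_request(prompt: str, expert: ExpertRAG, llm_call) -> str:
    """Hub: fetch expert context, then call the selected LLM with it."""
    context = expert.retrieve(prompt)
    return llm_call(prompt, context)
```

Keeping `llm_call` as an injected callback is what makes the hub vendor-neutral: any OAuth-authenticated LLM service can be plugged in without touching the expert RAG layer.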

With Claude

GPU Server Room: Changes

Image Overview

This dashboard displays the cascading resource changes that occur when GPU workload increases in an AI data center server room monitoring system.

Key Change Sequence (Estimated Values)

  1. GPU Load Increase: 30% → 90% (AI computation tasks initiated)
  2. Power Consumption Rise: 0.42kW → 1.26kW (3x increase)
  3. Temperature Delta Rise: 7°C → 17°C (increased heat generation)
  4. Cooling System Response:
    • Water flow rate: 200 LPM → 600 LPM (3x increase)
    • Fan speed: 600 RPM → 1200 RPM (2x increase)

Operational Prediction Implications

  • Operating Costs: Approximately 3x increase from baseline expected
  • Spare Capacity: 40% cooling system capacity remaining
  • Expansion Capability: Current setup can accommodate additional 67% GPU load
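These figures can be cross-checked with a short back-of-the-envelope script. The inputs are the post's estimated values; the 1,000 LPM rated cooling capacity is an assumption chosen to be consistent with the quoted 40% headroom:

```python
# Sanity check of the dashboard figures, assuming power and cooling flow
# scale linearly with GPU load. All inputs are the post's estimates; the
# rated cooling capacity is an assumed value, not a measurement.

def scale(baseline: float, load_from_pct: float, load_to_pct: float) -> float:
    """Scale a quantity linearly with GPU load."""
    return baseline * load_to_pct / load_from_pct

power_kw = scale(0.42, 30, 90)    # 0.42 kW -> 1.26 kW (3x increase)
flow_lpm = scale(200, 30, 90)     # 200 LPM -> 600 LPM (3x increase)

COOLING_MAX_LPM = 1000            # assumed rated capacity
headroom = 1 - flow_lpm / COOLING_MAX_LPM             # 0.40 -> 40% spare
extra_load = (COOLING_MAX_LPM - flow_lpm) / flow_lpm  # ~0.67 -> +67% GPU load
```

Under the linear-scaling assumption, the 400 LPM of unused flow relative to the current 600 LPM is where the "additional 67% GPU load" figure comes from.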

This AI data center monitoring dashboard illustrates the cascading resource changes when GPU workload increases from 30% to 90%, triggering proportional increases in power consumption (3x), cooling flow rate (3x), and fan speed (2x). The system demonstrates predictable operational scaling patterns, with current cooling capacity showing 40% remaining headroom for additional GPU load expansion.

Note: All numerical values are estimated figures for demonstration purposes and do not represent actual measured data.

With Claude