Human Control

Human-Centered AI Decision-Making System

This diagram illustrates a human-in-the-loop AI system where humans maintain control over critical decision-making processes.

System Components

Top Process Flow:

  • Data Quality → Analysis → Decision
  • Sequential workflow with human oversight at each stage

Bottom Control Layer:

  • AI Work (center) – AI performs its processing in the central area
  • Ethics Human Rules (left side) – Human-defined ethical guidelines
  • Probability Control (right side) – Human oversight of AI confidence levels

Human Control Points:

  • Human Intent feeds into the system at the beginning
  • Final Decision remains with humans at the end
  • Human Control emphasized as the foundation of the entire system

Key Principles

  1. Human Agency: People retain ultimate decision-making authority
  2. AI as Tool: AI performs analysis but doesn’t make final decisions
  3. Ethical Oversight: Human-defined rules guide AI behavior
  4. Transparency: Probability controls allow humans to understand AI confidence
  5. Accountability: Clear human responsibility throughout the process

Summary: This represents a responsible AI framework where artificial intelligence enhances human decision-making capabilities while ensuring humans remain in control of critical choices and ethical considerations.

With Claude

Transmission Rate vs Propagation Speed

Key Concepts

Transmission Rate

  • Amount of data processable per unit time (bps – bits per second)
  • “Processing speed” concept – how much data can be handled simultaneously
  • Low transmission rate causes Transmission Delay
  • “Link is full, cannot send data”

Propagation Speed

  • Speed of signal movement through physical media (m/s – meters per second)
  • “Travel speed” concept – how fast signals move
  • Slow propagation speed causes Propagation Delay
  • “Arrives late due to long distance”

Meaning of Delay

Two types of delays affect network performance through different principles. Transmission delay is packet size divided by transmission rate – the time to push data into the link. Propagation delay is distance divided by propagation speed – the time for signals to physically travel.
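The two formulas above can be sketched directly; the packet size, link rate, and distance below are hypothetical example values, not figures from the diagram.

```python
# Sketch of the two delay formulas (example link parameters are assumed).

def transmission_delay(packet_bits: float, rate_bps: float) -> float:
    """Time to push the whole packet onto the link: packet size / transmission rate."""
    return packet_bits / rate_bps

def propagation_delay(distance_m: float, speed_mps: float) -> float:
    """Time for the signal to physically travel the link: distance / propagation speed."""
    return distance_m / speed_mps

# Example: a 12,000-bit packet on a 1 Mbps link spanning 100 km of fiber (~2e8 m/s).
t_trans = transmission_delay(12_000, 1e6)   # 0.012 s  – dominated by link rate
t_prop = propagation_delay(100_000, 2e8)    # 0.0005 s – dominated by distance
print(t_trans, t_prop, t_trans + t_prop)
```

Note how the two delays respond to different remedies: doubling the link rate halves only the transmission term, while shortening the path shrinks only the propagation term.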

Two Directions of Technology Evolution

Bandwidth Expansion (More Data Bandwidth)

  • Improved data processing capability through transmission rate enhancement
  • Development of high-speed transmission technologies like optical fiber and 5G
  • No hard ceiling comparable to the speed of light – per-channel capacity is bounded (e.g., by the Shannon limit), but total bandwidth can keep growing via better modulation and additional channels

Path Optimization (Faster, Less Delay)

  • Faster response times through propagation delay improvement
  • Physical distance reduction, edge computing, optimal routing
  • Fundamental physical limits exist: cannot exceed speed of light (c = 3×10⁸ m/s)
  • Actual media is slower due to refractive index (optical fiber: ~2×10⁸ m/s)

Network communication involves two distinct “speed” concepts: Transmission Rate (how much data can be processed per unit time, in bps) and Propagation Speed (how fast signals physically travel, in m/s). While transmission rate can be improved continually through technological advancement, propagation speed faces an absolute physical limit – the speed of light – creating fundamentally different approaches to network optimization. Understanding this distinction is crucial because transmission delays call for bandwidth solutions, while propagation delays call for path optimization within unchangeable physical constraints.

With Claude

Small Errors in AI

Four Core Characteristics of AI Tasks (Left)

AI systems have distinctive characteristics that make them particularly vulnerable to error amplification:

  • Big Volume: Processing massive amounts of data
  • Long Duration: Extended computational operations over time
  • Parallel Processing: Simultaneous execution of multiple tasks
  • Interdependencies: Complex interconnections where components influence each other

Small Error Amplification (Middle)

Due to these AI characteristics, small initial errors become amplified in two critical ways:

  • Error Propagation & Data Corruption: Minor errors spread throughout the system, significantly impacting overall data quality
  • Delay Propagation & Performance Degradation: Small delays accumulate and cascade, severely affecting entire system performance
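The compounding effect described above can be made concrete with a toy calculation; the per-stage error rate and stage count below are chosen purely for illustration.

```python
# Toy illustration of error amplification: a tiny per-stage error rate
# compounds across many interdependent stages.
# (The 0.1% rate and 10,000 stages are illustrative assumptions.)

def pipeline_success_prob(per_stage_error: float, n_stages: int) -> float:
    """Probability that all n dependent stages complete without error."""
    return (1 - per_stage_error) ** n_stages

# A 0.1% error rate looks negligible for a single stage...
print(pipeline_success_prob(0.001, 1))       # 0.999
# ...but across 10,000 dependent steps, almost every run is affected.
print(pipeline_success_prob(0.001, 10_000))  # ~4.5e-05
```

The same multiplicative logic applies to accumulated delays: small per-stage slowdowns across long, interdependent pipelines compound rather than average out.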

Final Impact (Right)

  • Very High Energy Cost: Errors and performance degradation drive energy consumption far beyond what was anticipated

Key Message

The four inherent characteristics of AI (big volume, long duration, parallel processing, and interdependencies) create a perfect storm where small errors can amplify exponentially, ultimately leading to enormously high energy costs. This diagram serves as a warning about the critical importance of preventing small errors in AI systems before they cascade into major problems.

With Claude

AI DC Energy Optimization

Core Technologies for AI DC Power Optimization

This diagram systematically illustrates the core technologies for AI datacenter power optimization, showing power consumption breakdown by category and energy savings potential of emerging technologies.

Power Consumption Distribution:

  • Network: 5% – Data transmission and communication infrastructure
  • Computing: 50-60% – GPUs and server processing units (highest consumption sector)
  • Power: 10-15% – UPS, power conversion and distribution systems
  • Cooling: 20-30% – Server and equipment temperature management systems

Energy Savings by Rising Technologies:

  1. Silicon Photonics: 1.5-2.5% – Optical communication technology improving network power efficiency
  2. Energy-Efficient GPUs & Workload Optimization: 12-18% (5-7%) – AI computation optimization
  3. High-Voltage DC (HVDC): 2-2.5% (1-3%) – Smart management, high-efficiency UPS, modular power architecture, renewable energy integration
  4. Liquid Cooling & Advanced Air Cooling: 4-12% – Cooling system efficiency improvements
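The per-technology ranges above can be aggregated into a rough overall savings range. Summing the primary ranges is a simplification (savings in different subsystems interact and are not strictly additive), so treat the result as an upper-bound sketch.

```python
# Rough aggregation of the savings ranges listed above.
# Simplification: ranges are summed, though real subsystem savings
# are not strictly additive.

savings = {
    "Silicon Photonics": (1.5, 2.5),
    "Efficient GPUs & Workload Optimization": (12.0, 18.0),
    "High-Voltage DC (HVDC)": (2.0, 2.5),
    "Liquid & Advanced Air Cooling": (4.0, 12.0),
}

low = sum(lo for lo, hi in savings.values())
high = sum(hi for lo, hi in savings.values())
print(f"Combined potential savings: {low:.1f}% - {high:.1f}%")  # 19.5% - 35.0%
```

Unsurprisingly, the computing-side measures dominate the total, mirroring computing's 50-60% share of consumption.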

This framework presents an integrated approach to maximizing power efficiency in AI datacenters, addressing all major power consumption areas through targeted technological solutions.

With Claude

Human & Data with AI

Data Accumulation Perspective

History → Internet: All knowledge and information accumulated throughout human history is digitized through the internet and converted into AI training data. This consists of multimodal data including text, images, audio, and other formats.

Foundation Model: Large language models (LLMs) and multimodal models are pre-trained based on this vast accumulated data. Examples include GPT, BERT, CLIP, and similar architectures.

Human to AI: Applying Human Cognitive Patterns to AI

1. Chain of Thoughts

  • Implementation of human logical reasoning processes in the Reasoning stage
  • Mimicking human cognitive patterns that break down complex problems into step-by-step solutions
  • Replicating the human approach of “think → analyze → conclude” in AI systems
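In practice, chain-of-thought is often elicited at the prompt level. A minimal sketch of such a template follows; the wording is illustrative and not tied to any specific model's API.

```python
# Minimal chain-of-thought prompt template.
# The template wording is an illustrative assumption, not a
# specific model's required format.

def cot_prompt(question: str) -> str:
    """Wrap a question in a step-by-step reasoning scaffold."""
    return (
        f"Question: {question}\n"
        "Let's think step by step:\n"
        "1. Break the problem into smaller parts.\n"
        "2. Analyze each part.\n"
        "3. Combine the results into a conclusion.\n"
        "Answer:"
    )

print(cot_prompt("If a train travels 120 km in 1.5 hours, what is its average speed?"))
```

The scaffold mirrors the "think → analyze → conclude" pattern described above.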

2. Mixture of Experts

  • AI implementation of human expert collaboration systems utilized in the Experts domain
  • Architecting the way human specialists collaborate on complex problems into model structures
  • Applying the human method of synthesizing multiple expert opinions for problem-solving into AI
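The expert-synthesis idea can be sketched as a toy gating mechanism: a softmax gate weights the outputs of several "expert" functions. The experts and gate scores below are illustrative stand-ins, not a real model architecture.

```python
# Toy mixture-of-experts: a softmax gate weights the outputs of
# several "expert" functions (experts and gate scores are illustrative).

import math

def softmax(scores):
    """Convert raw gate scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_of_experts(x, experts, gate_scores):
    """Weighted combination of expert outputs for input x."""
    weights = softmax(gate_scores)
    return sum(w * expert(x) for w, expert in zip(weights, experts))

# Two simple experts; the gate strongly prefers the first.
experts = [lambda x: 2 * x, lambda x: x + 10]
out = mixture_of_experts(3.0, experts, gate_scores=[2.0, 0.0])
print(out)
```

In production MoE models the gate is learned and usually routes each input to only the top-k experts, but the weighted-synthesis principle is the same.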

3. Retrieval-Augmented Generation (RAG)

  • Implementing the human process of searching existing knowledge → generating new responses into AI systems
  • Systematizing the human approach of “reference material search → comprehensive judgment”
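The "search existing knowledge → generate a response" loop can be sketched end to end. This is a deliberately minimal toy: retrieval here is word overlap over a tiny in-memory corpus, whereas real RAG systems use vector embeddings and an LLM call for the generation step.

```python
# Minimal RAG sketch: retrieve relevant snippets by word overlap,
# then build an augmented prompt. Corpus and scoring are illustrative;
# real systems use embedding search plus an LLM for generation.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the question with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context above."

corpus = [
    "The data center uses liquid cooling for GPU racks.",
    "Employee onboarding takes five business days.",
    "Quarterly revenue grew by 12 percent.",
]
print(build_prompt("How are the GPU racks cooled?", corpus))
```

The same retrieve-then-generate shape scales to the personal, enterprise, and sovereign data tiers described below; only the corpus and access controls change.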

Personal/Enterprise/Sovereign Data Utilization

1. Personal Level

  • Utilizing individual documents, history, preferences, and private data in RAG systems
  • Providing personalized AI assistants and customized services

2. Enterprise Level

  • Integrating organizational internal documents, processes, and business data into RAG systems
  • Implementing enterprise-specific AI solutions and workflow automation

3. Sovereign Level

  • Connecting national or regional strategic data to RAG systems
  • Optimizing national security, policy decisions, and public services

Overall Significance: This architecture represents a Human-Centric AI system that transplants human cognitive abilities and thinking patterns into AI while utilizing multi-layered data from personal to national levels to evolve general-purpose AI (Foundation Models) into intelligent systems specialized for each level. It goes beyond simple data processing to implement human thinking methodologies themselves into next-generation AI systems.

With Claude