NEW Power

This image, titled “NEW POWER,” illustrates the paradigm shift in power structures in modern society.

Left Side (Past Power Structure):

  • Top: Silhouettes of people representing traditional hierarchical organizational structures
  • Bottom: Factories, smokestacks, and workers symbolizing the industrial age
  • Characteristic: power centered on “Quantity” (volume/scale)

Center (Transition Process):

  • Top: Icons representing databases and digital interfaces
  • Bottom: Technical elements symbolizing networks and connectivity
  • Characteristic: “Logic”-based systems

Right Side (New Power Structure):

  • Top: Grid-like array representing massive GPU clusters – the core computing resources of the AI era
  • Bottom: Icons symbolizing AI, cloud computing, data analytics, and other modern technologies
  • Characteristic: “Quantity?”, signaling the return of quantitative competition in a new dimension in the GPU era

This diagram illustrates a fascinating cycle in power structures. While ‘logical’ elements such as efficiency, innovation, and network effects dominated the digital transition period, ‘quantitative competition’ has returned to the core with the full advent of the AI era.

In other words, rather than smart algorithms or creative ideas, how many GPUs one can secure and operate has once again become the decisive competitive advantage. Just as the number of factories and machines determined national power during the Industrial Revolution, the image suggests we have entered a new era of ‘quantitative warfare’ in which GPU capacity determines dominance in the AI age.

With Claude

“Vectors” rather than Definitions

This image visualizes the core philosophy that “In the AI era, vector-based thinking is needed rather than simplified definitions.”

Paradigm Shift in the Upper Flow:

  • Definitions: Traditional linear and fixed textual definitions
  • Vector: Transformation into a multidimensional and flexible vector space
  • Context: Structure where clustering and contextual relationships emerge through vectorization

Modern Approach in the Lower Flow:

  1. Big Data: Complex and diverse forms of data
  2. Machine Learning: Processing through pattern recognition and learning
  3. Classification: Sophisticated vector-based classification
  4. Clustered: Clustering based on semantic similarity
  5. Labeling: Dynamic labeling considering context (this flow is sketched in code below)
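
A minimal sketch of this five-step flow, assuming Python with numpy and scikit-learn; the items, feature dimensions, and values are illustrative stand-ins for real embedding data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Steps 1-2: items represented as feature vectors
# dimensions: [redness, sweetness, crunchiness]
items = ["apple", "cherry", "banana", "mango"]
X = np.array([
    [0.90, 0.6, 0.8],   # apple
    [0.95, 0.7, 0.2],   # cherry
    [0.10, 0.8, 0.1],   # banana
    [0.20, 0.9, 0.2],   # mango
])

# Steps 3-4: vector-based grouping by feature similarity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 5: dynamic labeling -- describe each cluster by its members
for cluster_id in range(2):
    members = [items[i] for i, c in enumerate(kmeans.labels_) if c == cluster_id]
    print(f"cluster {cluster_id}: {members}")
```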

Core Insight: In the AI era, we must move beyond simplistic definitional thinking like “an apple is a red fruit” and understand an apple as a multidimensional vector encompassing color, taste, texture, nutritional content, cultural meaning, and more. This vector-based thinking enables richer contextual understanding and flexible reasoning, allowing us to solve complex real-world problems more effectively.
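
A minimal sketch of the “apple as a vector” idea, assuming Python with numpy; the feature dimensions and values are invented for illustration, not real embedding data:

```python
import numpy as np

# each item as a point in a small hand-made feature space
# dimensions: [redness, sweetness, crunchiness, water_content]
apple  = np.array([0.90, 0.6, 0.8, 0.85])
cherry = np.array([0.95, 0.7, 0.2, 0.80])
celery = np.array([0.05, 0.1, 0.9, 0.95])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# contextual closeness emerges from the vectors, not from a fixed definition
print(f"apple vs cherry: {cosine_similarity(apple, cherry):.3f}")  # higher
print(f"apple vs celery: {cosine_similarity(apple, celery):.3f}")  # lower
```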

Beyond simple classification or definition, this presents a new cognitive paradigm that emphasizes relationships and context. The image advocates for a fundamental shift from rigid categorical thinking to a nuanced, multidimensional understanding that better reflects how modern AI systems process and interpret information.

With Claude

High-Speed Interconnect

This image compares five major high-speed interconnect technologies:

NVLink (NVIDIA Link)

  • Speed: 900GB/s (NVLink 4.0)
  • Use Case: GPU-to-GPU and CPU-to-GPU connections in AI/HPC systems built on NVIDIA GPUs
  • Features: NVIDIA proprietary, dominates AI/HPC market
  • Maturity: Mature

CXL (Compute Express Link)

  • Speed: 128GB/s
  • Use Case: Memory pooling and general data-center memory expansion
  • Features: Supported by Intel, AMD, NVIDIA, Samsung; PCIe-based with chip-to-chip focus
  • Maturity: Maturing

UALink (Ultra Accelerator Link)

  • Speed: 800GB/s (estimated, UALink 1.0)
  • Use Case: AI clusters, GPU/accelerator interconnect
  • Features: Led by AMD, Intel, Broadcom, Google; NVLink alternative
  • Maturity: Early (2025 launch)

UCIe (Universal Chiplet Interconnect Express)

  • Speed: 896GB/s (electrical), 7Tbps (optical, not yet available)
  • Use Case: Chiplet-based SoC, MCM (Multi-Chip Module)
  • Features: Supported by Intel, AMD, TSMC, NVIDIA; chiplet design focus
  • Maturity: Early stage; the optical version promises excellent performance

CCIX (Cache Coherent Interconnect for Accelerators)

  • Speed: 128GB/s (PCIe 5.0-based)
  • Use Case: ARM servers, accelerators
  • Features: Supported by ARM, AMD, Xilinx; ARM-based server focus
  • Maturity: Low, limited power efficiency

Summary: All technologies are converging toward higher bandwidth, lower latency, and chip-to-chip connectivity to address the growing demands of AI/HPC workloads. The effectiveness varies by ecosystem, with specialized solutions like NVLink leading in performance while universal standards like CXL focus on broader compatibility and adoption.
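
To give a rough sense of what these bandwidth figures mean in practice, the sketch below (Python; the 80 GB payload is an illustrative stand-in for the weights of a large model) computes best-case transfer times. Real transfers are slower due to protocol overhead and topology, so treat these as lower bounds, not benchmarks:

```python
# peak bandwidth figures from the comparison above, in GB/s
links_gb_per_s = {
    "NVLink 4.0": 900,
    "UCIe (electrical)": 896,
    "UALink 1.0 (est.)": 800,
    "CXL": 128,
    "CCIX (PCIe 5.0)": 128,
}

PAYLOAD_GB = 80  # illustrative: roughly the weights of a large LLM

for name, bandwidth in links_gb_per_s.items():
    print(f"{name:>18}: {PAYLOAD_GB / bandwidth * 1000:6.1f} ms")
```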

With Claude

The Evolution of “Difference”

This image is a conceptual diagram showing how the domain of “Difference” continuously expands.

Two Drivers of Difference Expansion

Top Flow: Natural Emergence of Difference

  • Existence → Multiplicity → Influence → Change
  • The process by which new differences are continuously generated naturally in the universe and natural world.

Bottom Flow: Human Tools for Recognizing Difference

  • Letters & Digits → Computation & Memory → Computing Machine → Artificial Intelligence (LLM)
  • The evolution of tools that humans have developed to interpret, analyze, and process differences.

Center: Continuous Expansion Process of Difference Domain

The interaction between these two drivers creates a process that continuously expands the domain of difference, shown in the center:

Emergence of Difference

  • The stage where naturally occurring new differences become concretely manifest
  • Previously non-existent differences are continuously generated

↓ (Continuous Expansion)

Recognition of Difference

  • The stage where emerged differences are accepted as meaningful through human interpretation and analytical tools
  • Newly recognized differences are incorporated into the realm of distinguishable domains

Final Result: Expansion of Differentiation & Distinction

Differentiation & Distinction

  • Microscopically: More sophisticated digital and numerical distinctions
  • Macroscopically: Creation of new conceptual and social domains of distinction

Core Message

The natural emergence of difference and the development of human recognition tools create mutual feedback that continuously expands the domain of difference.

As the handwritten note on the left indicates (“AI expands the boundary of perceivable difference”), the speed and scope of this expansion have dramatically increased in the AI era. This represents a cyclical expansion process in which new differences emerging from nature are recognized through increasingly sophisticated tools, and these recognized differences in turn enable new natural changes.

With Claude

PIM (Processing-in-Memory)

This image illustrates the evolution of computing architectures, comparing three major computing paradigms:

1. General Computing (Von Neumann Architecture)

  • Traditional CPU-memory structure
  • CPU and memory are separate; the CPU processes complex instructions
  • Data and instructions move between memory and CPU

2. GPU Computing

  • Collaborative structure between CPU and GPU
  • GPU performs simple mathematical operations with massive parallelism
  • Provides high throughput
  • Uses new types of memory specialized for AI computing (such as HBM)

3. PIM (Processing-in-Memory)

The core focus of the image, PIM features the following characteristics:

Core Concept:

  • A “Simple Computing” approach that performs operations directly within new types of memory
  • Integrated structure of memory and processor

Key Advantages:

  • Data Movement Minimization: Reduces in-memory copy/reordering operations
  • Parallel Data Processing: Parallel processing of matrix/vector operations
  • Repetitive Simple Operations: Optimized for add/multiply/compare operations
  • “Simple Computing”: Efficient operations without complex control logic

PIM is gaining attention as a next-generation computing paradigm that can significantly improve energy efficiency and performance compared to existing architectures, particularly for tasks involving massive repetitive simple operations such as AI/machine learning and big data analytics.
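
The data-movement advantage can be made concrete with a toy model (Python; the vector size and the 32-byte command descriptor are assumptions for illustration) that counts bytes crossing the memory bus for a vector add c = a + b:

```python
N = 1_000_000   # elements per vector
ELEM = 4        # bytes per float32 element

# Conventional (von Neumann) path: read a and b across the bus, write c back
conventional_bytes = 3 * N * ELEM

# PIM-style path: operands and result stay inside the memory device; only
# a small command descriptor (opcode + addresses + length) crosses the bus
pim_bytes = 32

print(f"conventional: {conventional_bytes / 1e6:.0f} MB moved over the bus")
print(f"PIM:          {pim_bytes} bytes moved over the bus")
print(f"reduction:    ~{conventional_bytes // pim_bytes:,}x less bus traffic")
```

The point is not the exact ratio but the shape of the saving: for repetitive simple operations, the command is tiny compared to the data it touches, which is exactly where PIM pays off.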

With Claude