Human data

This image, titled “Data?”, presents a philosophical perspective on data and AI.

Core Concept:

Human Perception is Limited

  • Compared to the infinite complexity of the real world, the scope that humans can perceive and define is constrained
  • The gray area labeled “Human perception is limited” visualizes this boundary of recognition

Two Dimensions of AI Application:

  1. Deterministic Data
    • Data domains that humans have already defined and structured
    • Contains clear rules and patterns that AI can process in predictable ways
    • Represents traditional AI problem-solving approaches
  2. Non-deterministic Data
    • Data from domains that humans haven’t fully defined
    • Raw data from the real world with high uncertainty and complexity
    • Areas where AI must discover and utilize patterns without prior human definitions

Key Insight: This diagram illustrates that AI’s true potential extends beyond solving pre-defined human problems. AI can open new possibilities by transcending human cognitive boundaries and discovering complex patterns in the real world that we have not yet defined or understood. Yet a crucial human element remains: even as AI ventures beyond human-defined problem spaces, humans still determine how AI-discovered insights are interpreted, validated, and responsibly applied. The diagram suggests a collaborative relationship in which AI expands our perceptual capabilities while human judgment and decision-making remain fundamental in guiding how these expanded possibilities are understood and utilized.

With Claude

Road to AI

This image shows a flowchart titled “Road to AI” that illustrates the step-by-step process of AI development.

Main Stages:

  1. Digitization – Real-world data (represented by a globe icon) is converted into digital format (binary code)
  2. Central Processing Area – Data is processed through network structures, where two key processes occur in parallel:
    • Verification – Confirming data accuracy
    • Tuning – Improving the model through “Higher Resolution” and “More Relative Data”
  3. AI System – Finally implemented as an AI robot

Development Phases (Right Side):

  • “Easy First, Everybody Know” – Starting with simple tasks that everyone can understand
  • “Again & Again” – Iterative improvement process
  • “More Difficult & Auto Decision” – Advanced stage with complex and automated decision-making

This diagram visually represents how AI development progresses from simple data digitization through continuous verification and tuning, gradually evolving into sophisticated AI systems capable of complex automated decision-making. The process emphasizes the iterative nature of AI development, moving from basic, universally understood concepts to increasingly complex autonomous systems.

With Claude

Sovereign AI Foundation Model

This diagram illustrates the concept of “Sovereign AI Foundation Model” and explains why it’s necessary.

Structure Analysis

Left Side (Infrastructure Elements):

  • Data
  • Hardware Infrastructure (Hardware Infra)
  • Energy Infrastructure (Energy Infra)

These three elements are connected to the central Foundation AI Model.

Why Sovereign AI is Needed (Four boxes on the right)

  1. Sovereignty & Security
    • Securing national AI technology independence
    • Data security and technological autonomy
    • Digital Sovereignty, National Security, Avoid Tech-Colonization, Data Jurisdiction, On-Premise Control.
  2. Industrial Competitiveness
    • Strengthening AI-based competitiveness of national industries
    • Gaining advantages in technological hegemony competition
    • Ecosystem Enabler, Beyond ‘Black Box’, Deep Customization, Innovation Platform, Future Industries.
  3. Cultural & Linguistic Integrity
    • Developing AI models specialized for national language and culture
    • Preserving cultural values and linguistic characteristics
    • Cultural Context, Linguistic Nuance, Mitigate Bias, Preserve Identity, Social Cohesion.
  4. National Data Infrastructure
    • Systematic data management at the national level
    • Securing data sovereignty
    • Data Standardization, Break Data Silos, High-Quality Structured Data, AI-Ready Infrastructure, Efficiency & Scalability.

Key Message

This diagram systematically presents why each nation should build independent AI foundation models based on their own data, hardware, and energy infrastructure, rather than relying on foreign companies’ AI models. It emphasizes the necessity from the perspectives of technological sovereignty, competitiveness, cultural identity, and data independence.

The diagram essentially argues that nations need to develop their own AI capabilities to maintain control over their digital future and protect their national interests in an increasingly AI-driven world.

With Claude

AI Model Optimization

This image shows a diagram illustrating three major AI model optimization techniques.

1. Quantization

  • The process of converting 32-bit floating-point weights to 8-bit integers
  • Reduces model size by roughly 4× while largely preserving accuracy
  • Significantly decreases memory usage and computational cost

2. Pruning

  • The process of removing less important connections or neurons from neural networks
  • Transforms complex network structures into simpler, more efficient forms
  • Reduces model size and computation while preserving core functionality
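A common concrete form of this idea is magnitude-based pruning: zero out the weights with the smallest absolute values. A minimal sketch (the function name and sparsity level are illustrative assumptions):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    # Indices ordered by absolute weight, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.02, 0.5, 0.01, -0.7, 0.03]
pruned = prune_by_magnitude(w, sparsity=0.5)
# The three smallest-magnitude weights (-0.02, 0.01, 0.03) become 0.0.
```

The zeroed connections can then be skipped or stored sparsely, which is where the size and compute savings come from.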

3. Distillation

  • A technique that transfers knowledge from a large model (teacher model) to a smaller model (student model)
  • Approximates the performance of complex models in lighter, more efficient models
  • Greatly improves efficiency during deployment and execution

All three techniques are essential methods for optimizing AI models to be more efficiently used in real-world environments. They are particularly crucial technologies when deploying AI models in mobile devices or edge computing environments.

With Claude

Overcome the Infinite

Overcome the Infinite – Game Interface Analysis

Overview

This image presents a philosophical game interface titled “Overcome the Infinite” that chronicles the evolutionary journey of human civilization through four revolutionary stages of innovation.

Game Structure

Stage 1: The Start of Evolution

  • Icon: Primitive human figure
  • Description: The beginning of human civilization and consciousness

Stage 2: Recording Evolution

  • Icon: Books and writing materials
  • Innovation: The revolution of knowledge storage through numbers, letters, and books
  • Significance: Transition from oral tradition to written documentation, enabling permanent knowledge preservation

Stage 3: Connect Evolution

  • Icon: Network/internet symbols with people
  • Innovation: The revolution of global connectivity through computers and the internet
  • Significance: Worldwide information sharing and communication breakthrough

Stage 4: Computing Evolution

  • Icon: AI/computing symbols with data centers
  • Innovation: The revolution of computational processing through data centers and artificial intelligence
  • Significance: The dawn of the AI era and advanced computational capabilities

Progress Indicators

  • Green and blue progress bars show advancement through each evolutionary stage
  • Each stage maintains the “∞ Infinite” symbol, suggesting unlimited potential at every level

Philosophical Message

“Reaching the Infinite Just only for Human Logics” (Bottom right)

This critical message embodies the game’s central philosophical question:

  • Can humanity truly overcome or reach the infinite through these innovations?
  • Even if we approach the infinite, it remains constrained within the boundaries of human perception and logic
  • Represents both technological optimism and humble acknowledgment of human limitations

Theme

The interface presents a contemplative journey through human technological evolution, questioning whether our innovations truly bring us closer to transcending infinite boundaries, or merely expand the scope of our human-limited understanding.

With Claude

AI Core Internals (1+4)

This image is a diagram titled “AI Core Internals (1+4)” that illustrates the core components of an AI system and their interconnected relationships.

The diagram contains 5 main components:

  1. Data – Located in the top left, represented by database and document icons.
  2. Hardware Infra – Positioned in the top center, depicted with a CPU/chipset icon with radiating connections.
  3. Foundation(AI) Model – Located in the top right, shown as an AI network node with multiple connection points.
  4. Energy Infra – Positioned at the bottom, represented by wind turbine and solar panel icons.
  5. User Group – On the far right, depicted as a collection of diverse people icons in various colors.

The arrows show the flow and connections between components:

  • From Data to Hardware Infrastructure
  • From Hardware Infrastructure to the AI Model
  • From the AI Model to end users
  • From Energy Infrastructure to Hardware Infrastructure (power supply)

This diagram visually explains how modern AI systems integrate data, computing hardware, AI models, and energy infrastructure to deliver services to end users. It effectively demonstrates the interdependent ecosystem required for AI operations, highlighting both the technical components (data, hardware, models) and the supporting infrastructure (energy) needed to serve diverse user communities.

With Claude

Prediction with data

This image illustrates a comparison between two approaches for Prediction with Data.

Left Side: Traditional Approach (Setup First Configuration)

The traditional method consists of:

  • Condition: 3D environment and object locations
  • Rules: Complex physics laws
  • Input: 1+ cases
  • Output: 1+ prediction results

This approach relies on pre-established rules and physical laws to make predictions.

Right Side: Modern AI/Machine Learning Approach

The modern method follows these steps:

  1. Huge Data: Massive datasets represented in binary code
  2. Machine Learning: Pattern learning from data
  3. AI Model: Trained artificial intelligence model
  4. Real-Time High Resolution Data: High-quality data streaming in real-time
  5. Prediction Anomaly: Final predictions and anomaly detection

Key Differences

The most significant difference is highlighted by the question “Believe first ??” at the bottom. This represents a fundamental philosophical difference: the traditional approach starts by “believing” in predefined rules, while the AI approach learns patterns from data to make predictions.

Additionally, the AI approach features “Longtime Learning Verification,” indicating continuous model improvement through ongoing learning and validation processes.

The diagram effectively contrasts rule-based prediction systems with data-driven machine learning approaches, showing the evolution from deterministic, physics-based models to adaptive, learning-based AI systems.
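The contrast between the two sides of the diagram can be sketched with a toy free-fall example (the scenario, names, and numbers are illustrative assumptions): the rule-based side "believes first" in a physics formula, while the data-driven side recovers the same relationship from observations.

```python
G = 9.8  # believed-first physics constant (m/s^2)

def rule_based_predict(t):
    """Traditional approach: apply the predefined rule d = 1/2 * g * t^2."""
    return 0.5 * G * t * t

# Observations of (time, distance) pairs, as if measured from the world.
samples = [(t, 0.5 * 9.8 * t * t) for t in (0.5, 1.0, 1.5, 2.0)]

# Data-driven approach: estimate the coefficient c in d = c * t^2
# by least squares on the observations, with no physics assumed.
num = sum((t * t) * d for t, d in samples)
den = sum((t * t) ** 2 for t, _ in samples)
c = num / den  # learned coefficient, converges toward 0.5 * g

def learned_predict(t):
    return c * t * t
```

With clean data both approaches agree; the diagram's point is that the learned model needs no prior "belief" in the rule, and with noisy, high-resolution real-world data it can also flag observations that deviate from the learned pattern (anomaly detection).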

With Claude