Massive simple parallel computing

This diagram presents a framework that defines the essence of AI LLMs as “Massive Simple Parallel Computing” and systematically traces the issues and challenges that follow from that definition.

Core Definition of AI LLM: “Massive Simple Parallel Computing”

  • Massive: Enormous scale, with billions of parameters
  • Simple: Fundamentally simple computational operations (matrix multiplications, etc.); a minimal sketch follows this list
  • Parallel: An architecture capable of simultaneous parallel processing
  • Computing: All of this implemented through computational processes
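
To make “Simple” concrete, here is a minimal NumPy sketch (the dimensions are illustrative, not those of any real model): the core operation inside an LLM layer is ordinary matrix multiplication, and every output element can be computed independently, which is exactly what makes the computation massively parallel.

```python
import numpy as np

# Illustrative dimensions only; real LLMs use thousands of hidden units
# and billions of parameters spread across many such layers.
hidden = 512          # width of one layer
tokens = 128          # tokens processed in parallel

x = np.random.randn(tokens, hidden)   # activations for all tokens
W = np.random.randn(hidden, hidden)   # one weight matrix (parameters)

# The "simple" operation: every token's activation is multiplied by the
# same weight matrix. Each output element is independent of the others,
# which is why the computation parallelizes so well across GPU cores.
y = x @ W

print(y.shape)  # (128, 512): all 128 tokens computed simultaneously
```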

Core Issues Arising from This Essential Nature

Big Issues:

  • Black-box unexplainable: Outputs are hard to interpret because they emerge from massive, complex interactions among parameters
  • Energy-intensive: Enormous energy consumption follows inevitably from massive parallel computing (a rough estimate is sketched below)
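
To see why the energy cost scales with model size, a widely used back-of-envelope rule (an approximation, not a measurement) puts the forward-pass cost of a transformer at roughly 2 FLOPs per parameter per generated token:

```python
# Back-of-envelope estimate: ~2 FLOPs per parameter per token for a
# transformer forward pass. Both numbers below are illustrative assumptions.
params = 70e9       # a hypothetical 70-billion-parameter model
tokens = 1_000      # tokens generated in one response

flops = 2 * params * tokens
print(f"{flops:.1e} FLOPs per response")  # 1.4e+14 FLOPs
```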

The Essential Requirements That Follow

Very Required:

  • Verification: Methods to ensure the reliability of results, given the black-box characteristics
  • Optimization: Approaches that improve energy efficiency and performance at the same time (one common technique is sketched below)
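
As one concrete example of this optimization direction (my illustration; the diagram does not name a specific technique), weight quantization stores parameters as 8-bit integers instead of 32-bit floats, cutting memory traffic, and therefore energy, roughly fourfold at the cost of a small precision loss. A minimal sketch:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map float weights onto int8."""
    scale = max(np.abs(w).max(), 1e-12) / 127.0  # largest weight -> 127
    q = np.round(w / scale).astype(np.int8)      # 4x smaller than float32
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max error:", np.abs(w - dequantize(q, scale)).max())  # small, not zero
```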

The Ultimate Question: “By What?”

How can we meet all of these requirements?

In other words, the framework asks the fundamental question: what specific solutions and approaches can overcome the problems inherent in the essential characteristics of current LLMs? It compresses the core challenges of next-generation AI development into a single question.

The diagram effectively illustrates how the defining characteristics of LLMs directly lead to significant challenges, which in turn demand specific capabilities, ultimately raising the critical question of implementation methodology.

With Claude

3 Key on the AI era

This diagram illustrates the 3 Core Technological Components of AI World and their surrounding challenges.

AI World’s 3 Core Technological Components

Central AI World Components:

  1. AI infra (AI Infrastructure) – The foundational technology that powers AI systems
  2. AI Model – Core algorithms and model technologies represented by neural networks
  3. AI Agent – Intelligent systems that perform actual tasks and operations

The 3 Surrounding Key Challenges

1. Data – Left Area

Managing data, the raw material of AI technology:

  • Data: Raw data collection
  • Verified: Validated and quality-controlled data
  • Easy to AI: Data preprocessed and optimized for AI processing (a minimal verification sketch follows this list)
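
A minimal sketch of what “Verified” and “Easy to AI” might look like in code. The validation rules here (non-empty text, exact deduplication, lower-casing) are my own illustrative assumptions, not rules taken from the diagram:

```python
def verify_records(raw_records):
    """Drop malformed and duplicate records; normalize the rest."""
    seen = set()
    verified = []
    for record in raw_records:
        text = (record.get("text") or "").strip().lower()
        if not text:          # malformed record: drop it
            continue
        if text in seen:      # duplicate record: drop it
            continue
        seen.add(text)
        verified.append({"text": text})   # normalized, "easy to AI"
    return verified

raw = [{"text": "Hello"}, {"text": "hello "}, {"text": ""}, {"text": None}]
print(verify_records(raw))  # [{'text': 'hello'}]
```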

2. Optimization – Bottom Area

Performance enhancement of AI technology:

  • Optimization: System optimization
  • Fit to data: Data fitting and adaptation
  • Energy cost: Efficiency and resource management

3. Verification – Right Area

Ensuring reliability and trustworthiness of AI technology:

  • Verification: Technology validation process
  • Right?: Accuracy assessment (a minimal sketch follows this list)
  • Humanism: Alignment with human-centered values
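
A minimal sketch of the “Right?” step, assuming model predictions are compared against human-verified labels (the data below is made up for illustration):

```python
def accuracy(predictions, ground_truth):
    """Fraction of predictions that match verified ground-truth labels."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical model outputs vs. human-verified labels.
preds  = ["cat", "dog", "cat", "bird"]
labels = ["cat", "dog", "dog", "bird"]
print(f"accuracy: {accuracy(preds, labels):.0%}")  # accuracy: 75%
```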

This diagram demonstrates how the three core technological elements – AI Infrastructure, AI Model, and AI Agent – form the center of AI World, while interacting with the three fundamental challenges of Data, Optimization, and Verification to create a comprehensive AI ecosystem.

With Claude

Road to AI

This image shows a flowchart titled “Road to AI” that illustrates the step-by-step process of AI development.

Main Stages:

  1. Digitization – Starting from a globe icon, data is converted into digital format (binary code); a tiny encoding sketch follows this list
  2. Central Processing Area – Data is processed through network structures, where two key processes occur in parallel:
    • Verification – Confirming data accuracy
    • Tuning – Improving the model through “Higher Resolution” and “More Relative Data”
  3. AI System – Finally implemented as an AI robot
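
The “Digitization” step is the easiest to show directly: before any network can process a symbol, it must become binary code. A tiny sketch:

```python
# Digitization: characters -> bytes -> binary code.
text = "AI"
for byte in text.encode("utf-8"):
    print(f"{chr(byte)!r} -> {byte:08b}")
# 'A' -> 01000001
# 'I' -> 01001001
```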

Development Phases (Right Side):

  • “Easy First, Everybody Know” – Starting with simple tasks that everyone can understand
  • “Again & Again” – Iterative improvement process
  • “More Difficult & Auto Decision” – Advanced stage with complex and automated decision-making

This diagram visually represents how AI development progresses from simple data digitization through continuous verification and tuning, gradually evolving into sophisticated AI systems capable of complex automated decision-making. The process emphasizes the iterative nature of AI development, moving from basic, universally understood concepts to increasingly complex autonomous systems.

With Claude

With AI

This diagram illustrates an effective way of collaborating with AI:

Key Components:

  1. Upper Section: User-AI-Network Connection
    • “Can You Believe?” emphasizes the need to verify, rather than blindly trust, the outputs of an AI that has learned from the internet and vast amounts of data
    • While AI has access to extensive networks/data, verifying the reliability of this information is essential
  2. Lower Section: Logical Foundation and Development
    • “Immutable Logic” forms the foundation
    • On this logical foundation, “Good Questions” and “Understanding” with AI occur in a cyclical process
    • “More And More” represents continuous development through this process

Core Message:

  • When utilizing AI, the most crucial element is the user’s own solid logical foundation
  • Verify and evaluate AI outputs based on this immutable logic
  • Continuously develop one’s own logic and knowledge through verified information and understanding
  • While AI is a powerful tool, its outputs must be logically verified by the user

This presents an approach not of simply using AI, but of critically evaluating AI outputs through one’s logical foundation and growing together through this process.
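
A toy example of “Immutable Logic” as a checking device (the scenario is mine, not from the diagram): instead of trusting an AI’s claimed factorization, verify it with arithmetic that cannot be wrong.

```python
def verify_factorization(n: int, claimed_factors: list[int]) -> bool:
    """Immutable logic: the product either reproduces n or it does not."""
    product = 1
    for factor in claimed_factors:
        product *= factor
    return product == n

# Suppose an AI claims that 91 = 7 x 13. Don't believe it; check it.
print(verify_factorization(91, [7, 13]))   # True: the claim is verified
print(verify_factorization(91, [3, 31]))   # False: the claim is rejected
```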

The diagram emphasizes that successful interaction with AI requires:

  • Having your own robust logical framework
  • Critical evaluation of AI-provided information
  • Using verified insights to enhance your own understanding
  • Maintaining a balanced approach where AI serves as a tool for growth rather than an unquestioned authority

This creates a virtuous cycle where both the user’s logical foundation and their ability to effectively utilize AI continuously improve.

With Claude

A series of decisions

From Claude with some prompting
The image depicts a diagram titled “A series of decisions,” illustrating a data processing and analysis workflow. The main stages are as follows:

  1. Big Data: The starting point for data collection.
  2. Gathering Domains by Searching: This stage involves searching for and collecting relevant data.
  3. Verification: A step to validate the collected data.
  4. Database: Where data is stored and managed. This stage includes “Select Betters” for data refinement.
  5. ETL (Extract, Transform, Load): Data is extracted, transformed, and loaded, with a focus on “Select Combinations” (a minimal sketch follows this list).
  6. AI Model: The stage where artificial intelligence models are applied, aiming to find a “More Fit AI Model.”
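
A minimal sketch of the ETL stage under illustrative assumptions (the function names and toy data are mine, not from the diagram): extract pulls raw rows, transform keeps valid rows and selects the combination of fields the model needs, and load writes them to the target table.

```python
def extract(source):
    """Extract: pull raw rows from a source (here, an in-memory list)."""
    return list(source)

def transform(rows):
    """Transform: keep valid rows and select the combination of fields
    the downstream AI model will use."""
    return [
        {"feature": row["value"] * 2, "label": row["label"]}
        for row in rows
        if row.get("value") is not None and "label" in row
    ]

def load(rows, table):
    """Load: append transformed rows into the target table."""
    table.extend(rows)
    return table

source = [{"value": 1, "label": "a"}, {"value": None, "label": "b"}]
print(load(transform(extract(source)), []))  # [{'feature': 2, 'label': 'a'}]
```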

Each stage is accompanied by a “Visualization” icon, indicating that data visualization plays a crucial role throughout the entire process.

At the bottom, there’s a final step labeled “Select Results with Visualization,” suggesting that the outcomes of the entire process are selected and presented through visualization techniques.

Arrows connect these stages, showing the flow from Big Data to the AI Model, with “Select Results” arrows feeding back to earlier stages, implying an iterative process.

This diagram effectively illustrates the journey from raw big data to refined AI models, emphasizing the importance of decision-making and selection at each stage of the data processing and analysis workflow.

Requires for DL

From DALL-E with some prompting
The image outlines the importance of data in the era of deep learning (DL). It starts with “Data,” representing various sources and types, which feeds into “Deep Learning,” depicted by a neural network diagram. The process leads to “Result,” symbolized by charts and graphs indicating the output or findings. The central message, “Data determines the results,” stresses that the quality of data significantly impacts the outcome of deep learning processes. Below, “Data Verification” suggests the need to ensure data accuracy, which ties into the cycle of “UPDATE” and “Analysis,” highlighting an iterative process to refine and improve deep learning applications. The phrase “What to deal with DL” hints at the challenges and considerations in managing and utilizing deep learning effectively. A skeleton of this verification-and-update cycle is sketched below.
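
A skeleton of the “Data Verification → UPDATE → Analysis” cycle, with every helper reduced to an illustrative stub (none of this is a real deep-learning API):

```python
# All helpers below are illustrative stubs, not a real deep-learning API.

def train(data):
    return {"trained_on": len(data)}           # stand-in for a DL training run

def analyze(model, data):
    bad = [d for d in data if d.get("noisy")]  # records failing verification
    score = 1 - len(bad) / len(data)           # stand-in quality metric
    return score, bad

def update(data, bad):
    return [d for d in data if d not in bad]   # drop unverified records

def refine(data, rounds=3, target=0.9):
    """Iterate train -> analyze -> verify/update until quality is reached."""
    model = None
    for _ in range(rounds):
        model = train(data)
        score, bad = analyze(model, data)
        if score >= target:
            break
        data = update(data, bad)
    return model

print(refine([{"x": 1}, {"x": 2, "noisy": True}]))  # {'trained_on': 1}
```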