Make Better Questions

This diagram titled “Make Better Questions” illustrates a methodology for effective questioning. The key concepts are:

  1. Continuous Skepticism and Updates: Personal beliefs should be continuously updated following the principle “Always be suspicious.” This suggests that our knowledge and understanding should not remain static but should evolve constantly.
  2. Fluidity of Collective Truth: “Humans Believe (Truth)” represents collectively accepted truths, which are also subject to change and interact with personal beliefs through “Nice Update,” creating a reciprocal influence.
  3. Immutable Foundations: Some basic principles (“Immutable Rule”) provide an unchanging foundation, but flexible thinking should be developed based on these foundations.
  4. Starting with Fundamentals: “Start with fundamentals” emphasizes the importance of beginning with basic principles when approaching complex questions or problems.
  5. Collaboration with AI: By using this thinking framework in conjunction with AI, we can formulate better questions and gain richer insights.

This diagram ultimately suggests a method for optimizing interactions with AI through constant skepticism and adherence to fundamentals while maintaining flexible thinking. It emphasizes the importance of not settling for fixed beliefs but continuously learning and evolving.
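The diagram does not prescribe a formula, but the "continuous update" principle has a natural formal analogue in Bayes' rule: a belief is held provisionally and revised with each new piece of evidence. The sketch below is purely illustrative (the probabilities are made up), showing how repeated updates move a belief away from its starting point rather than leaving it fixed.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Revise belief in a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# "Always be suspicious": start undecided and let evidence move the belief.
belief = 0.5
for _ in range(3):
    # Hypothetical evidence that is likelier if the hypothesis is true.
    belief = bayes_update(belief, 0.8, 0.3)
print(round(belief, 3))  # belief climbs as consistent evidence accumulates
```

The point of the analogy is that no single observation settles the question; the belief stays open to the next update, which is the stance the diagram recommends toward both personal and collective truths.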

With Claude

Data is Connected in AI DC

This diagram titled “Data is Connected in AI DC” illustrates the relationships starting from workload scheduling in an AI data center.

Key aspects of the diagram:

  1. All of the system’s interconnections originate from workload scheduling.
  2. The diagram divides the process into two major phases:
    • Deterministic phase: covers power requirements, which can be planned in a predictable way from the workload itself.
    • Statistical phase: Focused on cooling requirements, where predictions vary based on external environmental conditions.
  3. The “Prophet Commander” at the workload scheduling stage can predict/direct future requirements, allowing the system to prepare power (1.1 Power Ready!!) and cooling (1.2 Cooling Ready!!) in advance.
  4. Process flow:
    • Job allocation from workload scheduling to GPU cluster
    • GPUs request and receive power
    • Temperature rises due to operations
    • Cooling system detects temperature and activates cooling

This diagram illustrates the interconnected workflow in AI data centers, beginning with workload scheduling that enables predictive resource management. The process flows from deterministic power requirements to statistical cooling needs, with the “Prophet Commander” enabling proactive preparation of power and cooling resources. This integrated approach demonstrates how workload prediction can drive efficient resource allocation throughout the entire AI data center ecosystem.
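The process flow above can be sketched as a small simulation. All numbers, names, and the simple thermal model below are illustrative assumptions, not values from the diagram; the point is the ordering of events: the scheduler pre-stages power (deterministic) and cooling (statistical, ambient-dependent) before the job runs, then GPUs draw power, heat rises, and the pre-staged cooling responds.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    power_ready_w: float = 0.0
    cooling_ready_kw: float = 0.0
    temp_c: float = 25.0

    # Deterministic phase: power need follows directly from the job spec.
    def prepare_power(self, gpus, watts_per_gpu=700):
        self.power_ready_w = gpus * watts_per_gpu
        print(f"1.1 Power Ready!! ({self.power_ready_w:.0f} W)")

    # Statistical phase: cooling need also depends on outside conditions.
    def prepare_cooling(self, ambient_c):
        self.cooling_ready_kw = self.power_ready_w / 1000 * (1 + ambient_c / 100)
        print(f"1.2 Cooling Ready!! ({self.cooling_ready_kw:.1f} kW)")

    def run_job(self):
        self.temp_c += self.power_ready_w / 1000 * 5  # GPU operations heat the room
        if self.temp_c > 40:                          # sensor detects the rise
            self.temp_c -= self.cooling_ready_kw * 2  # pre-staged cooling activates
        print(f"temperature settles at {self.temp_c:.1f} °C")

dc = DataCenter()
dc.prepare_power(gpus=8)         # "Prophet Commander" predicts the job's needs
dc.prepare_cooling(ambient_c=25)
dc.run_job()                     # job allocation -> power draw -> heat -> cooling
```

Because both resources are staged before the job starts, the cooling response in `run_job` is bounded by capacity reserved at scheduling time, which is the proactive behavior the diagram attributes to the Prophet Commander.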

With Claude

Personal with AI

This diagram illustrates a “Personal Agent” system architecture that shows how everyday life is digitized to create an AI-based personal assistant:

Left side: The user’s daily activities (coffee, computer, exercise, sleep) are represented, which serve as the source for digitization.

Center-left: Various sensors (visual, auditory, tactile, olfactory, gustatory) capture the user’s daily activities and convert them through the “Digitization” process.

Center: The “Current State (Prompting)” component stores the digitized current state data, which is provided as prompting information to the AI agent.

Upper right (pink area): Two key processes take place:

  1. “Learning”: Processing user data from an ML/LLM perspective
  2. “Logging”: Continuously collecting data to update the vector database

This section runs on a “Personal Server or Cloud,” preferably using a personalized GPU server like NVIDIA DGX Spark, or alternatively in a cloud environment.

Lower right: In the “On-Device Works” area, the “Inference” process occurs. Based on current state data, the AI agent infers guidance needed for the user, and this process is handled directly on the user’s personal device.

Center bottom: The cute robot icon represents the AI agent, which provides personalized guidance to the user through the “Agent Guide” component.

Overall, this system has a cyclical structure that digitizes the user’s daily life, learns from that data to continuously update a personalized vector database, and uses the current state as a basis for the AI agent to provide customized guidance through an inference process that runs on-device.
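The cyclical structure can be sketched in code. Everything here is a stand-in: `digitize`, `VectorDB`, and `infer_guidance` are hypothetical names invented for illustration, not a real sensor API or database, and the "learning" is reduced to collecting habits so the loop stays self-contained.

```python
# Minimal sketch of the personal-agent cycle: sensors -> digitization ->
# current state -> server-side learning/logging -> on-device inference.
daily_activities = ["coffee", "computer", "exercise", "sleep"]

def digitize(activity):
    """Sensors capture an activity and convert it to structured data."""
    return {"activity": activity, "signal": hash(activity) % 100}

class VectorDB:
    """Stands in for the personalized vector database on the server/cloud."""
    def __init__(self):
        self.records = []
    def log(self, record):        # "Logging": continuous collection
        self.records.append(record)
    def learn(self):              # "Learning": reduced here to known habits
        return {r["activity"] for r in self.records}

def infer_guidance(current_state, learned_habits):
    """On-device inference: agent guidance from current state plus habits."""
    if current_state["activity"] in learned_habits:
        return f"Keep your usual {current_state['activity']} routine."
    return "New activity detected; logging it for future guidance."

db = VectorDB()
for activity in daily_activities:             # the daily cycle
    state = digitize(activity)                # Digitization -> Current State
    db.log(state)                             # server side: update vector DB
    print(infer_guidance(state, db.learn()))  # device side: Agent Guide
```

The split mirrors the diagram: `VectorDB` lives on the personal server or cloud, while `infer_guidance` is the lightweight step that can run on the user's own device.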

With Claude

Data Explosion in Data Center

This image titled “Data Explosion in Data Center” illustrates three key challenges faced by modern data centers:

  1. Data/Computing:
    • Shows the explosive growth of data from computing servers to internet/cloud infrastructure and AI technologies.
    • Visualizes the exponential increase in data volume from 1X to 100X, 10,000X, and ultimately to 1,000,000,000X (one billion times).
    • Depicts how servers, computers, mobile devices, and global networks connect to massive data nodes, generating and processing enormous amounts of information.
  2. Power:
    • Addresses the increasing power supply requirements needed to support the data explosion in data centers.
    • Shows various energy sources including traditional power plants, wind turbines, solar panels, and battery storage systems to meet the growing energy demands.
    • Represents energy efficiency and sustainable power supply through a cyclical system indicated by green arrows.
  3. Cooling:
    • Illustrates the heat-management challenges created by increased data processing, along with their solutions.
    • Explains the shift from traditional air cooling methods to more efficient server liquid cooling technologies.
    • Visualizes modern cooling solutions with blue circular arrows representing the cooling cycle.

This diagram comprehensively explains how the exponential growth of data impacts data center design and operations, particularly highlighting the challenges and innovations in power consumption and thermal management.

With Claude

DC growth

Data centers have expanded rapidly from the early days of cloud computing to the explosive growth driven by AI and ML.
Initially, growth was steady as enterprises moved to the cloud. However, with the rise of AI and ML, demand for powerful GPU-based computing has surged.
The global data center market, which grew at a CAGR of around 10% during the cloud era, is now accelerating to an estimated CAGR of 15–20% fueled by AI workloads.
This shift is marked by massive parallel processing with GPUs, transforming data centers into AI factories.
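The CAGR figures can be made concrete with the standard doubling-time formula, years = ln 2 / ln(1 + CAGR): at the cloud era's ~10% rate the market doubles roughly every 7.3 years, while at 20% it doubles in under 4. The rates are the estimates quoted above; the arithmetic is just compound growth.

```python
import math

def years_to_double(cagr):
    """Years for a quantity to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + cagr)

print(f"10% CAGR (cloud era): doubles in ~{years_to_double(0.10):.1f} years")
print(f"20% CAGR (AI era):    doubles in ~{years_to_double(0.20):.1f} years")
```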

With ChatGPT

Legacy AI (Rule-based)

The image shows a diagram explaining “Legacy AI” or rule-based AI systems. The diagram is structured in three main sections:

  1. At the top: A workflow showing three steps:
    • “Analysis” (illustrated with a document and magnifying glass with charts)
    • “Prioritize” (shown as a numbered list with 1-2-3 and an upward arrow)
    • “Choose the best” (depicted with a network diagram and pointing hand)
  2. In the middle: Programming conditional statement structure:
    • “IF [ ]” section contains analysis and prioritization icons, representing the condition evaluation
    • “THEN [ ]” section includes “optimal choice” icons, representing the action to execute when the condition is true
    • The “It’s Rule” label on the right indicates that this follows the explicitly programmed, IF-THEN structure of conventional code
  3. At the bottom: A pipeline process labeled “It’s Algorithm (Rule-based AI)” showing:
    • A series of interconnected components with arrows
    • Each component contains small icons representing analysis and rules
    • The process ends with “Serialize without duplications”

This diagram effectively illustrates the structure and workflow of traditional rule-based AI systems, demonstrating how they operate like conventional programming with IF-THEN statements. The system first analyzes data, then prioritizes information based on predefined criteria, and finally makes decisions by selecting the optimal choice according to the programmed rules. This represents the foundation of early AI approaches before the advent of modern machine learning techniques, where explicit rules rather than learned patterns guided the decision-making process.
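The analyze–prioritize–choose pipeline can be sketched as ordinary code, which is exactly the point of rule-based AI. The rules, tasks, and threshold below are invented for demonstration; only the structure (fixed rules, IF-THEN selection, deduplicated serialization) comes from the diagram.

```python
def analyze(items):
    """Analysis: score each item via fixed, hand-written rules."""
    rules = {"urgent": 3, "important": 2, "routine": 1}
    return [(item, rules.get(tag, 0)) for item, tag in items]

def prioritize(scored):
    """Prioritize: order items by their rule-assigned score (1-2-3)."""
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def choose_best(ranked):
    """IF the top item's score clears a threshold THEN select it."""
    best, score = ranked[0]
    if score >= 2:       # IF [ condition ]
        return best      # THEN [ optimal choice ]
    return None

tasks = [("backup", "routine"), ("outage fix", "urgent"),
         ("patch", "important"), ("outage fix", "urgent")]

ranked = prioritize(analyze(tasks))
print("best:", choose_best(ranked))

# "Serialize without duplications": keep ranked order, drop repeated items.
serialized = list(dict.fromkeys(item for item, _ in ranked))
print("pipeline output:", serialized)
```

Nothing here is learned from data: change a rule and the behavior changes, which is both the transparency and the brittleness that distinguishes this approach from modern machine learning.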

With Claude