Data Center NOW

This image shows a data center architecture diagram titled “Data Center Now” at the top. It illustrates the key components and flow of a modern data center infrastructure.

The diagram depicts:

  1. On the left side: An “Explosion of data” icon with data storage symbols, pointing to computing components with the note “More Computing is required”
  2. In the center: Server racks connected to various systems with colored lines indicating different connections (red, blue, green)
  3. On the right side: Several technology components illustrated with circular icons and labels:
    • “Software Defined” with a computer/gear icon
    • “AI & GPU” with neural network and GPU icons and note “Big power is required”
    • “Renewable Energy & Grid Power” with solar panel and wind turbine icons
    • “Optimized Cooling /w Using Water” with cooling system icon
    • “Enhanced Op System & AI Agent” with a robotic/AI system icon

The diagram shows how data flows through processing units and connects to different infrastructure elements, emphasizing modern data center requirements like increased computing power, AI capabilities, power management, and cooling solutions.

With Claude

Data Quality

The image shows a data quality infographic with key dimensions that affect AI systems.

At the top of the image, there’s a header titled “Data Quality”. Below that, there are five key data quality dimensions illustrated with icons:

  • Accuracy – represented by a target with a checkmark. This is essential for AI models to produce correct results, as data with fewer errors and biases enables more accurate predictions.
  • Consistency – shown with circular arrows forming a cycle. This maintains consistent data formats and meanings across different sources and over time, enabling stable learning and inference in AI models.
  • Timeliness – depicted by a clock/pie chart with checkmarks. Providing up-to-date data in a timely manner allows AI to make decisions that accurately reflect current circumstances.
  • Resolution – illustrated with “HD” text and people icons underneath. This refers to capturing finer detail through higher data density, for example by sampling more frequently per unit of time. High-resolution data allows AI to detect subtle patterns and changes, enabling more sophisticated analysis and prediction.
  • Quantity – represented by packages/boxes with a hand underneath. AI systems, particularly deep learning models, perform better when trained on large volumes of data. Sufficient data quantity allows for learning diverse patterns, preventing overfitting, and enabling recognition of rare cases or exceptions. It also improves the model’s generalization capability, ensuring reliable performance in real-world environments.

The bottom section features a light gray background with a conceptual illustration showing how these data quality dimensions contribute to AI. On the left side is a network of connected databases, devices, and information systems. An arrow points from this to a neural network representation on the right side, with the text “Data make AI” underneath.

The image appears to be explaining that these five quality dimensions are essential for creating effective AI systems, emphasizing that the quality of data directly impacts AI performance.
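
These five dimensions can also be expressed as simple automated checks. Below is a minimal sketch in Python; the record layout, field names, and thresholds are assumptions made for illustration and do not come from the infographic.

```python
from datetime import datetime, timezone

# Illustrative records; layout and field names are assumptions for this sketch.
records = [
    {"sensor": "A1", "value": 21.4, "unit": "C", "ts": "2024-05-01T12:00:00+00:00"},
    {"sensor": "A1", "value": 21.6, "unit": "C", "ts": "2024-05-01T12:00:10+00:00"},
    {"sensor": "A1", "value": None, "unit": "C", "ts": "2024-05-01T12:00:20+00:00"},
]

def quality_report(rows, max_age_h=24.0, min_rows=1000):
    times = sorted(datetime.fromisoformat(r["ts"]) for r in rows)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    now = datetime.now(timezone.utc)
    return {
        # Accuracy proxy: share of records with a usable value
        "accuracy": sum(r["value"] is not None for r in rows) / len(rows),
        # Consistency: a single unit/format across all sources
        "consistency": len({r["unit"] for r in rows}) == 1,
        # Timeliness: the newest record falls within the freshness window
        "timeliness": (now - times[-1]).total_seconds() / 3600 <= max_age_h,
        # Resolution: the coarsest sampling gap (smaller = finer detail)
        "resolution_sec": max(gaps) if gaps else None,
        # Quantity: enough rows for a model to learn from
        "quantity_ok": len(rows) >= min_rows,
    }

print(quality_report(records))
```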

With Claude

LLM/RAG/Agentic

This image shows a diagram titled “LLM RAG Agentic” that illustrates the components and relationships in an AI system architecture.

The diagram is organized in a grid-like layout with three rows and three columns. Each row appears to represent a different functional aspect of the system:

Top row:

  • Left: “Text QnA” in a blue box
  • Middle: A question mark icon with what looks like document/chat symbols
  • Right: “LLM” (Large Language Model) in a blue box with a brain icon, connected to various data source/API icons in the middle of the diagram

Middle row:

  • Left: “Domain Specific” in a blue box
  • Middle: A “Decision by AI” circle/node that serves as a central connection point
  • Right: “RAG” (Retrieval-Augmented Generation) in a blue box with database/server icons

Bottom row:

  • Left: “Agentic & Control Automation” in a blue box
  • Middle: A task management or workflow icon with checkmarks and a clock
  • Right: “Agentic AI” in a blue box with UI/interface icons

Arrows connect these components, showing how information and processes flow between them. The diagram appears to illustrate how a large language model integrates with retrieval-augmented generation capabilities and agentic (autonomous action-taking) functionality to form a complete AI system.
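
To make the LLM → RAG → Agentic relationship concrete, here is a rough sketch of the text-QnA and retrieval steps. The bag-of-words similarity stands in for a real vector database, and call_llm is a hypothetical placeholder for an LLM endpoint; an agentic layer would loop over such calls, choosing tools and checking results. None of these names come from the diagram.

```python
import math
from collections import Counter

# Toy document store standing in for a domain-specific knowledge base.
DOCS = [
    "Server racks must keep inlet temperature below 27C.",
    "RAG retrieves domain documents and feeds them to the LLM as context.",
    "Agentic systems plan tasks, call tools, and verify the results.",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, k=1):
    # RAG step: pick the documents most similar to the question.
    q = vectorize(question)
    return sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def call_llm(prompt):
    # Placeholder: a real system would send this prompt to an LLM API.
    return f"[LLM answer based on a prompt of {len(prompt)} chars]"

def answer(question):
    context = "\n".join(retrieve(question))  # domain-specific grounding
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)                  # LLM step: text QnA

print(answer("How does RAG help the LLM answer domain questions?"))
```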

With Claude

Data is the next frontier of AI

Data is the backbone of AI’s evolution.

Summary 🚀

  1. High-quality data is the key to the AI era.
    • Infrastructure has advanced, but accurate and structured data is essential for building effective AI models.
    • Garbage In, Garbage Out (GIGO) principle: Poor data leads to poor AI performance.
  2. Characteristics of good data (see the example sketch after this list)
    • High-resolution data: Provides precise information.
    • Clear labeling: Enhances learning accuracy.
    • Structured data: Enables efficient AI processing.
  3. Data is AI’s core competitive advantage.
    • Domain-specific datasets define AI performance differences.
    • Data cleaning and quality management are essential.
  4. Key messages
    • “Data is the backbone of AI’s evolution.”
    • “Good data fuels great AI!”
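
As a small illustration of the “clear labeling” and “structured data” points above, the sketch below contrasts a free-text note with a labeled, structured record; the field names and label are invented for the example.

```python
# The same fact as free text versus as a labeled, structured record.
raw_text = "pump #3 got really hot around noon and then tripped"

structured_record = {
    "asset_id": "pump-3",
    "timestamp": "2024-05-01T12:05:00Z",
    "temperature_c": 96.5,        # numeric, with the unit explicit in the name
    "event": "trip",              # controlled vocabulary instead of free text
    "label": "overheat_failure",  # clear label a supervised model can learn from
}

print(structured_record["label"], structured_record["temperature_c"])
```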

Conclusion

AI’s success now depends on how well data is collected, processed, and managed. Companies and researchers must focus on high-quality data acquisition and refinement to stay ahead. 🚀

With ChatGPT

GPU vs NPU in Deep Learning

This diagram illustrates the differences between GPU and NPU from a deep learning perspective:

GPU (Graphics Processing Unit):

  • Originally developed for 3D game rendering
  • In deep learning, it’s utilized for parallel processing of vast amounts of data through complex calculations during the training process
  • Characterized by “More Computing = Bigger Memory = More Power,” reflecting its heavy demands on compute, memory, and power
  • Processes big data and vectorizes information using the “Everything to Vector” approach
  • Stores learning results in Vector Databases for future use

NPU (Neural Processing Unit):

  • Retrieves information from already trained Vector DBs or foundation models to generate answers to questions
  • This process is called “Inference”
  • While the training phase processes all data in parallel, the inference phase only searches/infers content related to specific questions to formulate answers
  • Performs parallel processing similar to how neurons function

In conclusion, GPUs are responsible for processing enormous amounts of data and storing learning results in vector form, while NPUs specialize in the inference process of generating actual answers to questions based on this stored information. This relationship can be summarized as “training creates and stores vast amounts of data, while inference utilizes this at the point of need.”
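
The training/inference split described above can be sketched as follows: the training (GPU) side vectorizes the whole corpus and stores the results, while the inference (NPU) side embeds only the question and searches the stored vectors. The toy embed function and the in-memory list standing in for a vector database are assumptions for the example, so the ranking is structural rather than semantic.

```python
import numpy as np

def embed(text, dim=8):
    # Stand-in for a learned embedding model ("Everything to Vector").
    seed = sum(ord(c) for c in text) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Training side (GPU): embed the whole corpus in bulk and store the results.
corpus = ["cooling water loop", "GPU power budget", "rack airflow design"]
vector_db = [(doc, embed(doc)) for doc in corpus]

# Inference side (NPU): embed only the question and search the stored vectors.
def infer(question, k=1):
    q = embed(question)
    scored = sorted(vector_db, key=lambda item: float(q @ item[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

print(infer("How much power does the GPU need?"))
```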

With Claude

Operation with LLM

This image is a diagram titled “Operation with LLM,” showing a system architecture that integrates Large Language Models (LLMs) with existing operational technologies.

The main purpose of this system is to use LLMs to analyze operational data more efficiently and to resolve operational issues as they arise.

Key components and functions:

  1. Top Left: “Monitoring Dashboard” – Provides an environment where LLMs can interpret image data collected from monitoring screens.
  2. Top Center: “Historical Log & Document” – LLMs analyze system log files and organize related processes from user manuals.
  3. Top Right: “Prompt for chatting” – An interface for interacting with LLMs through appropriate prompts.
  4. Bottom Left: “Image LLM (multimodal)” – Represents multimodal LLM functionality for interpreting images from monitoring screens.
  5. Bottom Center: “LLM” – The core language model component that processes text-based logs and documents.
  6. Bottom Right:
    • “Analysis to Text” – LLMs analyze various input sources and convert them to text
    • “QnA on prompt” – Users can ask questions about problem situations, and LLMs provide answers

This system aims to build an integrated operational environment where problems occurring in operational settings can be easily analyzed through LLM prompting and efficiently solved through a question-answer format.
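
As a rough sketch of this flow, the example below packs a dashboard summary and a log excerpt into a prompt and answers an operator's question against it. The log lines, the dashboard text, and the call_llm placeholder are invented for illustration; they are not part of the original diagram.

```python
# A dashboard summary such as an image LLM might extract from a monitoring screen.
dashboard_summary = "CPU 92% on node-3, fan speed at max, inlet temperature rising."

# A short excerpt from historical logs (invented for this example).
log_excerpt = "\n".join([
    "2024-05-01 12:00:01 WARN node-3 thermal throttling engaged",
    "2024-05-01 12:00:05 ERROR node-3 job 4412 restarted",
])

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send this prompt to an LLM endpoint.
    return "[answer generated from the dashboard summary and logs]"

def qna(question: str) -> str:
    prompt = (
        "You are an operations assistant.\n"
        f"Dashboard summary:\n{dashboard_summary}\n\n"
        f"Recent logs:\n{log_excerpt}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)

print(qna("Why did job 4412 restart on node-3?"))
```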

With Claude

Add with power

Add with Power: 8-Bit Binary Addition and Energy Transformation

Core Mechanism:

  1. Input: Two 8-bit binary rows, each bit carrying energy (both rows ending with 1)
  2. Computation Process: 1 + 1 = 10 in binary, so the last bit carries over into the next position
  3. Result:
    • Output row’s last bit changed to 0
    • Part of energy converted to heat

Key Components:

  • Two input rows with 8 binary “energies”
  • Computing symbol (+) representing addition
  • A heat generation (?) box marked x8
  • Resulting output row with modified energy state

Fundamental Principle: “All energies must be maintained with continuous energies for no error (no changes without Computing)”

This diagram illustrates:

  • Binary addition process
  • Energy conservation and transformation
  • Information loss during computation
  • Relationship between computation, energy, and heat generation

The visual representation shows how a simple 8-bit addition triggers energy transfer, with overflow resulting in heat production and a modified binary state.
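
The same addition can be simulated directly. In this toy sketch the “heat” figure simply counts the bits that change state, as a stand-in for the energy-to-heat idea in the diagram, not a physical measurement; the inputs are chosen to match the two rows ending in 1.

```python
def add8(a: int, b: int):
    full = a + b
    result = full & 0xFF                   # keep only 8 bits
    overflow = full > 0xFF                 # any carry out of the top bit is lost
    flipped = bin(a ^ result).count("1")   # bits that changed state (heat proxy)
    return result, overflow, flipped

a = 0b0000_0001   # both input rows end with 1, as in the diagram
b = 0b0000_0001
result, overflow, flipped = add8(a, b)

print(f"{a:08b} + {b:08b} = {result:08b}")  # last bit becomes 0, carry moves left
print("overflow:", overflow, "| bits changed (heat proxy):", flipped)
```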

With Claude