AI together!!

This diagram titled “AI together!!” illustrates a comprehensive architecture for AI-powered question-answering systems, focusing on the integration of user data, tools, and AI models through standardized protocols.

Key Components:

  1. Left Area (Blue) – User Side:
    • Prompt: The entry point for user queries, represented by a UI interface with chat elements
    • RAG (Retrieval Augmented Generation): A system that enhances AI responses by retrieving relevant information from user data sources
    • My Data: User’s personal data repositories shown as spreadsheets and databases
    • My Tool: Custom tools that can be integrated into the workflow
  2. Right Area (Purple) – AI Model Side:
    • AI Model (foundation): The core AI foundation model represented by a robot icon
    • MoE (Mixture of Experts): An architecture that routes each input to a subset of specialized expert models, improving performance and efficiency
    • Domain Specific AI Model: Specialized AI models trained for particular domains or tasks
    • External or Internet: Connection to external knowledge sources and internet resources
  3. Center Area (Green) – Connection Standard:
    • MCP (Model Context Protocol): A standardized protocol that facilitates communication between user-side components and AI models, labeled as “Standard of Connecting”

Information Flow:

  • Questions flow from the prompt interface on the left to the AI models on the right
  • Answers are generated by the AI models and returned to the user interface
  • The RAG system augments queries with relevant information from the user’s data
  • Semantic search connects the components by matching queries to data based on meaning rather than exact keywords
  • All interactions are standardized through the MCP framework
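The retrieve-then-augment step of the flow above can be sketched in a few lines. This is a toy illustration, not a real RAG stack: `embed` is a deliberately crude character-frequency "embedding" standing in for a learned embedding model, and all function names are placeholders.

```python
# Minimal RAG sketch: retrieve the most relevant snippets from "My Data",
# prepend them to the prompt, and hand the augmented prompt to the model.
# The embedding is a toy character-frequency vector, for illustration only.

def embed(text: str) -> list[float]:
    # Toy "embedding": letter-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def augment_prompt(query: str, corpus: list[str]) -> str:
    # The augmented prompt is what actually reaches the AI model.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The quarterly sales report is stored in the finance spreadsheet.",
    "Vacation requests are submitted through the HR portal.",
    "The finance spreadsheet tracks revenue by region.",
]
print(augment_prompt("Where is the sales report?", corpus))
```

In a real system the toy embedding would be replaced by a vector database and an embedding model, but the flow is the same: retrieve, augment, then generate.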

This architecture demonstrates how personal data and custom tools can be seamlessly integrated with foundation and specialized AI models to create a more personalized, context-aware AI system that delivers more accurate and relevant responses to user queries.

With Claude

MCP #1: Flow

MCP Overview

MCP (Model Context Protocol) is a standardized interface designed to let LLMs (Large Language Models) interact effectively with external resources. The protocol converts text-format queries into API calls against specific resources, allowing LLMs to ground their responses in accurate, up-to-date information.

Key Components

  1. MCP Client: Interface that receives user questions, processes them, and returns final answers
  2. MCP Server: Server that converts text to API calls and communicates with specific resources
  3. LLM: Language model that analyzes questions and generates answers utilizing resource information

Operational Flow

  1. User submits a question to the MCP Client
  2. MCP Client forwards external resource requests to the MCP Server
  3. MCP Server transforms text-format requests into API call format
  4. MCP Server executes API calls to specific resources
  5. Resources return results to the MCP Server
  6. MCP Server provides resource information to the MCP Client
  7. LLM analyzes the question and generates an answer using all provided resources
  8. MCP Client returns the final answer to the user
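The eight steps above can be sketched as a pair of cooperating classes. This is an illustrative mock of the flow described here, not the real MCP SDK; every class and method name is hypothetical, and a string lookup stands in for both the LLM and the external resource.

```python
# Illustrative sketch of the MCP flow: the client forwards a resource
# request to the server, the server maps the text request to an "API
# call" against a resource, and the answer is assembled from the result.

class Resource:
    """Stand-in for an external system (database, web API, ...)."""
    def __init__(self, records: dict[str, str]):
        self.records = records

    def api_call(self, key: str) -> str:
        # Steps 4-5: execute the call and return the result.
        return self.records.get(key, "not found")

class MCPServer:
    def __init__(self, resource: Resource):
        self.resource = resource

    def handle(self, text_request: str) -> str:
        # Step 3: convert the text-format request into an API-call parameter.
        key = text_request.strip().lower()
        # Steps 4-6: call the resource and pass the result back.
        return self.resource.api_call(key)

class MCPClient:
    def __init__(self, server: MCPServer):
        self.server = server

    def ask(self, question: str) -> str:
        # Steps 1-2: receive the question, forward the resource request.
        info = self.server.handle(question)
        # Step 7: here an LLM would combine question and resource info;
        # we just format the pair. Step 8: return the final answer.
        return f"Q: {question} | resource says: {info}"

server = MCPServer(Resource({"weather": "sunny, 22°C"}))
client = MCPClient(server)
print(client.ask("weather"))
```

The real protocol defines richer message types (resources, tools, prompts) and a transport layer, but the request-convert-call-return shape is the same.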

Core Features

  • Provides an interface for converting text-based requests to API calls
  • Enables access to specific external resources
  • Integrates seamlessly with LLMs
  • Generates enhanced responses by leveraging external data sources

With Claude

Home LLM

This image shows the architecture of a “Home LLM” system, illustrating an innovative change in how home appliances are used.

Key points:

  1. Evolution from Traditional Approach: While traditional electronics came as ‘product + paper manual’ packages, this new system replaces manuals with small LLM models.
  2. Home Foundation Model: Homes are equipped with a main LLM model (“Home Foundation LLM Model”) that learns from environmental data.
  3. Knowledge Exchange: Product-specific small LLM models and the home foundation model exchange data and learning outcomes with each other.
  4. User Interface: Users can easily interact through the LLM by asking questions and giving commands, making product usage much more intuitive and convenient.
  5. AI Agent Control: Additionally, AI agents automatically optimize the control of these products, increasing efficiency.
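The manual-replacement and routing ideas above can be made concrete with a toy sketch. This is entirely illustrative of the proposed architecture, with no real LLM involved: each "appliance model" is just a lookup over manual entries, and all class names are hypothetical.

```python
# Toy sketch of the "Home LLM" idea: each appliance ships with a small
# model (here, a lookup of manual entries) that registers with a home
# foundation model, which routes user questions to the right appliance.

class ApplianceModel:
    def __init__(self, name: str, manual: dict[str, str]):
        self.name = name
        self.manual = manual  # topic -> manual entry

    def answer(self, question: str) -> str:
        for topic, entry in self.manual.items():
            if topic in question.lower():
                return entry
        return f"{self.name}: no manual entry found"

class HomeFoundationModel:
    def __init__(self):
        self.appliances: dict[str, ApplianceModel] = {}

    def register(self, model: ApplianceModel):
        # "Knowledge exchange": the appliance's small model joins the home model.
        self.appliances[model.name] = model

    def ask(self, question: str) -> str:
        # Route the question to the appliance it mentions.
        for name, model in self.appliances.items():
            if name in question.lower():
                return model.answer(question)
        return "No matching appliance"

home = HomeFoundationModel()
home.register(ApplianceModel("washer", {"detergent": "Use drawer slot 2 for detergent."}))
print(home.ask("How much detergent for the washer?"))
```

In the envisioned system, both sides would be actual LLMs exchanging learned knowledge rather than dictionaries, but the registration-and-routing shape would be similar.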

This system presents a smart home architecture that fundamentally improves the user experience of electronic products by integrating AI and LLM technologies in the home environment.

With Claude

Experience Selling

Experience Selling: Transforming Domain Expertise into Intellectual Capital

Paradigm Shift in Knowledge Economy

Core Value Proposition

  • Transforming specialized domain experience into structured digital data
  • Converting tacit knowledge into explicit, scalable intellectual assets

AI-Powered Knowledge Transformation

  • Digitalization of expert experiences
  • Large Language Model (LLM) training on domain-specific datasets
  • Creating replicable decision-making models from individual expertise

Key Message: In the AI era, experience is no longer a limited personal resource but a dynamic, expandable intellectual asset that can be transformed, shared, and monetized globally.

New Coding

The image titled “New Coding” illustrates the historical evolution of programming languages and the emerging paradigm of AI-assisted coding.

On the left side, it shows the progression of programming languages:

  • “Bytecode” (represented by binary numbers: 0110, 1001, 1010)
  • “Assembly” (shown with a gear and conveyor belt icon)
  • “C/C++” (displayed with the C++ logo)
  • “Python” (illustrated with the Python logo)

Below these languages is text reading “Workload for understanding computers” with a blue gradient arrow, indicating how much effort each generation of languages demanded to understand the underlying computer.

The bottom section labeled “Using AI with LLM” shows a human profile communicating with an AI chip/processor, suggesting that AI can now code through natural language based on this historical programming experience and data.

On the right side, a large purple arrow points toward the future concepts:

  • “New Coding As you think”
  • “With AI” (in purple text)

The overall message of the diagram is that programming has evolved from low-level languages to high-level ones, and now we’re entering a new era where AI enables coding directly through human thought, speech, and logical reasoning – representing a fundamental shift in how we create software.

With Claude

Mixture of Experts

This image depicts a conceptual diagram of the “MoE (Mixture of Experts)” system, effectively illustrating the similarities between human expert collaboration structures and AI MoE architectures.

The key points of the diagram are:

  1. The upper section shows a traditional human expert collaboration model:
    • A user presents a complex problem (“Please analyze the problem now”)
    • An intermediary agent distributes this to appropriate experts (A, B, C Experts)
    • Each expert analyzes the problem and provides solutions from their specialized domain
  2. The lower section demonstrates how this same structure is implemented in the AI world:
    • When a user’s question or command is input
    • The LLM Foundation Expert Model processes it
    • The Routing Expert Model distributes tasks to appropriate specialized models (A, B, C Expert Models)
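The routing step in the lower section can be sketched as a tiny dispatcher. This is a toy analogy, not a real MoE layer: keyword overlap stands in for the learned gating network that a production MoE model would use, and all names are illustrative.

```python
# Toy mixture-of-experts routing sketch mirroring the diagram: a router
# scores each expert for the incoming request and dispatches it to the
# best match. Keyword overlap stands in for a learned gating network.

EXPERTS = {
    "finance": lambda q: f"finance expert handles: {q}",
    "legal":   lambda q: f"legal expert handles: {q}",
    "medical": lambda q: f"medical expert handles: {q}",
}

KEYWORDS = {
    "finance": {"revenue", "budget", "invoice"},
    "legal":   {"contract", "clause", "liability"},
    "medical": {"symptom", "dosage", "diagnosis"},
}

def route(question: str) -> str:
    words = set(question.lower().split())
    # Pick the expert with the largest keyword overlap; a real MoE model
    # would instead compute gating scores over token representations.
    best = max(KEYWORDS, key=lambda name: len(words & KEYWORDS[name]))
    return EXPERTS[best](question)

print(route("Please review the contract liability clause"))
```

In an actual MoE model the "experts" are subnetworks inside one neural network and the router selects a few of them per token, but the dispatch pattern is the same one the diagram draws for human experts.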

This diagram emphasizes that human expert systems and AI MoE architectures are fundamentally similar. The approach of utilizing multiple experts’ knowledge to solve complex problems has been used in human settings for a long time, and the AI MoE structure applies this human-centered collaborative model to AI systems. The core message of this diagram is that AI models are essentially performing the roles that human experts would traditionally fulfill.

This perspective suggests that mimicking human problem-solving approaches can be effective in AI system design.

With Claude

LLM/RAG/Agentic

This image shows a diagram titled “LLM RAG Agentic” that illustrates the components and relationships in an AI system architecture.

The diagram is organized in a grid-like layout with three rows and three columns. Each row appears to represent different functional aspects of the system:

Top row:

  • Left: “Text QnA” in a blue box
  • Middle: A question mark icon with what looks like document/chat symbols
  • Right: “LLM” (Large Language Model) in a blue box with a brain icon connected to various data sources/APIs in the middle

Middle row:

  • Left: “Domain Specific” in a blue box
  • Middle: A “Decision by AI” circle/node that serves as a central connection point
  • Right: “RAG” (Retrieval-Augmented Generation) in a blue box with database/server icons

Bottom row:

  • Left: “Agentic & Control Automation” in a blue box
  • Middle: A task management or workflow icon with checkmarks and a clock
  • Right: “Agentic AI” in a blue box with UI/interface icons

Arrows connect these components, showing how information and processes flow between them. The diagram appears to illustrate how a large language model integrates with retrieval-augmented generation capabilities and agentic (autonomous action-taking) functionality to form a complete AI system.

With Claude