Facilities Data Collection Cases

From DALL-E with some prompting
The image presents different data collection configurations in facility management systems:

  1. Direct Connection: Equipment directly sends data to the network without any intermediate device.
  2. Controller: Data is collected via a PLC (Programmable Logic Controller), DDC (Direct Digital Control), or Gateway from the equipment and then sent to the network.
  3. Dedicated Meter: Specialized meters are used to collect specific data, which is then transferred directly to the network.
  4. Dedicated Meter & Controller: A setup where dedicated meters work in conjunction with a PLC/DDC/Gateway for data collection and subsequent control before networking.
  5. Internal Control System: An integrated control system manages and monitors data internally before it connects to the network.
  6. Solution System: A standalone system that is self-contained, with full functionality for a specific operation.

This depiction emphasizes the progression from direct data routing to more complex systems involving multiple stages of data handling and integration.
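The six configurations above can be sketched as an enumeration with a hypothetical "intermediate stages" count per path (the labels and counts are illustrative, not from the source):

```python
from enum import Enum, auto

class CollectionPath(Enum):
    """Hypothetical labels for the six configurations described above."""
    DIRECT = auto()                # equipment -> network
    CONTROLLER = auto()            # equipment -> PLC/DDC/Gateway -> network
    DEDICATED_METER = auto()       # equipment -> meter -> network
    METER_AND_CONTROLLER = auto()  # equipment -> meter -> PLC/DDC/Gateway -> network
    INTERNAL_CONTROL = auto()      # internal control system -> network
    SOLUTION_SYSTEM = auto()       # self-contained standalone system

def hops_to_network(path: CollectionPath) -> int:
    """Assumed number of intermediate stages between equipment and network."""
    return {
        CollectionPath.DIRECT: 0,
        CollectionPath.CONTROLLER: 1,
        CollectionPath.DEDICATED_METER: 1,
        CollectionPath.METER_AND_CONTROLLER: 2,
        CollectionPath.INTERNAL_CONTROL: 1,
        CollectionPath.SOLUTION_SYSTEM: 0,
    }[path]
```

The count makes the progression in the text concrete: each added stage (meter, controller, internal system) is another place where data is handled before it reaches the network.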


Network Monitoring with AI

From DALL-E with some prompting
The image portrays a network monitoring system enhanced by AI, specifically deep learning. It shows a flow from the network infrastructure to the identification of an event; each event carries computed data with a timestamp and a severity level, and “One Event” is defined precisely to avoid ambiguity. The system identifies patterns such as the time gap between events, the event count, and the relationships among devices and events, all of which are crucial for comprehensive network analysis. Deep-learning algorithms process additional (add-on) data and ambient data to detect anomalies and support predictive maintenance within the network.
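The event definition and the two simplest pattern features named above (time gap between events, event count per device) can be sketched as follows; in a real system these features would be fed into a deep-learning model rather than checked against a fixed threshold, and the `max_gap` value is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One well-defined event: timestamp (seconds), source device, severity."""
    ts: float
    device: str
    severity: int  # e.g. 1 = info .. 5 = critical

def gap_anomalies(events, max_gap=60.0):
    """Flag consecutive event pairs whose time gap exceeds max_gap.
    A stand-in for the 'time gap between events' pattern."""
    ordered = sorted(events, key=lambda e: e.ts)
    flags = []
    for a, b in zip(ordered, ordered[1:]):
        if b.ts - a.ts > max_gap:
            flags.append((a, b, b.ts - a.ts))
    return flags

def counts_by_device(events):
    """Event count per device — another simple pattern feature."""
    counts = {}
    for e in events:
        counts[e.device] = counts.get(e.device, 0) + 1
    return counts
```

Defining the event as a fixed record (timestamp, device, severity) is what removes the ambiguity: every downstream feature is computed from the same well-typed fields.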

AI Operation with numbers

From DALL-E with some prompting
The image illustrates an AI-based operational framework that uses numerical data for real-time operation, monitoring, and predictive maintenance. Data such as temperature readings is collected in digital form (“Get Digitals”). While readings stay within normal parameters (18°C to 27°C), the system reports a “Normal Case” status; readings that drift outside that range trigger cautions and alerts. The AI model learns from the numerical data to distinguish normal from abnormal patterns. Upon detecting an anomaly, the system initiates a recovery process as part of predictive maintenance, aiming to address issues before they escalate.
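The normal/caution/alert logic above can be sketched with the 18°C–27°C band from the text; the 2°C caution margin is an assumption added for illustration:

```python
NORMAL_RANGE = (18.0, 27.0)  # °C band taken from the text
CAUTION_MARGIN = 2.0         # hypothetical margin before a full alert

def classify(reading: float) -> str:
    """Map a temperature reading to the status names used in the text."""
    lo, hi = NORMAL_RANGE
    if lo <= reading <= hi:
        return "normal"      # the "Normal Case"
    if lo - CAUTION_MARGIN <= reading <= hi + CAUTION_MARGIN:
        return "caution"     # small deviation: warn, keep watching
    return "alert"           # large deviation: trigger recovery process
```

A learned model would replace the fixed thresholds, but the three-state output (normal, caution, alert feeding into recovery) matches the flow the image describes.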

Kernel Same-page Merging

From DALL-E with some prompting
Kernel Same-page Merging (KSM) is a feature within an operating system’s kernel that enhances memory efficiency by identifying and merging identical memory pages. Typically, this process is beneficial for duplicated pages from executable files and shared libraries, which are common across different processes. KSM is also advantageous in environments where there is a significant amount of shared data and memory-mapped files, such as virtualization systems where multiple virtual machines may be running the same operating system or similar applications. By merging these pages, KSM allows for a reduction in physical memory usage, leading to better memory management and potentially improved performance for the system.
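The core idea — keep one physical copy per unique page content — can be illustrated with a toy sketch in user space. Real KSM runs inside the Linux kernel, scans anonymous memory that a process has marked with `madvise(MADV_MERGEABLE)`, and un-merges pages copy-on-write when one side writes; this hash-based model is only a conceptual stand-in:

```python
import hashlib

PAGE_SIZE = 4096  # typical page size in bytes

def merge_identical_pages(pages):
    """Toy illustration of KSM's idea: deduplicate pages by content.

    Returns the unique pages, a per-page key showing which shared copy
    each original page now maps to, and the number of pages saved.
    """
    unique = {}
    mapping = []
    for page in pages:
        digest = hashlib.sha256(page).digest()
        if digest not in unique:
            unique[digest] = page       # first copy becomes the shared one
        mapping.append(digest)          # later copies point at it
    saved = len(pages) - len(unique)
    return unique, mapping, saved
```

With three pages of which two are identical, one page of physical memory is "saved" — the same accounting the kernel exposes for KSM (pages shared vs. pages sharing).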

Switching & Routing (Origin)

From DALL-E with some prompting
The image delineates the foundational aspects of network switching and routing based on their origins. Switching, historically in LANs, involved the broadcasting of packets, which modern switches now intelligently direct or block based on MAC addresses and VLAN information. Routing originally functioned to determine packet pathways over networks using IP address information. While these were once discrete tasks performed by separate devices, contemporary network technology often integrates both functions within the same hardware, allowing switches to perform some routing tasks and vice versa, reflecting the evolution and convergence of networking equipment.
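The historical progression described above — broadcast everything, then learn MAC addresses and forward selectively — can be sketched as a minimal learning switch (a simplified model, ignoring VLANs and aging):

```python
class LearningSwitch:
    """Minimal MAC-learning switch: learn source MACs per port, forward
    to the known port, and flood (as early LANs broadcast everything)
    when the destination is still unknown."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where src lives
        out = self.mac_table.get(dst_mac)
        if out is None or out == in_port:
            return self.ports - {in_port}      # unknown dst: flood
        return {out}                           # known dst: direct it
```

The first frame toward an unknown MAC is flooded exactly as on a legacy broadcast LAN; once the reply is seen, the table lets the switch direct or block traffic, which is the "intelligent" behavior the text contrasts with the original broadcasting.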

Works with data

From DALL-E with some prompting
The image describes a data workflow process that involves various stages of data handling and utilization for operational excellence. “All Data” from diverse sources feeds into a monitoring system, which then processes raw data, including work logs. This raw data undergoes ETL (Extract, Transform, Load) procedures to become structured “ETL-ed Data.” Following ETL, the data is analyzed with AI to extract insights and inform decisions, which can lead to actions such as maintenance. The ultimate goal of this process is to achieve operational excellence, automation, and efficiency.
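The raw-data → ETL → structured-data stage of the workflow can be sketched as three small functions; the `device,metric,value` line format is a hypothetical example of what the raw work logs might contain:

```python
def extract(raw_logs):
    """Extract: pull candidate records out of heterogeneous raw lines."""
    return [line.strip() for line in raw_logs if line.strip()]

def transform(records):
    """Transform: parse assumed 'device,metric,value' lines into
    structured rows, dropping anything that fails to parse
    (a stand-in for real cleansing rules)."""
    rows = []
    for rec in records:
        parts = rec.split(",")
        if len(parts) != 3:
            continue
        device, metric, value = parts
        try:
            rows.append({"device": device, "metric": metric, "value": float(value)})
        except ValueError:
            continue
    return rows

def load(rows, store):
    """Load: append structured rows to a destination store."""
    store.extend(rows)
    return store
```

The output of `load` is the “ETL-ed Data” of the diagram — uniform records that an AI analysis stage can consume to produce insights and maintenance actions.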

Data types

From DALL-E with some prompting
The image classifies data types and emphasizes the need for verification and response to potential errors for each type.

Computed Data: Produced by the CPU; errors can be introduced during computation or AI processing, so results should be checked before storage.
Machine Data: Originates from machines or sensors; requires error checking before storage, since faulty sensors can produce bad readings.
Human Data: Entered by people; needs verification because it may contain incorrect information.
This classification underlines the importance of ensuring accuracy in data processing and the need to be aware of and respond to potential errors.
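The per-type verification could look like the sketch below; the type names follow the classification above, while the concrete rules (NaN/infinity guard, sensor range, non-empty text) are illustrative assumptions:

```python
def verify(record):
    """Route a record to a check suited to its data type (hypothetical rules)."""
    kind = record.get("type")
    value = record.get("value")
    if kind == "computed":
        # computed data: guard against NaN/inf produced during processing
        return isinstance(value, float) and value == value and abs(value) != float("inf")
    if kind == "machine":
        # machine data: assumed sensor range check before storage
        return isinstance(value, (int, float)) and -50 <= value <= 150
    if kind == "human":
        # human data: reject empty or whitespace-only entries
        return isinstance(value, str) and value.strip() != ""
    return False  # unknown type: fail closed
```

Failing closed on unknown types reflects the section's point: every data type gets an explicit verification path rather than being stored unchecked.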