Devices often emit vibrations and sounds during operation that reflect the current status of the equipment. By using artificial intelligence (AI) to recognize the sounds of device operation, abnormalities can be detected early and maintenance carried out promptly, reducing maintenance costs and prolonging the lifespan of the equipment. This article introduces how AI interprets sound to support device health monitoring, and describes the features of the OtoSense intelligent monitoring solution from ADI.
Device health monitoring through vibration and sound detection
Anyone familiar with equipment maintenance understands the significance of the sounds and vibrations a device emits. Effective health monitoring through sound and vibration analysis can cut maintenance costs in half and double equipment lifespan. Real-time acoustic data and its analysis form another crucial condition-based monitoring (CbM) method.
The first step is to understand what the equipment sounds like during normal operation. When the sound deviates from that baseline, it signals a potential anomaly and indicates a problem. Connecting specific sounds with specific issues in this way is pivotal for effective monitoring.
Learning to spot an anomaly may take only a few minutes of training, but correlating sound and vibration with root causes for diagnostics can take far longer. Experienced technicians and engineers may possess this knowledge, but they are a scarce resource, and recognizing issues from sound alone is genuinely difficult. Even with recordings, descriptive frameworks, or expert training, acquiring such specialized skill remains hard.
Understanding human neuroscience to build computer hearing
Over the past 20 years, the team at ADI has been dedicated to understanding how humans interpret sound and vibration. ADI's goal is to establish a system capable of learning from the sounds and vibrations of equipment, deciphering their meanings to detect anomalies and perform diagnostics.
ADI has introduced the OtoSense architecture, a device health monitoring system that supports computer hearing, enabling computers to comprehend key indicators of equipment behavior: sound and vibration. This system is applicable to any equipment, capable of real-time operation without the need for network connectivity. It has been applied in industrial settings, supporting the realization of a scalable and efficient machine health monitoring system.
The design of OtoSense draws inspiration from human neuroscience: humans learn to understand any sound they hear in a highly efficient way. OtoSense likewise learns both steady and transient sounds, and it supports continuous tuning and ongoing monitoring. It performs recognition at the edge, close to the sensor, with no need to send data over a network to remote servers to make decisions, and it allows interaction with, and learning from, experts.
Comparison and analysis between the human auditory system and OtoSense
Audition is a sense critical to human survival. It provides a holistic perception of distant, unseen events, and it matures even before birth. The human process of perceiving sound can be described in four familiar steps: analog acquisition of sound, digitization, feature extraction, and interpretation. At each step, we can compare the human ear with the OtoSense system.
The analog acquisition and digitization stages of human audition work as follows. Sound is first captured by the eardrum and the three ossicles of the middle ear, which act as a lever to match impedance and transmit the vibration into the fluid-filled cochlea. There, a membrane deflects selectively according to the spectral components present in the signal, bending hair cells that emit discrete signals reflecting the degree and intensity of bending. These individual signals then travel along parallel neural pathways, organized by frequency, to the primary auditory cortex.
In OtoSense, this task is carried out by sensors, amplifiers, and codecs. Digitization uses a fixed sampling rate that is adjustable between 250 Hz and 196 kHz; the waveform is coded with 16 bits and stored in a buffer of 128 to 4096 samples.
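The OtoSense firmware and APIs are not public, so the following is only a rough illustration of this acquisition stage: a minimal Python sketch (all names hypothetical) showing how a sampling rate, 16-bit coding, and a buffer in the documented ranges might be represented and validated, with a simulated read standing in for the real sensor and codec.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class AcquisitionConfig:
    """Hypothetical digitization parameters, constrained to the ranges described above."""
    sample_rate_hz: int = 16_000   # adjustable between 250 Hz and 196 kHz
    bit_depth: int = 16            # waveform coded with 16 bits
    buffer_size: int = 1024        # buffer of 128 to 4096 samples

    def __post_init__(self) -> None:
        if not 250 <= self.sample_rate_hz <= 196_000:
            raise ValueError("sample rate must be between 250 Hz and 196 kHz")
        if not 128 <= self.buffer_size <= 4096:
            raise ValueError("buffer size must be between 128 and 4096 samples")


def read_buffer(cfg: AcquisitionConfig) -> np.ndarray:
    """Stand-in for the sensor/codec read: returns one buffer of 16-bit samples.

    Here we simulate low-level noise; a real system would read from the ADC/codec.
    """
    full_scale = 2 ** (cfg.bit_depth - 1) - 1
    samples = np.random.uniform(-0.01, 0.01, cfg.buffer_size) * full_scale
    return samples.astype(np.int16)


if __name__ == "__main__":
    cfg = AcquisitionConfig(sample_rate_hz=48_000, buffer_size=2048)
    buf = read_buffer(cfg)
    print(buf.shape, buf.dtype)
```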
Feature extraction in human audition takes place in the primary auditory cortex. It covers frequency-domain features such as dominant frequencies, harmonicity, and spectral shape, as well as time-domain features such as impulses, variations in intensity, and the main frequency components within a time window of roughly 3 seconds.
OtoSense uses a similar time window, which ADI calls a chunk, that moves with a fixed step size. The size and step of the chunk depend on the events to be recognized and on the sampling rate used for feature extraction at the edge; chunk sizes range from 23 milliseconds to 3 seconds.
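As a simplified illustration of chunk-based feature extraction, the sketch below slides a fixed-size window over a signal with a fixed step and computes a handful of frequency-domain and time-domain features per chunk. The specific features (dominant frequency, spectral centroid, RMS, crest factor) are stand-ins chosen for clarity, not ADI's actual feature set.

```python
import numpy as np


def extract_chunk_features(chunk: np.ndarray, sample_rate_hz: float) -> dict:
    """Compute a few illustrative frequency- and time-domain features for one chunk."""
    # Frequency domain: windowed magnitude spectrum of the chunk
    spectrum = np.abs(np.fft.rfft(chunk * np.hanning(len(chunk))))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / sample_rate_hz)

    dominant_freq = freqs[np.argmax(spectrum)]                                   # strongest component
    spectral_centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)    # crude "spectral shape"

    # Time domain: intensity and impulsiveness
    rms = np.sqrt(np.mean(chunk ** 2))
    crest_factor = np.max(np.abs(chunk)) / (rms + 1e-12)

    return {
        "dominant_freq_hz": dominant_freq,
        "spectral_centroid_hz": spectral_centroid,
        "rms": rms,
        "crest_factor": crest_factor,
    }


def iter_chunks(signal: np.ndarray, sample_rate_hz: float,
                chunk_s: float = 0.1, step_s: float = 0.05):
    """Slide a fixed-size chunk over the signal with a fixed step, as described above."""
    chunk_len = int(chunk_s * sample_rate_hz)
    step_len = int(step_s * sample_rate_hz)
    for start in range(0, len(signal) - chunk_len + 1, step_len):
        yield extract_chunk_features(signal[start:start + chunk_len], sample_rate_hz)
```

Running iter_chunks over a recording yields one compact feature dictionary per chunk, which is the kind of summary an edge device can score in real time.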
Interpretation occurs in the associative cortex, which fuses all perception and memory and gives sounds their meaning (for example, through language), playing a core role in shaping perception. This interpretive process organizes our description of events and goes far beyond mere naming: naming an object, a sound, or an event allows us to give it greater and deeper significance. For experts, names and meanings let them make better sense of their surroundings.
This is why OtoSense's interaction with humans begins with a neurology-inspired, visual, unsupervised sound map. OtoSense displays a graphical representation of all the sounds or vibrations it has heard, arranged by similarity but without forcing them into rigid categories. Experts can then organize and name the groups shown on screen without having to define artificially bounded categories, building semantic maps from their own knowledge, perceptions, and expectations for OtoSense's final output.
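OtoSense's actual mapping algorithm is not described publicly; the sketch below only illustrates the general idea using a generic projection (scikit-learn's PCA, an illustrative choice) so that per-chunk feature vectors from similar sounds land near each other on a 2D map that an expert can group and name.

```python
import numpy as np
from sklearn.decomposition import PCA


def similarity_map(feature_vectors: np.ndarray) -> np.ndarray:
    """Project per-chunk feature vectors into 2D so similar sounds plot near each other.

    feature_vectors: array of shape (n_chunks, n_features).
    Returns an (n_chunks, 2) array of map coordinates. No categories are imposed;
    grouping and naming are left to the human expert viewing the map.
    """
    # Standardize features so no single feature dominates the distances
    mean = feature_vectors.mean(axis=0)
    std = feature_vectors.std(axis=0) + 1e-12
    normalized = (feature_vectors - mean) / std
    return PCA(n_components=2).fit_transform(normalized)
```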
Faced with the same soundscape, an automotive mechanic, an aerospace engineer, or a cold-forging press specialist, or even experts in the same field at different companies, may categorize, organize, and label it in different ways. OtoSense assigns meaning through the same bottom-up approach by which linguistic meaning itself is shaped.
OtoSense was designed from the start to learn from multiple experts and, over time, perform increasingly complex diagnostics. A typical workflow is a loop between OtoSense and the experts, in which anomaly models and event recognition models run at the edge. These models output the probability that particular events are occurring, along with how unusual the incoming sound or vibration is.
When a sound or vibration anomaly exceeds a defined threshold, a notification is triggered. Technicians and engineers using OtoSense can then inspect the sound and its surrounding context and label the anomalous event; new recognition and anomaly models that incorporate this information are then computed and pushed to the edge devices.
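A heavily simplified sketch of this edge-side loop is shown below: a baseline is learned from features recorded during healthy operation, each new feature vector receives an anomaly score, and scores above a threshold are queued for expert review, whose labels would feed the next model update. The scoring rule and threshold here are placeholders, not OtoSense's actual models.

```python
import numpy as np


class EdgeAnomalyMonitor:
    """Toy edge-side loop: score each chunk's features against a learned baseline."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold   # how many standard deviations counts as anomalous
        self.mean = None
        self.std = None
        self.pending_review = []     # anomalies waiting for an expert label

    def fit_baseline(self, normal_features: np.ndarray) -> None:
        """Learn the 'normal' distribution from features recorded during healthy operation."""
        self.mean = normal_features.mean(axis=0)
        self.std = normal_features.std(axis=0) + 1e-12

    def score(self, features: np.ndarray) -> float:
        """Distance from the baseline, in standard deviations (a simple anomaly score)."""
        return float(np.max(np.abs((features - self.mean) / self.std)))

    def process(self, features: np.ndarray) -> None:
        """Check one feature vector; queue it for expert review if it looks anomalous."""
        anomaly_score = self.score(features)
        if anomaly_score > self.threshold:
            # In a real system this would raise a notification to technicians;
            # their label would later be used to retrain the edge models.
            self.pending_review.append({"features": features, "score": anomaly_score})
```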
Intelligent motor monitoring sensors for assisting in predictive maintenance of electric motors
Take the ADI OtoSense smart motor sensor as an example: it is a complete, AI-based hardware and software solution for condition-based monitoring of motors. The system requires no expert manual analysis, detects nine types of mechanical and electrical faults, and needs no wiring or dedicated gateways, so it can be deployed quickly.
The ADI OtoSense smart motor sensor monitors the operation of electric motors by combining best-in-class sensing technology and advanced data analytics. It detects anomalies and defects in equipment, allowing you to predict maintenance cycles and avoid unexpected downtime.
The ADI OtoSense smart motor sensor covers the most critical diagnostics, translating data into specific operational instructions or recommendations. It provides 24/7 monitoring of three-phase asynchronous low-voltage AC motors. It presents information in a clear way, informing you about the nature of the problem and how to fix it.
The ADI OtoSense smart motor sensor offers a monitoring dashboard that visualizes detailed status information for each motor, giving a comprehensive view of machine health diagnosis and fault detection. It also supports companion applications on personal computers and smartphones, allowing users to easily set up the sensor, access deployment data, and receive notifications and alerts for key events.
The ADI OtoSense smart motor monitoring sensor utilizes powerful condition-based monitoring hardware and software to optimize production environments, reduce the occurrence of failures, and achieve benefits such as lowering asset maintenance costs, extending equipment lifespans, and increasing uptime.
With its support for real-time monitoring, the ADI OtoSense smart motor sensor checks equipment frequently enough to reveal when mechanical and electrical faults begin to develop and how they affect the production process. It also builds a unique model for each motor, so its diagnostics stay continuously tuned to that machine and its process. The information the sensor provides can be used to diagnose problems and gauge their severity, allowing maintenance teams to take targeted action. Continuous monitoring of motor performance and health also gives better visibility into maintenance and spare-parts requirements, helping teams know what to order and when, which in turn reduces inventory costs.
The ADI OtoSense smart motor sensor is the most accurate solution on the market for sensing and interpreting machine data. It can detect faults in the power supply system, stator windings, rotor, motor shaft balance, eccentricity, bearings, shaft alignment, cooling system, and soft/loose foot, among others. Additionally, it provides comprehensive performance metrics, indicating potential systemic issues that may result from various factors such as load changes or operational process variations.
Deploying the ADI OtoSense smart motor sensor is straightforward and enables 24/7 condition monitoring. The sensor is first configured through the iOS/Android application; configuration typically takes only a few minutes and can even be done while the motor is running. Once the sensor is installed on the motor and calibrated, the learning process begins: the motor simply runs under normal operating conditions so the sensor can learn its baseline. When anomalies occur, real-time alerts can be viewed in the mobile application or on the dashboard, helping prevent motor failures.
Conclusion
ADI's OtoSense technology is designed to make sound and vibration expertise continuously available on any equipment, performing anomaly detection and event recognition without requiring a network connection. The technology is increasingly used for equipment health monitoring in aerospace, automotive, and industrial applications, and it performs well in embedded scenarios that previously demanded human expertise, especially on complex equipment. It has earned the praise and trust of industry experts and is regarded as an excellent tool for equipment health monitoring.