Method of Adaptive Array Comparison for the Detection and Characterization of Periodic Motion

A system for analyzing periodic motions includes: a device for acquiring video files; a data analysis system including a processor and memory; a computer program to automatically analyze the video images, identify an area in the images where periodic movements may be detected and quantified using an Adaptive Array Comparison method; and, an interface to provide an output signal related to at least one parameter characteristic of said periodic movement. A graphical user interface may be provided and may display various analytical results along with the video imagery or a single frame therefrom.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of each of the following Provisional Patent Applications filed by the present inventors: Ser. No. 62/090,729, “Optical detection of periodic movement”, filed on Dec. 11, 2014; Ser. No. 62/139,127, “Method for determining, comparing, measuring, and displaying phase”, filed on Mar. 27, 2015; Ser. No. 62/141,940, “Method and system for analysis of structures and objects from spatio-temporal data”, filed on Apr. 2, 2015; Ser. No. 62/139,110, “Adaptive array comparison”, filed on Apr. 14, 2015; Ser. No. 62/146,744, “Method of analyzing, displaying, organizing, and responding to vital signals”, filed on Apr. 13, 2015; Ser. No. 62/154,011, “Non contact optical baby monitor that senses respiration rate and respiratory waveform”, filed on Apr. 28, 2015; Ser. No. 62/161,228, “Multiple region perimeter tracking and monitoring”, filed on May 13, 2015; and Ser. No. 62/209,979, “Comparative analysis of time-varying and static imagery in a field”, filed on Aug. 28, 2015, by the present inventors; the disclosures of each of which are incorporated herein by reference in their entirety.

This application is related to the following applications, filed on even date herewith by the present inventors: “Method of analyzing, displaying, organizing, and responding to vital signals”, Docket No. RDI-018; “Non-contacting monitor for bridges and civil structures”, Docket No. RDI-017; and “Apparatus and method of analyzing periodic motions in machinery”, Docket No. RDI-019; the entire disclosures of each and every one of which are incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention pertains to methods for analyzing periodic movements using video files. More particularly, the invention pertains to methods for determining periodic movements using an adaptive array comparison technique.

2. Description of Related Art

In many technical fields there exists a great need to determine and quantify various periodic motions.

All machines and moving systems produce vibrations of various kinds, some of which may be characteristic of normal operation and others of which may indicate off-normal conditions, unusual wear, incipient failure, or other problems. In the field of predictive maintenance, the detection of vibrational signatures is a key element of the diagnostic process in which the goal is to identify and remedy incipient problems before a more serious event such as breakdown, failure, or service interruption occurs. Prior art methods typically involve either directly-mounted accelerometers or access to the power line in the case of motor current signature analysis.

Many medical issues involve periodic movements such as pulse and respiration, physical tremors, seizures, and the like. In sleep studies, the subject is forced to wear various devices to make the measurements, and these devices add an element of complexity and also an inherently unnatural aspect to the test.

In large civil structures such as bridges, buildings, and the like, vibrations can be important not only with regard to behavior during seismic events but also as a diagnostic tool for general maintenance. That is, if a portion of a bridge displays large or unusual vibrations under normal conditions of traffic or wind loading, it might indicate a loose or damaged component or an incipient failure. At the same time, it is expensive and difficult to inspect a large structure by a close-up or hands-on approach that physically observes each component individually.

Each of the foregoing methods relies on having physical contact with the subject or structure under analysis. Each method also requires a specific physical solution for a specific component or problem.

What is needed, therefore, is a general, non-contacting method for analyzing periodic movements that does not need to be custom-built or installed on one particular piece of equipment or physically attached to a patient and may be conveniently deployed on an ad hoc basis to create and maintain a database of historical movement data for any selected number of individual components or patients.

OBJECTS AND ADVANTAGES

Objects of the present invention include the following: providing a video-based tool for determining periodic motions in an object without the need for edge visualization or other specific feature analysis; providing a non-contact vibration analysis system for machinery; providing a non-contact tool for characterizing respiration, heart rate, or other vital signs; providing a stand-off structural analysis tool that can easily locate, characterize, and visualize the movement of individual components in a large structure; and, providing a generic tool to derive periodic data from a video stream or similar data file, whether or not the video data are ever rendered or visually displayed on a monitor. These and other objects and advantages of the invention will become apparent from consideration of the following specification, read in conjunction with the drawings.

SUMMARY OF THE INVENTION

According to one aspect of the invention, a system for analyzing periodic motions comprises:

a video acquisition device positioned at a selected distance from an object and having an unobstructed view of a selected portion of the object;

a data analysis system including a processor and memory;

a computer program to:

    • analyze the video file by an adaptive array comparison procedure,
    • calculate a physical displacement of the selected object as a function of time and determine the periodicity thereof, and,
    • time-stamp the video file and the determined periodicity associated therewith; and,

a data storage system to archive the time stamped images and the physical displacement data for later retrieval.

According to another aspect of the invention, a method for monitoring movement of an object comprises:

positioning a video acquisition device at a selected distance from the object and having an unobstructed view of a selected portion thereof;

providing a data analysis system including a processor and memory to analyze the acquired video file by an adaptive array comparison procedure and calculate the physical displacement of the object as a function of time and determine the periodicity thereof;

time-stamping the video file and the determined periodicity associated therewith; and,

archiving the time stamped images and the associated physical displacement data in a data storage system for later retrieval.

According to another aspect of the invention, a system for characterizing time-dependent motions using video data comprises:

a data analysis system including a processor and memory, and further comprising a data port capable of accepting video data;

a computer program to analyze the video data, identify an area in the images where periodic intensity changes associated with a repetitive motion may be detected and quantified, using an adaptive array comparison procedure; and,

a user interface in which the quantified motion information may be displayed as a function of time.

According to another aspect of the invention, a method for characterizing time-dependent motions using video data comprises:

acquiring a video file of a selected object;

providing a data analysis system including a processor and memory to analyze the acquired video file by an adaptive array comparison procedure and calculate the physical displacement of the object as a function of time and determine the periodicity thereof;

time-stamping the video file and the determined periodicity associated therewith; and,

archiving the time stamped images and the associated physical displacement data in a data storage system for later retrieval.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawings accompanying and forming part of this specification are included to depict certain aspects of the invention. A clearer conception of the invention, and of the components and operation of systems provided with the invention, will become more readily apparent by referring to the exemplary, and therefore non-limiting embodiments illustrated in the drawing figures, wherein like numerals (if they occur in more than one view) designate the same elements. The features in the drawings are not necessarily drawn to scale.

FIG. 1 illustrates schematically the arrangement of video data into a three-dimensional array where x, y are spatial coordinates in a video image and z is time.

FIG. 2 illustrates the result when the frame spacing is non-optimal, in this case, every 8th frame (N=8).

FIG. 3 illustrates the result when the spacing is more nearly optimal, in this case, every 6th frame (N=6).

FIG. 4 illustrates a summed frame of differenced frames from 20 seconds of video data. Note that the method isolated motions associated with breathing, based on the selected frame-number separation, as indicated by the darkest pixels in the summed frame.

FIG. 5 illustrates a simplified flow chart of the basic camera operation and analysis done in a completely automated process.

FIG. 6 illustrates a periodic signal plotted as intensity versus time.

FIG. 7 illustrates the same signal as in FIG. 6, but with a horizontal line indicating the position of the median value of intensity.

FIG. 8 illustrates schematically the relationship between intensity relative to median and physical movements of an object in the video.

FIG. 9 illustrates a single video frame of a bridge with a truck passing over it.

FIG. 10 illustrates an example of the same frame as it might appear on a user interface, shown as a differenced frame in which the bright line shows which pixels are registering motion.

FIG. 11 illustrates a video frame in which a small paper square having a “+” shaped fiducial mark has been attached to a speaker, which is vibrating periodically in the direction indicated by the white arrow.

FIG. 12 illustrates a difference image, in which the bright lines on the edges of the paper indicate motion.

FIG. 13 illustrates another frame in the same sequence as FIG. 12, but in this instance the motion is in the opposite direction so the bright lines are orthogonal to those in FIG. 12.

FIG. 14 illustrates the result when an amplification factor is applied to the difference image of FIG. 12, which is then overlaid onto the original image.

FIG. 15 illustrates the result when an amplification factor is applied to the difference image of FIG. 13, which is then overlaid onto the original image.

FIG. 16 illustrates a difference frame calculated using the mean frame rather than the median frame as the reference frame.

FIG. 17 illustrates a difference frame calculated using a single frame from the original video file as the reference frame (i.e., the reference frame was neither a mean frame nor a median frame).

FIG. 18 illustrates the ability of the invention to capture non-periodic movements, in this case representing the transient deflections of a point on the bridge shown in FIG. 9 as several vehicles pass over it, as rendered in a screen shot of a Graphical User Interface in accordance with one aspect of the invention.

FIG. 19A illustrates a regular or healthy respiration waveform, and FIG. 19B illustrates an irregular and transient event in an otherwise periodic motion, as detected using the present invention.

FIGS. 20A-20D illustrate the steps in an exemplary analysis for the specific case of respiration. 20A shows the result using a four-frame separation; 20B shows eight-frame separation; 20C shows nine-frame separation; and 20D shows nine-frame separation but starting with a different initial frame.

FIGS. 21A-B show schematically the typical shapes of several skewed waveforms. FIG. 21A: SawtoothRight, SawtoothLeft; and FIG. 21B: LubdubRight.

FIG. 22 shows the results of frame differencing for the three waveforms in FIG. 21, in which the maximum frame difference (vertical axis) is obtained with a frame spacing, N, of either 15 or 45 frames (horizontal axis).

FIGS. 23A-C illustrate the analysis of a seismic model, in which FIG. 23A is a frame of video of a model under seismic test, FIG. 23B is an image representing vibrations at 4.63 Hz, and FIG. 23C is an image representing vibrations at 2.84 Hz.

FIG. 24 illustrates the ability of a user to rewind the acquired data files (indicated conceptually by the large arrow) to return to a point in time at which an event occurred.

FIG. 25 illustrates an example of a user interface implemented for mobile devices according to another aspect of the invention.

FIG. 26 illustrates the steps in one approach for determining and displaying phase according to one aspect of the invention.

FIGS. 27A-C illustrate the use of phase information to analyze vibrations in a bridge. FIG. 27A shows a video frame of a bridge after a truck (not shown) has passed over it. FIG. 27B shows a single phase mask at 2.25 Hz, which is the fundamental frequency of the bridge. FIG. 27C shows the same phase mask but multiplied by the intensity of the movement at each pixel.

FIGS. 28A-B illustrate the use of motion amplification to visually exaggerate the apparent movement of an object. FIG. 28A shows a video frame of a motor that is driving a shaft and flywheel (not shown). FIG. 28B shows another frame in the amplified video clip, representing the point of maximum displacement relative to the frame in FIG. 28A.

DETAILED DESCRIPTION OF THE INVENTION

The invention is a general procedure for analyzing a video or similar data file using an adaptive array comparison method to find areas of maximum movement and characterize the periodic behavior of those movements (e.g., amplitude, phase, and waveform). The data may be time-stamped and archived for later comparison to other similar situations or to other data taken from the same object at a different time or under different conditions, to provide useful information for such things as predictive maintenance, health care, structural analysis, and other applications. The system may include a graphical user interface (GUI), which may display various parameters relating to the measured data, a frame or frames from the video input file, time stamps or other identifying information, and user-entered information describing relevant conditions. It may further allow the user to draw a perimeter or region of interest within the video frame so that analysis may be focused on that region and extraneous movements elsewhere in the frame may be ignored. The system may include an archived database, with which the user can compare historical data in order to determine trends over time or make a comparison to similar objects or situations.

The invention includes several novel techniques to:

1. Analyze the video file by an adaptive array comparison technique to find a selected number of pixels that have the most intensity variation over time, i.e., the most physical movement.
2. Find the best frames to use (i.e., optimal frame spacing) to maximize the frame differences and best determine the periodicity of the movement.
3. Apply various mathematical functions, such as fast Fourier transform analysis (FFT) to derive richer physical information from the observed movement waveform.
4. Isolate wanted signals and reject unwanted ones.

As used herein, the terms “machine”, “machinery”, and “machine component” are intended to be taken in their broadest sense, to include any mechanical component that may exhibit some periodic movements. It includes, for example, motors of all kinds (e.g., electrical, internal combustion, turbines), components and linkages connected to or driven by motors; machine tools, grinding wheels and tool bits; electrical and hydraulic actuators; pumps, blowers, fans, pipes, ducts, and other fluid- or air-handling equipment; conveyors and materials or components conveyed thereon; and any parts, products, and workpieces that may be moving in or through a production environment.

As used herein, the term “vibration” refers to any physical movement that may be characterized by some periodic change of position as a function of time. Vibrations may be periodic, such as, e.g., sinusoidal, symmetric sawtooth, asymmetric sawtooth, or they may be aperiodic or noisy. Vibrations may have any waveform, which may include waveforms characteristic of superimposed vibrations of different frequencies, amplitudes, and phases.

It will be appreciated that the term “patient” or “individual” is used herein for convenience, and is intended to cover any human or animal that is to be monitored for vital signs. The term “patient” does not necessarily imply that the individual is ill or is presently undergoing medical treatment. Some non-limiting examples include: a sleeping infant being monitored for sleep apnea, SIDS, or other signs of distress; a patient in a hospital, emergency room, or nursing home; a patient undergoing study for sleeping disorders; a soldier in a combat situation; a person in a crowd being monitored for signs of stress or communicable disease; or an animal under veterinary care.

As used herein, the term “object” refers to anything that may be the subject of a video data file; it may be a living thing, such as a patient or infant under observation, or an inanimate object such as a bridge, machine, or other mechanical component.

It is important to keep in mind that the mathematical techniques of the present invention derive parametric outputs whether or not an image is ever created or portrayed. Thus, techniques of the present invention may be used with monitors that do not display images, do not recognize objects, and do not have frequent human observation or similar participation. For example, the present invention may output a control signal corresponding to the value of a commonly reported characteristic such as a breathing rate, heart pulse rate, or phase, a lag interval, a dimension of a periodically moving object, a period of a motion, or a timing of a motion, or other information associated with a periodic motion event without displaying an image thereof. The output signal may be in any convenient analog or digital format, e.g., 0-5 V, 4-20 mA, and may be part of a network, wireless network, mesh network, or other control and automation system operating on any convenient protocol, e.g., HART, WirelessHART, ZigBee, IEEE 802.15.4, etc. Conversely, in some examples, the user interface may include actual video images, which may be selected to correspond to a particular point in time when an output parameter has a particular value or the waveform displays a particular feature (e.g., an episode when unusual vibrations or some instability appeared temporarily). The functionality of the user interface may be further enhanced by the use of another aspect of the inventive technique, in which small physical motions can be highlighted or amplified in a video playback for better visual understanding of the movements that are occurring.

As used herein, the term “video” describes any data set representing a series of digital (i.e., pixelated) images of a scene, taken at fixed time intervals, so that the data represent points in X-Y-t space. The image may represent a pattern of reflected and/or emitted visible light, UV, IR, X-rays, gamma rays, or other electromagnetic radiation detected by a two-dimensional position-sensitive detector. Although certain standard file types will be described in some of the examples, the invention is not limited to any particular frame rate or file format. Furthermore, the data file may be obtained by or supplied to the processor via any suitable data port, such as a USB connection, ethernet connection, CD or DVD reader, or other I/O connection. The video file may represent an essentially real-time stream, an archived clip of any arbitrary length, a file stored in a cloud environment, etc. The video file might be a raw file directly from a source (camera, sensor, etc.), or it may have been processed, edited, or otherwise modified prior to analysis by the invention.

It will be appreciated that many “video cameras” and “video recordings” include not only images but also the corresponding audio data, synchronized with the image data. The invention can make use of the associated audio data in a number of ways, as will be described in several Examples.

It will be further appreciated that the invention is not limited to any particular type of image acquisition hardware; video cameras, webcams, digital cameras embedded in cell phones, etc., may also be used to generate the raw data files. The digital imaging device may further include any suitable lenses or other optical components, such as telescopes, microscopes, etc., as are well known in the art. In particular, the invention may be used for examining periodic movement in small MEMS devices or micro-actuators, which could be observed using a video microscope for quality control or other purposes. Adapted to a telescope, the invention could be used, e.g., to study vibrations in ships, helicopters, missiles, etc.

Many examples of the present invention are completely general in that they do not require or insist upon a blob that must be identified with an object in the field of view or with a contour segment that must be associated with an object in the field of view.

Techniques of the present invention may be applied to a variety of imaging modalities, including visible imaging, thermal imaging, multispectral imaging, or hyperspectral imaging. In fact these are entirely different and independent media having different sources and different mechanisms and different physical significances. However, the techniques for measuring motion remain the same for any spectral ranges. For example, the use of visible images of an object of interest overlaid (or interleaved, overlapped, interspersed, approximately synchronized, or truly simultaneous) with near or far infrared images may yield two effectively independent views of an object of interest. If reflected visible light reveals a periodic motion that may be associated with a structural vibration or some other periodic motion, and a thermal image (perhaps due to friction or an electrical problem) reveals a similar periodic motion in a location proximate to the visible finding and similar in phase, frequency, or amplitude, or all three, then this improves the likelihood of an accurate result rather than a false positive or false negative finding.

In the Examples that follow, various aspects of the invention will be made clearer, and applications to various monitoring problems will be illustrated. These examples are not intended to restrict the scope of the invention to the particular implementations described.

Method of Adaptive Array Comparison

Example

    • One way Applicants have developed to analyze a video stream to extract waveform information is Adaptive Array Comparison, which may be described generally as follows: A video signal includes multiple frames separated in time, and each frame includes multiple pixels. Each pixel value in a particular frame is compared to the values of corresponding pixels in different frames of the video signal to search for periodic signals occurring at the pixel locations over time. Periodic signals are defined in time by maximum and minimum peaks. These two points give the maximum absolute difference in the value of the periodic signal. This fact can be exploited to filter and locate pixel locations where the periodic signal gives the largest signal-to-background or signal-to-noise ratio in a scene or image sequence. In the case of human respiration we know the adult rate typically lies in the approximate range of 12-20 breaths per minute, which corresponds to 0.2 to 0.33 Hz. A medium rate would be 16 breaths per minute, or about 0.27 Hz. For a 15 fps camera, that corresponds to a breath every 56.25 frames. This suggests that a video of a person breathing may include periodic signals at certain pixels having a maximum and a minimum approximately 28 frames apart. This is the initial frame difference we use to locate the best pixels to track. We difference several series of images at this separation, meaning the value at each pixel location in one frame is subtracted from the value at the corresponding pixel location in a second frame. Then the second frame is differenced with a third frame, and so on. For each pixel location, the absolute values of the differences are added together. Then some number of pixels with the highest sum of differences are selected to be tracked. There is the potential that the chosen frames happen to be 90 degrees (or some other phase shift) out of phase with a max and min; so in the event that no initial peaks are found, the system can recalculate with a 90 degree phase shift (or some other phase shift). Once we find the correct pixels to track, we then begin peak counting for each selected pixel, noting the phase of the waveform. Once we have sufficiently determined the precise phase and frequency of the waveform, the system recalibrates, making sure to frame difference such that the number of frames between differences exactly equals the separation between a max and a min of the waveform, and starting in phase so the differencing is aligned with the max and min. The processor runs the code to do this, as with any other program; a specialized piece of hardware specifically designed for this task is not necessarily required (in contrast to the way in which decoding HD video is typically done).
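
As a minimal sketch of the frame-spacing arithmetic in the example above (the variable names are illustrative; the numbers are the ones assumed in the text):

    # Derive the initial frame separation for respiration monitoring.
    fps = 15.0                                 # camera frame rate (frames/s)
    breaths_per_minute = 16.0                  # assumed mid-range adult rate
    frequency_hz = breaths_per_minute / 60.0   # ~0.27 Hz
    frames_per_breath = fps / frequency_hz     # 56.25 frames per full cycle
    initial_separation = round(frames_per_breath / 2)  # max-to-min spacing
    print(initial_separation)                  # -> 28, the starting difference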

Applicant has found that the foregoing inventive method adapts to a particular component's (or patient's) waveform even if it changes. In the example of machinery, different frequency rates can be chosen to start with, based on some rudimentary knowledge of the equipment or previously stored user data. For example, passive electrical equipment might be expected to have a vibrational displacement corresponding to 60 Hz or some harmonic thereof; a motor or linkage might have most of its vibration at a frequency associated with its rotational speed. The inventive method inherently filters unwanted frequencies: because of the selected time difference between frames, we can reject signals associated with frequencies other than those related to the process we are monitoring.

Example

    • An image sequence is obtained through a video camera and stored into memory. A video sequence, as shown in FIG. 1, has dimensions of space on the x and y axes and the dimension of time on the z axis, and can be thought of as a 3-D data space. The video sequence contains a periodic or recurring motion of interest that is to be extracted from the data set. For example, a feature of interest may be occurring at a given pixel and we may be interested in monitoring that feature. We can use its temporal behavior to find and locate that pixel without any knowledge of the scene, the surrounding pixels, the spatial characteristics of the local pixels, or where that pixel may be.

Method of Adaptive Array Comparison for the Detection and Characterization of Periodic Motion

Example Mathematical Description

    • An image frame is defined as an X-Y matrix. A video file is a set of image frames defining an X-Y-t matrix.
    • For each pixel (i,j) one can calculate the difference (D_{i,j})_{M,N} between the value at pixel (i,j) in one frame and the value in another frame, where M is the starting frame number and N is the spacing between the two frames. So (D_{2,3})_{4,9} would be the difference in value or intensity at pixel (2,3) in frame 4 compared to that in frame 13.
    • The difference matrix is then summed (preferably in quadrature) to yield the total difference in pixel intensities between frames M and M+N. Difference matrices are calculated for various values of M and N, to find M and N that give the highest value of summed differences. Then a selected number of pixels (i,j) having the greatest difference are chosen and their intensities are tracked over time to determine the periodicity of movement.
    • There is generally a limit on how long one does this. For example, at 15 fps for 20 sec there are 300 frames, so if N is 10 we difference 29 times (accounting for the ends), or fewer as M is increased.
    • Once this is done initially and we have found the location of a peak and the peak separation, we redo the calibration with a specific M and N to get it exactly on the peak. M would now be the frame number where an expected max or min occurs, and N would be the number of frames in one-half of a waveform period. This introduces the novel aspect that the process becomes essentially adaptive. (A minimal sketch of the (M, N) scan appears below.)
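
A minimal sketch of this (M, N) scan, assuming the video is held as a NumPy array of shape (T, H, W) and using 0-based frame indices (the text's frame numbers are 1-based); the function names are illustrative:

    import numpy as np

    def summed_difference(frames, M, N):
        # Quadrature sum of differences between frames M, M+N, M+2N, ...
        total = np.zeros(frames.shape[1:], dtype=np.float64)
        for m in range(M, len(frames) - N, N):
            d = frames[m + N].astype(np.float64) - frames[m].astype(np.float64)
            total += d ** 2        # squared so opposing motions do not cancel
        return total

    def best_M_N(frames, M_values, N_values):
        # Scan candidate start frames M and spacings N; keep the pair whose
        # total summed difference (over all pixels) is largest.
        scores = {(M, N): summed_difference(frames, M, N).sum()
                  for N in N_values for M in M_values}
        return max(scores, key=scores.get)

At 15 fps for 20 seconds (300 frames), summed_difference(frames, 0, 10) performs the 29 differences noted above.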

Example

    • The method and functions of Adaptive Array Comparison may be described as follows:
    • 1. Video Sequence comes in at some frames per second (fps)
    • 2. Several seconds of that data are continually buffered. (Previous frames are overwritten with new frames.) These are the frames we process.
    • 3. From the buffered frames [1], [2], . . . [n] we select one or more frame-difference spacings N to calculate, e.g., every 4th frame (N=4) or every 5th frame (N=5), etc. This allows us to find the best periodic motion rate. We select the time range between frames based on the range of periodic motion we are interested in finding. This also acts as a band-pass filter, giving preference to periodic motion rates within this range.
    • 4. We also offset these frames, say, starting with the 1st frame, then the second, etc. This allows us to find the best phase. So for example if we are subtracting every 4th frame, we first would do the 1st frame minus the 5th, the 5th minus the 9th, and so on. Then we would do the 2nd frame minus the 6th frame, then the 6th frame minus the 10th.
    • 5. For each test, frame differencing is conducted for some number of frames. For example, 11 frames may be used and 10 frame differences will be calculated. Each frame difference will be an array of absolute value numbers with each position in the array corresponding to a pixel location.
    • 6. After 10 frame differences are calculated, the squares of the frame difference arrays are added together to produce a total frame difference array. Then the total frame difference array is analyzed to find the array locations having the largest values. For example, the ten largest values in the array may be found and the array locations containing those values are recorded. When we subtract the frames, we add all the differenced frames from the buffered video in quadrature, meaning we square the difference values so that negative and positive differences don't cancel. Motion in the opposite direction shows up in the difference frame with an opposite sign, but we want that to add positively; otherwise back-and-forth motion can sum to zero across the differenced frames.
    • 7. From all the total frame difference arrays across all the multiples and offsets we find a selected number of pixels that have the largest values, say, the brightest 10 pixels. These pixels represent the largest periodic motions in the scene that are between the rate boundaries set in Step 3.
    • 8. Those pixels' values are tracked over time, and this plots the motion waveform. The signals from the tracked pixels are not necessarily in phase. Any two signals could be 180 degrees out of phase and still provide the same information about the existence and frequency of the same time-periodic signal. For example, the signals of all pixels could be out of phase with all other signals, but still correspond to breathing of a subject. Thus the frequency of each signal would be the same.
    • 9. From the motion waveform we determine the rate and phase. In terms of frame differencing this translates to the peaks in the waveform occurring every Nth frame and the waveform starts at frame M.
    • 10. From the information in Step 9 the Adaptive Array Comparison method adapts since we know where the peaks start and how many frames apart they are.
    • 11. Now we subtract frames with a separation exactly equal to the separation between a maximum and minimum in the waveform. We also make sure to start this process exactly on a peak or minimum. This optimizes the rate and phase to precisely select the motion waveform.
    • 12. This process can be repeated based on a number of factors:
    • a. Time—The method can reiterate every n seconds to ensure it is optimized for the existing waveform.
    • b. When there is a large excursion in the measured waveform the process can restart as this can be due to a motion event.
    • c. The process can restart based on a trigger from an external motion detection method.
    • d. The process can be restarted on buffered data (the 20 seconds of previous video) when an alarm is triggered (for example, when no peaks are detected), to ensure the alarm is accurate. For example, if the waveform is accidentally lost, this step could check the last 20 seconds of data to see if the waveform can be detected in another pixel or set of pixels.
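
A rough sketch of Steps 7 through 11 above, reusing the summed_difference helper from the earlier sketch; scipy.signal.find_peaks stands in here for whatever peak-counting routine is actually used, and tracking a single best pixel is a simplification of tracking the full selected set:

    import numpy as np
    from scipy.signal import find_peaks

    def adapt(frames, M=0, N=4):
        # Step 7: locate the pixel with the largest total frame difference.
        total = summed_difference(frames, M, N)
        y, x = np.unravel_index(np.argmax(total), total.shape)
        # Step 8: track that pixel's value over time -> the motion waveform.
        waveform = frames[:, y, x].astype(np.float64)
        # Step 9: find the peaks to get rate and phase.
        peaks, _ = find_peaks(waveform)
        if len(peaks) < 2:
            return M, N        # no peaks found; caller may retry with a
                               # phase (offset) shift, as described earlier
        period = int(np.median(np.diff(peaks)))
        # Steps 10-11: restart differencing exactly on a peak, with a spacing
        # equal to the max-to-min separation (half the period).
        return int(peaks[0]), max(1, period // 2)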

Example

    • FIGS. 2 and 3 illustrate how the system selects the frame spacing by trying two different spacings and comparing the magnitude of the peak differences. In FIG. 2, every 8th frame is used; this turns out to be a bad or “null” spacing, as the sum of differenced frames for a given pixel is zero. When the spacing is changed to every 6th frame, the sum of differenced frames for a given pixel is 30, indicating a much higher signal for this waveform that distinguishes it from the rest of the pixels.
    • We would therefore try multiple spacings that represent a reasonable range of motion rates. We would also offset them, i.e., change the phase, because even the best spacing, one that exactly matches the peak-to-valley separation, can still produce a “null set” if the differencing starts at the wrong phase. The pixel with the largest sum of differences then tells us where to look. From this pixel we track and locate the peaks and valleys of the waveform. Then we adapt to the individual object's waveform with the next calibration, as illustrated in the sketch below.
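
This behavior can be reproduced with a synthetic single-pixel waveform. In the sketch below, the 12-frame period and amplitude of 15 are assumptions chosen to echo the values quoted for FIGS. 2 and 3, not the actual test data:

    import numpy as np

    t = np.arange(300)
    x = 15.0 * np.sin(2 * np.pi * t / 12.0)  # period 12 frames, peak-to-valley 30

    def mean_abs_diff(x, N, M=0):
        # Mean |x[m+N] - x[m]| over m = M, M+N, M+2N, ...
        idx = np.arange(M, len(x) - N, N)
        return np.abs(x[idx + N] - x[idx]).mean()

    print(mean_abs_diff(x, N=12))       # ~0: a full-period spacing is "null"
    print(mean_abs_diff(x, N=6))        # ~0: half period, but phased on zero crossings
    print(mean_abs_diff(x, N=6, M=3))   # ~30: half period, phased on peaks and valleys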

Example

    • FIG. 4 illustrates the summed frame of differenced frames from 20 seconds of video data collected on a human subject. Note that the method has successfully isolated motions associated with breathing based on the selected frame-number separation: here the darkest pixels identify the location of greatest motion, found in the chest region from the up/down, right/left motion of breathing as seen by the camera. These are the pixels that would then be tracked to determine the breathing waveform. It is important to emphasize that the data presentation in FIG. 4 is not an image per se, but rather simply a graphical display of the pixels (dark) that show the most movement. It is also important to emphasize that the entire process was conducted by the processor in an essentially autonomous operation.

Example

    • An explicit example of the calculations can be shown as follows: Let [1] be the first frame of a video sequence, a 640×480 image, so that [1] is a 640×480 matrix. Likewise [2] would be the second frame, making up a new 640×480 matrix, and [3] would be the third. We would like to sum the difference of every N frames. To ensure the difference is positive we subtract the frames, then square the difference, and then take the square root. Finally we sum all of the differenced frames.

For example, if we difference every 8 frames the calculation would be of the form:


√(([1] − [9])²) + √(([9] − [17])²) + √(([17] − [25])²) + √(([25] − [33])²) + … = [SUM]
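
A direct NumPy transcription of this sum, under the assumption that the video is held as a (T, H, W) array (frames[0] corresponds to [1] in the 1-based notation above):

    import numpy as np

    def quadrature_sum(frames, n=8):
        # Take every nth frame: frames[0], frames[8], frames[16], ... which
        # correspond to [1], [9], [17], ... in the 1-based notation above.
        sampled = frames[::n].astype(np.float64)
        diffs = sampled[:-1] - sampled[1:]        # [1]-[9], [9]-[17], ...
        return np.sqrt(diffs ** 2).sum(axis=0)    # per-pixel [SUM] matrix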

Other potential applications and features of the invention include the following:

A user-defined setting can be selected to narrow the window of rates to look for, e.g., 60 Hz in the case of equipment running on standard AC power. It will be appreciated that narrowing the window allows the system to converge more quickly on an optimal frame spacing, because it reduces the number of iterations the system must go through, making the process quicker and more accurate, since eliminating certain frequencies reduces the chance of error.

Information such as a profile of a particular component's characteristic vibration rate, may be stored and later retrieved so the device has a better range of expected rates to look for a priori. In this case the user selects a profile that the device has gathered over previous uses, or parameters that were previously entered.

Sections of the video scene may be selected to narrow the search. For example, if the video camera is looking at the edge of a moving paper web, a user interface might allow the user to draw a box (e.g., on a touch screen) to select only the sheet of paper, eliminating the rest of the production environment. Eliminating extraneous parts of the image from consideration will allow the calculations and optimization to proceed more quickly.

One can isolate periodic motion by selecting the range the motion is expected to be in. For example, a particular pump might have a standard speed at which it rotates or reciprocates, which will suggest a reasonable range of frequencies to start with.

The data can be used with a standard peak finding method to determine the max and mins of the waveform.
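
For instance, scipy.signal.find_peaks is one standard peak finder; below is a minimal sketch on a synthetic tracked-pixel trace (the rate and frame count are illustrative assumptions):

    import numpy as np
    from scipy.signal import find_peaks

    fps = 15.0
    t = np.arange(300)                             # 20 s of frames at 15 fps
    waveform = np.sin(2 * np.pi * 0.27 * t / fps)  # ~0.27 Hz tracked-pixel trace

    maxima, _ = find_peaks(waveform)               # indices of the maxima
    minima, _ = find_peaks(-waveform)              # minima: peaks of the negated trace
    period = np.median(np.diff(maxima))            # ~55.6 frames per cycle
    rate_hz = fps / period                         # back to ~0.27 Hz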

Example

    • FIG. 5 presents a simplified flow chart of the basic camera operation and analysis done in a completely automated process. The input video may be MPEG1 simple profile or MJPEG from, e.g., a USB webcam or similar source. The initial calibration subroutine, using 20 s of video, locates the 5 pixels with the greatest values in the difference matrix, and establishes State 0, or initial calibration. The waveform tracking and processing subroutine tracks pixels and continuously outputs the last N values of the pixels determined to be the greatest from the difference matrix; processing is done on the waveform to determine output states. States 1-3 will be determined in this routine based on processing of the waveforms. An ongoing calibration subroutine is continuously looped; this uses 60 s of frames, summing the difference of frames and locating the five pixels with the greatest values in the difference matrix. Five output states are continually outputted from the subroutines through a physical channel. The Off State condition may be determined by a selected trigger, implemented either in hardware or software.

Example Calibration Subroutine

    • A subroutine recalibrates the location to find the best pixels. This consists of going through the process of frame differencing again and locating the 15 highest-valued pixels in the summed array. The duration of the calibration, as well as the frame numbers to difference, can be programmatically controlled, so they may vary. For example, we may choose to difference every 4 frames of a 16 fps camera for 40 seconds, resulting in 160 differenced frames. Note this is different from the initial calibration, since that is limited to 20 seconds.
    • This recalibration process continually goes on in the background while the waveforms are being outputted from the pixels and the peak finding is performed.

Method of Comparative Analysis of Time-Varying and Static Imagery in a Field

The invention may further include a method for measuring harmonic or nonharmonic motions based on corresponding temporal intensity changes perceived by a focal plane array or equivalent sensing device, as taught generally in Applicant's Provisional Application Ser. No. 62/209,979, filed on Aug. 26, 2015:

Example

    • Temporal variations in light levels on a pixel can be associated with motion as features of varying contrast, intensities, colors or other properties change. For example, a black and white edge imaged on a single pixel may not fully be resolved, but its motion can be correlated to temporal intensity changes measured by an individual pixel as different percentages of the black or white side of the edge are imaged onto the pixel. Note that a pixel may refer to one sensing element within an array or to a group or block of sensing elements.
    • An analysis of the scene over time can be made to determine the nature of the time varying signal and certain properties determined over that time may be compared to instantaneous values during that time. For example, a periodic signal may be measured by a single pixel as a periodic intensity change. That periodicity may be the result of a black and white edge or other feature periodically moving in the scene. As that feature periodically moves it may image areas of different reflectance or transmission onto a singular pixel making its intensity value change. That signal may be represented as in FIG. 6.
    • The signal represents motion of an object over time. There exists an intensity of that object that represents its equilibrium or stationary position, i.e., the value of intensity that would be measured if the object were not in motion. For the case of a periodic signal, that position is often the zero point on the y-axis, or halfway between the min and max amplitude of the periodic signal. In a statistical measurement, that value may be represented by the median.
    • If we are to calculate the median of the signal in FIG. 6 we may find that value to be represented by the horizontal line shown in FIG. 7.
    • A comparison of the median value to instantaneous values over time then indicates the variation of the motion deviating from the median, as shown in FIG. 8.
    • FIG. 8 shows how light levels can be compared to a reference value, in this case the median value, to determine relative motion deviating from that value as measured by the light signal intensity change. The reference value may not necessarily be the median. It could be the mean, the max amplitude, the min amplitude or some value throughout time in the measurement, perhaps the first image of a sequence.
    • Additionally, this can be done over an array or image so that the reference value is determined for all pixels or elements of the array. This array could be an image measured with a camera. Each pixel over time will be analyzed and a reference frame may be determined. In some instances, analysis may not be necessary, as the measurement is compared to a known reference frame or perhaps a single frame in an acquisition.
    • Each of a series of instantaneous frames can be compared to or differenced from (or perhaps proportioned or divided by) the reference frame. Those difference frames can be reordered in the same order as the instantaneous values to create a time series of comparison frames. Those frames can be played back in order to visualize the changes relative to the reference frame. For simplicity we refer to these operations as “difference” or “difference frame,” although the operation does not have to be subtraction; it can be division, multiplication, or some other comparison.
    • An example of this is to determine the median frame where a periodic signal is evident in a scene. The median frame represents the node of the periodic signals. Each instantaneous frame is compared to, perhaps differenced from, the reference frame, and those differenced frames are played back in order. A light variation in the scene or another visualization may represent a periodic motion signal from a periodic motion in the scene.
    • A single spatial image from the original time sequence may be overlaid with each frame in the differenced sequence. This overlay may help discern the original content of the spatial structure in the scene when the video sequence is analyzed. Different weights may be applied to the original image and the individual difference frames to determine the strength of the overlay and how much the original image contributes.
    • Other decisions can be made in creating this sequence of differenced frames and overlay. For example, if the difference is below a threshold it is not included in the re-sequenced set of frames, and instead the pixel value from the signal of the original time series image is used. This would allow the sequence to show only motion that had a certain level of intensities above the threshold. Again this may be used with or without various weightings of the original image and individual differenced frames.
    • FIG. 9 shows a single image from a video sequence. FIG. 10 shows a difference image of a frame similar to FIG. 9, differenced from the median reference frame. The white line on the bottom of the bridge (arrow) in FIG. 10 is an intensity change that is indicative of periodic motion from the bridge vibrating. This illustrates a powerful result of the inventive analysis, because it will be appreciated that the actual deflections of the bridge are far too small to be seen in the raw video frames. A minimal sketch of this median-reference differencing follows.
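
The sketch below assumes a (T, H, W) NumPy video array; the optional threshold fallback follows the bullet above, and the function name is illustrative:

    import numpy as np

    def difference_sequence(frames, threshold=None):
        reference = np.median(frames, axis=0)          # per-pixel median frame
        diffs = frames.astype(np.float64) - reference  # signed difference frames
        if threshold is not None:
            # Below-threshold pixels revert to the original image values so
            # that only motion above the threshold is visualized.
            small = np.abs(diffs) < threshold
            diffs = np.where(small, frames, diffs)
        return diffs                                   # play back in order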

Method of Motion Amplification

Another method that can be used to further enhance the inventive technique is to amplify motion by an amplification factor. This factor determines the strength of the overlay of the difference image sequence on top of a static image from the original motion sequence. For example, if the factor is 1, equal weight may be applied, whereas if the amplification factor is 30, the difference frames are increased in intensity by some factor relating to 30 in the composite image. This allows one to determine how much of the difference sequence is present and how much of the static frame is present. A factor of 0 may turn off the motion and just show the static image.

Example

    • FIG. 11 shows the original image frame of a paper square with a plus sign, attached to a speaker vibrating periodically. The arrow in FIG. 11 shows the motion of the paper relative to the frame imaged by the camera. This image corresponds to what is being imaged in FIGS. 12 through 17.
    • FIG. 12 shows a difference image in a sequence from the median reference frame. Notice the increased intensity change on the top and right side of the paper with the plus sign.
    • One can see that in FIG. 13 the increased intensity change is now on the lower and left side of the paper. This is from a frame in the same sequence at a later timestamp. Comparing FIGS. 12 and 13 indicates the motion is up and to the right and then down and to the left. FIGS. 14 and 15 show the same behavior except that amplification has been applied to these frames, meaning there is an overlay to the original images such as the one shown in FIG. 11. The amplification factor determines the relative contribution of the original image such as FIG. 11 and the difference portion such as in FIGS. 12 and 13. More of the spatial detail of the scene is present in FIGS. 14-17 because overlay of the original image in the original sequence contributes to the image displayed.

Applicant has discovered that this method can be further modified to create the appearance of exaggerated movement in the image, by superimposing the differenced frame onto the reference frame and using the resulting image to replace the original frame in the video.

Example

    • The first step is to choose a reference frame. In the case of periodic signals the median works best because that is the zero value. (The median image is created by calculating the median value of each pixel over the length of the video and using all those values to create a median image.) Then every frame is compared to that reference frame to create a new set of differenced images. If the video clip is 800 images long (10 seconds at 80 fps), this will comprise 800 difference frames. These images will likely show relatively small variations in pixel intensity, because they contain only what has changed.
    • The next step is to amplify. For example, if we want a factor of 30 amplification we multiply all the differenced frames by 30, so if one particular pixel in a difference frame has a value of 2 it becomes 60. This creates a new set of differenced frames that are amplified by the selected factor.
    • Those amplified difference frames are now directly added to the original frame. This helps to introduce a semblance of the original scene and give a baseline. (Note that 8-bit images span only 0-255, so because we added to the original image we may need to rescale to make sure the intensities fit in 8 bits. This introduces a small amount of noise into the image, as seen in FIGS. 28A and 28B, but does not interfere with the added functionality of being able to clearly visualize the motions.)
    • Now we have a new set of images that comprise the original image plus the amplified difference frames that represent motion. These new frames are put back in the video and played back. The frame rate is generally reduced so the motions are easier to see, especially if the motions occur at high speed.
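
A rough sketch of the four steps just listed; global rescaling is one plausible reading of the rescale step (clipping would be another), and the names are illustrative:

    import numpy as np

    def amplify_motion(frames, factor=30):
        frames = frames.astype(np.float64)
        reference = np.median(frames, axis=0)      # median reference frame
        # Add the amplified difference frames back onto the originals.
        amplified = frames + factor * (frames - reference)
        # Rescale so the composite fits back into 8 bits (0-255).
        amplified -= amplified.min()
        amplified *= 255.0 / max(amplified.max(), 1.0)
        return amplified.astype(np.uint8)          # reinsert and play back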

Example

    • A short video was taken of a small electric motor coupled to a supported rotating shaft having a flywheel centered between the supports. In the raw video, some eccentric motion can be seen in the coupling between the motor and the driven shaft, but the motor itself appears to be motionless. However, after applying the motion amplification procedure described above, the movements become clearly visible, so that one can see exactly how the motor is moving or rocking during operation. Careful examination of the two still images in FIG. 28 shows a visible difference, with the apparent position of the motor slightly more parallel to the ROI box in FIG. 28A and more skewed in FIG. 28B.

When viewed as a video, the visual result is not only striking but in many ways completely surprising, as there are no additional steps or mathematical modifications to cause the apparent motions to be amplified. The process is actually targeting motions that are subpixel, in many cases much less than a pixel. The process for creating the amplified-motion video simply alters each individual pixel; in other words, a measurement from one pixel isn't directly altered or translated to a neighboring pixel to make it look as though an edge moves into that pixel. In most cases, defocusing and other issues cause the light in about 4 pixels to be changed by an edge motion, so each of those respective pixels' motion is amplified, causing the motion effect to be present in all of them.

Applicant speculates that one phenomenon that might be at work here is that multiple pixels are behaving in a correlated way. In other words, when an edge or feature moves one sees the effect of motion in multiple areas and visually processes that as motion. For example with the rocking of the motor, one sees the entire side of the motor go up, so all of those pixels are working together in a correlated way to make the viewer perceive that the object is moving.

Example

    • Thresholds and limits may be set for the entire differenced image, or for certain regions, so that a computer system can autonomously make decisions without human interaction. If certain pixels or groups of pixels behave in a certain way, then action by the computer can be taken without human interaction. Examples include intensity level changes indicative of motion in a certain direction, motion above a certain threshold, or motion of specific components being present. These are a few examples of a multitude of events that could trigger a reaction. A reaction or action could also be triggered in correlation with an outside event that is input into the system from an outside source.

The information gathered from this system could be used to control outside systems, for example, a feedback response control, alarming system, or process control. A multitude of outside systems could be integrated with the system.
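
A minimal sketch of such an autonomous threshold check on a differenced frame; the region bounds, threshold value, and trigger_alarm hook are hypothetical placeholders, not part of the specification:

    import numpy as np

    def region_exceeds(diff_frame, region, threshold):
        # region = (y0, y1, x0, x1); True if the mean absolute difference
        # inside the region exceeds the threshold.
        y0, y1, x0, x1 = region
        return float(np.abs(diff_frame[y0:y1, x0:x1]).mean()) > threshold

    # Hypothetical use with an outside system:
    # if region_exceeds(diff, (100, 200, 50, 150), threshold=12.0):
    #     trigger_alarm()   # placeholder for a feedback or alarm action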

The invention may be used with other imaging techniques that measure motion to determine qualitative values of the motions indicated in the technique described here. Feature tracking and edge tracking are examples of techniques that may be combined with the inventive method.

Example

    • FIG. 16 shows a difference frame, except that here the reference frame is a mean frame rather than a median frame, while FIG. 17 shows another difference frame in which the reference frame is a single indexed frame from the original video sequence; this shows that one can compare against a multitude of types of reference frames.
    • The inventive technique may be combined with other techniques to overlay a color profile indicative of absolute displacement. For example, a color mapped scaling may be applied to show the relative intensities or absolute intensities of these motions.
    • Thresholds may be determined based on max and min amplitudes of these motions. One may therefore determine if certain levels are exceeded in an area of interest. Frequency content may be determined for the area where a threshold has been reached. This event may be indicative of a resonance. The resonance may be characterized and actions (e.g., alerts) can be triggered based on the conditions.
    • Filters can be applied to the temporal signal, allowing, for example, a high-pass, low-pass, band-stop, band-pass, or other type of filter. This would allow the signal to correspond to only a limited number of frequencies. In the event of a band-pass filter, for example, only certain frequencies would be evident in the temporal signal, so when the difference images are created the frequency content would be known and could be attributed to those frequencies.

The invention could improve other edge motion detection schemes where the motions that are present can be attributed to edge motion and characterized as such. This technique could be used to visualize and characterize edge motion from displacement.

Phase information can be determined with the inventive technique based on the light level changes from dark to light indicating the motion is increasing (positive in a reference frame) or decreasing (negative in a reference frame) relative to the reference frame.

Overall motion can be determined irrespective of direction in that the difference frames can be squared to remove any negative component, and then compared.

Direction can be determined from the increase or decrease in the difference frame. Positive motion may be shown as a brightening in the scene and negative motion as a darkening. A +/− sign convention can be applied to brightening or darkening, as they are relative.

Example

    • Nonperiodic signals may be detected and imaged with this technique. Take for example FIG. 18. This shows the motion at a particular location in a scene over time, as described more fully in Applicant's co-pending application “Non-contacting Monitor for Bridges and Civil Structures”, Docket No. RDI-017. In this example, taken from a screen shot of a GUI, the system is tracking the motion of the lower edge of a girder as a truck and two cars cross the bridge. This may represent 1 pixel. The line through the center of the plot may represent the value, for that particular location or pixel, comprising the reference frame. Then all differences are compared to that value. This can be done for all locations or pixels so that all differences are relative to that intensity value, which represents the object's position located on the line. Then the series of difference frames are relative to this reference, and all motion seen is relative to the location of the line. This may represent the equilibrium or static position of a bridge or structure. All motion imaged in the series of difference frames shows motion relative to this.
    • Note a pixel may also represent a group of summed smaller pixels.
    • The series of differenced images can be reorganized in a temporal sequence for playback as a time sequence such as a movie. This sequence could be in the form of a series of images or as a simple standard video format. This process could be autonomous and in various forms using different static images as the reference file and/or in combination with overlays as previously discussed.

As noted above, the imaging systems may have multiple inputs. These may comprise two visible cameras; an infrared imager and a visible camera; a camera and an input other than an imager, such as an ultrasonic sensor, a temperature monitor, or a pulse monitor; or some other combination of two or more devices.

Example

    • In a system in which material is being processed in the form of a moving web, e.g., papermaking, it may be advantageous to position two video acquisition systems opposite one another so that each is recording images of opposite edges of the web. In this setup, it will generally be preferable to have both systems synchronized with a common time stamp so that coupled vibration phenomena may be detected and quantified.
    • The rate of video frame acquisition may be adjusted, e.g., to correspond with a naturally-repeating feature on the moving web, such as a printed page. In this way, every frame would capture substantially the same view, and periodic vibrations may be more easily discerned. In a case such as the cold-rolling of metal sheet products, the frame rate may be adjusted to correspond to the linear equivalent of one revolution of the processing rolls, so each frame represents a part of the sheet that contacted the same area on the roller.

The inventive technique is not limited to a particular wavelength of light. Different colors are represented by different wavelengths of light; e.g., 550 nm is green. Amplitude changes that are detected by this technique can be restricted to a single wavelength of light or can represent a summed intensity change over multiple wavelengths. Each wavelength can be measured independently or all together (monochrome grayscale). The inventive technique may, for example, monitor only the green, blue, or red wavelength, or monitor the sum of all three.

Electromagnetic wavelength options. In addition, the inventive technique is not limited to visible wavelengths of light, but can be used in the near IR, far IR, or UV. The technique could be extended to any sensor type that can measure changes in light levels over time, whether from reflective or emissive sources. Visible light generally, although not always, arises as a reflection. Thermal IR light generally, but not always, represents emission from a surface. The invention works regardless of whether the target is reflecting or emitting the light being detected.

Sensor selection. The sensor type can be chosen based on the scene or target. For example, if the scene is completely dark, devoid of a visible light source, a thermal IR sensor may be used to monitor the changes in light levels. Also, if a particular material or substance is the target and light level changes are due to a property of interest on the target, another sensor type may be chosen. For example, for a gas (or, more generally, a chemical) that absorbs at certain wavelengths, a particular sensor that detects those properties may be chosen. One may be interested in using the technique to find an exhaust or chemical leak in the scene based on light intensity changes from the leak specifically associated with absorption and/or emission at certain wavelengths. Another example may be a flowing liquid that absorbs in certain colors; if flow changes or pulsing may be indicated by intensity changes at a certain wavelength of light, then a sensor particularly sensitive to that wavelength might be chosen.

Interpreting measurement information. The inventive technique can also be used to garner information about the type of change. A particular change seen by a thermal sensor would indicate that the temperature is changing, whereas a change in color may indicate the target is changing its absorption or emission properties. A change in amplitude could also be indicative of a change in position or vibration, whereas a change in position of the detected signal from pixel to pixel in time may give information about displacement.

Comparing multiple measurements. Ratios or comparisons of color changes or amplitudes at certain wavelengths can also be used. For example, it may be useful to locate a pixel that changes in intensity from blue to red. This could be indicative of certain properties of interest. An example would be characterizing the uniformity of printing or dyeing on a paper or fabric web. Multiple sensors could be used for this technique, or a single sensor with wavelength filters applied (such as a typical color camera). Certain features of interest may be indicated by relationships between multiple sensor sensitivities or wavelengths of light.

Redundant and independent inputs. Multiple sensor types or wavelength detections could also provide multiple detections of the same phenomenon increasing the confidence of detection. For example, the light intensity changes due to the periodic vibration of a duct may be detected with a visible or IR camera pointed at the duct wall while another sensor looks at the intensity change in thermal IR from temperature changes around an inlet or outlet of the duct. The technique is then used in both cases to strengthen the detection scheme.

False positive and false negative findings. Multiple wavelengths could be used to discern or improve findings, whether false positive, false negative, true positive, or true negative. Intensity shifts from multiple wavelengths (red, blue, green, IR, etc.) could be used in conjunction with each other to improve the signal-to-noise ratio and also to provide multiple independent readings to increase confidence in detection.

Measurement duration. The inventive technique could be used with signals that are repetitive but only over a short time duration, e.g., vibrations that arise in forging or stamping operations. The technique could be applied to shortened windows of time to capture signals that occur only for a set duration. Furthermore, it could be used for signals that continually change over time but are ongoing. For example, with a signal that speeds up or slows down, the time that is used to calibrate or search for a certain intensity change could be shortened to be less than the time over which the signal itself changes.

Transient events. Additionally, there may be irregular or transient events in a periodic signal. The inventive technique could be used in a short enough time window, or over a sufficient sequence of waves, to extract the location of a periodic signal in the presence of irregular or transient events. FIG. 19B shows an irregular and transient event in an otherwise periodic motion. If the sample window for the technique described here is properly placed, the maximum and minimum of the periodic signal can be located. Multiple phase offsets would help to address this issue by building up a pixel's sum of differences at a time when the phase offset for a starting point has moved it past the irregular or transient signal occurrence.

Spatial proximity. The invention can find multiple pixels of interest, and spatial relationships between those pixels can be further exploited. For example, if a large number of pixels of interest were selected and the vast majority of them were found to be in close proximity to each other, that could indicate the pixels are related to the same physical phenomenon. Conversely, if they are spread apart, with no apparent spatial coherence or statistical meaning in their spatial relationship, or if they are randomly spaced, that could indicate they are not related. Furthermore, this technique could be used to increase confidence in the signal or to improve findings, whether false positive, false negative, true positive, or true negative. For example, in a motor-driven pump, there are likely to be many pixels of interest found near the coupling. One could expect a certain percentage to be heavily localized. If this is not the case, it may lower the confidence that the vibrations of the machine are being detected. Conversely, if a large number are heavily centralized, one may be more confident in having located a physical region undergoing motion from vibrations. The confidence may be set by a weighted spatial location mean of the pixels, by average separation distance, or by standard deviation from the spatially averaged locations of all pixels of interest.

Cycles per minute. Intensity variations for different pixels of interest can be indicative of certain phenomena of interest. By limiting the temporal separation over which the pixels are differenced and the differenced sum is obtained, one can filter for phenomena of interest. For example, if one is interested in a rotating or reciprocating machine, one would preferably limit the frame separation to the max-to-min separation time of waveforms that correspond to the rate of rotation or reciprocation.

Re-calibration—finding a pixel of interest. After the technique adapts to find the suitable or best separation that gives the largest intensity change based on the differencing of max and min frames, a new search can be performed with that knowledge, under tighter constraints, to seek out specifically that waveform. In that sense the technique is adaptive: it first uses more liberal parameters to find the initial signal. It is also possible that a user's information, or information on a subject or phenomenon, may be stored. The technique can then be used with a priori knowledge of rate, phase, etc. to speed up finding the pixels of interest. For example, a user may store the profile of a particular machine or class of machines, and the technique is then used with knowledge of that data. That way, fewer cycles need to be performed and a tighter constraint can be placed on the technique to find the pixel of interest.

Visible and infrared photons. Variation in the intensity of pixels may not always result from radiation emitted or reflected by a single object. For example, if something is moving and at a different temperature than the background, that object may move back and forth, periodically blocking a small portion of the background. To a thermal sensor, a pixel detecting light in that region will see an increase and decrease in brightness from the object moving back and forth, as the object at temperature T1 and then the background at a different temperature T2 are alternately imaged by the pixel.

Multiple cameras. Multiple cameras can be used for multiple detection schemes. They could potentially run at different rates. It is possible to temporally align frames so that certain frames occur at the same time; in this case the resulting detections of a signal can be temporally aligned and correlated as well. Cameras could potentially be set to image the same field of view. Pixels across multiple cameras or sensors could be correlated so that the spatial relationships of the pixels in each camera's image are known.

Other sensors. Other inputs could be correlated to one or more cameras. The detected signal could potentially be correlated to another input signal as well. For example, if a pulse oximeter provides input to the system, the blood pulse and potential respiration timing could be used to validate or increase the confidence of a detected signal determined from a pixel of interest found by the technique. Tachometers, accelerometers, and tonometers are all examples of sensor types that could be used in conjunction with the inventive technique. These input signals could also provide frequency or phase data to allow the system to use tighter constraints to reduce the number of iterations it goes through, or to immediately determine the proper phase and/or frequency from which to select the differenced frames. These inputs can also be used as triggers.

Single pixel and combination of many pixels. Techniques of the present invention may be used with the smallest achievable pixel size or may be used with binned pixels, where multiple neighboring pixels are collectively associated (mathematically or statistically) to create a larger virtual pixel. This binning may be done on the camera or chip, or afterwards in data processing. Binning may be done horizontally, vertically, or both, and may be done proportionately in both directions or disproportionately. Collective association or binning may potentially leave out or skip individual pixels or groups of pixels. For example, one form of collective association may comprise combining a plurality of bright pixels while ignoring some or all of the pixels not determined to be “bright” or “strong” with respect to a characteristic of interest, such as a selected frequency range or amplitude.

Gaining confidence by eliminating false findings. It may be of interest to increase the confidence of the detection by exploring neighboring pixels. Even if those pixels were not chosen as the ones exhibiting the largest motion they can be explored to determine if at least one or more exhibit the same or strongly correlated waveforms to the pixel of interest. If a physical phenomenon that one is trying to detect is expected to be larger than one pixel, it stands to reason that neighboring pixels would undergo similar behavior. Thus it will be clear that this could be used to eliminate false positives in a detection scheme.

Multiplexing. The inventive technique can be applied in a single pixel variant in which an optical element would be used in a multiplex mode where the optical element scans the scene and the transducer samples at each point in the scene. An image or array is created from the scanned data. The sampling/scanning rate is contemplated to be high enough to effectively sample the scene at a fast enough rate to see time-varying signals of interest. Once certain pixels of interest are located, the system would then need only scan those pixels until a recalibration is needed.

Searching a plurality of frequencies. One can compare amplitudes of subtracted frames at different separation values, or multiple sums of subtracted frames at different separation values. For example, a comparison can be made between the sum of the subtracted frames for separation N1 and for separation N2. The frame separations are indicative of frequencies, so this comparison allows one to compare amplitudes of signal changes at different frequencies. Multiple frame separation values, each giving information about the amplitude at a corresponding frequency, can be used to construct a frequency spectrum for a single pixel or multiple pixels.

Arrays representing subtracted frames, or sums of subtracted frames, at certain frame separation values may be indicative of a frequency. Those arrays may be compared to reveal information about the signals. For example, if two arrays are determined that are indicative of frequencies f1 and f2, one may compare those two arrays to determine the spatial distance between the phenomena that are causing the frequencies. In this case the array may be a spatial image.
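
One illustrative way to assemble such a spectrum is sketched below in Python/NumPy (not part of the original disclosure). It assumes a (T, H, W) grayscale video array and treats a frame separation N that lands on successive maxima and minima as roughly a half-period, so the associated frequency is approximately fps/(2N); the names and ranges are assumptions.

    import numpy as np

    def summed_abs_diff(frames, n, start=0):
        """Per-pixel sum of |frame[k] - frame[k+n]| over the sequence."""
        picks = frames[start::n]
        return np.abs(np.diff(picks, axis=0)).sum(axis=0)

    fps = 30.0
    frames = np.random.rand(120, 8, 8)   # placeholder (T, H, W) video data
    spectrum = {}                        # approximate frequency -> per-pixel amplitude
    for n in range(2, 31):
        spectrum[fps / (2 * n)] = summed_abs_diff(frames, n)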

The following example will more fully illustrate the inventive method, applied specifically to the case of monitoring respiration, as described in Applicant's co-pending application.

Example

    • Initial calibration with a single frame separation value and starting point for frame differencing does not optimize the differenced values specific to the respiration rate or maximum and minimum values in the chest motion. To solve this we select multiple frame separation values, N, all at multiple starting points, M, to ensure that we find the optimized signal of interest. A series of waveforms, FIGS. 20A-D demonstrates this principle.
    • Here we see that at 4 frames of separation, FIG. 20A, the separation does not align with the maximum and minimum peaks in the waveform. Aligning with the maximum and minimum peaks would give the strongest signal indicating that the right separation or rate has been found.
    • FIG. 20B shows the effect of changing the separation N between differenced frames to every 8 frames. One can see that this is better but not quite optimal. Next consider 9 frames, FIG. 20C. To ensure that all possibilities are considered we want the option to select a range for frame separations to subtract as well as the increments in spacing. For example, we subtract from every 2 to 30 frames in increments of 4, or generally we subtract from N1 to Nn in frame separations in increments of ΔN.
    • In addition to frame separation values, the starting point (referred to as phase in wave mathematics) plays a role in finding the correct frame.
    • Considering again the case of 9 frames, as shown, it was the correct separation to subtract to find the maximum difference in frames since it aligned with the maximum and minimum in the waveform. Now we choose a new starting point M and see how it affects the results.
    • In FIG. 20D, we see that offsetting the starting point to frame number 5 misaligns the 9-frame separation so that it no longer coincides with the maximum and minimum of the waveform. So in addition to doing a multitude of separations, for every separation value N we also calculate the difference for multiple offsets. For example, if we difference every 5 frames, we do that difference for all offsets from M1 to Mn with an increment of ΔM. An example would be subtracting every 5 frames starting at frame 1, then subtracting every 5 frames starting at frame 2, and so on. Again, in general we want the option to subtract at multiple offsets in increments of ΔM within a range of M1 to Mn. For example, we may want to increment the offset by 5 from 0 to 20. That would mean we do all the ranges of differenced frames starting at frame 0, then do them all again starting at frame 5, and so on.
    • Once we find the brightest pixels from all the calculations (both all offsets, M, and all frame separations, N) we now know what pixel to look at, where the waveform starts, and what is the separation of the peaks and valleys.
    • The next calibration we do is adapted to these values and we only calibrate based on those values.
    • For example, assume that we find that the peaks and valleys separation is every 25 frames and the starting point is 5. Now we know the waveform restarts every 50 frames. So if we recalibrate, it would be at position 55, 105, 155 . . . and so on. This eliminates the need to do all the calibrations above or what we call the initial calibration.
    • So in terms of the above, the Initial Calibration is the one that does all separations and all starting points. A recalibration (adapted from the initial calibration) uses the known values determined from an initial calibration. All of these operations are conducted by the processor in a substantially autonomous manner.
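
The shape of such an initial calibration might be sketched as follows (illustrative Python/NumPy, not the specification's implementation; for brevity the score is reduced to the brightest single pixel's sum rather than a combined total over several top pixels, and the N and M ranges reuse the example values given above):

    import numpy as np

    def summed_abs_diff(frames, n, m):
        """Per-pixel sum of |differences| taken every n frames, starting at frame m."""
        picks = frames[m::n]
        return np.abs(np.diff(picks, axis=0)).sum(axis=0)

    def initial_calibration(frames, n_lo, n_hi, dn, m_lo, m_hi, dm):
        best = None                                  # (score, N, M)
        for n in range(n_lo, n_hi + 1, dn):          # every frame separation N
            for m in range(m_lo, m_hi + 1, dm):      # every starting offset M
                score = summed_abs_diff(frames, n, m).max()
                if best is None or score > best[0]:
                    best = (score, n, m)
        return best

    frames = np.random.rand(300, 16, 16)             # placeholder video data
    score, N, M = initial_calibration(frames, 2, 30, 4, 0, 20, 5)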

Example

    • Simple adaptive array comparison example using 3×3 array:
    • Assume we are using a camera with 9 pixels in a three by three array operating at 10 frames per second.
    • We believe the signal of interest has a frequency of about 1 Hz, so max and min values will occur at a rate of 2 Hz, meaning max and min values should be about 5 frames apart at 10 frames per second. We decide to conduct frame differencing tests at 4 frames and 5 frames. Each test will calculate 4 frame differences.
    • To test the 4 frame possibility, we select frames 1, 5, 9, 13 and 17 for frame differencing. To test the 5 frame possibility we select frames 1, 6, 11, 16 and 21 for frame differencing.
    • The frames have the following values:

Frame 1
3 3 5
3 3 5
3 3 5

Frame 5      Frame Difference 1
3 3 0        0 0 5
3 2 0        0 1 5
3 3 0        0 0 5

Frame 9      Frame Difference 2
3 3 5        0 0 5
3 3 5        0 1 5
3 2 5        0 1 5

Frame 13     Frame Difference 3
3 3 0        0 0 5
3 2 0        0 1 5
3 3 0        0 1 5

Frame 17     Frame Difference 4
3 3 5        0 0 5
3 3 5        0 1 5
3 3 5        0 0 5

Total Frame Dif.
0 0 20
0 4 20
0 2 20
    • In this test, pixels (1,3), (2,3), and (3,3) are selected as the largest pixels, each having a total frame difference of 20, with a combined total of 60 for the three largest array values.

Frame 1
3 3 4
3 3 4
3 3 4

Frame 6      Frame Difference 1
3 3 0        0 0 4
3 2 0        0 1 4
3 3 4        0 0 0

Frame 11     Frame Difference 2
3 3 4        0 0 4
3 3 4        0 1 4
3 2 4        0 1 0

Frame 16     Frame Difference 3
3 3 0        0 0 4
3 2 0        0 1 4
3 3 4        0 1 0

Frame 21     Frame Difference 4
3 3 4        0 0 4
3 3 4        0 1 4
3 3 4        0 0 0

Total Frame Dif.
0 0 16
0 4 16
0 2 0
    • In this test for five frames, pixels (1,3), (2,2), and (2,3) are selected as having the largest values (16, 4, and 16, respectively), but the total combined value of the three pixels is only 36, as compared to 60 in the test for four time frames. So this test would indicate that a four-frame difference is the best time interval, and the pixels to be monitored would be (1,3), (2,3), and (3,3). However, a similar test will be run at other phases for both the four- and five-frame intervals. In the next test, the four-frame interval will use frames 2, 6, 10, 14, and 18, and the five-frame test will use frames 2, 7, 12, 17, and 22. These further tests change the phase of the test. Assuming the next tests produce results with totals lower than 60, the first four-frame test will prevail and its “brightest” pixel locations will be chosen for monitoring. A numerical sketch of the four-frame test follows.
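
To make the arithmetic concrete, the following illustrative Python/NumPy sketch (not part of the original disclosure) reproduces the four-frame test numerically; the sampled frames are copied from the tables above.

    import numpy as np

    sampled = np.array([                 # frames 1, 5, 9, 13, 17 from the tables
        [[3, 3, 5], [3, 3, 5], [3, 3, 5]],
        [[3, 3, 0], [3, 2, 0], [3, 3, 0]],
        [[3, 3, 5], [3, 3, 5], [3, 2, 5]],
        [[3, 3, 0], [3, 2, 0], [3, 3, 0]],
        [[3, 3, 5], [3, 3, 5], [3, 3, 5]],
    ])

    total = np.abs(np.diff(sampled, axis=0)).sum(axis=0)
    # total == [[0, 0, 20], [0, 4, 20], [0, 2, 20]]

    flat = total.ravel()
    top3 = np.argsort(flat)[-3:]         # pixels (1,3), (2,3), (3,3) in 1-indexed terms
    score = flat[top3].sum()             # 60 for this test, vs. 36 at 5 frames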

Applicants have also tested the invention and found that it performs well even with asymmetric periodic waveforms. Three examples using skewed or asymmetric periodic waveforms were evaluated: SawtoothRight, SawtoothLeft, and LubdubRight. Each of these three waveforms (FIG. 21) incorporates an evident skewed 30-frame peak-to-peak periodicity. The SawtoothRight and SawtoothLeft waveforms have a 2:1 skewed rate of falling compared with rising measurement values. LubdubRight also contains a second peak in each periodic cycle. The inventive method was able to accommodate the features of these waveforms without difficulty.

Example

    • The “Frame Difference” plot, FIG. 22 shows calculated quadrature sums for each of the three asymmetric waveforms. Results are shown for frame differences, with N ranging from 3 frames to 45 frames. In this example, results are reported accumulating the first five quadrature sums beginning from the first frame in each waveform. Note that for this example, the first frame in each waveform begins with a periodic maximum peak value. The graph plots frame difference quadrature sum values (ordinate) versus number of frames offset, N, (abscissa) associated with each asymmetric waveform.
    • Frame difference findings for all three asymmetric waveforms determined equally high maximum sums at 15-frame and 45-frame offsets, and determined a zero sum at a 30-frame offset, for each of these three waveforms. The SawtoothRight and LubdubRight waveforms each have secondary peaks at 20-frame intervals (and presumably also at 50-frame intervals). Notice that the SawtoothLeft waveform has secondary peaks at 10-frame and 40-frame offset intervals. These findings are significant in that the quadrature sum calculation of the present invention not only finds amplitude and frequency for a periodic event, it also provides skew characterization and asymmetric waveform shape information. These aspects of the present invention may be applicable for discerning and distinguishing a first, second, and third portion of a cyclic event. Meaningful portions of a cyclic event may relate to a first work step, such as a muscular contraction step; a second pause, hold, or hesitate step; and a third recover, relax, or release step. An order, timing, and sequence for steps such as these is typically associated with anatomical function or purpose. These observations may be applied to cardiac, pulmonary, and other muscle-driven repetitive activities, and may be applied to periodic movements of a machine, process, or other inanimate object.

It will be appreciated that for alias-free signal sampling the sampling rate must be higher than twice the frequency of the physical movement being sampled; specifically, according to the Nyquist criterion as applied to the present purpose, the video frame rate must generally be at least twice the frequency of the movement itself, so a frame rate of 20 fps would be needed in order to confidently detect a 10 Hz vibration. The invention may use additional techniques in order to work around this limitation, as described in the following example.

Example

    • It is possible to get aliasing from beat frequencies arising from the beating of the sampling rate against a higher frequency. For example, if one samples at 30 Hz and there is a 33 Hz vibration, one may see a 3 Hz component in the data because of the beat frequency that the 33 Hz − 30 Hz difference creates. The problem is that the observer can't really “see” it to know for sure where it has originated. Generally a low-pass filter is applied to remove frequencies above what can be measured.
    • However, if we intentionally introduce a higher-frequency light source and then measure all the new beat frequencies that appear in the spectrum, it becomes possible to measure higher frequency content that one is “not supposed” to be able to measure according to the conventional Nyquist criterion.
    • For example, if the camera is only capable of measuring up to 500 Hz, if we introduce a strobe at 1000 Hz we will see all the frequencies in the 1000-1500 Hz region because they will all create beat frequencies that will appear in the 0-500 Hz range. By sweeping the frequencies, say 1000 Hz, 1500 Hz, 2000 Hz etc., the low frequency camera can now do high frequencies too, by inference from the difference between the observed frequency and the strobe rate.
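
The inference step itself is simple arithmetic, as in the following illustrative sketch (not part of the original disclosure). It assumes the detected component truly lies in the band just above the strobe rate; in practice, sweeping the strobe as described helps confirm on which side of the strobe the true frequency lies.

    def inferred_frequency(strobe_hz, beat_hz):
        """Recover a true vibration frequency from the strobe rate plus the observed beat.

        E.g., a 30 Hz camera viewing a 33 Hz vibration sees a 3 Hz beat (33 = 30 + 3);
        with a 1000 Hz strobe, an observed 230 Hz beat implies a 1230 Hz vibration.
        """
        return strobe_hz + beat_hz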

The camera does not have to be placed right next to the object. Applicant has discovered through experimentation that the inventive process is sufficiently robust that reliable data can be collected from an object in a random position in the frame and surrounded by various items, which may be stationary or might be moving to some degree. Another important advantage of the invention is that the measurement itself is not invasive or disruptive. It requires no contact with the equipment or process and no tap into the equipment's power feed.

The inventive method does not require a particular camera setup, and in fact may be performed on a historical video file that was taken completely without the inventive process in mind, as described in the following example.

Example

    • A short instructional video was posted on the internet, showing operation of a benchtop seismic simulator [Model K50-1206, NaRiKa Corp., Tokyo, Japan]. A small model of a simple seven-floor structure contained a hanging ball in the center of each story to better visualize the motions and resonances that arise in response to various earth movements. Technical details of the video clip are as follows:
    • Video ID: y6Z9bsGkMsc
    • Dimensions: 640×480*1.75
    • Stream type: https
    • A segment of this video devoted to the building model, comprising 454 frames representing about 20 seconds of running time, was analyzed using the invention. Working from this one video file, one can select any location and determine the frequency spectrum and the displacement vs. time, and plot these variables using the graphical user interface. Eigen images for particular frequencies may be displayed for visual comparison to the raw video in order to see which parts of the structure have a large vibrational component at that frequency.
    • Exemplary results are shown in FIG. 23. FIG. 23A is one frame of the raw video, showing a model under test. FIG. 23B shows the image representing 4.63 Hz, and FIG. 23C shows the image representing 2.84 Hz. Note that faint markings at the bottom of FIGS. 23B and 23C are artifacts that arose because the original raw video contained a superimposed title graphic; these artifacts do not detract from the analysis of the Eigen images (FIGS. 23B and 23C), which can quite clearly be correlated to the moving structural elements in the raw video from (FIG. 23A).
    • At the same time, the user can look at the platform itself to see driving frequencies, the orbit (x and y components) of the platform, and the amplitudes of displacement of the platform. This is simultaneous with the information gathered on the structure, and provides valuable insights connecting structural vibrations to the ground movements that are driving them.

Although in many Examples it is contemplated that the video image is focused on a particular machine, patient, or component under examination, it will be appreciated that the invention may equally well be carried out in a reversed configuration in which the video camera is rigidly mounted on the equipment or component and is focused on a convenient stationary object in the environment. The fixed object might be a column or other structural feature of the building, a poster or plaque affixed to a nearby wall, etc. In such a configuration, the apparent motion of the fixed object will mimic the actual motion of the camera, and the video file may be analyzed in a manner completely analogous to that described earlier. So a camera may be mounted on a bridge and focused on a fixed building in order to measure motion of the bridge. A handheld camera focused on a fixed fiducial may be used to measure hand tremors of the user (e.g., from Parkinson's disease or other health condition of interest). In summary, the inventive analysis methods are applicable to any data set of an appropriate size representing X-Y-t coordinates, and are completely agnostic regarding the origin of the video file or the exact physical source of the movements of the image from frame to frame.

Example

    • Ability to Track More than One Waveform. Conventional standoff and contact methods for vibration analysis cannot test more than one machine at a time. The ability to do so would clearly be valuable for the typical plant environment where many different components may be located in close quarters and may each be vibrating independently.
    • Applicants have experimentally demonstrated that, for example, the breathing of a mother and child co-sleeping were simultaneously detected, with the invention capturing dual waveforms from the same video image and displaying both waveforms simultaneously, as described and illustrated in Applicant's co-pending application “Method of analyzing, displaying, organizing, and responding to vital signals”, Docket No. RDI-018. It will be appreciated that the same approach may be extended to the factory environment.

It will be clear to the skilled artisan that the invention can be used in a factory to monitor two machines simultaneously in separate cells, in a refinery to monitor multiple valves or pumps, etc. The information may be uploaded to the cloud or to a server for continuous monitoring or, for example, to a maintenance department or field service team.

Example

    • Using a normal cellphone or mobile device, the application could produce fast results for a quick assessment of a situation or to prioritize various potential maintenance jobs. Applicants have demonstrated that currently-available mobile devices have sufficient computing power to do this. Applicants are currently running on an ARM11 Raspberry Pi board which is slower than the current iPhones and likely slower than the iPhone 5s as well. An early prototype ran successfully on the iPhone 5s using its internal camera.
    • A maintenance worker could therefore move about a facility on a regularly-scheduled basis, collecting video data at preselected sites to image particular pieces of equipment or process points. Each video file so acquired could then be compared to those collected earlier at the same locations in order to detect and quantify any changes in a particular component that would require attention.

It will be appreciated that the method described in the foregoing example may be implemented in a number of ways, providing a useful element of flexibility to the user. For example, the user might have an in-house maintenance department to collect raw video, process the files, and prioritize the maintenance operations. Alternatively, an outside organization (an equipment vendor or maintenance contractor) might come to the site on a scheduled basis, collect files for analysis, and recommend or implement repair or maintenance activities based on the findings.

The user interface may be configured in a wide variety of ways, as described more fully in the following examples.

Example

    • Because the data may be stored with the raw video on a common time basis, if an alarm sounds and everything appears to be normal, the user may simply rewind the video to review more closely what caused the irregularity, as shown in FIG. 24 for the related problem of respiration monitoring. The information, in the present case, might include the video, motion waveform, and a condition or quality index derived from “standard” or historical data. So the user might press a button that rewinds the waveforms and video, or goes back a preselected amount of time or to a specific preselected time, and plays back the waveform and video side by side to show what triggered an event or an alarm condition, thus providing a more complete understanding of the event. Because the video is time stamped, the event of interest may be correlated with factors such as power surges or dips, lightning, temperature excursions or other environmental conditions, etc. Thus, the invention allows a more holistic awareness of the situation and makes predictive maintenance correspondingly more useful and robust.

Example

    • Using Waveform Signatures to Detect Events. FIG. 19A shows a healthy waveform and FIG. 19B shows an irregular one, again taken from respiration data. An event is clearly seen in the middle of the irregular one. Here this event will be categorized, stored, indexed and/or reported and may contribute to the condition index.
    • Templates may be prepared to help the user correlate an event to some known conditions. For example, a set of templates may exist including a baseline waveform for a newly-installed motor or pump of a particular model and later measurements are correlated against that template to define changes, quantify wear and tear, and inform the maintenance decision-making process. The information may also be uploaded to a central database and/or provided to the maintenance professional. The information may also be included in a report. Exemplary templates may include, but are not limited to, a new machine, a worn bearing, a loose coupling, or a machine nearing its end of life.
    • Events may be indexed, and a single frame or video clip corresponding to the same time may be extracted. Those compilations may be stored. One index may be targeted in particular, for example, events associated with power quality. The user may review those events to determine if an investment in power conditioning may be helpful to improve equipment life and performance.
    • Events may be correlated with timing through the day/night and the same procedure as described above would allow the user to determine if particular times of day/night are correlated with better or worse equipment stability.

It will be appreciated that the user interface may take a variety of forms, and in particular, the invention may be implemented in a mobile application, so that, for example, a service technician can view the data acquired at a work site and make a decision about whether a maintenance call is urgent or can be scheduled later.

Example

    • FIG. 25 shows an image of a potential graphical user interface (GUI) showing the respiration waveform and video in real-time from a camera where the respiration waveform is derived from the video. The interface allows the user to view the data using a smart phone. Note that the patient's face has been intentionally obscured in FIG. 25, but would typically be visible in the GUI under normal use.

Additional features of the invention are described in the following sections.

Multiple Region Perimeter Tracking and Monitoring

A perimeter-tracking approach may be used to prevent an unknown factor from entering the monitoring space of the individual machine or simply a general area. This can also be used for objects exiting the area. The user will be able to create a perimeter (via a user interface) around the area that he/she wants to monitor and does not want any intrusion into.

Multiple methods of motion detection can be used in the perimeters. For example, a technique such as adaptive array comparison can be used to see if changes have occurred around the perimeter from one frame of the video to the next.

Another technique may be comparison of frame intensity values in the area of interest. Regions can be selected and the pixel values in each region summed to give a total intensity level. If that intensity level changes from frame to frame by more than a certain threshold, motion is determined to have occurred, as in the sketch below.
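
A minimal illustrative sketch of this region-intensity comparison follows (Python/NumPy, not part of the original disclosure); the boolean mask stands in for a user-drawn region and the threshold is an assumed, empirically chosen value.

    import numpy as np

    def motion_in_region(prev_frame, curr_frame, region_mask, threshold):
        """True if the region's summed intensity changed by more than threshold."""
        delta = curr_frame[region_mask].sum() - prev_frame[region_mask].sum()
        return abs(delta) > threshold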

Blob comparison may also be used for motion detection.

Single pixel edge motion may be used. It will be possible to determine the perimeter with great accuracy based on movement of an edge of a single pixel or virtual pixel, which will allow for a much greater degree of accuracy compared to using conventional blob techniques. The area being selected does not have to be a series of large boxes as in current technology but instead can be any sort of perimeter that the user chooses to select. This could offer the ability to use a very narrow single pixel perimeter or single virtual pixel comprised of multiple pixels.

Feature tracking may be used by locating features in the perimeter and tracking their location within it. If a feature's centroid location changes, then motion is detected. A selected block of pixels containing a feature can be correlated against sub-regions of successive frames to determine whether that original set of pixels correlates more highly with a location other than its original one.

It will be understood that there are several factors that could create a false positive reading of respiration, including but not limited to outside factors such as wind from the outdoors or a fan, vibration from a device in the room, movement of a curtain or other object in the room, an animal in the field of view, or latent movement from someone near the subject. To help factor out these false positive readings Applicants contemplate the use of various techniques to isolate targets of interest.

Example

    • The invention may be installed and used in conditions where there are multiple regions to isolate, such as a busy production line involving numerous work cells, individual robots, etc. Each area can have separate perimeter monitoring using perimeters drawn by the user via the GUI.

Isolation of Frequency:

Applicants have also recognized that the invention may further use frequency isolation and a learning algorithm to learn the likely movement rate and distinguish it from outside factors that could produce a vibration or movement in the field of view of the camera. This will help distinguish movements in the field of view (such as a fan or wind blowing a curtain) from movements associated with vibrations of interest.

Motion may be allowed inside the area of interest without alerting or affecting the monitoring of the perimeters. This would allow for an object to freely move within the area of interest, for example a welding robot, but still allow for monitoring of the perimeters.

An object detected moving in the perimeter can be characterized by the number of regions in which its motion is detected to give an estimate of size. The time between detections in various blocks can give information as to speed based on the known physical projection in space of each pixel. The series of blocks through which the object is detected to be moving can indicate the direction of travel.

Motion can be detected through the inventive method of array comparison of different frames. Frequencies such as fast moving objects can be filtered out by comparing frames with larger separation in time, and slower frequencies or a slower moving object can be filtered out by comparing frames with shorter separations in time. Thus, the invention can be used to isolate certain motions for detection or rejection.

Using light level changes to detect motion can cause a false positive indication of motion from things that change the illumination of the scene but are not objects moving in the field, such as fans or curtains moving from air flow. Comparing different separations in frames (hence different separations in time) can eliminate these spurious indications. For example, the slow light level changes from the natural daylight cycle would not be detected if frames with a short time separation are compared.

Determination of Phase:

The invention may further include a method for determining, comparing, measuring and displaying phase, which is of particular relevance for the case of machinery.

It has been shown that intensity changes over time can be detected and correlated to physical phenomena. In many cases those signals may appear to be periodic. The periodicity can be described by frequency, amplitude and phase. In addition to the frequency and amplitude, phase is an important characteristic of the periodicity that helps temporally describe the signal and also describe one signal relative to another and relate those signals to patterns of repetitive events such as periodic motion.

The following example describes a method for extracting and analyzing phase information from time varying signals. This may be done on a single pixel level and/or for a plurality of pixels. The phase information is shown and displayed in numerous ways. Information can be gathered from the time varying signal based on the phase and its relationship to other parameters.

Example

Simplified Explicit Stepwise Procedure:

    • 1. A time varying signal is sampled in time with a photo detector, transducer or other suitable device. This sampling represents a time sequence with appropriate resolution to correctly sample the signal of interest.
    • 2. Multiple samples can be collected simultaneously with a plurality of pixels, e.g., with a video camera where every pixel in a single frame represents a sampling at the same point in time at different spatial locations in the scene.
    • 3. The resulting sequence is an array of X×Y×t where X represents a spatial dimension, Y a spatial dimension orthogonal to X, and t represents time.
    • 4. FFTs are performed in the time domain along the t axis for every pixel or element in the array. The FFT then returns a frequency spectrum for each pixel along with the amplitude and phase for each frequency.
    • 5. The phase information for each frequency can be displayed. For a given frequency, a phase reference such as 0° may be arbitrarily selected or may be associated with trigger, pulse, absolute reference, specific sample, or frame as may be preferred or selected.
    • 6. To create a phase mask image we plot a representation of phase for a given frequency in the same pixel from which it was measured. To create a two dimensional image we first set the frequency we are interested in. For example, we may want to see the phase relationship for the 30 Hz signal. To do this we filter the image so that pixels that are in the selected phase range are white (represented numerically as 1) whereas all others are black (represented numerically as 0). The phase range may vary but for this example we will use ±5°. For example, if we select 30 Hz and 55° then the image will show white (or 1 numerically) where a signal exists that has a frequency of 30 Hz and has a phase from 50°-60°. This has the benefit of showing all elements of the scene that are in phase at the same frequency as they all appear white while the rest are black.
    • 7. Taking this a step further, one can hold the frequency constant while adjusting the phase to 235° which is 180° out of phase of 55°. In mechanical systems, misalignment is typically 180° out of phase across a coupling. In this manner it is possible to look at two different phase values to see if there is a phase shift indicative of misalignment. Another example would be to look at a structure such as a bridge to see if structural elements are moving in or out of phase.
    • 8. Now if one were to start at 0° and toggle to 360° one would see all the different locations of the different phases for the 30 Hz signal. They would be indicated by the fact that the pixel turns white.
    • 9. This entire process can be repeated for every frequency.

FIG. 26 outlines one approach for computing and displaying phase.
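
Steps 4 through 7 of the procedure might be sketched compactly as follows (illustrative Python/NumPy, not part of the original disclosure; the frame rate, target frequency, and tolerance are assumed values). The final line anticipates the intensity scaling discussed next.

    import numpy as np

    frames = np.random.rand(256, 32, 32)       # placeholder (T, H, W) video data
    fps = 120.0

    spectrum = np.fft.rfft(frames, axis=0)     # step 4: FFT along time for every pixel
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)

    k = np.argmin(np.abs(freqs - 30.0))        # bin nearest the 30 Hz signal of interest
    phase = np.degrees(np.angle(spectrum[k]))  # per-pixel phase at that frequency
    amplitude = np.abs(spectrum[k])

    target, tol = 55.0, 5.0                    # step 6: e.g., 55 degrees +/- 5 degrees
    wrapped = (phase - target + 180.0) % 360.0 - 180.0
    mask = (np.abs(wrapped) <= tol).astype(float)   # 1 = in phase (white), 0 = out (black)
    scaled = mask * amplitude                  # intensity passes only where in phase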

It is possible to use intensity readings to increase the information in the phase images. For example, one could take the intensity of the frequency at each pixel and multiply it by the phase mask image. Since the phase mask image is binary (if the signal is at a particular phase it is white, or valued 1, and if it is not at the selected phase it is black, or 0) the phase image acts as an image mask that will only allow the intensity values to pass if it is at the selected phase. All others will be zero. If it is in phase the intensity is preserved since it is multiplied by 1. This will create a scaled image that shows only things at a given phase and what those intensities are.

If the amplitude of the frequency of interest due to intensity changes is calibrated to a particular value then the phase mask image (that is composed of 1s or 0s denoting in or out of phase respectively) can be multiplied towards a calibrated frequency amplitude image or array. Then the resulting image displays only things in phase at a particular phase of interest at a given frequency and offers a calibrated value. That calibrated value may be from anything that is causing the signal levels to change. It could be temperature variation from thermal IR imagers, displacement from moving features in a visible image or even variations in absorption levels through a transmitted medium.

For a measurement made with video imagery the phase may be referenced simply to the first image taken so that all phase readings are relative to the first image. However it is possible to synchronize the phase readings to another signal. This could be a trigger pulse or even a time varying optical signal in the scene of the imager.

Exposure modes on imaging sensors differ. Two common types are global and rolling shutters. Global shutters expose every pixel at the same time. Rolling shutters expose lines of the sensor at different times. In the case of a global shutter, all pixels are exposed simultaneously, so the phase relationship is preserved across the sensor. In the case of a rolling shutter, there are variations in the timing of exposure from pixel to pixel. It is possible to realign temporal signals based on the known delay between pixels after they are read from the imaging sensor. By accounting for this offset we can preserve the relationship of phase across all pixels, as in the sketch below.
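
One possible form of this realignment is sketched below (illustrative Python/NumPy, not part of the original disclosure); the per-row line delay is an assumed, sensor-specific constant, and each row is assumed exposed that delay after the row above it.

    import numpy as np

    def correct_rolling_shutter_phase(phase_deg, freq_hz, line_delay_s):
        """Remove the per-row phase offset a rolling shutter introduces.

        phase_deg is an (H, W) array of measured phases at frequency freq_hz.
        """
        rows = np.arange(phase_deg.shape[0])
        offset_deg = 360.0 * freq_hz * line_delay_s * rows   # offset grows down the frame
        return (phase_deg - offset_deg[:, None]) % 360.0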

It is possible to use the phase information in a noise reduction manner. For example, in the event of a phase image mask where the array or image is binary (1s for in phase, 0s for out of phase) one can reject all pixels out of phase at a given frequency and given phase. When exploring an image, if many pixels effectively “turn off”, it eliminates much background noise in the scene and makes detection much easier. This may be advantageous, for example, in a cluttered field or where many time-varying signals exist. Additionally, one can reduce noise by multiplying the phase mask image by the frequency intensity image and setting an intensity threshold below which the pixel is set to 0 or not represented in the scaling.

Mechanical or anelastic properties that have particular phase properties can be imaged and detected with the described technique. Phase relationship information can be exploited with the described technique to reveal physical parameters or other properties.

By cycling through all the phase mask images at a given frequency, traveling waves may be seen in the sequence of images created.

Different areas of the array or frame of the same or different phase mask images may be compared to show certain areas of interest indicating anomalies, e.g., one area that is out of phase with the rest. Or, these areas could be compared to find patterns indicative of physical phenomenon.

The following exemplary cases demonstrate some useful applications of this aspect of the present invention.

One use of phase presentation as described herein is to determine and to graphically display absolute or relative timing characteristics and patterns.

A second example is to demonstrate a modulation or a beat frequency or other characteristic which may correspond with a movement of an object of interest.

A third example is to represent a leading or a lagging event sequence made evident mathematically or graphically using techniques described herein. Again, this leading or lagging event sequence may be related to a movement sequence of an object of interest.

A fourth example of the present invention is to characterize highly repetitive displacement patterns, such as a static or a dynamic constructive and destructive interference pattern resulting from multiple vibration wave fronts. The multiple fronts each typically originate from a point, line, or area of reflection, or from a point, line, or area vibration energy source. This technique may be used to discern false indications from true ones. For example, a false indication may be recognized from a highly repetitive pattern, which is more likely produced by a machine than by a living being.

Example

    • Use of phase imaging is illustrated in FIG. 27.
    • FIG. 27A shows a single image from a video sequence of a bridge. During this sequence a vehicle passed over the bridge (not shown).
    • FIG. 27B shows a single phase mask image depicting a single phase (that of the fundamental vibration of the bridge) at 2.25 Hz, the bridge's fundamental frequency. In this image, things moving in phase show up as white (value 1) whereas things that are out of phase show as black (value 0). The image is scaled such that 0 is black and 1 is white. One can see a clear feature of motion on the I-beam support, indicating that the entire span is moving in phase with itself, as one would expect.
    • FIG. 27C shows an image of the phase mask seen in FIG. 27B multiplied by the intensity at each pixel of the amplitude of the 2.25 Hz signal, which relates to motion. One can see that now the phase image is scaled with relative values. Furthermore the image is much cleaner as small amplitudes of frequencies can be set below a threshold using the noise reduction technique.

Phase information may be used to determine information about an object. For example, two parts of a machine might be expected to be moving in phase when the machine is operating normally, so if the two parts are out of phase it may indicate a problem. These values can then be used to determine information such as imbalance or misalignment. Areas in the phase imagery may be predetermined, defined by user interaction, or defined autonomously. Likewise those areas may be monitored by user interaction or autonomously.

Particular spots of variation in phase may be noted and targeted for abnormal behavior or be used to trigger a secondary analysis.

Some pixels may be in phase or may be 180 degrees out of phase. In either case they are coherent. They experience a brightest point or a darkest point at the same instants and have a fixed and predictable relationship over time.

A coherent behavior may be either in-phase or out of phase. In a pivoting mechanism, for example, a relative maximum and a relative minimum occur at exactly the same time every time. This pivoting arrangement is out of phase and is coherent. How does one know the pivot arm is rocking and not translating in harmonic motion back and forth with no rocking and entirely in phase? These two situations can be distinguished by observing a transient phase with at least one swap-over pixel location in between the two ends of the rocking or translating object.

One pixel location may be changing from light to dark while the other is changing from dark to light, or they might both be changing from light to dark at the same point in time, depending on illumination and geometry. Intermediate pixel information may indicate concurrent, coherent in-phase or out-of-phase movement.

In addition, intermediate pixel information may be interpreted to show a lead or lag relationship as vibration energy travels across a structure. This information can be automatically or manually interpreted. For example if there are unwanted effects such as a resonance or modulation or a beat frequency that one wishes to minimize, the lead-lag information may be interpreted to identify a location for applying damping or modifying the mass or stiffness of a component to interrupt or absorb the unwanted vibration energy. Use of the invention to identify locations to add mass to absorb energy is especially pertinent, because the invention yields phase information at the pixel level and can therefore precisely determine the way the vibrations behave and propagate. Damping material will be most effective at vibrational anti-nodes and least effective at nodes, for instance.

Another use of phase information involves seeking and finding a timing or a sequence of distinct events within a cycle interval. For example, a particular frequency may be associated with piston reciprocation. One may interpret phase vibration information to identify a sequence or a timing of intake and exhaust valve opening events that occur repetitively during a piston duty cycle.

Use of associated audio data.

As noted earlier, many video recordings contain both image data and audio data collected and stored with a common time stamp. Applicants contemplate that the invention can exploit the associated audio data in a number of ways, with or without the use of a graphical user interface (GUI).

Example

    • The audio sensor (microphone) may be used to detect oncoming events and trigger the system to begin acquiring data (or analyzing data differently). A system positioned to monitor a bridge might, e.g., switch from a standby mode to an operating mode when the sound of an approaching train or truck is detected. Alternatively, in a preventive maintenance role, the system might determine whether or not to archive video data if the audio signal exhibits a sudden change of amplitude or pitch, or otherwise indicates a significant departure from normal conditions. In a medical setting, sounds associated with snoring, labored breathing, coughing, crying, or other distress, may be used to trigger automatic archiving of the pertinent video sequence and/or add metadata to indicate that a particular off-normal condition has been flagged.
    • Note that in each of these situations, the system might or might not have a GUI, and it might or might not display an image frame, a streaming video, or any image at all. The system might operate autonomously, with little or no human intervention during the triggering, acquisition, data analysis, and archiving processes.

Example

    • The system may include a GUI that takes advantage of time stamping so that the user may select a particular output feature (e.g., a maximum deflection in a bridge component) and the video frame corresponding to the time of that event will be displayed. If the complete video recording contains the audio track as well, the common time stamp will allow a segment of the audio to be played back for a time selected by the user for review. For example, if the user rewinds the file to review the off-normal respiration event depicted in FIG. 24, the corresponding audio could be replayed to provide a better understanding of the nature and cause of the event.

Use of background reference to determine relative motion.

In some circumstances the camera or optical element collecting data may be moving. This motion may be unwanted and induce the appearance of motion in the scene at the target. For example the camera may be on a platform that is moving at 10 Hz. A 10 Hz vibration would then appear everywhere in the data, even on a target that may be stationary or not moving at 10 Hz.

An object in the field of view can be used as a reference point to eliminate this motion at the measurement target location. The motion at a point in the field that is determined to be static can be measured in the vertical and/or horizontal direction. This measurement determines the amount of motion that is present at the camera relative to the reference frame of the static object. This information can then be subtracted from the motion of a moving object in the reference frame to eliminate the motion of the camera.

It will be noted that the object that is determined to be static can indeed be moving. In that instance the act of subtracting this motion from another object's motion will yield the motion of the target relative to the static object.
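
In its simplest form this correction is a subtraction of displacement traces, as in the following illustrative sketch (Python/NumPy, not part of the original disclosure; the traces and their names are assumed placeholders).

    import numpy as np

    # Per-frame vertical displacement traces measured at two locations:
    target_motion = np.random.rand(100)       # placeholder: point on the moving target
    reference_motion = np.random.rand(100)    # placeholder: point assumed static

    # Camera-induced motion common to both locations cancels in the difference,
    # leaving the target's motion relative to the reference object:
    relative_motion = target_motion - reference_motion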

An automated implementation of the present invention uses the center-most pixel to identify a vicinity of significant motion. Given that information, locations of background may be automatically identified because those areas of background are: 1) coherent and 2) widespread (e.g., located far apart on the focal plane array). Furthermore, if a particular motion is observed in all background and center pixels, then tests may be computed to discern if that particular motion may be attributed to a common type of camera translation or rotation.

Comparison of the invention with traditional “frame difference” methods.

It will be understood that although the invention involves subtracting pixel values at one time from those at another time, the inventive Adaptive Array Comparison method differs considerably from traditional techniques broadly referred to as “frame difference” methods in at least the following ways:

1. Adaptive Array Comparison specifically targets individual frames at particular reference times for the purpose of exploiting periodic signals.
2. Adaptive Array Comparison adapts to the signal, learns from it, and modifies its approach accordingly.
3. Adaptive Array Comparison targets periodic signals in order to isolate them from the background.
4. Adaptive Array Comparison selects time intervals based on the signal of interest.
5. Adaptive Array Comparison isolates particular phases of the motion, such as maxima and minima, in its approach.
6. Adaptive Array Comparison is an iterative process and involves comparing the results of those iterative steps.
7. Adaptive Array Comparison is temporally based and links arrays to particular points in time.
8. Adaptive Array Comparison generally involves multiple comparisons of arrays over time and relies on the cumulative result.
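For concreteness, the basic frame-spacing search recited in the claims below can be sketched as follows (a minimal illustration only: the adaptive, iterative, and phase-isolating refinements enumerated above are omitted, and the array handling is an assumption):

    import numpy as np

    def adaptive_array_comparison(frames, spacings=(1, 2, 3, 4), n_pixels=10):
        """Starting from frame F0, subtract each pixel's intensity in frame
        F0+kx from that in frame F0+(k+1)x, sum the absolute differences per
        pixel, repeat for several spacings x, keep the spacing yielding the
        greatest cumulative difference, and return the pixels with the
        largest accumulated differences for rate monitoring."""
        frames = np.asarray(frames, dtype=float)        # shape (T, H, W), gray-scale
        best_x, best_map, best_total = None, None, -1.0
        for x in spacings:
            sub = frames[::x]                           # frames F0, F0+x, F0+2x, ...
            diff_map = np.abs(np.diff(sub, axis=0)).sum(axis=0)
            total = float(diff_map.sum())
            if total > best_total:
                best_x, best_map, best_total = x, diff_map, total
        flat = np.argsort(best_map.ravel())[-n_pixels:]  # largest cumulative diffs
        rows, cols = np.unravel_index(flat, best_map.shape)
        return best_x, list(zip(rows.tolist(), cols.tolist()))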

Claims

1. A system for analyzing periodic motions comprising:

a device for acquiring video files;
a data analysis system including a processor and memory;
a computer program to automatically analyze the video images, identify an area in the images where periodic movements may be detected and quantified using an Adaptive Array Comparison method; and,
an interface to provide an output signal related to at least one parameter characteristic of said periodic movement.

2. The system of claim 1 wherein said device for acquiring video files is selected from the group consisting of: video cameras, optical sensors, IR sensors, smart phones, webcams, digital microscopes, telescopes, and memory devices having video files recorded therein.

3. The system of claim 1 wherein said computer program detects said periodic movements by an adaptive array comparison procedure in which:

starting with a first frame [F0], the intensity at each respective pixel in the frame is subtracted from its intensity in frame [F0+x], where x is an integer, the intensities in frame [F0+x] are subtracted from those in frame [F0+2x],... until reaching a selected end point at frame [F0+nx] where n is an integer and the product nx is less than the total number of frames in said video file;
the resulting frame differences are summed for each pixel;
the process is repeated for at least two unique values of x, so that an optimal frame spacing x yielding the greatest difference may be found; and,
a selected number of pixels having the greatest frame-to-frame intensity difference are monitored to determine the rate of said periodic movements.

4. The system of claim 3 wherein said program repeats the adaptive array comparison procedure with other selected starting frames, to determine the phase of said periodic movement.

5. The system of claim 1 wherein said interface comprises a Graphical User Interface (GUI).

6. The system of claim 5 wherein said GUI displays data corresponding to said parameter characteristic of said periodic movement.

7. The system of claim 6 wherein said GUI further displays an image from said video file.

8. The system of claim 5 wherein said GUI further includes an output selected from the group consisting of:

still images containing phase information;
still images containing frequency information;
still images containing edge enhancement;
moving images displaying motion amplification; and,
audio recordings.

9. The system of claim 5 wherein said GUI allows a user to replay data starting at a selected time so that said user may simultaneously view the video stream and the corresponding calculated data.

10. The system of claim 9 wherein said GUI allows a user to define a perimeter within the video frame and said data analysis system monitors movements within said user-defined perimeter.

11. A method for characterizing periodic motions using video data, comprising:

acquiring a video file of a selected object;
providing a data analysis system including a processor and memory operating a computer program to analyze the acquired video file by an adaptive array comparison procedure and calculate a parameter characteristic of the physical displacement of the object as a function of time and determine the periodicity thereof;
time-stamping the video file and the determined periodicity associated therewith; and,
archiving the time-stamped images and the associated physical displacement data in a data storage system for later retrieval.

12. The method of claim 11 wherein said video file is acquired using a means selected from the group consisting of: video cameras, optical sensors, IR sensors, smart phones, webcams, digital microscopes, telescopes, memory devices having video files recorded therein, and downloading from a server.

13. The method of claim 11 wherein said computer program detects said periodic movements by an adaptive array comparison procedure in which:

starting with a first frame [F0], the intensity at each respective pixel in the frame is subtracted from its intensity in frame [F0+x], where x is an integer, the intensities in frame [F0+x] are subtracted from those in frame [F0+2x],... until reaching a selected end point at frame [F0+nx] where n is an integer and the product nx is less than the total number of frames in said video file;
the resulting frame differences are summed for each pixel;
the process is repeated for at least two unique values of x, so that an optimal frame spacing x yielding the greatest difference may be found; and,
a selected number of pixels having the greatest frame-to-frame intensity difference are monitored to determine the rate of said periodic movements.

14. The method of claim 13 wherein said program repeats the adaptive array comparison procedure with other selected starting frames, to determine the phase of said periodic movement.

15. The method of claim 11 wherein said interface comprises a Graphical User Interface (GUI).

16. The method of claim 15 wherein said GUI displays data corresponding to said parameter characteristic of said periodic movement.

17. The method of claim 16 wherein said GUI further displays an image from said video file.

18. The method of claim 15 wherein said GUI further includes an output selected from the group consisting of:

still images containing phase information;
still images containing frequency information;
still images containing edge enhancement;
moving images displaying motion amplification; and,
audio recordings.

19. The method of claim 15 wherein said GUI allows a user to replay data starting at a selected time so that said user may simultaneously view the video stream and the corresponding calculated data.

20. The method of claim 19 wherein said GUI allows a user to define a perimeter within the video frame and said data analysis system monitors movements within said user-defined perimeter.

Patent History
Publication number: 20160217588
Type: Application
Filed: Dec 9, 2015
Publication Date: Jul 28, 2016
Inventor: Jeffrey R. Hay (Louisville, KY)
Application Number: 14/757,259
Classifications
International Classification: G06T 7/20 (20060101); G06F 3/0484 (20060101);